This file has been truncated; only the beginning of the changeset is shown.
changeset: 12483:67507b4f8da9
branch: yt-3.0
tag: tip
user: Nathan Goldbaum <goldbaum@ucolick.org>
date: Sun Jun 15 19:50:51 2014 -0700
summary: Replacing "pf" with "ds"
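For context, the rename recorded in the hunks below amounts to the following change in user-facing yt scripts: the variable "pf" (parameter file) becomes "ds" (dataset), and methods formerly reached through pf.h now hang directly off the dataset object. A minimal sketch, using only calls that appear in the cheatsheet hunk below; the dataset path is illustrative, not part of this changeset:

    from yt.mods import *

    ds = load("RD0005-mine/RedshiftOutput0005")  # "ds" replaces the old "pf" name
    dd = ds.all_data()                           # formerly pf.h.all_data()
    print ds.field_list                          # formerly pf.h.field_list
    val, loc = ds.find_max("Density")            # formerly pf.h.find_max("Density")
    sp = ds.sphere("max", (10.0, "kpc"))         # data containers are created from ds
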
diff -r f20d58ca2848 -r 67507b4f8da9 doc/cheatsheet.tex
--- a/doc/cheatsheet.tex Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/cheatsheet.tex Sun Jun 15 19:50:51 2014 -0700
@@ -208,38 +208,38 @@
After that, simulation data is generally accessed in yt using {\it Data Containers} which are Python objects
that define a region of simulation space from which data should be selected.
\settowidth{\MyLen}{\texttt{multicol} }
-\texttt{pf = load(}{\it dataset}\texttt{)} \textemdash\ Reference a single snapshot.\\
-\texttt{dd = pf.h.all\_data()} \textemdash\ Select the entire volume.\\
+\texttt{ds = load(}{\it dataset}\texttt{)} \textemdash\ Reference a single snapshot.\\
+\texttt{dd = ds.all\_data()} \textemdash\ Select the entire volume.\\
\texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Saves the contents of {\it field} into the
numpy array \texttt{a}. Similarly for other data containers.\\
-\texttt{pf.h.field\_list} \textemdash\ A list of available fields in the snapshot. \\
-\texttt{pf.h.derived\_field\_list} \textemdash\ A list of available derived fields
+\texttt{ds.field\_list} \textemdash\ A list of available fields in the snapshot. \\
+\texttt{ds.derived\_field\_list} \textemdash\ A list of available derived fields
in the snapshot. \\
-\texttt{val, loc = pf.h.find\_max("Density")} \textemdash\ Find the \texttt{val}ue of
+\texttt{val, loc = ds.find\_max("Density")} \textemdash\ Find the \texttt{val}ue of
the maximum of the field \texttt{Density} and its \texttt{loc}ation. \\
-\texttt{sp = pf.sphere(}{\it cen}\texttt{,}{\it radius}\texttt{)} \textemdash\ Create a spherical data
+\texttt{sp = ds.sphere(}{\it cen}\texttt{,}{\it radius}\texttt{)} \textemdash\ Create a spherical data
container. {\it cen} may be a coordinate, or ``max'' which
centers on the max density point. {\it radius} may be a float in
code units or a tuple of ({\it length, unit}).\\
-\texttt{re = pf.region({\it cen}, {\it left edge}, {\it right edge})} \textemdash\ Create a
+\texttt{re = ds.region({\it cen}, {\it left edge}, {\it right edge})} \textemdash\ Create a
rectilinear data container. {\it cen} is required but not used.
{\it left} and {\it right edge} are coordinate values that define the region.
-\texttt{di = pf.disk({\it cen}, {\it normal}, {\it radius}, {\it height})} \textemdash\
+\texttt{di = ds.disk({\it cen}, {\it normal}, {\it radius}, {\it height})} \textemdash\
Create a cylindrical data container centered at {\it cen} along the
direction set by {\it normal},with total length
2$\times${\it height} and with radius {\it radius}. \\
- \texttt{bl = pf.boolean({\it constructor})} \textemdash\ Create a boolean data
+ \texttt{bl = ds.boolean({\it constructor})} \textemdash\ Create a boolean data
container. {\it constructor} is a list of pre-defined non-boolean
data containers with nested boolean logic using the
``AND'', ``NOT'', or ``OR'' operators. E.g. {\it constructor=}
{\it [sp, ``NOT'', (di, ``OR'', re)]} gives a volume defined
by {\it sp} minus the patches covered by {\it di} and {\it re}.\\
-\texttt{pf.h.save\_object(sp, {\it ``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\
-\texttt{sp = pf.h.load\_object({\it ``sp\_for\_later''})} \textemdash\ Recover a saved object.\\
+\texttt{ds.save\_object(sp, {\it ``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\
+\texttt{sp = ds.load\_object({\it ``sp\_for\_later''})} \textemdash\ Recover a saved object.\\
\subsection{Defining New Fields \& Quantities}
@@ -261,15 +261,15 @@
\subsection{Slices and Projections}
\settowidth{\MyLen}{\texttt{multicol} }
-\texttt{slc = SlicePlot(pf, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
+\texttt{slc = SlicePlot(ds, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
perpendicular to {\it axis} of {\it field} weighted by {\it weight\_field} at (code-units) {\it center} with
{\it width} in code units or a (value, unit) tuple. Hint: try {\it SlicePlot?} in IPython to see additional parameters.\\
\texttt{slc.save({\it file\_prefix})} \textemdash\ Save the slice to a png with name prefix {\it file\_prefix}.
\texttt{.save()} works similarly for the commands below.\\
-\texttt{prj = ProjectionPlot(pf, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
-\texttt{prj = OffAxisSlicePlot(pf, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis slice. Note this takes an array of fields. \\
-\texttt{prj = OffAxisProjectionPlot(pf, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
+\texttt{prj = ProjectionPlot(ds, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
+\texttt{prj = OffAxisSlicePlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis slice. Note this takes an array of fields. \\
+\texttt{prj = OffAxisProjectionPlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
\subsection{Plot Annotations}
\settowidth{\MyLen}{\texttt{multicol} }
@@ -365,8 +365,8 @@
\subsection{FAQ}
\settowidth{\MyLen}{\texttt{multicol}}
-\texttt{pf.field\_info[`field'].take\_log = False} \textemdash\ When plotting \texttt{field}, do not take log.
-Must enter \texttt{pf.h} before this command. \\
+\texttt{ds.field\_info[`field'].take\_log = False} \textemdash\ When plotting \texttt{field}, do not take log.
+Must enter \texttt{ds.index} before this command. \\
%\rule{0.3\linewidth}{0.25pt}
diff -r f20d58ca2848 -r 67507b4f8da9 doc/coding_styleguide.txt
--- a/doc/coding_styleguide.txt Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/coding_styleguide.txt Sun Jun 15 19:50:51 2014 -0700
@@ -49,7 +49,7 @@
* Don't create a new class to replicate the functionality of an old class --
replace the old class. Too many options makes for a confusing user
experience.
- * Parameter files are a last resort.
+ * datasets are a last resort.
* The usage of the **kwargs construction should be avoided. If they cannot
be avoided, they must be explained, even if they are only to be passed on to
a nested function.
@@ -61,7 +61,7 @@
* Hard-coding parameter names that are the same as those in Enzo. The
following translation table should be of some help. Note that the
parameters are now properties on a Dataset subclass: you access them
- like pf.refine_by .
+ like ds.refine_by .
* RefineBy => refine_by
* TopGridRank => dimensionality
* TopGridDimensions => domain_dimensions
diff -r f20d58ca2848 -r 67507b4f8da9 doc/docstring_example.txt
--- a/doc/docstring_example.txt Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/docstring_example.txt Sun Jun 15 19:50:51 2014 -0700
@@ -73,7 +73,7 @@
Examples
--------
These are written in doctest format, and should illustrate how to
- use the function. Use the variables 'pf' for the parameter file, 'pc' for
+ use the function. Use the variables 'ds' for the dataset, 'pc' for
a plot collection, 'c' for a center, and 'L' for a vector.
>>> a=[1,2,3]
diff -r f20d58ca2848 -r 67507b4f8da9 doc/docstring_idioms.txt
--- a/doc/docstring_idioms.txt Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/docstring_idioms.txt Sun Jun 15 19:50:51 2014 -0700
@@ -19,7 +19,7 @@
useful variable names that correspond to specific instances that the user is
presupposed to have created.
- * `pf`: a parameter file, loaded successfully
+ * `ds`: a dataset, loaded successfully
* `sp`: a sphere
* `c`: a 3-component "center"
* `L`: a 3-component vector that corresponds to either angular momentum or a
diff -r f20d58ca2848 -r 67507b4f8da9 doc/helper_scripts/parse_cb_list.py
--- a/doc/helper_scripts/parse_cb_list.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/helper_scripts/parse_cb_list.py Sun Jun 15 19:50:51 2014 -0700
@@ -2,7 +2,7 @@
import inspect
from textwrap import TextWrapper
-pf = load("RD0005-mine/RedshiftOutput0005")
+ds = load("RD0005-mine/RedshiftOutput0005")
output = open("source/visualizing/_cb_docstrings.inc", "w")
diff -r f20d58ca2848 -r 67507b4f8da9 doc/helper_scripts/parse_dq_list.py
--- a/doc/helper_scripts/parse_dq_list.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/helper_scripts/parse_dq_list.py Sun Jun 15 19:50:51 2014 -0700
@@ -2,7 +2,7 @@
import inspect
from textwrap import TextWrapper
-pf = load("RD0005-mine/RedshiftOutput0005")
+ds = load("RD0005-mine/RedshiftOutput0005")
output = open("source/analyzing/_dq_docstrings.inc", "w")
@@ -29,7 +29,7 @@
docstring = docstring))
#docstring = "\n".join(tw.wrap(docstring))))
-dd = pf.h.all_data()
+dd = ds.all_data()
for n,func in sorted(dd.quantities.functions.items()):
print n, func
write_docstring(output, n, func[1])
diff -r f20d58ca2848 -r 67507b4f8da9 doc/helper_scripts/parse_object_list.py
--- a/doc/helper_scripts/parse_object_list.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/helper_scripts/parse_object_list.py Sun Jun 15 19:50:51 2014 -0700
@@ -2,7 +2,7 @@
import inspect
from textwrap import TextWrapper
-pf = load("RD0005-mine/RedshiftOutput0005")
+ds = load("RD0005-mine/RedshiftOutput0005")
output = open("source/analyzing/_obj_docstrings.inc", "w")
@@ -27,7 +27,7 @@
f.write(template % dict(clsname = clsname, sig = sig, clsproxy=clsproxy,
docstring = 'physical-object-api'))
-for n,c in sorted(pf.h.__dict__.items()):
+for n,c in sorted(ds.__dict__.items()):
if hasattr(c, '_con_args'):
print n
write_docstring(output, n, c)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/helper_scripts/show_fields.py
--- a/doc/helper_scripts/show_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/helper_scripts/show_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -17,15 +17,15 @@
everywhere, "Enzo" fields in Enzo datasets, "Orion" fields in Orion datasets,
and so on.
-Try using the ``pf.field_list`` and ``pf.derived_field_list`` to view the
+Try using the ``ds.field_list`` and ``ds.derived_field_list`` to view the
native and derived fields available for your dataset respectively. For example
to display the native fields in alphabetical order:
.. notebook-cell::
from yt.mods import *
- pf = load("Enzo_64/DD0043/data0043")
- for i in sorted(pf.field_list):
+ ds = load("Enzo_64/DD0043/data0043")
+ for i in sorted(ds.field_list):
print i
.. note:: Universal fields will be overridden by a code-specific field.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/_obj_docstrings.inc
--- a/doc/source/analyzing/_obj_docstrings.inc Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/_obj_docstrings.inc Sun Jun 15 19:50:51 2014 -0700
@@ -1,12 +1,12 @@
-.. class:: boolean(self, regions, fields=None, pf=None, **field_parameters):
+.. class:: boolean(self, regions, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRBooleanRegionBase`.)
-.. class:: covering_grid(self, level, left_edge, dims, fields=None, pf=None, num_ghost_zones=0, use_pbar=True, **field_parameters):
+.. class:: covering_grid(self, level, left_edge, dims, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRCoveringGridBase`.)
@@ -24,13 +24,13 @@
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRCuttingPlaneBase`.)
-.. class:: disk(self, center, normal, radius, height, fields=None, pf=None, **field_parameters):
+.. class:: disk(self, center, normal, radius, height, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRCylinderBase`.)
-.. class:: ellipsoid(self, center, A, B, C, e0, tilt, fields=None, pf=None, **field_parameters):
+.. class:: ellipsoid(self, center, A, B, C, e0, tilt, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMREllipsoidBase`.)
@@ -48,79 +48,79 @@
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRFixedResCuttingPlaneBase`.)
-.. class:: fixed_res_proj(self, axis, level, left_edge, dims, fields=None, pf=None, **field_parameters):
+.. class:: fixed_res_proj(self, axis, level, left_edge, dims, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRFixedResProjectionBase`.)
-.. class:: grid_collection(self, center, grid_list, fields=None, pf=None, **field_parameters):
+.. class:: grid_collection(self, center, grid_list, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRGridCollectionBase`.)
-.. class:: grid_collection_max_level(self, center, max_level, fields=None, pf=None, **field_parameters):
+.. class:: grid_collection_max_level(self, center, max_level, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRMaxLevelCollectionBase`.)
-.. class:: inclined_box(self, origin, box_vectors, fields=None, pf=None, **field_parameters):
+.. class:: inclined_box(self, origin, box_vectors, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRInclinedBoxBase`.)
-.. class:: ortho_ray(self, axis, coords, fields=None, pf=None, **field_parameters):
+.. class:: ortho_ray(self, axis, coords, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMROrthoRayBase`.)
-.. class:: overlap_proj(self, axis, field, weight_field=None, max_level=None, center=None, pf=None, source=None, node_name=None, field_cuts=None, preload_style='level', serialize=True, **field_parameters):
+.. class:: overlap_proj(self, axis, field, weight_field=None, max_level=None, center=None, ds=None, source=None, node_name=None, field_cuts=None, preload_style='level', serialize=True, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRProjBase`.)
-.. class:: periodic_region(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: periodic_region(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRPeriodicRegionBase`.)
-.. class:: periodic_region_strict(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: periodic_region_strict(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRPeriodicRegionStrictBase`.)
-.. class:: proj(self, axis, field, weight_field=None, max_level=None, center=None, pf=None, source=None, node_name=None, field_cuts=None, preload_style=None, serialize=True, style='integrate', **field_parameters):
+.. class:: proj(self, axis, field, weight_field=None, max_level=None, center=None, ds=None, source=None, node_name=None, field_cuts=None, preload_style=None, serialize=True, style='integrate', **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRQuadTreeProjBase`.)
-.. class:: ray(self, start_point, end_point, fields=None, pf=None, **field_parameters):
+.. class:: ray(self, start_point, end_point, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRRayBase`.)
-.. class:: region(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: region(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRRegionBase`.)
-.. class:: region_strict(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: region_strict(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRRegionStrictBase`.)
-.. class:: slice(self, axis, coord, fields=None, center=None, pf=None, node_name=False, **field_parameters):
+.. class:: slice(self, axis, coord, fields=None, center=None, ds=None, node_name=False, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRSliceBase`.)
@@ -132,13 +132,13 @@
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRSmoothedCoveringGridBase`.)
-.. class:: sphere(self, center, radius, fields=None, pf=None, **field_parameters):
+.. class:: sphere(self, center, radius, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRSphereBase`.)
-.. class:: streamline(self, positions, length=1.0, fields=None, pf=None, **field_parameters):
+.. class:: streamline(self, positions, length=1.0, fields=None, ds=None, **field_parameters):
For more information, see :ref:`physical-object-api`
(This is a proxy for :class:`~yt.data_objects.data_containers.AMRStreamlineBase`.)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
--- a/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb Sun Jun 15 19:50:51 2014 -0700
@@ -44,7 +44,7 @@
"tmpdir = tempfile.mkdtemp()\n",
"\n",
"# Load the data set with the full simulation information\n",
- "data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')"
+ "data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')"
],
"language": "python",
"metadata": {},
@@ -62,7 +62,7 @@
"collapsed": false,
"input": [
"# Load the rockstar data files\n",
- "halos_pf = load('rockstar_halos/halos_0.0.bin')"
+ "halos_ds = load('rockstar_halos/halos_0.0.bin')"
],
"language": "python",
"metadata": {},
@@ -80,7 +80,7 @@
"collapsed": false,
"input": [
"# Instantiate a catalog using those two paramter files\n",
- "hc = HaloCatalog(data_pf=data_pf, halos_pf=halos_pf, \n",
+ "hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds, \n",
" output_dir=os.path.join(tmpdir, 'halo_catalog'))"
],
"language": "python",
@@ -295,9 +295,9 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "halos_pf = load(os.path.join(tmpdir, 'halo_catalog/halo_catalog.0.h5'))\n",
+ "halos_ds = load(os.path.join(tmpdir, 'halo_catalog/halo_catalog.0.h5'))\n",
"\n",
- "hc_reloaded = HaloCatalog(halos_pf=halos_pf,\n",
+ "hc_reloaded = HaloCatalog(halos_ds=halos_ds,\n",
" output_dir=os.path.join(tmpdir, 'halo_catalog'))"
],
"language": "python",
@@ -407,4 +407,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb Sun Jun 15 19:50:51 2014 -0700
@@ -222,7 +222,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "pf = load(\"cube.fits\")"
+ "ds = load(\"cube.fits\")"
],
"language": "python",
"metadata": {},
@@ -233,7 +233,7 @@
"collapsed": false,
"input": [
"# Specifying no center gives us the center slice\n",
- "slc = SlicePlot(pf, \"z\", [\"density\"])\n",
+ "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
"slc.show()"
],
"language": "python",
@@ -246,9 +246,9 @@
"input": [
"import yt.units as u\n",
"# Picking different velocities for the slices\n",
- "new_center = pf.domain_center\n",
- "new_center[2] = pf.spec2pixel(-1.0*u.km/u.s)\n",
- "slc = SlicePlot(pf, \"z\", [\"density\"], center=new_center)\n",
+ "new_center = ds.domain_center\n",
+ "new_center[2] = ds.spec2pixel(-1.0*u.km/u.s)\n",
+ "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
"slc.show()"
],
"language": "python",
@@ -259,8 +259,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "new_center[2] = pf.spec2pixel(0.7*u.km/u.s)\n",
- "slc = SlicePlot(pf, \"z\", [\"density\"], center=new_center)\n",
+ "new_center[2] = ds.spec2pixel(0.7*u.km/u.s)\n",
+ "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
"slc.show()"
],
"language": "python",
@@ -271,8 +271,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "new_center[2] = pf.spec2pixel(-0.3*u.km/u.s)\n",
- "slc = SlicePlot(pf, \"z\", [\"density\"], center=new_center)\n",
+ "new_center[2] = ds.spec2pixel(-0.3*u.km/u.s)\n",
+ "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
"slc.show()"
],
"language": "python",
@@ -290,7 +290,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "prj = ProjectionPlot(pf, \"z\", [\"density\"], proj_style=\"sum\")\n",
+ "prj = ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
"prj.set_log(\"density\", True)\n",
"prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
"prj.show()"
@@ -303,4 +303,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst Sun Jun 15 19:50:51 2014 -0700
@@ -84,8 +84,8 @@
from yt.mods import *
- pf = load("DD0000")
- sp = pf.sphere([0.5, 0.5, 0.5], radius=0.1)
+ ds = load("DD0000")
+ sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
treecode=True, opening_angle=2.0)
@@ -97,8 +97,8 @@
from yt.mods import *
- pf = load("DD0000")
- sp = pf.sphere([0.5, 0.5, 0.5], radius=0.1)
+ ds = load("DD0000")
+ sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
treecode=False)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
--- a/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst Sun Jun 15 19:50:51 2014 -0700
@@ -58,8 +58,8 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf=load('Enzo_64/RD0006/RedshiftOutput0006')
- halo_list = parallelHF(pf)
+ ds=load('Enzo_64/RD0006/RedshiftOutput0006')
+ halo_list = parallelHF(ds)
halo_list.dump('MyHaloList')
Ellipsoid Parameters
@@ -69,8 +69,8 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf=load('Enzo_64/RD0006/RedshiftOutput0006')
- haloes = LoadHaloes(pf, 'MyHaloList')
+ ds=load('Enzo_64/RD0006/RedshiftOutput0006')
+ haloes = LoadHaloes(ds, 'MyHaloList')
Once the halo information is saved you can load it into the data
object "haloes", you can get loop over the list of haloes and do
@@ -107,7 +107,7 @@
.. code-block:: python
- ell = pf.ellipsoid(ell_param[0],
+ ell = ds.ellipsoid(ell_param[0],
ell_param[1],
ell_param[2],
ell_param[3],
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/halo_catalogs.rst
--- a/doc/source/analyzing/analysis_modules/halo_catalogs.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/halo_catalogs.rst Sun Jun 15 19:50:51 2014 -0700
@@ -9,7 +9,7 @@
backwards compatible in that output from old halo finders may be loaded.
A catalog of halos can be created from any initial dataset given to halo
-catalog through data_pf. These halos can be found using friends-of-friends,
+catalog through data_ds. These halos can be found using friends-of-friends,
HOP, and Rockstar. The finder_method keyword dictates which halo finder to
use. The available arguments are 'fof', 'hop', and'rockstar'. For more
details on the relative differences between these halo finders see
@@ -19,32 +19,32 @@
from yt.mods import *
from yt.analysis_modules.halo_analysis.api import HaloCatalog
- data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
- hc = HaloCatalog(data_pf=data_pf, finder_method='hop')
+ data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+ hc = HaloCatalog(data_ds=data_ds, finder_method='hop')
A halo catalog may also be created from already run rockstar outputs.
This method is not implemented for previously run friends-of-friends or
HOP finders. Even though rockstar creates one file per processor,
specifying any one file allows the full catalog to be loaded. Here we
only specify the file output by the processor with ID 0. Note that the
-argument for supplying a rockstar output is `halos_pf`, not `data_pf`.
+argument for supplying a rockstar output is `halos_ds`, not `data_ds`.
.. code-block:: python
- halos_pf = load(path+'rockstar_halos/halos_0.0.bin')
- hc = HaloCatalog(halos_pf=halos_pf)
+ halos_ds = load(path+'rockstar_halos/halos_0.0.bin')
+ hc = HaloCatalog(halos_ds=halos_ds)
Although supplying only the binary output of the rockstar halo finder
is sufficient for creating a halo catalog, it is not possible to find
any new information about the identified halos. To associate the halos
with the dataset from which they were found, supply arguments to both
-halos_pf and data_pf.
+halos_ds and data_ds.
.. code-block:: python
- halos_pf = load(path+'rockstar_halos/halos_0.0.bin')
- data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
- hc = HaloCatalog(data_pf=data_pf, halos_pf=halos_pf)
+ halos_ds = load(path+'rockstar_halos/halos_0.0.bin')
+ data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+ hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)
A data container can also be supplied via keyword data_source,
associated with either dataset, to control the spatial region in
@@ -215,8 +215,8 @@
.. code-block:: python
- hpf = load(path+"halo_catalogs/catalog_0046/catalog_0046.0.h5")
- hc = HaloCatalog(halos_pf=hpf,
+ hds = load(path+"halo_catalogs/catalog_0046/catalog_0046.0.h5")
+ hc = HaloCatalog(halos_ds=hds,
output_dir="halo_catalogs/catalog_0046")
hc.add_callback("load_profiles", output_dir="profiles",
filename="virial_profiles")
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/halo_mass_function.rst
--- a/doc/source/analyzing/analysis_modules/halo_mass_function.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/halo_mass_function.rst Sun Jun 15 19:50:51 2014 -0700
@@ -60,8 +60,8 @@
from yt.mods import *
from yt.analysis_modules.halo_mass_function.api import *
- pf = load("data0030")
- hmf = HaloMassFcn(pf, halo_file="FilteredQuantities.out", num_sigma_bins=200,
+ ds = load("data0030")
+ hmf = HaloMassFcn(ds, halo_file="FilteredQuantities.out", num_sigma_bins=200,
mass_column=5)
Attached to ``hmf`` is the convenience function ``write_out``, which saves
@@ -102,8 +102,8 @@
from yt.mods import *
from yt.analysis_modules.halo_mass_function.api import *
- pf = load("data0030")
- hmf = HaloMassFcn(pf, halo_file="FilteredQuantities.out",
+ ds = load("data0030")
+ hmf = HaloMassFcn(ds, halo_file="FilteredQuantities.out",
sigma8input=0.9, primordial_index=1., omega_baryon0=0.06,
fitting_function=4)
hmf.write_out(prefix='hmf')
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/halo_profiling.rst
--- a/doc/source/analyzing/analysis_modules/halo_profiling.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/halo_profiling.rst Sun Jun 15 19:50:51 2014 -0700
@@ -395,8 +395,8 @@
def find_min_temp_dist(sphere):
old = sphere.center
ma, mini, mx, my, mz, mg = sphere.quantities['MinLocation']('temperature')
- d = sphere.pf['kpc'] * periodic_dist(old, [mx, my, mz],
- sphere.pf.domain_right_edge - sphere.pf.domain_left_edge)
+ d = sphere.ds['kpc'] * periodic_dist(old, [mx, my, mz],
+ sphere.ds.domain_right_edge - sphere.ds.domain_left_edge)
# If new center farther than 5 kpc away, don't recenter
if d > 5.: return [-1, -1, -1]
return [mx,my,mz]
@@ -426,7 +426,7 @@
128, 'temperature', 1e2, 1e7, True,
end_collect=False)
my_profile.add_fields('cell_mass', weight=None, fractional=False)
- my_filename = os.path.join(sphere.pf.fullpath, '2D_profiles',
+ my_filename = os.path.join(sphere.ds.fullpath, '2D_profiles',
'Halo_%04d.h5' % halo['id'])
my_profile.write_out_h5(my_filename)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/hmf_howto.rst
--- a/doc/source/analyzing/analysis_modules/hmf_howto.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/hmf_howto.rst Sun Jun 15 19:50:51 2014 -0700
@@ -27,8 +27,8 @@
.. code-block:: python
from yt.mods import *
- pf = load("data0001")
- halo_list = HaloFinder(pf)
+ ds = load("data0001")
+ halo_list = HaloFinder(ds)
halo_list.write_out("HopAnalysis.out")
The only important columns of data in the text file ``HopAnalysis.out``
@@ -79,8 +79,8 @@
from yt.mods import *
from yt.analysis_modules.halo_mass_function.api import *
- pf = load("data0001")
- hmf = HaloMassFcn(pf, halo_file="VirialHaloes.out",
+ ds = load("data0001")
+ hmf = HaloMassFcn(ds, halo_file="VirialHaloes.out",
sigma8input=0.9, primordial_index=1., omega_baryon0=0.06,
fitting_function=4, mass_column=5, num_sigma_bins=200)
hmf.write_out(prefix='hmf')
@@ -107,9 +107,9 @@
from yt.analysis_modules.halo_mass_function.api import *
# If desired, start loop here.
- pf = load("data0001")
+ ds = load("data0001")
- halo_list = HaloFinder(pf)
+ halo_list = HaloFinder(ds)
halo_list.write_out("HopAnalysis.out")
hp = HP.HaloProfiler("data0001", halo_list_file='HopAnalysis.out')
@@ -120,7 +120,7 @@
virial_quantities=['TotalMassMsun','RadiusMpc'])
hp.make_profiles(filename="VirialHaloes.out")
- hmf = HaloMassFcn(pf, halo_file="VirialHaloes.out",
+ hmf = HaloMassFcn(ds, halo_file="VirialHaloes.out",
sigma8input=0.9, primordial_index=1., omega_baryon0=0.06,
fitting_function=4, mass_column=5, num_sigma_bins=200)
hmf.write_out(prefix='hmf')
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/light_cone_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_cone_generator.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/light_cone_generator.rst Sun Jun 15 19:50:51 2014 -0700
@@ -65,7 +65,7 @@
gathering datasets for time series. Default: True.
* **set_parameters** (*dict*): Dictionary of parameters to attach to
- pf.parameters. Default: None.
+ ds.parameters. Default: None.
* **output_dir** (*string*): The directory in which images and data files
will be written. Default: 'LC'.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst Sun Jun 15 19:50:51 2014 -0700
@@ -43,7 +43,7 @@
.. code:: python
- pf = load("MHDSloshing/virgo_low_res.0054.vtk",
+ ds = load("MHDSloshing/virgo_low_res.0054.vtk",
parameters={"time_unit":(1.0,"Myr"),
"length_unit":(1.0,"Mpc"),
"mass_unit":(1.0e14,"Msun")})
@@ -418,7 +418,7 @@
evacuated two "bubbles" of radius 30 kpc at a distance of 50 kpc from
the center.
-Now, we create a parameter file out of this dataset:
+Now, we create a yt Dataset object out of this dataset:
.. code:: python
@@ -440,7 +440,7 @@
.. code:: python
- sphere = ds.sphere(pf.domain_center, (1.0,"Mpc"))
+ sphere = ds.sphere(ds.domain_center, (1.0,"Mpc"))
A = 6000.
exp_time = 2.0e5
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/radial_column_density.rst
--- a/doc/source/analyzing/analysis_modules/radial_column_density.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/radial_column_density.rst Sun Jun 15 19:50:51 2014 -0700
@@ -41,15 +41,15 @@
from yt.mods import *
from yt.analysis_modules.radial_column_density.api import *
- pf = load("data0030")
+ ds = load("data0030")
- rcdnumdens = RadialColumnDensity(pf, 'NumberDensity', [0.5, 0.5, 0.5],
+ rcdnumdens = RadialColumnDensity(ds, 'NumberDensity', [0.5, 0.5, 0.5],
max_radius = 0.5)
def _RCDNumberDensity(field, data, rcd = rcdnumdens):
return rcd._build_derived_field(data)
add_field('RCDNumberDensity', _RCDNumberDensity, units=r'1/\rm{cm}^2')
- dd = pf.h.all_data()
+ dd = ds.all_data()
print dd['RCDNumberDensity']
The field ``RCDNumberDensity`` can be used just like any other derived field
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/radmc3d_export.rst
--- a/doc/source/analyzing/analysis_modules/radmc3d_export.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/radmc3d_export.rst Sun Jun 15 19:50:51 2014 -0700
@@ -41,8 +41,8 @@
.. code-block:: python
- pf = load("galaxy0030/galaxy0030")
- writer = RadMC3DWriter(pf)
+ ds = load("galaxy0030/galaxy0030")
+ writer = RadMC3DWriter(ds)
writer.write_amr_grid()
writer.write_dust_file("DustDensity", "dust_density.inp")
@@ -87,8 +87,8 @@
return (x_co/mu_h)*data["density"]
add_field("NumberDensityCO", function=_NumberDensityCO)
- pf = load("galaxy0030/galaxy0030")
- writer = RadMC3DWriter(pf)
+ ds = load("galaxy0030/galaxy0030")
+ writer = RadMC3DWriter(ds)
writer.write_amr_grid()
writer.write_line_file("NumberDensityCO", "numberdens_co.inp")
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/running_halofinder.rst
--- a/doc/source/analyzing/analysis_modules/running_halofinder.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/running_halofinder.rst Sun Jun 15 19:50:51 2014 -0700
@@ -57,8 +57,8 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf = load("data0001")
- halo_list = HaloFinder(pf)
+ ds = load("data0001")
+ halo_list = HaloFinder(ds)
Running FoF is similar:
@@ -66,8 +66,8 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf = load("data0001")
- halo_list = FOFHaloFinder(pf)
+ ds = load("data0001")
+ halo_list = FOFHaloFinder(ds)
Halo Data Access
----------------
@@ -172,8 +172,8 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf = load("data0001")
- haloes = HaloFinder(pf)
+ ds = load("data0001")
+ haloes = HaloFinder(ds)
haloes.dump("basename")
It is easy to load the halos using the ``LoadHaloes`` class:
@@ -182,8 +182,8 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf = load("data0001")
- haloes = LoadHaloes(pf, "basename")
+ ds = load("data0001")
+ haloes = LoadHaloes(ds, "basename")
Everything that can be done with ``haloes`` in the first example should be
possible with ``haloes`` in the second.
@@ -229,10 +229,10 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf = load("data0001")
- halo_list = HaloFinder(pf,padding=0.02)
+ ds = load("data0001")
+ halo_list = HaloFinder(ds,padding=0.02)
# --or--
- halo_list = FOFHaloFinder(pf,padding=0.02)
+ halo_list = FOFHaloFinder(ds,padding=0.02)
The ``padding`` parameter is in simulation units and defaults to 0.02. This parameter is how much padding
is added to each of the six sides of a subregion. This value should be 2x-3x larger than the largest
@@ -343,8 +343,8 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf = load("data0001")
- halo_list = parallelHF(pf)
+ ds = load("data0001")
+ halo_list = parallelHF(ds)
Parallel HOP has these user-set options:
@@ -421,8 +421,8 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf = load("data0001")
- halo_list = parallelHF(pf, threshold=80.0, dm_only=True, resize=False,
+ ds = load("data0001")
+ halo_list = parallelHF(ds, threshold=80.0, dm_only=True, resize=False,
rearrange=True, safety=1.5, premerge=True)
halo_list.write_out("ParallelHopAnalysis.out")
halo_list.write_particle_list("parts")
@@ -445,11 +445,11 @@
from yt.mods import *
from yt.analysis_modules.halo_finding.api import *
- pf = load('data0458')
+ ds = load('data0458')
# Note that the first term below, [0.5]*3, defines the center of
# the region and is not used. It can be any value.
- sv = pf.region([0.5]*3, [0.21, .21, .72], [.28, .28, .79])
- halos = HaloFinder(pf, subvolume = sv)
+ sv = ds.region([0.5]*3, [0.21, .21, .72], [.28, .28, .79])
+ halos = HaloFinder(ds, subvolume = sv)
halos.write_out("sv.out")
@@ -522,7 +522,7 @@
the width of the smallest grid element in the simulation from the
last data snapshot (i.e. the one where time has evolved the
longest) in the time series:
- ``pf_last.index.get_smallest_dx() * pf_last['mpch']``.
+ ``ds_last.index.get_smallest_dx() * ds_last['mpch']``.
* ``total_particles``, if supplied, this is a pre-calculated
total number of dark matter
particles present in the simulation. For example, this is useful
@@ -544,21 +544,21 @@
out*list) and binary (halo*bin) files inside the ``outbase`` directory.
We use the halo list classes to recover the information.
-Inside the ``outbase`` directory there is a text file named ``pfs.txt``
-that records the connection between pf names and the Rockstar file names.
+Inside the ``outbase`` directory there is a text file named ``datasets.txt``
+that records the connection between ds names and the Rockstar file names.
The halo list can be automatically generated from the RockstarHaloFinder
object by calling ``RockstarHaloFinder.halo_list()``. Alternatively, the halo
lists can be built from the RockstarHaloList class directly
-``LoadRockstarHalos(pf,'outbase/out_0.list')``.
+``LoadRockstarHalos(ds,'outbase/out_0.list')``.
.. code-block:: python
- rh = RockstarHaloFinder(pf)
+ rh = RockstarHaloFinder(ds)
#First method of creating the halo lists:
halo_list = rh.halo_list()
#Alternate method of creating halo_list:
- halo_list = LoadRockstarHalos(pf, 'rockstar_halos/out_0.list')
+ halo_list = LoadRockstarHalos(ds, 'rockstar_halos/out_0.list')
The above ``halo_list`` is very similar to any other list of halos loaded off
disk.
@@ -624,18 +624,18 @@
def main():
import enzo
- pf = EnzoDatasetInMemory()
+ ds = EnzoDatasetInMemory()
mine = ytcfg.getint('yt','__topcomm_parallel_rank')
size = ytcfg.getint('yt','__topcomm_parallel_size')
# Call rockstar.
- ts = DatasetSeries([pf])
- outbase = "./rockstar_halos_%04d" % pf['NumberOfPythonTopGridCalls']
+ ts = DatasetSeries([ds])
+ outbase = "./rockstar_halos_%04d" % ds['NumberOfPythonTopGridCalls']
rh = RockstarHaloFinder(ts, num_readers = size,
outbase = outbase)
rh.run()
# Load the halos off disk.
fname = outbase + "/out_0.list"
- rhalos = LoadRockstarHalos(pf, fname)
+ rhalos = LoadRockstarHalos(ds, fname)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/star_analysis.rst
--- a/doc/source/analyzing/analysis_modules/star_analysis.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/star_analysis.rst Sun Jun 15 19:50:51 2014 -0700
@@ -27,9 +27,9 @@
from yt.mods import *
from yt.analysis_modules.star_analysis.api import *
- pf = load("data0030")
- dd = pf.h.all_data()
- sfr = StarFormationRate(pf, data_source=dd)
+ ds = load("data0030")
+ dd = ds.all_data()
+ sfr = StarFormationRate(ds, data_source=dd)
or just a small part of the volume:
@@ -37,9 +37,9 @@
from yt.mods import *
from yt.analysis_modules.star_analysis.api import *
- pf = load("data0030")
+ ds = load("data0030")
sp = p.h.sphere([0.5,0.5,0.5], 0.05)
- sfr = StarFormationRate(pf, data_source=sp)
+ sfr = StarFormationRate(ds, data_source=sp)
If the stars to be analyzed cannot be defined by a data_source, arrays can be
passed. In this case, the units for the ``star_mass`` must be in Msun,
@@ -51,8 +51,8 @@
from yt.mods import *
from yt.analysis_modules.star_analysis.api import *
- pf = load("data0030")
- re = pf.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
+ ds = load("data0030")
+ re = ds.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
# This puts the particle data for *all* the particles in the region re
# into the arrays sm and ct.
sm = re["ParticleMassMsun"]
@@ -65,7 +65,7 @@
# 100 is a time in code units.
sm_old = sm[ct < 100]
ct_old = ct[ct < 100]
- sfr = StarFormationRate(pf, star_mass=sm_old, star_creation_time=ct_old,
+ sfr = StarFormationRate(ds, star_mass=sm_old, star_creation_time=ct_old,
volume=re.volume('mpc'))
To output the data to a text file, use the command ``.write_out``:
@@ -139,8 +139,8 @@
from yt.mods import *
from yt.analysis_modules.star_analysis.api import *
- pf = load("data0030")
- spec = SpectrumBuilder(pf, bcdir="/home/username/bc/", model="chabrier")
+ ds = load("data0030")
+ spec = SpectrumBuilder(ds, bcdir="/home/username/bc/", model="chabrier")
In order to analyze a set of stars, use the ``calculate_spectrum`` command.
It accepts either a ``data_source``, or a set of arrays with the star
@@ -148,7 +148,7 @@
.. code-block:: python
- re = pf.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
+ re = ds.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
spec.calculate_spectrum(data_source=re)
If a subset of stars are desired, call it like this. ``star_mass`` is in units
@@ -157,7 +157,7 @@
.. code-block:: python
- re = pf.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
+ re = ds.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
# This puts the particle data for *all* the particles in the region re
# into the arrays sm, ct and metal.
sm = re["ParticleMassMsun"]
@@ -223,14 +223,14 @@
Below is an example of an absurd SED for universe-old stars all with
solar metallicity at a redshift of zero. Note that even in this example,
-a ``pf`` is required.
+a ``ds`` is required.
.. code-block:: python
from yt.mods import *
from yt.analysis_modules.star_analysis.api import *
- pf = load("data0030")
- spec = SpectrumBuilder(pf, bcdir="/home/user/bc", model="chabrier")
+ ds = load("data0030")
+ spec = SpectrumBuilder(ds, bcdir="/home/user/bc", model="chabrier")
sm = np.ones(100)
ct = np.zeros(100)
spec.calculate_spectrum(star_mass=sm, star_creation_time=ct, star_metallicity_constant=0.02)
@@ -252,11 +252,11 @@
from yt.mods import *
from yt.analysis_modules.star_analysis.api import *
- pf = load("data0030")
+ ds = load("data0030")
# Find all the haloes, and include star particles.
- haloes = HaloFinder(pf, dm_only=False)
+ haloes = HaloFinder(ds, dm_only=False)
# Set up the spectrum builder.
- spec = SpectrumBuilder(pf, bcdir="/home/user/bc", model="salpeter")
+ spec = SpectrumBuilder(ds, bcdir="/home/user/bc", model="salpeter")
# Iterate over the haloes.
for halo in haloes:
# Get the pertinent arrays.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/sunrise_export.rst
--- a/doc/source/analyzing/analysis_modules/sunrise_export.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/sunrise_export.rst Sun Jun 15 19:50:51 2014 -0700
@@ -18,15 +18,15 @@
from yt.mods import *
import numpy as na
- pf = ARTDataset(file_amr)
- potential_value,center=pf.h.find_min('Potential_New')
- root_cells = pf.domain_dimensions[0]
+ ds = ARTDataset(file_amr)
+ potential_value,center=ds.find_min('Potential_New')
+ root_cells = ds.domain_dimensions[0]
le = np.floor(root_cells*center) #left edge
re = np.ceil(root_cells*center) #right edge
bounds = [(le[0], re[0]-le[0]), (le[1], re[1]-le[1]), (le[2], re[2]-le[2])]
#bounds are left edge plus a span
bounds = numpy.array(bounds,dtype='int')
- amods.sunrise_export.export_to_sunrise(pf, out_fits_file,subregion_bounds = bounds)
+ amods.sunrise_export.export_to_sunrise(ds, out_fits_file,subregion_bounds = bounds)
To ensure that the camera is centered on the galaxy, we find the center by finding the minimum of the gravitational potential. The above code takes that center, and casts it in terms of which root cells should be extracted. At the moment, Sunrise accepts a strict octree, and you can only extract a 2x2x2 domain on the root grid, and not an arbitrary volume. See the optimization section later for workarounds. On my reasonably recent machine, the export process takes about 30 minutes.
@@ -51,7 +51,7 @@
col_list.append(pyfits.Column("L_bol", format="D",array=np.zeros(mass_current.size)))
cols = pyfits.ColDefs(col_list)
- amods.sunrise_export.export_to_sunrise(pf, out_fits_file,write_particles=cols,
+ amods.sunrise_export.export_to_sunrise(ds, out_fits_file,write_particles=cols,
subregion_bounds = bounds)
This code snippet takes the stars in a region outlined by the ``bounds`` variable, organizes them into pyfits columns which are then passed to export_to_sunrise. Note that yt units are in CGS, and Sunrise accepts units in (physical) kpc, kelvin, solar masses, and years.
@@ -68,8 +68,8 @@
.. code-block:: python
for x,a in enumerate(zip(pos,age)): #loop over stars
- center = x*pf['kpc']
- grid,idx = find_cell(pf.index.grids[0],center)
+ center = x*ds['kpc']
+ grid,idx = find_cell(ds.index.grids[0],center)
pk[i] = grid['Pk'][idx]
This code is how Sunrise calculates the pressure, so we can add our own derived field:
@@ -79,7 +79,7 @@
def _Pk(field,data):
#calculate pressure over Boltzmann's constant: P/k=(n/V)T
#Local stellar ISM values are ~16500 Kcm^-3
- vol = data['cell_volume'].astype('float64')*data.pf['cm']**3.0 #volume in cm
+ vol = data['cell_volume'].astype('float64')*data.ds['cm']**3.0 #volume in cm
m_g = data["cell_mass"]*1.988435e33 #mass of H in g
n_g = m_g*5.97e23 #number of H atoms
teff = data["temperature"]
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/two_point_functions.rst
--- a/doc/source/analyzing/analysis_modules/two_point_functions.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/two_point_functions.rst Sun Jun 15 19:50:51 2014 -0700
@@ -35,7 +35,7 @@
from yt.mods import *
from yt.analysis_modules.two_point_functions.api import *
- pf = load("data0005")
+ ds = load("data0005")
# Calculate the S in RMS velocity difference between the two points.
# All functions have five inputs. The first two are containers
@@ -55,7 +55,7 @@
# the number of pairs of points to calculate, how big a data queue to
# use, the range of pair separations and how many lengths to use,
# and how to divide that range (linear or log).
- tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z"],
+ tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z"],
total_values=1e5, comm_size=10000,
length_number=10, length_range=[1./128, .5],
length_type="log")
@@ -90,7 +90,7 @@
from yt.mods import *
...
- tpf = amods.two_point_functions.TwoPointFunctions(pf, ...)
+ tpf = amods.two_point_functions.TwoPointFunctions(ds, ...)
Probability Distribution Function
@@ -261,12 +261,12 @@
Before any functions can be added, the ``TwoPointFunctions`` object needs
to be created. It has these inputs:
- * ``pf`` (the only required input and is always the first term).
+ * ``ds`` (the only required input and is always the first term).
* Field list, required, an ordered list of field names used by the
functions. The order in this list will need to be referenced when writing
functions. Derived fields may be used here if they are defined first.
* ``left_edge``, ``right_edge``, three-element lists of floats:
- Used to define a sub-region of the full volume in which to run the TPF.
+ Used to define a sub-region of the full volume in which to run the TDS.
Default=None, which is equivalent to running on the full volume. Both must
be set to have any effect.
* ``total_values``, integer: The number of random points to generate globally
@@ -298,7 +298,7 @@
guarantees that the point pairs will be in different cells for the most
refined regions.
If the first term of the list is -1, the minimum length will be automatically
- set to sqrt(3)*dx, ex: ``length_range = [-1, 10/pf['kpc']]``.
+ set to sqrt(3)*dx, ex: ``length_range = [-1, 10/ds['kpc']]``.
* ``vol_ratio``, integer: How to multiply-assign subvolumes to the parallel
tasks. This number must be an integer factor of the total number of tasks or
very bad things will happen. The default value of 1 will assign one task
@@ -639,7 +639,7 @@
return vdiff
...
- tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z", "density"],
+ tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z", "density"],
total_values=1e5, comm_size=10000,
length_number=10, length_range=[1./128, .5],
length_type="log")
@@ -667,7 +667,7 @@
from yt.mods import *
from yt.analysis_modules.two_point_functions.api import *
- pf = load("data0005")
+ ds = load("data0005")
# Calculate the S in RMS velocity difference between the two points.
# Also store the ratio of densities (keeping them >= 1).
@@ -688,7 +688,7 @@
# Set the number of pairs of points to calculate, how big a data queue to
# use, the range of pair separations and how many lengths to use,
# and how to divide that range (linear or log).
- tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z", "density"],
+ tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z", "density"],
total_values=1e5, comm_size=10000,
length_number=10, length_range=[1./128, .5],
length_type="log")
@@ -765,7 +765,7 @@
from yt.analysis_modules.two_point_functions.api import *
# Specify the dataset on which we want to base our work.
- pf = load('data0005')
+ ds = load('data0005')
# Read in the halo centers of masses.
CoM = []
@@ -787,7 +787,7 @@
# For technical reasons (hopefully to be fixed someday) `vol_ratio`
# needs to be equal to the number of tasks used if this is run
# in parallel. A value of -1 automatically does this.
- tpf = TwoPointFunctions(pf, ['x'],
+ tpf = TwoPointFunctions(ds, ['x'],
total_values=1e7, comm_size=10000,
length_number=11, length_range=[2*radius, .5],
length_type="lin", vol_ratio=-1)
@@ -868,11 +868,11 @@
from yt.analysis_modules.two_point_functions.api import *
# Specify the dataset on which we want to base our work.
- pf = load('data0005')
+ ds = load('data0005')
# We work in simulation's units, these are for conversion.
- vol_conv = pf['cm'] ** 3
- sm = pf.index.get_smallest_dx()**3
+ vol_conv = ds['cm'] ** 3
+ sm = ds.index.get_smallest_dx()**3
# Our density limit, in gm/cm**3
dens = 2e-31
@@ -887,13 +887,13 @@
return d.sum()
add_quantity("TotalNumDens", function=_NumDens,
combine_function=_combNumDens, n_ret=1)
- all = pf.h.all_data()
+ all = ds.all_data()
n = all.quantities["TotalNumDens"]()
print n,'n'
# Instantiate our TPF object.
- tpf = TwoPointFunctions(pf, ['density', 'cell_volume'],
+ tpf = TwoPointFunctions(ds, ['density', 'cell_volume'],
total_values=1e5, comm_size=10000,
length_number=11, length_range=[-1, .5],
length_type="lin", vol_ratio=1)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/analysis_modules/xray_emission_fields.rst
--- a/doc/source/analyzing/analysis_modules/xray_emission_fields.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/analysis_modules/xray_emission_fields.rst Sun Jun 15 19:50:51 2014 -0700
@@ -60,10 +60,10 @@
add_xray_emissivity_field(0.5, 7)
add_xray_photon_emissivity_field(0.5, 7)
- pf = load("enzo_tiny_cosmology/DD0046/DD0046")
- plot = SlicePlot(pf, 'x', 'Xray_Luminosity_0.5_7keV')
+ ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+ plot = SlicePlot(ds, 'x', 'Xray_Luminosity_0.5_7keV')
plot.save()
- plot = ProjectionPlot(pf, 'x', 'Xray_Emissivity_0.5_7keV')
+ plot = ProjectionPlot(ds, 'x', 'Xray_Emissivity_0.5_7keV')
plot.save()
- plot = ProjectionPlot(pf, 'x', 'Xray_Photon_Emissivity_0.5_7keV')
+ plot = ProjectionPlot(ds, 'x', 'Xray_Photon_Emissivity_0.5_7keV')
plot.save()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/creating_derived_fields.rst Sun Jun 15 19:50:51 2014 -0700
@@ -20,11 +20,11 @@
.. code-block:: python
def _Pressure(field, data):
- return (data.pf["Gamma"] - 1.0) * \
+ return (data.ds.gamma - 1.0) * \
data["density"] * data["thermal_energy"]
Note that we do a couple different things here. We access the "Gamma"
-parameter from the parameter file, we access the "density" field and we access
+parameter from the dataset, we access the "density" field and we access
the "thermal_energy" field. "thermal_energy" is, in fact, another derived field!
("thermal_energy" deals with the distinction in storage of energy between dual
energy formalism and non-DEF.) We don't do any loops, we don't do any
@@ -87,14 +87,14 @@
.. code-block:: python
>>> from yt.mods import *
- >>> pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
- >>> pf.field_list
+ >>> ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
+ >>> ds.field_list
['dens', 'temp', 'pres', 'gpot', 'divb', 'velx', 'vely', 'velz', 'magx', 'magy', 'magz', 'magp']
- >>> pf.field_info['dens']._units
+ >>> ds.field_info['dens']._units
'\\rm{g}/\\rm{cm}^{3}'
- >>> pf.field_info['temp']._units
+ >>> ds.field_info['temp']._units
'\\rm{K}'
- >>> pf.field_info['velx']._units
+ >>> ds.field_info['velx']._units
'\\rm{cm}/\\rm{s}'
Thus if you were using any of these fields as input to your derived field, you
@@ -178,7 +178,7 @@
def _DivV(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -189,11 +189,11 @@
ds = div_fac * data['dx'].flat[0]
f = data["velocity_x"][sl_right,1:-1,1:-1]/ds
f -= data["velocity_x"][sl_left ,1:-1,1:-1]/ds
- if data.pf.dimensionality > 1:
+ if data.ds.dimensionality > 1:
ds = div_fac * data['dy'].flat[0]
f += data["velocity_y"][1:-1,sl_right,1:-1]/ds
f -= data["velocity_y"][1:-1,sl_left ,1:-1]/ds
- if data.pf.dimensionality > 2:
+ if data.ds.dimensionality > 2:
ds = div_fac * data['dz'].flat[0]
f += data["velocity_z"][1:-1,1:-1,sl_right]/ds
f -= data["velocity_z"][1:-1,1:-1,sl_left ]/ds
@@ -241,8 +241,8 @@
return data["temperature"]*data["density"]**(-2./3.)
add_field("Entr", function=_Entropy)
- pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- writer.save_field(pf, "Entr")
+ ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+ writer.save_field(ds, "Entr")
This creates a "_backup.gdf" file next to your datadump. If you load up the dataset again:
@@ -250,8 +250,8 @@
from yt.mods import *
- pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- data = pf.h.all_data()
+ ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+ data = ds.all_data()
print data["Entr"]
you can work with the field exactly as before, without having to recompute it.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/external_analysis.rst
--- a/doc/source/analyzing/external_analysis.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/external_analysis.rst Sun Jun 15 19:50:51 2014 -0700
@@ -18,10 +18,10 @@
from yt.mods import *
import radtrans
- pf = load("DD0010/DD0010")
+ ds = load("DD0010/DD0010")
rt_grids = []
- for grid in pf.index.grids:
+ for grid in ds.index.grids:
rt_grid = radtrans.RegularBox(
grid.LeftEdge, grid.RightEdge,
grid["density"], grid["temperature"], grid["metallicity"])
@@ -39,8 +39,8 @@
from yt.mods import *
import pop_synthesis
- pf = load("DD0010/DD0010")
- dd = pf.h.all_data()
+ ds = load("DD0010/DD0010")
+ dd = ds.all_data()
star_masses = dd["StarMassMsun"]
star_metals = dd["StarMetals"]
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/generating_processed_data.rst
--- a/doc/source/analyzing/generating_processed_data.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/generating_processed_data.rst Sun Jun 15 19:50:51 2014 -0700
@@ -43,7 +43,7 @@
.. code-block:: python
- sl = pf.slice(0, 0.5)
+ sl = ds.slice(0, 0.5)
frb = FixedResolutionBuffer(sl, (0.3, 0.5, 0.6, 0.8), (512, 512))
my_image = frb["density"]
@@ -98,7 +98,7 @@
.. code-block:: python
- source = pf.sphere( (0.3, 0.6, 0.4), 1.0/pf['pc'])
+ source = ds.sphere( (0.3, 0.6, 0.4), 1.0/ds['pc'])
profile = BinnedProfile1D(source, 128, "density", 1e-24, 1e-10)
profile.add_fields("cell_mass", weight = None)
profile.add_fields("temperature")
@@ -128,7 +128,7 @@
.. code-block:: python
- source = pf.sphere( (0.3, 0.6, 0.4), 1.0/pf['pc'])
+ source = ds.sphere( (0.3, 0.6, 0.4), 1.0/ds['pc'])
prof2d = BinnedProfile2D(source, 128, "density", 1e-24, 1e-10, True,
128, "temperature", 10, 10000, True)
prof2d.add_fields("cell_mass", weight = None)
@@ -171,7 +171,7 @@
.. code-block:: python
- ray = pf.ray( (0.3, 0.5, 0.9), (0.1, 0.8, 0.5) )
+ ray = ds.ray( (0.3, 0.5, 0.9), (0.1, 0.8, 0.5) )
print ray["density"]
The points are ordered, but the ray is also traversing cells of varying length,
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/ionization_cube.py
--- a/doc/source/analyzing/ionization_cube.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/ionization_cube.py Sun Jun 15 19:50:51 2014 -0700
@@ -13,9 +13,9 @@
ionized_z = np.zeros(ts[0].domain_dimensions, dtype="float32")
t1 = time.time()
-for pf in ts.piter():
- z = pf.current_redshift
- for g in parallel_objects(pf.index.grids, njobs = 16):
+for ds in ts.piter():
+ z = ds.current_redshift
+ for g in parallel_objects(ds.index.grids, njobs = 16):
i1, j1, k1 = g.get_global_startindex() # Index into our domain
i2, j2, k2 = g.get_global_startindex() + g.ActiveDimensions
# Look for the newly ionized gas
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/objects.rst Sun Jun 15 19:50:51 2014 -0700
@@ -26,7 +26,7 @@
while Enzo calls it "temperature". Translator functions ensure that any
derived field relying on "temp" or "temperature" works with both output types.
-When a field is requested, the parameter file first looks to see if that field
+When a field is requested, the dataset object first looks to see if that field
exists on disk. If it does not, it then queries the list of code-specific
derived fields. If it finds nothing there, it then defaults to examining the
global set of derived fields.
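To make that lookup order concrete, a minimal sketch (using the same ``field_list`` and ``derived_field_list`` attributes that appear in the hunks below; the field names here are illustrative):

    ds = load("my_data")
    # Fields stored on disk appear in field_list ...
    print "density" in ds.field_list
    # ... while fields yt computes on the fly appear in derived_field_list.
    print "pressure" in ds.derived_field_list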
@@ -82,7 +82,7 @@
.. code-block:: python
- sp = pf.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
+ sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
and then look at the temperature of its cells within it via:
@@ -105,25 +105,25 @@
.. code-block:: python
- pf = load("my_data")
- print pf.field_list
- print pf.derived_field_list
+ ds = load("my_data")
+ print ds.field_list
+ print ds.derived_field_list
When a field is added, it is added to a container that hangs off of the
-parameter file, as well. All of the field creation options
+dataset, as well. All of the field creation options
(:ref:`derived-field-options`) are accessible through this object:
.. code-block:: python
- pf = load("my_data")
- print pf.field_info["pressure"].get_units()
+ ds = load("my_data")
+ print ds.field_info["pressure"].get_units()
This is a fast way to examine the units of a given field, and additionally you
can use :meth:`yt.utilities.pydot.get_source` to get the source code:
.. code-block:: python
- field = pf.field_info["pressure"]
+ field = ds.field_info["pressure"]
print field.get_source()
.. _available-objects:
@@ -142,8 +142,8 @@
.. code-block:: python
from yt.mods import *
- pf = load("RedshiftOutput0005")
- reg = pf.region([0.5, 0.5, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
+ ds = load("RedshiftOutput0005")
+ reg = ds.region([0.5, 0.5, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
.. include:: _obj_docstrings.inc
@@ -192,8 +192,8 @@
.. code-block:: python
- pf = load("my_data")
- dd = pf.h.all_data()
+ ds = load("my_data")
+ dd = ds.all_data()
dd.quantities["AngularMomentumVector"]()
The following quantities are available via the ``quantities`` interface.
@@ -264,10 +264,10 @@
.. python-script::
from yt.mods import *
- pf = load("enzo_tiny_cosmology/DD0046/DD0046")
- ad = pf.h.all_data()
+ ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+ ad = ds.all_data()
new_region = ad.cut_region(['obj["density"] > 1e-29'])
- plot = ProjectionPlot(pf, "x", "density", weight_field="density",
+ plot = ProjectionPlot(ds, "x", "density", weight_field="density",
data_source=new_region)
plot.save()
@@ -291,7 +291,7 @@
.. code-block:: python
- sp = pf.sphere("max", (1.0, 'pc'))
+ sp = ds.sphere("max", (1.0, 'pc'))
contour_values, connected_sets = sp.extract_connected_sets(
"density", 3, 1e-30, 1e-20)
@@ -355,12 +355,12 @@
construction of the objects is the difficult part, rather than the generation
of the data -- this means that you can save out an object as a description of
how to recreate it in space, but not the actual data arrays affiliated with
-that object. The information that is saved includes the parameter file off of
+that object. The information that is saved includes the dataset off of
which the object "hangs." It is this piece of information that is the most
difficult; the object, when reloaded, must be able to reconstruct a parameter
file from whatever limited information it has in the save file.
-To do this, ``yt`` is able to identify parameter files based on a "hash"
+To do this, ``yt`` is able to identify datasets based on a "hash"
generated from the base file name, the "CurrentTimeIdentifier", and the
simulation time. These three characteristics should never be changed outside
of a simulation, they are independent of the file location on disk, and in
@@ -374,10 +374,10 @@
.. code-block:: python
from yt.mods import *
- pf = load("my_data")
- sp = pf.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
+ ds = load("my_data")
+ sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
- pf.h.save_object(sp, "sphere_to_analyze_later")
+ ds.save_object(sp, "sphere_to_analyze_later")
In a later session, we can load it using
@@ -387,8 +387,8 @@
from yt.mods import *
- pf = load("my_data")
- sphere_to_analyze = pf.h.load_object("sphere_to_analyze_later")
+ ds = load("my_data")
+ sphere_to_analyze = ds.load_object("sphere_to_analyze_later")
Additionally, if we want to store the object independent of the ``.yt`` file,
we can save the object directly:
@@ -397,8 +397,8 @@
from yt.mods import *
- pf = load("my_data")
- sp = pf.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
+ ds = load("my_data")
+ sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
sp.save_object("my_sphere", "my_storage_file.cpkl")
@@ -414,10 +414,10 @@
from yt.mods import *
import shelve
- pf = load("my_data") # not necessary if storeparameterfiles is on
+ ds = load("my_data") # not necessary if storeparameterfiles is on
obj_file = shelve.open("my_storage_file.cpkl")
- pf, obj = obj_file["my_sphere"]
+ ds, obj = obj_file["my_sphere"]
If you have turned on ``storeparameterfiles`` in your configuration,
you won't need to load the parameterfile again, as the load process
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/parallel_computation.rst Sun Jun 15 19:50:51 2014 -0700
@@ -86,10 +86,10 @@
.. code-block:: python
from yt.pmods import *
- pf = load("RD0035/RedshiftOutput0035")
- v, c = pf.h.find_max("density")
+ ds = load("RD0035/RedshiftOutput0035")
+ v, c = ds.find_max("density")
print v, c
- p = ProjectionPlot(pf, "x", "density")
+ p = ProjectionPlot(ds, "x", "density")
p.save()
If this script is run in parallel, two of the most expensive operations -
@@ -127,9 +127,9 @@
.. code-block:: python
from yt.pmods import *
- pf = load("RD0035/RedshiftOutput0035")
- v, c = pf.h.find_max("density")
- p = ProjectionPlot(pf, "x", "density")
+ ds = load("RD0035/RedshiftOutput0035")
+ v, c = ds.find_max("density")
+ p = ProjectionPlot(ds, "x", "density")
if is_root():
print v, c
p.save()
@@ -151,9 +151,9 @@
print v, c
plot.save()
- pf = load("RD0035/RedshiftOutput0035")
- v, c = pf.h.find_max("density")
- p = ProjectionPlot(pf, "x", "density")
+ ds = load("RD0035/RedshiftOutput0035")
+ v, c = ds.find_max("density")
+ p = ProjectionPlot(ds, "x", "density")
only_on_root(print_and_save_plot, v, c, plot, print=True)
Types of Parallelism
@@ -252,8 +252,8 @@
for sto, fn in parallel_objects(fns, num_procs, storage = my_storage):
# Open a data file, remembering that fn is different on each task.
- pf = load(fn)
- dd = pf.h.all_data()
+ ds = load(fn)
+ dd = ds.all_data()
# This copies fn and the min/max of density to the local copy of
# my_storage
@@ -261,7 +261,7 @@
sto.result = dd.quantities["Extrema"]("density")
# Makes and saves a plot of the gas density.
- p = ProjectionPlot(pf, "x", "density")
+ p = ProjectionPlot(ds, "x", "density")
p.save()
# At this point, as the loop exits, the local copies of my_storage are
@@ -301,7 +301,7 @@
processor. By default, parallel is set to ``True``, so you do not have to
explicitly set ``parallel = True`` as in the above example.
-One could get the same effect by iterating over the individual parameter files
+One could get the same effect by iterating over the individual datasets
in the DatasetSeries object:
.. code-block:: python
@@ -309,10 +309,10 @@
from yt.pmods import *
ts = DatasetSeries.from_filenames("DD*/output_*", parallel = True)
my_storage = {}
- for sto,pf in ts.piter(storage=my_storage):
- sphere = pf.sphere("max", (1.0, "pc"))
+ for sto,ds in ts.piter(storage=my_storage):
+ sphere = ds.sphere("max", (1.0, "pc"))
L_vec = sphere.quantities["AngularMomentumVector"]()
- sto.result_id = pf.parameter_filename
+ sto.result_id = ds.parameter_filename
sto.result = L_vec
L_vecs = []
@@ -503,14 +503,14 @@
from yt.mods import *
import time
- pf = load("DD0152")
+ ds = load("DD0152")
t0 = time.time()
- bigstuff, hugestuff = StuffFinder(pf)
- BigHugeStuffParallelFunction(pf, bigstuff, hugestuff)
+ bigstuff, hugestuff = StuffFinder(ds)
+ BigHugeStuffParallelFunction(ds, bigstuff, hugestuff)
t1 = time.time()
for i in range(1000000):
tinystuff, ministuff = GetTinyMiniStuffOffDisk("in%06d.txt" % i)
- array = TinyTeensyParallelFunction(pf, tinystuff, ministuff)
+ array = TinyTeensyParallelFunction(ds, tinystuff, ministuff)
SaveTinyMiniStuffToDisk("out%06d.txt" % i, array)
t2 = time.time()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/particles.rst
--- a/doc/source/analyzing/particles.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/particles.rst Sun Jun 15 19:50:51 2014 -0700
@@ -63,8 +63,8 @@
.. code-block:: python
from yt.mods import *
- pf = load("galaxy1200.dir/galaxy1200")
- dd = pf.h.all_data()
+ ds = load("galaxy1200.dir/galaxy1200")
+ dd = ds.all_data()
star_particles = dd["creation_time"] > 0.0
print dd["ParticleMassMsun"][star_particles].max()
@@ -80,8 +80,8 @@
.. code-block:: python
from yt.mods import *
- pf = load("galaxy1200.dir/galaxy1200")
- dd = pf.h.all_data()
+ ds = load("galaxy1200.dir/galaxy1200")
+ dd = ds.all_data()
star_particles = dd["particle_type"] == 2
print dd["ParticleMassMsun"][star_particles].max()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/time_series_analysis.rst
--- a/doc/source/analyzing/time_series_analysis.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/time_series_analysis.rst Sun Jun 15 19:50:51 2014 -0700
@@ -11,10 +11,10 @@
.. code-block:: python
- for pfi in range(30):
- fn = "DD%04i/DD%04i" % (pfi, pfi)
- pf = load(fn)
- process_output(pf)
+ for dsi in range(30):
+ fn = "DD%04i/DD%04i" % (dsi, dsi)
+ ds = load(fn)
+ process_output(ds)
But this is not really very nice. This ends up requiring a lot of maintenance.
The :class:`~yt.data_objects.time_series.DatasetSeries` object has been
@@ -66,8 +66,8 @@
from yt.mods import *
ts = DatasetSeries.from_filenames("*/*.index")
- for pf in ts:
- print pf.current_time
+ for ds in ts:
+ print ds.current_time
This can also operate in parallel, using
:meth:`~yt.data_objects.time_series.DatasetSeries.piter`. For more examples,
@@ -101,7 +101,7 @@
max_rho = ts.tasks["MaximumValue"]("density")
When we call the task, the time series object executes the task on each
-component parameter file. The results are then returned to the user. More
+component dataset. The results are then returned to the user. More
complex, multi-task evaluations can be conducted by using the
:meth:`~yt.data_objects.time_series.DatasetSeries.eval` call, which accepts a
list of analysis tasks.
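A minimal sketch of such a multi-task evaluation, reusing the ``MaximumValue`` task from above (the exact shape of the returned results may differ by version):

    # Evaluate two tasks in a single pass over the time series.
    results = ts.eval([ts.tasks["MaximumValue"]("density"),
                       ts.tasks["MaximumValue"]("temperature")])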
@@ -140,14 +140,14 @@
~~~~~~~~~~~~~~~~~~~~~~~
If you wanted to look at the mass in star particles as a function of time, you
-would write a function that accepts params and pf and then decorate it with
+would write a function that accepts params and ds and then decorate it with
analysis_task. Here we have done so:
.. code-block:: python
@analysis_task(('particle_type',))
- def MassInParticleType(params, pf):
- dd = pf.h.all_data()
+ def MassInParticleType(params, ds):
+ dd = ds.all_data()
ptype = (dd["particle_type"] == params.particle_type)
return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
@@ -196,8 +196,8 @@
.. code-block:: python
- for pf in my_sim.piter()
- all_data = pf.h.all_data()
+ for ds in my_sim.piter():
+ all_data = ds.all_data()
print all_data.quantities['Extrema']('density')
Additional keywords can be given to :meth:`get_time_series` to select a subset
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/analyzing/units/data_selection_and_fields.rst
--- a/doc/source/analyzing/units/data_selection_and_fields.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/analyzing/units/data_selection_and_fields.rst Sun Jun 15 19:50:51 2014 -0700
@@ -34,7 +34,7 @@
ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
- dd = ds.h.all_data()
+ dd = ds.all_data()
dd['root_cell_volume']
No special unit logic needs to happen inside of the function - `np.sqrt` will
@@ -47,7 +47,7 @@
import numpy as np
ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
- dd = ds.h.all_data()
+ dd = ds.all_data()
print dd['cell_volume'].in_cgs()
print np.sqrt(dd['cell_volume'].in_cgs())
@@ -70,5 +70,5 @@
ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
- dd = ds.h.all_data()
+ dd = ds.all_data()
dd['root_cell_volume']
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
--- a/doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb Sun Jun 15 19:50:51 2014 -0700
@@ -57,7 +57,7 @@
"source": [
"### Example 1: Simple Time Series\n",
"\n",
- "As a simple example of how we can use this functionality, let's find the min and max of the density as a function of time in this simulation. To do this we use the construction `for ds in ts` where `ds` means \"Dataset\" and `ts` is the \"Time Series\" we just loaded up. For each parameter file, we'll create an object (`dd`) that covers the entire domain. (`all_data` is a shorthand function for this.) We'll then call the `extrema` Derived Quantity, and append the min and max to our extrema outputs."
+ "As a simple example of how we can use this functionality, let's find the min and max of the density as a function of time in this simulation. To do this we use the construction `for ds in ts` where `ds` means \"Dataset\" and `ts` is the \"Time Series\" we just loaded up. For each dataset, we'll create an object (`dd`) that covers the entire domain. (`all_data` is a shorthand function for this.) We'll then call the `extrema` Derived Quantity, and append the min and max to our extrema outputs."
]
},
{
@@ -102,7 +102,7 @@
"\n",
"Let's do something a bit different. Let's calculate the total mass inside halos and outside halos.\n",
"\n",
- "This actually touches a lot of different pieces of machinery in yt. For every parameter file, we will run the halo finder HOP. Then, we calculate the total mass in the domain. Then, for each halo, we calculate the sum of the baryon mass in that halo. We'll keep running tallies of these two things."
+ "This actually touches a lot of different pieces of machinery in yt. For every dataset, we will run the halo finder HOP. Then, we calculate the total mass in the domain. Then, for each halo, we calculate the sum of the baryon mass in that halo. We'll keep running tallies of these two things."
]
},
{
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/amrkdtree_downsampling.py
--- a/doc/source/cookbook/amrkdtree_downsampling.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/amrkdtree_downsampling.py Sun Jun 15 19:50:51 2014 -0700
@@ -20,7 +20,7 @@
print kd.count_cells()
tf = yt.ColorTransferFunction((-30, -22))
-cam = ds.h.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256,
+cam = ds.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256,
tf, volume=kd)
tf.add_layers(4, 0.01, col_bounds=[-27.5, -25.5], colormap='RdBu_r')
cam.snapshot("v1.png", clip_ratio=6.0)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/amrkdtree_to_uniformgrid.py
--- a/doc/source/cookbook/amrkdtree_to_uniformgrid.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/amrkdtree_to_uniformgrid.py Sun Jun 15 19:50:51 2014 -0700
@@ -15,7 +15,7 @@
domain_center = (ds.domain_right_edge - ds.domain_left_edge)/2
#determine the cellsize in the highest refinement level
-cell_size = pf.domain_width/(pf.domain_dimensions*2**lmax)
+cell_size = ds.domain_width/(ds.domain_dimensions*2**lmax)
#calculate the left edge of the new grid
left_edge = domain_center - 512*cell_size
@@ -24,7 +24,7 @@
ncells = 1024
#ask yt for the specified covering grid
-cgrid = pf.h.covering_grid(lmax, left_edge, np.array([ncells,]*3))
+cgrid = ds.covering_grid(lmax, left_edge, np.array([ncells,]*3))
#get a map of the density into the new grid
density_map = cgrid["density"].astype(dtype="float32")
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/average_value.py
--- a/doc/source/cookbook/average_value.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/average_value.py Sun Jun 15 19:50:51 2014 -0700
@@ -5,7 +5,7 @@
field = "temperature" # The field to average
weight = "cell_mass" # The weight for the average
-dd = ds.h.all_data() # This is a region describing the entire box,
+dd = ds.all_data() # This is a region describing the entire box,
# but note it doesn't read anything in yet!
# We now use our 'quantities' call to get the average quantity
average_value = dd.quantities["WeightedAverageQuantity"](field, weight)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/contours_on_slice.py
--- a/doc/source/cookbook/contours_on_slice.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/contours_on_slice.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,13 +1,13 @@
import yt
# first add density contours on a density slice
-pf = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # load data
-p = yt.SlicePlot(pf, "x", "density")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # load data
+p = yt.SlicePlot(ds, "x", "density")
p.annotate_contour("density")
p.save()
# then add temperature contours on the same density slice
-pf = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # load data
-p = yt.SlicePlot(pf, "x", "density")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150") # load data
+p = yt.SlicePlot(ds, "x", "density")
p.annotate_contour("temperature")
-p.save(str(pf)+'_T_contour')
+p.save(str(ds)+'_T_contour')
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/custom_colorbar_tickmarks.ipynb
--- a/doc/source/cookbook/custom_colorbar_tickmarks.ipynb Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/custom_colorbar_tickmarks.ipynb Sun Jun 15 19:50:51 2014 -0700
@@ -22,8 +22,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "pf = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
- "slc = SlicePlot(pf, 'x', 'density')\n",
+ "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+ "slc = SlicePlot(ds, 'x', 'density')\n",
"slc"
],
"language": "python",
@@ -87,4 +87,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/embedded_javascript_animation.ipynb
--- a/doc/source/cookbook/embedded_javascript_animation.ipynb Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/embedded_javascript_animation.ipynb Sun Jun 15 19:50:51 2014 -0700
@@ -51,12 +51,11 @@
"prj.set_figure_size(5)\n",
"prj.set_zlim('density',1e-32,1e-26)\n",
"fig = prj.plots['density'].figure\n",
- "fig.canvas = FigureCanvasAgg(fig)\n",
"\n",
"# animation function. This is called sequentially\n",
"def animate(i):\n",
- " pf = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
- " prj._switch_pf(pf)\n",
+ " ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+ " prj._switch_ds(ds)\n",
"\n",
"# call the animator. blit=True means only re-draw the parts that have changed.\n",
"animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)"
@@ -69,4 +68,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/embedded_webm_animation.ipynb
--- a/doc/source/cookbook/embedded_webm_animation.ipynb Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/embedded_webm_animation.ipynb Sun Jun 15 19:50:51 2014 -0700
@@ -99,12 +99,11 @@
"prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
"prj.set_zlim('density',1e-32,1e-26)\n",
"fig = prj.plots['density'].figure\n",
- "fig.canvas = FigureCanvasAgg(fig)\n",
"\n",
"# animation function. This is called sequentially\n",
"def animate(i):\n",
- " pf = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
- " prj._switch_pf(pf)\n",
+ " ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+ " prj._switch_ds(ds)\n",
"\n",
"# call the animator. blit=True means only re-draw the parts that have changed.\n",
"anim = animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)\n",
@@ -120,4 +119,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/find_clumps.py
--- a/doc/source/cookbook/find_clumps.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/find_clumps.py Sun Jun 15 19:50:51 2014 -0700
@@ -4,7 +4,7 @@
from yt.analysis_modules.level_sets.api import (Clump, find_clumps,
get_lowest_clumps)
-fn = "IsolatedGalaxy/galaxy0030/galaxy0030" # parameter file to load
+fn = "IsolatedGalaxy/galaxy0030/galaxy0030" # dataset to load
# this is the field we look for contours over -- we could do
# this over anything. Other common choices are 'AveragedDensity'
# and 'Dark_Matter_Density'.
@@ -66,7 +66,7 @@
# We can also save the clump object to disk to read in later so we don't have
# to spend a lot of time regenerating the clump objects.
-ds.h.save_object(master_clump, 'My_clumps')
+ds.save_object(master_clump, 'My_clumps')
# Later, we can read in the clump object like so,
master_clump = ds.load_object('My_clumps')
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/free_free_field.py
--- a/doc/source/cookbook/free_free_field.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/free_free_field.py Sun Jun 15 19:50:51 2014 -0700
@@ -70,9 +70,9 @@
yt.add_quantity("FreeFree_Luminosity", function=_FreeFreeLuminosity,
combine_function=_combFreeFreeLuminosity, n_ret=1)
-pf = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
-sphere = pf.sphere(pf.domain_center, (100., "kpc"))
+sphere = ds.sphere(ds.domain_center, (100., "kpc"))
# Print out the total luminosity at 1 keV for the sphere
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/global_phase_plots.py
--- a/doc/source/cookbook/global_phase_plots.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/global_phase_plots.py Sun Jun 15 19:50:51 2014 -0700
@@ -4,7 +4,7 @@
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
# This is an object that describes the entire box
-ad = ds.h.all_data()
+ad = ds.all_data()
# We plot the average VelocityMagnitude (mass-weighted) in our object
# as a function of Density and temperature
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/halo_merger_tree.py
--- a/doc/source/cookbook/halo_merger_tree.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/halo_merger_tree.py Sun Jun 15 19:50:51 2014 -0700
@@ -25,9 +25,9 @@
# DEPENDING ON THE SIZE OF YOUR FILES, THIS CAN BE A LONG STEP
# but because we're writing them out to disk, you only have to do this once.
# ------------------------------------------------------------
-for pf in ts:
- halo_list = FOFHaloFinder(pf)
- i = int(pf.basename[2:])
+for ds in ts:
+ halo_list = FOFHaloFinder(ds)
+ i = int(ds.basename[2:])
halo_list.write_out("FOF/groups_%05i.txt" % i)
halo_list.write_particle_lists("FOF/particles_%05i" % i)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/halo_plotting.py
--- a/doc/source/cookbook/halo_plotting.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/halo_plotting.py Sun Jun 15 19:50:51 2014 -0700
@@ -4,13 +4,13 @@
"""
from yt.mods import * # set up our namespace
-data_pf = load("Enzo_64/RD0006/RedshiftOutput0006")
+data_ds = load("Enzo_64/RD0006/RedshiftOutput0006")
-halo_pf = load('rockstar_halos/halos_0.0.bin')
+halo_ds = load('rockstar_halos/halos_0.0.bin')
-hc - HaloCatalog(halos_pf = halo_pf)
+hc = HaloCatalog(halos_ds = halo_ds)
hc.load()
-p = ProjectionPlot(pf, "x", "density")
+p = ProjectionPlot(data_ds, "x", "density")
p.annotate_halos(hc)
p.save()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/multi_plot_3x2_FRB.py
--- a/doc/source/cookbook/multi_plot_3x2_FRB.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/multi_plot_3x2_FRB.py Sun Jun 15 19:50:51 2014 -0700
@@ -2,11 +2,11 @@
import matplotlib.colorbar as cb
from matplotlib.colors import LogNorm
-fn = "Enzo_64/RD0006/RedshiftOutput0006" # parameter file to load
+fn = "Enzo_64/RD0006/RedshiftOutput0006" # dataset to load
-pf = load(fn) # load data
-v, c = pf.h.find_max("density")
+ds = load(fn) # load data
+v, c = ds.find_max("density")
# set up our Fixed Resolution Buffer parameters: a width, resolution, and center
width = (1.0, 'unitary')
@@ -28,7 +28,7 @@
# over the columns, which will become axes of slicing.
plots = []
for ax in range(3):
- sli = pf.slice(ax, c[ax])
+ sli = ds.slice(ax, c[ax])
frb = sli.to_frb(width, res)
den_axis = axes[ax][0]
temp_axis = axes[ax][1]
@@ -60,4 +60,4 @@
cbar.set_label(t)
# And now we're done!
-fig.savefig("%s_3x2.png" % pf)
+fig.savefig("%s_3x2.png" % ds)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/multi_plot_slice_and_proj.py
--- a/doc/source/cookbook/multi_plot_slice_and_proj.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/multi_plot_slice_and_proj.py Sun Jun 15 19:50:51 2014 -0700
@@ -3,10 +3,10 @@
import matplotlib.colorbar as cb
from matplotlib.colors import LogNorm
-fn = "GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150" # parameter file to load
+fn = "GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150" # dataset to load
orient = 'horizontal'
-pf = load(fn) # load data
+ds = load(fn) # load data
# There's a lot in here:
# From this we get a containing figure, a list-of-lists of axes into which we
@@ -17,9 +17,9 @@
# bw is the base-width in inches, but 4 is about right for most cases.
fig, axes, colorbars = get_multi_plot(3, 2, colorbar=orient, bw = 4)
-slc = pf.slice(2, 0.0, fields=["density","temperature","velocity_magnitude"],
- center=pf.domain_center)
-proj = pf.proj("density", 2, weight_field="density", center=pf.domain_center)
+slc = ds.slice(2, 0.0, fields=["density","temperature","velocity_magnitude"],
+ center=ds.domain_center)
+proj = ds.proj("density", 2, weight_field="density", center=ds.domain_center)
slc_frb = slc.to_frb((1.0, "mpc"), 512)
proj_frb = proj.to_frb((1.0, "mpc"), 512)
@@ -66,4 +66,4 @@
cbar.set_label(t)
# And now we're done!
-fig.savefig("%s_3x2" % pf)
+fig.savefig("%s_3x2" % ds)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/multi_width_image.py
--- a/doc/source/cookbook/multi_width_image.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/multi_width_image.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,12 +1,12 @@
from yt.mods import *
# Load the dataset.
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Create a slice plot for the dataset. With no additional arguments,
# the width will be the size of the domain and the center will be the
# center of the simulation box
-slc = SlicePlot(pf,2,'density')
+slc = SlicePlot(ds,2,'density')
# Create a list of a couple of widths and units.
widths = [(1, 'mpc'),
@@ -19,12 +19,12 @@
slc.set_width(width, unit)
# Write out the image with a unique name.
- slc.save("%s_%010d_%s" % (pf, width, unit))
+ slc.save("%s_%010d_%s" % (ds, width, unit))
zoomFactors = [2,4,5]
# recreate the original slice
-slc = SlicePlot(pf,2,'density')
+slc = SlicePlot(ds,2,'density')
for zoomFactor in zoomFactors:
@@ -32,4 +32,4 @@
slc.zoom(zoomFactor)
# Write out the image with a unique name.
- slc.save("%s_%i" % (pf, zoomFactor))
+ slc.save("%s_%i" % (ds, zoomFactor))
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/multiplot_2x2.py
--- a/doc/source/cookbook/multiplot_2x2.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/multiplot_2x2.py Sun Jun 15 19:50:51 2014 -0700
@@ -3,7 +3,7 @@
from mpl_toolkits.axes_grid1 import AxesGrid
fn = "IsolatedGalaxy/galaxy0030/galaxy0030"
-pf = load(fn) # load data
+ds = load(fn) # load data
fig = plt.figure()
@@ -26,7 +26,7 @@
# Create the plot. Since SlicePlot accepts a list of fields, we need only
# do this once.
-p = SlicePlot(pf, 'z', fields)
+p = SlicePlot(ds, 'z', fields)
p.zoom(2)
# For each plotted field, force the SlicePlot to redraw itself onto the AxesGrid
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/multiplot_2x2_coordaxes_slice.py
--- a/doc/source/cookbook/multiplot_2x2_coordaxes_slice.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/multiplot_2x2_coordaxes_slice.py Sun Jun 15 19:50:51 2014 -0700
@@ -3,7 +3,7 @@
from mpl_toolkits.axes_grid1 import AxesGrid
fn = "IsolatedGalaxy/galaxy0030/galaxy0030"
-pf = load(fn) # load data
+ds = load(fn) # load data
fig = plt.figure()
@@ -27,7 +27,7 @@
for i, (direction, field) in enumerate(zip(cuts, fields)):
# Load the data and create a single plot
- p = SlicePlot(pf, direction, field)
+ p = SlicePlot(ds, direction, field)
p.zoom(40)
# This forces the ProjectionPlot to redraw itself on the AxesGrid axes.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/multiplot_2x2_time_series.py
--- a/doc/source/cookbook/multiplot_2x2_time_series.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/multiplot_2x2_time_series.py Sun Jun 15 19:50:51 2014 -0700
@@ -23,8 +23,8 @@
for i, fn in enumerate(fns):
# Load the data and create a single plot
- pf = load(fn) # load data
- p = ProjectionPlot(pf, 'z', 'density', width=(55, 'Mpccm'))
+ ds = load(fn) # load data
+ p = ProjectionPlot(ds, 'z', 'density', width=(55, 'Mpccm'))
# Ensure the colorbar limits match for all plots
p.set_zlim('density', 1e-4, 1e-2)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/offaxis_projection.py
--- a/doc/source/cookbook/offaxis_projection.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/offaxis_projection.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,7 +1,7 @@
from yt.mods import *
# Load the dataset.
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Choose a center for the render.
c = [0.5, 0.5, 0.5]
@@ -25,10 +25,10 @@
# Create the off axis projection.
# Setting no_ghost to False speeds up the process, but makes a
# slightly lower quality image.
-image = off_axis_projection(pf, c, L, W, Npixels, "density", no_ghost=False)
+image = off_axis_projection(ds, c, L, W, Npixels, "density", no_ghost=False)
# Write out the final image and give it a name
# relating to what our dataset is called.
# We save the log of the values so that the colors do not span
# many orders of magnitude. Try it without and see what happens.
-write_image(np.log10(image), "%s_offaxis_projection.png" % pf)
+write_image(np.log10(image), "%s_offaxis_projection.png" % ds)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/offaxis_projection_colorbar.py
--- a/doc/source/cookbook/offaxis_projection_colorbar.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/offaxis_projection_colorbar.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,8 +1,8 @@
from yt.mods import * # set up our namespace
-fn = "IsolatedGalaxy/galaxy0030/galaxy0030" # parameter file to load
+fn = "IsolatedGalaxy/galaxy0030/galaxy0030" # dataset to load
-pf = load(fn) # load data
+ds = load(fn) # load data
# Now we need a center of our volume to render. Here we'll just use
# 0.5,0.5,0.5, because volume renderings are not periodic.
@@ -31,7 +31,7 @@
# Also note that we set the field which we want to project as "density", but
# really we could use any arbitrary field like "temperature", "metallicity"
# or whatever.
-image = off_axis_projection(pf, c, L, W, Npixels, "density", no_ghost=False)
+image = off_axis_projection(ds, c, L, W, Npixels, "density", no_ghost=False)
# Image is now an NxN array representing the intensities of the various pixels.
# And now, we call our direct image saver. We save the log of the result.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/opaque_rendering.py
--- a/doc/source/cookbook/opaque_rendering.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/opaque_rendering.py Sun Jun 15 19:50:51 2014 -0700
@@ -9,12 +9,12 @@
from yt.mods import *
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# We start by building a transfer function, and initializing a camera.
tf = ColorTransferFunction((-30, -22))
-cam = pf.h.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256, tf)
+cam = ds.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256, tf)
# Now let's add some isocontours, and take a snapshot.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/overplot_grids.py
--- a/doc/source/cookbook/overplot_grids.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/overplot_grids.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,10 +1,10 @@
from yt.mods import *
# Load the dataset.
-pf = load("Enzo_64/DD0043/data0043")
+ds = load("Enzo_64/DD0043/data0043")
# Make a density projection.
-p = ProjectionPlot(pf, "y", "density")
+p = ProjectionPlot(ds, "y", "density")
# Modify the projection
# The argument specifies the region along the line of sight
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/overplot_particles.py
--- a/doc/source/cookbook/overplot_particles.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/overplot_particles.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,10 +1,10 @@
from yt.mods import *
# Load the dataset.
-pf = load("Enzo_64/DD0043/data0043")
+ds = load("Enzo_64/DD0043/data0043")
# Make a density projection.
-p = ProjectionPlot(pf, "y", "density")
+p = ProjectionPlot(ds, "y", "density")
# Modify the projection
# The argument specifies the region along the line of sight
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/profile_with_variance.py
--- a/doc/source/cookbook/profile_with_variance.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/profile_with_variance.py Sun Jun 15 19:50:51 2014 -0700
@@ -3,10 +3,10 @@
from yt.mods import *
# Load the dataset.
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Create a sphere of radius 1000 kpc centered on the max density.
-sphere = pf.sphere("max", (1000, "kpc"))
+sphere = ds.sphere("max", (1000, "kpc"))
# Calculate and store the bulk velocity for the sphere.
bulk_velocity = sphere.quantities['BulkVelocity']()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/rad_velocity.py
--- a/doc/source/cookbook/rad_velocity.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/rad_velocity.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,11 +1,11 @@
from yt.mods import *
import matplotlib.pyplot as plt
-pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
# Get the first sphere
-sphere0 = pf.sphere(pf.domain_center, (500., "kpc"))
+sphere0 = ds.sphere(ds.domain_center, (500., "kpc"))
# Compute the bulk velocity from the cells in this sphere
@@ -13,7 +13,7 @@
# Get the second sphere
-sphere1 = pf.sphere(pf.domain_center, (500., "kpc"))
+sphere1 = ds.sphere(ds.domain_center, (500., "kpc"))
# Set the bulk velocity field parameter
sphere1.set_field_parameter("bulk_velocity", bulk_vel)
@@ -41,4 +41,4 @@
ax.set_ylabel(r"$\mathrm{v_r\ (km/s)}$")
ax.legend(["Without Correction", "With Correction"])
-fig.savefig("%s_profiles.png" % pf)
\ No newline at end of file
+fig.savefig("%s_profiles.png" % ds)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/radial_profile_styles.py
--- a/doc/source/cookbook/radial_profile_styles.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/radial_profile_styles.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,11 +1,11 @@
from yt.mods import *
import matplotlib.pyplot as plt
-pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
# Get a sphere object
-sphere = pf.sphere(pf.domain_center, (500., "kpc"))
+sphere = ds.sphere(ds.domain_center, (500., "kpc"))
# Bin up the data from the sphere into a radial profile
@@ -27,7 +27,7 @@
# Save the default plot
-fig.savefig("density_profile_default.png" % pf)
+fig.savefig("density_profile_default.png" % ds)
# The "dens_plot" object is a list of plot objects. In our case we only have one,
# so we index the list by '0' to get it.
@@ -57,4 +57,4 @@
dens_err_plot = ax.errorbar(rad_profile["Radiuskpc"], rad_profile["density"],
yerr=rad_profile["Density_std"])
-fig.savefig("density_profile_with_errorbars.png")
\ No newline at end of file
+fig.savefig("density_profile_with_errorbars.png")
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/rendering_with_box_and_grids.py
--- a/doc/source/cookbook/rendering_with_box_and_grids.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/rendering_with_box_and_grids.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,11 +1,11 @@
from yt.mods import *
# Load the dataset.
-pf = load("Enzo_64/DD0043/data0043")
+ds = load("Enzo_64/DD0043/data0043")
# Create a data container (like a sphere or region) that
# represents the entire domain.
-dd = pf.h.all_data()
+dd = ds.all_data()
# Get the minimum and maximum densities.
mi, ma = dd.quantities["Extrema"]("density")[0]
@@ -37,25 +37,25 @@
# Create a camera object.
# This object creates the images and
# can be moved and rotated.
-cam = pf.h.camera(c, L, W, Npixels, tf)
+cam = ds.camera(c, L, W, Npixels, tf)
# Create a snapshot.
# The return value of this function could also be accepted, modified (or saved
# for later manipulation) and then written out using write_bitmap.
# clip_ratio applies a maximum to the function, which is set to that value
# times the .std() of the array.
-im = cam.snapshot("%s_volume_rendered.png" % pf, clip_ratio=8.0)
+im = cam.snapshot("%s_volume_rendered.png" % ds, clip_ratio=8.0)
# Add the domain edges, with an alpha blending of 0.3:
nim = cam.draw_domain(im, alpha=0.3)
-nim.write_png('%s_vr_domain.png' % pf)
+nim.write_png('%s_vr_domain.png' % ds)
# Add the grids, colored by the grid level with the algae colormap
nim = cam.draw_grids(im, alpha=0.3, cmap='algae')
-nim.write_png('%s_vr_grids.png' % pf)
+nim.write_png('%s_vr_grids.png' % ds)
# Here we can draw the coordinate vectors on top of the image by processing
# it through the camera. Then save it out.
cam.draw_coordinate_vectors(nim)
-nim.write_png("%s_vr_vectors.png" % pf)
+nim.write_png("%s_vr_vectors.png" % ds)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/save_profiles.py
--- a/doc/source/cookbook/save_profiles.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/save_profiles.py Sun Jun 15 19:50:51 2014 -0700
@@ -2,11 +2,11 @@
import matplotlib.pyplot as plt
import h5py
-pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
# Get a sphere
-sp = pf.sphere(pf.domain_center, (500., "kpc"))
+sp = ds.sphere(ds.domain_center, (500., "kpc"))
# Radial profile from the sphere
@@ -18,11 +18,11 @@
# Write profiles to ASCII file
-rad_profile.write_out("%s_profile.dat" % pf, bin_style="center")
+rad_profile.write_out("%s_profile.dat" % ds, bin_style="center")
# Write profiles to HDF5 file
-rad_profile.write_out_h5("%s_profile.h5" % pf, bin_style="center")
+rad_profile.write_out_h5("%s_profile.h5" % ds, bin_style="center")
# Now we will show how using NumPy, h5py, and Matplotlib the data in these
# files may be plotted.
@@ -42,13 +42,13 @@
ax.set_xlabel(r"$\mathrm{r\ (kpc)}$")
ax.set_ylabel(r"$\mathrm{\rho\ (g\ cm^{-3})}$")
ax.set_title("Density vs. Radius")
-fig1.savefig("%s_dens.png" % pf)
+fig1.savefig("%s_dens.png" % ds)
# Plot temperature from HDF5 file
# Get the file handle
-f = h5py.File("%s_profile.h5" % pf, "r")
+f = h5py.File("%s_profile.h5" % ds, "r")
# Get the radius and temperature arrays from the file handle
@@ -66,4 +66,4 @@
ax.set_xlabel(r"$\mathrm{r\ (kpc)}$")
ax.set_ylabel(r"$\mathrm{T\ (K)}$")
ax.set_title("temperature vs. Radius")
-fig2.savefig("%s_temp.png" % pf)
+fig2.savefig("%s_temp.png" % ds)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/show_hide_axes_colorbar.py
--- a/doc/source/cookbook/show_hide_axes_colorbar.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/show_hide_axes_colorbar.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,8 +1,8 @@
from yt.mods import *
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
-slc = SlicePlot(pf, "x", "density")
+slc = SlicePlot(ds, "x", "density")
slc.save("default_sliceplot.png")
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_contour_in_slice.py
--- a/doc/source/cookbook/simple_contour_in_slice.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_contour_in_slice.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,10 +1,10 @@
from yt.mods import *
# Load the data file.
-pf = load("Sedov_3d/sedov_hdf5_chk_0002")
+ds = load("Sedov_3d/sedov_hdf5_chk_0002")
# Make a traditional slice plot.
-sp = SlicePlot(pf,"x","density")
+sp = SlicePlot(ds,"x","density")
# Overlay the slice plot with thick red contours of density.
sp.annotate_contour("density", ncont=3, clim=(1e-2,1e-1), label=True,
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_off_axis_projection.py
--- a/doc/source/cookbook/simple_off_axis_projection.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_off_axis_projection.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,12 +1,12 @@
from yt.mods import *
# Load the dataset.
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Create a 15 kpc radius sphere, centered on the domain center. Note that this
# sphere is very small compared to the size of our final plot, and it has a
# non-axially aligned L vector.
-sp = pf.sphere("center", (15.0, "kpc"))
+sp = ds.sphere("center", (15.0, "kpc"))
# Get the angular momentum vector for the sphere.
L = sp.quantities["AngularMomentumVector"]()
@@ -14,5 +14,5 @@
print "Angular momentum vector: {0}".format(L)
# Create an OffAxisSlicePlot on the object with the L vector as its normal
-p = OffAxisProjectionPlot(pf, L, "density", sp.center, (25, "kpc"))
+p = OffAxisProjectionPlot(ds, L, "density", sp.center, (25, "kpc"))
p.save()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_pdf.py
--- a/doc/source/cookbook/simple_pdf.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_pdf.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,10 +1,10 @@
from yt.mods import *
# Load the dataset.
-pf = load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175")
+ds = load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175")
# Create a data object that represents the whole box.
-ad = pf.h.all_data()
+ad = ds.all_data()
# This is identical to the simple phase plot, except we supply
# the fractional=True keyword to divide the profile data by the sum.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_phase.py
--- a/doc/source/cookbook/simple_phase.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_phase.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,10 +1,10 @@
from yt.mods import *
# Load the dataset.
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Create a sphere of radius 100 kpc in the center of the domain.
-my_sphere = pf.sphere("c", (100.0, "kpc"))
+my_sphere = ds.sphere("c", (100.0, "kpc"))
# Create a PhasePlot object.
# Setting weight to None will calculate a sum.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_profile.py
--- a/doc/source/cookbook/simple_profile.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_profile.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,12 +1,12 @@
from yt.mods import *
# Load the dataset.
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Create a 1D profile within a sphere of radius 100 kpc
# of the average temperature and average velocity_x
# vs. density, weighted by mass.
-sphere = pf.sphere("c", (100., "kpc"))
+sphere = ds.sphere("c", (100., "kpc"))
plot = ProfilePlot(sphere, "density", ["temperature", "velocity_x"],
weight_field="cell_mass")
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_projection.py
--- a/doc/source/cookbook/simple_projection.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_projection.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,10 +1,10 @@
from yt.mods import *
# Load the dataset.
-pf = load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175")
+ds = load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175")
# Create projections of the density-weighted mean density.
-ProjectionPlot(pf, "x", "density", weight_field = "density").save()
-ProjectionPlot(pf, "y", "density", weight_field = "density").save()
-ProjectionPlot(pf, "z", "density", weight_field = "density").save()
+ProjectionPlot(ds, "x", "density", weight_field = "density").save()
+ProjectionPlot(ds, "y", "density", weight_field = "density").save()
+ProjectionPlot(ds, "z", "density", weight_field = "density").save()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_radial_profile.py
--- a/doc/source/cookbook/simple_radial_profile.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_radial_profile.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,10 +1,10 @@
from yt.mods import *
# Load the dataset.
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Create a sphere of radius 100 kpc in the center of the box.
-my_sphere = pf.sphere("c", (100.0, "kpc"))
+my_sphere = ds.sphere("c", (100.0, "kpc"))
# Create a profile of the average density vs. radius.
plot = ProfilePlot(my_sphere, "Radiuskpc", "density",
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_slice.py
--- a/doc/source/cookbook/simple_slice.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_slice.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,9 +1,9 @@
from yt.mods import *
# Load the dataset.
-pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
# Create density slices in all three axes.
-SlicePlot(pf, 'x', "density", width = (800.0, 'kpc')).save()
-SlicePlot(pf, 'y', "density", width = (800.0, 'kpc')).save()
-SlicePlot(pf, 'z', "density", width = (800.0, 'kpc')).save()
+SlicePlot(ds, 'x', "density", width = (800.0, 'kpc')).save()
+SlicePlot(ds, 'y', "density", width = (800.0, 'kpc')).save()
+SlicePlot(ds, 'z', "density", width = (800.0, 'kpc')).save()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_slice_matplotlib_example.py
--- a/doc/source/cookbook/simple_slice_matplotlib_example.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_slice_matplotlib_example.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,10 +1,10 @@
from yt.mods import *
# Load the dataset.
-pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
# Create a slice object
-slc = SlicePlot(pf,'x','density',width=(800.0,'kpc'))
+slc = SlicePlot(ds,'x','density',width=(800.0,'kpc'))
# Get a reference to the matplotlib axes object for the plot
ax = slc.plots['density'].axes
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_slice_with_multiple_fields.py
--- a/doc/source/cookbook/simple_slice_with_multiple_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_slice_with_multiple_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,8 +1,8 @@
from yt.mods import *
# Load the dataset
-pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
# Create density slices of several fields along the x axis
-SlicePlot(pf, 'x', ['density','temperature','pressure','vorticity_squared'],
+SlicePlot(ds, 'x', ['density','temperature','pressure','vorticity_squared'],
width = (800.0, 'kpc')).save()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simple_volume_rendering.py
--- a/doc/source/cookbook/simple_volume_rendering.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simple_volume_rendering.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,11 +1,11 @@
from yt.mods import *
# Load the dataset.
-pf = load("Enzo_64/DD0043/data0043")
+ds = load("Enzo_64/DD0043/data0043")
# Create a data container (like a sphere or region) that
# represents the entire domain.
-dd = pf.h.all_data()
+dd = ds.all_data()
# Get the minimum and maximum densities.
mi, ma = dd.quantities["Extrema"]("density")[0]
@@ -37,11 +37,11 @@
# Create a camera object.
# This object creates the images and
# can be moved and rotated.
-cam = pf.h.camera(c, L, W, Npixels, tf)
+cam = ds.camera(c, L, W, Npixels, tf)
# Create a snapshot.
# The return value of this function could also be accepted, modified (or saved
# for later manipulation) and then written out using write_bitmap.
# clip_ratio applies a maximum to the function, which is set to that value
# times the .std() of the array.
-cam.snapshot("%s_volume_rendered.png" % pf, clip_ratio=8.0)
+cam.snapshot("%s_volume_rendered.png" % ds, clip_ratio=8.0)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/simulation_analysis.py
--- a/doc/source/cookbook/simulation_analysis.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/simulation_analysis.py Sun Jun 15 19:50:51 2014 -0700
@@ -8,8 +8,8 @@
# Calculate and store extrema for all datasets.
all_storage = {}
-for my_storage, pf in my_sim.piter(storage=all_storage):
- all_data = pf.h.all_data()
+for my_storage, ds in my_sim.piter(storage=all_storage):
+ all_data = ds.all_data()
my_extrema = all_data.quantities['Extrema']('density')
# Save to storage so we can get at it afterward.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/streamlines.py
--- a/doc/source/cookbook/streamlines.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/streamlines.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,14 +1,14 @@
from yt.mods import *
from yt.visualization.api import Streamlines
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
c = np.array([0.5]*3)
N = 100
scale = 1.0
pos_dx = np.random.random((N,3))*scale-scale/2.
pos = c+pos_dx
-streamlines = Streamlines(pf,pos,'velocity_x', 'velocity_y', 'velocity_z', length=1.0)
+streamlines = Streamlines(ds,pos,'velocity_x', 'velocity_y', 'velocity_z', length=1.0)
streamlines.integrate_through_volume()
import matplotlib.pylab as pl
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/streamlines_isocontour.py
--- a/doc/source/cookbook/streamlines_isocontour.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/streamlines_isocontour.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,14 +1,14 @@
from yt.mods import *
from yt.visualization.api import Streamlines
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
c = np.array([0.5]*3)
N = 30
-scale = 15.0/pf['kpc']
+scale = 15.0/ds['kpc']
pos_dx = np.random.random((N,3))*scale-scale/2.
pos = c+pos_dx
-streamlines = Streamlines(pf,pos,'velocity_x', 'velocity_y', 'velocity_z', length=1.0)
+streamlines = Streamlines(ds,pos,'velocity_x', 'velocity_y', 'velocity_z', length=1.0)
streamlines.integrate_through_volume()
import matplotlib.pylab as pl
@@ -21,8 +21,8 @@
ax.plot3D(stream[:,0], stream[:,1], stream[:,2], alpha=0.1)
-sphere = pf.sphere("max", (1.0, "mpc"))
-surface = pf.surface(sphere, "density", 1e-24)
+sphere = ds.sphere("max", (1.0, "mpc"))
+surface = ds.surface(sphere, "density", 1e-24)
colors = apply_colormap(np.log10(surface["temperature"]), cmap_name="hot")
p3dc = Poly3DCollection(surface.triangles, linewidth=0.0)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/sum_mass_in_sphere.py
--- a/doc/source/cookbook/sum_mass_in_sphere.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/sum_mass_in_sphere.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,10 +1,10 @@
from yt.mods import *
# Load the dataset.
-pf = load("Enzo_64/DD0029/data0029")
+ds = load("Enzo_64/DD0029/data0029")
# Create a 1 Mpc radius sphere, centered on the max density.
-sp = pf.sphere("max", (1.0, "mpc"))
+sp = ds.sphere("max", (1.0, "mpc"))
# Use the TotalQuantity derived quantity to sum up the
# values of the cell_mass and particle_mass fields
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/surface_plot.py
--- a/doc/source/cookbook/surface_plot.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/surface_plot.py Sun Jun 15 19:50:51 2014 -0700
@@ -3,9 +3,9 @@
import matplotlib.pyplot as plt
from yt.mods import *
-pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
-sphere = pf.sphere("max", (1.0, "mpc"))
-surface = pf.surface(sphere, "density", 1e-25)
+ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+sphere = ds.sphere("max", (1.0, "mpc"))
+surface = ds.surface(sphere, "density", 1e-25)
colors = apply_colormap(np.log10(surface["temperature"]), cmap_name="hot")
fig = plt.figure()
@@ -17,4 +17,4 @@
ax.auto_scale_xyz(surface.vertices[0,:], surface.vertices[1,:], surface.vertices[2,:])
ax.set_aspect(1.0)
-plt.savefig("%s_Surface.png" % pf)
+plt.savefig("%s_Surface.png" % ds)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/thin_slice_projection.py
--- a/doc/source/cookbook/thin_slice_projection.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/thin_slice_projection.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,7 +1,7 @@
from yt.mods import *
# Load the dataset.
-pf = load("Enzo_64/DD0030/data0030")
+ds = load("Enzo_64/DD0030/data0030")
# Make a projection that is the full width of the domain,
# but only 10 Mpc in depth. This is done by creating a
@@ -9,24 +9,24 @@
# as a data_source for the projection.
# Center on the domain center
-center = pf.domain_center
+center = ds.domain_center
# First make the left and right corner of the region based
# on the full domain.
-left_corner = pf.domain_left_edge
-right_corner = pf.domain_right_edge
+left_corner = ds.domain_left_edge
+right_corner = ds.domain_right_edge
# Now adjust the size of the region along the line of sight (x axis).
-depth = pf.quan(10.0,'Mpc')
+depth = ds.quan(10.0,'Mpc')
left_corner[0] = center[0] - 0.5 * depth
right_corner[0] = center[0] + 0.5 * depth
# Create the region
-region = pf.region(center, left_corner, right_corner)
+region = ds.region(center, left_corner, right_corner)
# Create a density projection and supply the region we have just created.
# Only cells within the region will be included in the projection.
# Try with another data container, like a sphere or disk.
-plot = ProjectionPlot(pf, "x", "density", weight_field="density",
+plot = ProjectionPlot(ds, "x", "density", weight_field="density",
data_source=region)
# Save the image with the keyword.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/time_series.py
--- a/doc/source/cookbook/time_series.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/time_series.py Sun Jun 15 19:50:51 2014 -0700
@@ -18,15 +18,15 @@
storage = {}
# We use the piter() method here so that this can be run in parallel.
-# Alternately, you could just iterate "for pf in ts:" and directly append to
+# Alternately, you could just iterate "for ds in ts:" and directly append to
# times and entrs.
-for sto, pf in ts.piter(storage=storage):
- sphere = pf.sphere("c", (100., "kpc"))
+for sto, ds in ts.piter(storage=storage):
+ sphere = ds.sphere("c", (100., "kpc"))
temp = sphere["temperature"]/keV
dens = sphere["density"]/(m_p*mue)
mgas = sphere["cell_mass"]
entr = (temp*(dens**mtt)*mgas).sum()/mgas.sum()
- sto.result = (pf.current_time, entr)
+ sto.result = (ds.current_time, entr)
times = []
entrs = []
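The comment in this hunk notes that the ``piter(storage=...)`` pattern can be
replaced by a plain serial loop. A minimal sketch of that alternative, assuming
``ts``, ``keV``, ``m_p``, ``mue``, and ``mtt`` are defined earlier in the
script as in the hunks above:

.. code-block:: python

   # Serial alternative to ts.piter(storage=...): iterate the time series
   # directly and append to ordinary Python lists.
   times = []
   entrs = []
   for ds in ts:
       sphere = ds.sphere("c", (100., "kpc"))
       temp = sphere["temperature"]/keV
       dens = sphere["density"]/(m_p*mue)
       mgas = sphere["cell_mass"]
       times.append(ds.current_time)
       entrs.append((temp*(dens**mtt)*mgas).sum()/mgas.sum())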
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/time_series_profiles.py
--- a/doc/source/cookbook/time_series_profiles.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/time_series_profiles.py Sun Jun 15 19:50:51 2014 -0700
@@ -10,14 +10,14 @@
plot_specs = []
# Loop over each dataset in the time-series.
-for pf in es:
+for ds in es:
# Create a data container to hold the whole dataset.
- ad = pf.h.all_data()
+ ad = ds.all_data()
# Create a 1d profile of density vs. temperature.
profiles.append(create_profile(ad, ["density"],
fields=["temperature"]))
# Add labels and linestyles.
- labels.append("z = %.2f" % pf.current_redshift)
+ labels.append("z = %.2f" % ds.current_redshift)
plot_specs.append(dict(linewidth=2, alpha=0.7))
# Create the profile plot from the list of profiles.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/velocity_vectors_on_slice.py
--- a/doc/source/cookbook/velocity_vectors_on_slice.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/velocity_vectors_on_slice.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,9 +1,9 @@
from yt.mods import *
# Load the dataset.
-pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
-p = SlicePlot(pf, "x", "density")
+p = SlicePlot(ds, "x", "density")
# Draw a velocity vector every 16 pixels.
p.annotate_velocity(factor = 16)
p.save()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/cookbook/zoomin_frames.py
--- a/doc/source/cookbook/zoomin_frames.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/cookbook/zoomin_frames.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,16 +1,16 @@
from yt.mods import * # set up our namespace
-fn = "IsolatedGalaxy/galaxy0030/galaxy0030" # parameter file to load
+fn = "IsolatedGalaxy/galaxy0030/galaxy0030" # dataset to load
n_frames = 5 # This is the number of frames to make -- below, you can see how
# this is used.
min_dx = 40 # This is the minimum size in smallest_dx of our last frame.
# Usually it should be set to something like 400, but for THIS
# dataset, we actually don't have that great of resolution.
-pf = load(fn) # load data
+ds = load(fn) # load data
frame_template = "frame_%05i" # Template for frame filenames
-p = SlicePlot(pf, "z", "density") # Add our slice, along z
+p = SlicePlot(ds, "z", "density") # Add our slice, along z
p.annotate_contour("temperature") # We'll contour in temperature
# What we do now is a bit fun. "enumerate" returns a tuple for every item --
@@ -20,7 +20,7 @@
# maximum and the number of items to generate. It returns 10^power of each
# item it generates.
for i,v in enumerate(np.logspace(
- 0, np.log10(pf.index.get_smallest_dx()*min_dx), n_frames)):
+ 0, np.log10(ds.index.get_smallest_dx()*min_dx), n_frames)):
# We set our width as necessary for this frame ...
p.set_width(v, 'unitary')
# ... and we save!
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/developing/developing.rst
--- a/doc/source/developing/developing.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/developing/developing.rst Sun Jun 15 19:50:51 2014 -0700
@@ -440,7 +440,7 @@
+ Hard-coding parameter names that are the same as those in Enzo. The
following translation table should be of some help. Note that the
parameters are now properties on a Dataset subclass: you access them
- like ``pf.refine_by`` .
+ like ``ds.refine_by`` .
- ``RefineBy `` => `` refine_by``
- ``TopGridRank `` => `` dimensionality``
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/developing/testing.rst Sun Jun 15 19:50:51 2014 -0700
@@ -77,8 +77,8 @@
document, as in some cases they belong to other packages. However, a few come
in handy:
- * :func:`yt.testing.fake_random_pf` provides the ability to create a random
- parameter file, with several fields and divided into several different
+ * :func:`yt.testing.fake_random_ds` provides the ability to create a random
+ dataset, with several fields and divided into several different
grids, that can be operated on.
* :func:`yt.testing.assert_equal` can operate on arrays.
* :func:`yt.testing.assert_almost_equal` can operate on arrays and accepts a
@@ -101,7 +101,7 @@
accept no arguments. These should ``yield`` a set of values of the form
``function``, ``arguments``. For example ``yield assert_equal, 1.0, 1.0``
would evaluate that 1.0 equaled 1.0.
- #. Use ``fake_random_pf`` to test on parameter files, and be sure to test for
+ #. Use ``fake_random_ds`` to test on datasets, and be sure to test for
several combinations of ``nproc``, so that domain decomposition can be
tested as well.
#. Test multiple combinations of options by using the
@@ -209,12 +209,12 @@
class MaximumValue(AnswerTestingTest):
_type_name = "ParentageRelationships"
_attrs = ("field",)
- def __init__(self, pf_fn, field):
- super(MaximumValue, self).__init__(pf_fn)
+ def __init__(self, ds_fn, field):
+ super(MaximumValue, self).__init__(ds_fn)
self.field = field
def run(self):
- v, c = self.pf.h.find_max(self.field)
+ v, c = self.ds.find_max(self.field)
result = np.empty(4, dtype="float64")
result[0] = v
result[1:] = c
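As a companion to the unit-test guidance in the hunks above, here is a minimal
sketch of a yield-style test built on ``fake_random_ds``; the test name, grid
size, and ``nprocs`` values are illustrative only:

.. code-block:: python

   from yt.testing import fake_random_ds, assert_equal

   def test_all_data_size():
       # Try several domain decompositions, as recommended above.
       for nprocs in [1, 2, 4, 8]:
           ds = fake_random_ds(16, nprocs=nprocs)
           ad = ds.all_data()
           # Yield the comparison function and its arguments.
           yield assert_equal, ad["density"].size, 16**3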
@@ -266,7 +266,7 @@
* This routine should test a number of different fields and data objects.
* The test routine itself should be decorated with
- ``@requires_pf(file_name)`` This decorate can accept the argument
+      ``@requires_ds(file_name)``. This decorator can accept the argument
       ``big_data`` if this data is too big to run all the time.
* There are ``small_patch_amr`` and ``big_patch_amr`` routines that
@@ -291,7 +291,7 @@
The current version of the gold standard can be found in the variable
``_latest`` inside ``yt/utilities/answer_testing/framework.py``. As of
the time of this writing, it is ``gold007``. Note that the name of the
-suite of results is now disconnected from the parameter file's name, so you
+suite of results is now disconnected from the dataset's name, so you
can upload multiple outputs with the same name and not collide.
To upload answers, you **must** have the package boto installed, and you
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/examining/loading_data.rst Sun Jun 15 19:50:51 2014 -0700
@@ -13,7 +13,7 @@
Enzo data is fully supported and cared for by Matthew Turk. To load an Enzo
dataset, you can use the ``load`` command provided by ``yt.mods`` and supply to
-it the parameter file name. This would be the name of the output file, and it
+it the dataset name. This would be the name of the output file, and it
contains no extension. For instance, if you have the following files:
.. code-block:: none
@@ -32,7 +32,7 @@
.. code-block:: python
from yt.mods import *
- pf = load("DD0010/data0010")
+ ds = load("DD0010/data0010")
.. rubric:: Caveats
@@ -79,7 +79,7 @@
.. code-block:: python
from yt.mods import *
- pf = load("pltgmlcs5600")
+ ds = load("pltgmlcs5600")
.. _loading-flash-data:
@@ -102,7 +102,7 @@
.. code-block:: python
from yt.mods import *
- pf = load("cosmoSim_coolhdf5_chk_0026")
+ ds = load("cosmoSim_coolhdf5_chk_0026")
If you have a FLASH particle file that was created at the same time as
a plotfile or checkpoint file (therefore having particle data
@@ -112,7 +112,7 @@
.. code-block:: python
from yt.mods import *
- pf = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
+ ds = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
.. rubric:: Caveats
@@ -143,7 +143,7 @@
.. code-block:: python
from yt.mods import *
- pf = load("output_00007/info_00007.txt")
+ ds = load("output_00007/info_00007.txt")
yt will attempt to guess the fields in the file. You may also specify a list
of fields by supplying the ``fields`` keyword in your call to ``load``.
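A sketch of what supplying the ``fields`` keyword might look like; the field
names below are placeholders and should match what the RAMSES output actually
contains:

.. code-block:: python

   from yt.mods import *
   # Field names here are illustrative; adjust them to your simulation.
   ds = load("output_00007/info_00007.txt",
             fields=["Density", "x-velocity", "y-velocity",
                     "z-velocity", "Pressure"])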
@@ -163,7 +163,7 @@
.. code-block:: python
from yt.mods import *
- pf = load("snapshot_061.hdf5")
+ ds = load("snapshot_061.hdf5")
However, yt cannot detect raw-binary Gadget data, and so you must specify the
format as being Gadget:
@@ -171,7 +171,7 @@
.. code-block:: python
from yt.mods import *
- pf = GadgetDataset("snapshot_061")
+ ds = GadgetDataset("snapshot_061")
.. _particle-bbox:
@@ -194,7 +194,7 @@
.. code-block:: python
- pf = GadgetDataset("snap_004",
+ ds = GadgetDataset("snap_004",
unit_base = {'length': ('kpc', 1.0)},
bounding_box = [[-600.0, 600.0], [-600.0, 600.0], [-600.0, 600.0]])
@@ -318,7 +318,7 @@
('NallHW', 6, 'i'),
('unused', 16, 'i'))
-These items will all be accessible inside the object ``pf.parameters``, which
+These items will all be accessible inside the object ``ds.parameters``, which
is a dictionary. You can add combinations of new items, specified in the same
way, or alternately other types of headers. The other string keys defined are
``pad32``, ``pad64``, ``pad128``, and ``pad256`` each of which corresponds to
@@ -348,7 +348,7 @@
If you are running a cosmology simulation, yt will be able to guess the units
with some reliability. However, if you are not and you do not specify a
-parameter file, yt will not be able to and will use the defaults of length
+dataset, yt will not be able to and will use the defaults of length
being 1.0 Mpc/h (comoving), velocity being in cm/s, and mass being in 10^10
Msun/h. You can specify alternate units by supplying the ``unit_base`` keyword
argument of this form:
@@ -462,7 +462,7 @@
from yt.mods import *
- pf = load("/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d")
+ ds = load("/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d")
.. _loading_athena_data:
@@ -480,7 +480,7 @@
.. code-block:: python
from yt.mods import *
- pf = load("kh.0010.vtk")
+ ds = load("kh.0010.vtk")
The filename corresponds to the file on SMR level 0, whereas if there
are multiple levels the corresponding files will be picked up
@@ -495,7 +495,7 @@
.. code-block:: python
from yt.mods import *
- pf = load("id0/kh.0010.vtk")
+ ds = load("id0/kh.0010.vtk")
which will pick up all of the files in the different ``id*`` directories for
the entire dataset.
@@ -507,7 +507,7 @@
.. code-block:: python
from yt.mods import *
- pf = load("id0/cluster_merger.0250.vtk",
+ ds = load("id0/cluster_merger.0250.vtk",
parameters={"length_unit":(1.0,"Mpc"),
"time_unit"(1.0,"Myr"),
"mass_unit":(1.0e14,"Msun")})
@@ -810,9 +810,9 @@
data = dict(Density = arr)
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
- pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
+ ds = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
-will create ``yt``-native parameter file ``pf`` that will treat your array as
+will create ``yt``-native dataset ``ds`` that will treat your array as
density field in cubic domain of 3 Mpc edge size (3 * 3.08e24 cm) and
simultaneously divide the domain into 12 chunks, so that you can take advantage
of the underlying parallelism.
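Once created, the in-memory dataset behaves like any other ``ds``. A brief
sketch, reusing the ``data`` dict, ``arr``, and ``bbox`` from the example
above:

.. code-block:: python

   ds = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
   ad = ds.all_data()
   print ad["Density"].max()
   SlicePlot(ds, "z", "Density").save()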
@@ -832,7 +832,7 @@
particle_position_y = posy_arr,
particle_position_z = posz_arr)
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
- pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
+ ds = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
where in this example the particle position fields have been assigned. ``number_of_particles`` must be the same size as the particle
arrays. If no particle arrays are supplied then ``number_of_particles`` is assumed to be zero.
@@ -848,7 +848,7 @@
Generic AMR Data
----------------
-It is possible to create native ``yt`` parameter file from Python's dictionary
+It is possible to create a native ``yt`` dataset from a Python dictionary
that describes a set of rectangular patches of data of possibly varying
resolution.
@@ -872,7 +872,7 @@
for g in grid_data:
g["density"] = np.random.random(g["dimensions"]) * 2**g["level"]
- pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)
+ ds = load_amr_grids(grid_data, [32, 32, 32], 1.0)
Particle fields are supported by adding 1-dimensional arrays and
setting the ``number_of_particles`` key to each ``grid``'s dict:
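The snippet that follows in the full document falls outside this hunk; a
minimal sketch of what such grid dicts might look like, mirroring the
``load_uniform_grid`` particle example above (array names and counts are
illustrative):

.. code-block:: python

   for g in grid_data:
       npart = 100
       g["number_of_particles"] = npart
       # 1-D arrays, one entry per particle.
       g["particle_position_x"] = np.random.random(npart)
       g["particle_position_y"] = np.random.random(npart)
       g["particle_position_z"] = np.random.random(npart)

   ds = load_amr_grids(grid_data, [32, 32, 32], 1.0)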
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/examining/low_level_inspection.rst
--- a/doc/source/examining/low_level_inspection.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/examining/low_level_inspection.rst Sun Jun 15 19:50:51 2014 -0700
@@ -42,13 +42,13 @@
multiple a field by this attribute.
* ``child_indices``: a mask of booleans, where False indicates no finer data
is available. This is essentially the inverse of ``child_mask``.
- * ``child_index_mask``: a mask of indices into the ``pf.index.grids`` array of the
+ * ``child_index_mask``: a mask of indices into the ``ds.index.grids`` array of the
child grids.
* ``LeftEdge``: the left edge, in native code coordinates, of this grid
* ``RightEdge``: the right edge, in native code coordinates, of this grid
* ``dds``: the width of a cell in this grid
* ``id``: the id (not necessarily the index) of this grid. Defined such that
- subtracting the property ``_id_offset`` gives the index into ``pf.index.grids``.
+ subtracting the property ``_id_offset`` gives the index into ``ds.index.grids``.
* ``NumberOfParticles``: the number of particles in this grid
* ``OverlappingSiblings``: a list of sibling grids that this grid overlaps
with. Likely only defined for Octree-based codes.
@@ -64,7 +64,7 @@
.. code-block:: python
- g = pf.index.grids[1043]
+ g = ds.index.grids[1043]
g2 = g.Children[1].Children[0]
print g2.LeftEdge
@@ -84,7 +84,7 @@
.. code-block:: python
- g = pf.index.grids[1043]
+ g = ds.index.grids[1043]
print g["density"]
print g["density"].min()
@@ -93,8 +93,8 @@
.. code-block:: python
- g = pf.index.grids[1043]
- rho = pf.h.io.pop(g, "density")
+ g = ds.index.grids[1043]
+ rho = ds.index.io.pop(g, "density")
This field will be the raw data found in the file.
@@ -117,7 +117,7 @@
.. code-block:: python
- gs, gi = pf.h.find_point((0.5, 0.6, 0.9))
+ gs, gi = ds.find_point((0.5, 0.6, 0.9))
for g in gs:
print g.Level, g.LeftEdge, g.RightEdge
@@ -143,8 +143,8 @@
.. code-block:: python
from yt.mods import *
- pf = load('Enzo_64/DD0043/data0043')
- all_data_level_0 = pf.covering_grid(level=0, left_edge=[0,0.0,0.0],
+ ds = load('Enzo_64/DD0043/data0043')
+ all_data_level_0 = ds.covering_grid(level=0, left_edge=[0,0.0,0.0],
dims=[64, 64, 64])
Note that we can also get the same result and rely on the dataset to know
@@ -152,8 +152,8 @@
.. code-block:: python
- all_data_level_0 = pf.covering_grid(level=0, left_edge=[0,0.0,0.0],
- dims=pf.domain_dimensions)
+ all_data_level_0 = ds.covering_grid(level=0, left_edge=[0,0.0,0.0],
+ dims=ds.domain_dimensions)
We can now access our underlying data at the lowest level by specifying what
:ref:`field <field-list>` we want to examine:
@@ -184,8 +184,8 @@
.. code-block:: python
- all_data_level_2 = pf.covering_grid(level=2, left_edge=[0,0.0,0.0],
- dims=pf.domain_dimensions * 2**2)
+ all_data_level_2 = ds.covering_grid(level=2, left_edge=[0,0.0,0.0],
+ dims=ds.domain_dimensions * 2**2)
And let's see what the density is at the central location:
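The snippet that answers this lies outside the hunk; a sketch of what it looks
like, given that the level-2 covering grid spans the full domain at 256^3
cells, so the central cell sits at index [128, 128, 128]:

.. code-block:: python

   print all_data_level_2['density'][128, 128, 128]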
@@ -209,8 +209,8 @@
.. code-block:: python
- all_data_level_2_s = pf.smoothed_covering_grid(2, [0.0, 0.0, 0.0],
- pf.domain_dimensions * 2**2)
+ all_data_level_2_s = ds.smoothed_covering_grid(2, [0.0, 0.0, 0.0],
+ ds.domain_dimensions * 2**2)
print all_data_level_2_s['density'].shape
(256, 256, 256)
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/help/index.rst
--- a/doc/source/help/index.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/help/index.rst Sun Jun 15 19:50:51 2014 -0700
@@ -107,7 +107,7 @@
data_objects/analyzer_objects.py:class SlicePlotDataset(AnalysisTask):
data_objects/analyzer_objects.py: from yt.visualization.api import SlicePlot
data_objects/analyzer_objects.py: self.SlicePlot = SlicePlot
- data_objects/analyzer_objects.py: slc = self.SlicePlot(pf, self.axis, self.field, center = self.center)
+ data_objects/analyzer_objects.py: slc = self.SlicePlot(ds, self.axis, self.field, center = self.center)
...
You can now followup on this and open up the files that have references to
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/reference/api/api.rst Sun Jun 15 19:50:51 2014 -0700
@@ -732,5 +732,5 @@
~yt.testing.assert_rel_equal
~yt.testing.amrspace
- ~yt.testing.fake_random_pf
+ ~yt.testing.fake_random_ds
~yt.testing.expand_keywords
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/reference/changelog.rst
--- a/doc/source/reference/changelog.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/reference/changelog.rst Sun Jun 15 19:50:51 2014 -0700
@@ -157,7 +157,7 @@
considerably. It's now easier than ever to load data from disk. If you know
how to get volumetric data into Python, you can use either the
``load_uniform_grid`` function or the ``load_amr_grids`` function to create an
-in-memory parameter file that yt can analyze.
+in-memory dataset that yt can analyze.
yt now supports the Athena code.
@@ -200,7 +200,7 @@
* Sidecar files containing expensive derived fields can be written and
implicitly loaded from.
* GDF files, which are portable yt-specific representations of full
- simulations, can be created from any parameter file. Work is underway on
+ simulations, can be created from any dataset. Work is underway on
a pure C library that can be linked against to load these files into
simulations.
@@ -228,7 +228,7 @@
* Many, many improvements to PlotWindow. If you're still using
PlotCollection, check out ``ProjectionPlot``, ``SlicePlot``,
``OffAxisProjectionPlot`` and ``OffAxisSlicePlot``.
- * PlotWindow can now accept a timeseries instead of a parameter file.
+ * PlotWindow can now accept a timeseries instead of a dataset.
* Many fixes for 1D and 2D data, especially in FLASH datasets.
* Vast improvements to the particle file handling for FLASH datasets.
* Particles can now be created ex nihilo with CICSample_3.
@@ -277,7 +277,7 @@
* Many improvements to Time Series analysis:
* EnzoSimulation now integrates with TimeSeries analysis!
* Auto-parallelization of analysis and parallel iteration
- * Memory usage when iterating over parameter files reduced substantially
+ * Memory usage when iterating over datasets reduced substantially
* Many improvements to Reason, the yt GUI
* Addition of "yt reason" as a startup command
* Keyboard shortcuts in projection & slice mode: z, Z, x, X for zooms,
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/reference/command-line.rst
--- a/doc/source/reference/command-line.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/reference/command-line.rst Sun Jun 15 19:50:51 2014 -0700
@@ -22,7 +22,7 @@
The other option, which is shorthand for "iyt plus dataset loading" is to use
the command-line tool (see :ref:`command-line`) with the ``load`` subcommand
-and to specify a parameter file. For instance:
+and to specify a dataset. For instance:
.. code-block:: bash
@@ -34,8 +34,8 @@
yt load DD0030/DD0030
-This will spawn ``iyt``, but the parameter file given on the command line will
-already be in the namespace as ``pf``. With interactive mode, you can use the
+This will spawn ``iyt``, but the dataset given on the command line will
+already be in the namespace as ``ds``. With interactive mode, you can use the
``pylab`` module to interactively plot.
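A rough sketch of such an interactive session; the prompt and plot calls are
illustrative, not prescribed output:

.. code-block:: python

   # After `yt load DD0030/DD0030`, the dataset is already in scope as `ds`:
   >>> ds.print_stats()
   >>> slc = SlicePlot(ds, "z", "density")
   >>> slc.save()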
Command-line Functions
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/reference/configuration.rst
--- a/doc/source/reference/configuration.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/reference/configuration.rst Sun Jun 15 19:50:51 2014 -0700
@@ -15,11 +15,11 @@
[yt]
loglevel = 1
- maximumstoredpfs = 10000
+ maximumstoreddatasets = 10000
This configuration file would set the logging threshold much lower, enabling
much more voluminous output from yt. Additionally, it increases the number of
-parameter files tracked between instantiations of yt.
+datasets tracked between instantiations of yt.
Configuration Options At Runtime
--------------------------------
@@ -44,8 +44,8 @@
ytcfg["yt", "loglevel"] = "1"
from yt.mods import *
- pf = load("my_data0001")
- pf.h.print_stats()
+ ds = load("my_data0001")
+ ds.print_stats()
This has the same effect as setting ``loglevel = 1`` in the configuration file.
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/reference/faq/index.rst
--- a/doc/source/reference/faq/index.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/reference/faq/index.rst Sun Jun 15 19:50:51 2014 -0700
@@ -88,13 +88,13 @@
.. code-block:: python
- pf = load("my_data")
- pf.h
- pf.field_info['density'].take_log = False
+ ds = load("my_data")
+ ds.index
+ ds.field_info['density'].take_log = False
From that point forward, data products such as slices, projections, etc., would
-be presented in linear space. Note that you have to instantiate pf.h before you
-can access pf.field info.
+be presented in linear space. Note that you have to instantiate ds.index before
+you can access ds.field_info.
.. _faq-handling-log-vs-linear-space:
@@ -109,8 +109,8 @@
.. code-block:: python
- pf = load("my_data")
- dd = pf.h.all_data()
+ ds = load("my_data")
+ dd = ds.all_data()
potential_field = dd["PotentialField"]
The same applies to fields you might derive inside your ``yt`` script
@@ -119,8 +119,8 @@
.. code-block:: python
- print pf.field_list
- print pf.derived_field_list
+ print ds.field_list
+ print ds.derived_field_list
.. _faq-old-data:
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/reference/field_list.rst
--- a/doc/source/reference/field_list.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/reference/field_list.rst Sun Jun 15 19:50:51 2014 -0700
@@ -9,15 +9,15 @@
everywhere, "Enzo" fields in Enzo datasets, "Orion" fields in Orion datasets,
and so on.
-Try using the ``pf.field_list`` and ``pf.derived_field_list`` to view the
+Try using the ``ds.field_list`` and ``ds.derived_field_list`` to view the
native and derived fields available for your dataset respectively. For example
to display the native fields in alphabetical order:
.. notebook-cell::
from yt.mods import *
- pf = load("Enzo_64/DD0043/data0043")
- for i in sorted(pf.field_list):
+ ds = load("Enzo_64/DD0043/data0043")
+ for i in sorted(ds.field_list):
print i
.. note:: Universal fields will be overridden by a code-specific field.
@@ -320,13 +320,13 @@
.. code-block:: python
def _Baryon_Overdensity(field, data):
- if data.pf.has_key('omega_baryon_now'):
- omega_baryon_now = data.pf['omega_baryon_now']
+ if data.ds.has_key('omega_baryon_now'):
+ omega_baryon_now = data.ds['omega_baryon_now']
else:
omega_baryon_now = 0.0441
return data['density'] / (omega_baryon_now * rho_crit_now *
- (data.pf.hubble_constant**2) *
- ((1+data.pf.current_redshift)**3))
+ (data.ds.hubble_constant**2) *
+ ((1+data.ds.current_redshift)**3))
**Convert Function Source**
@@ -545,7 +545,7 @@
.. code-block:: python
def _ComovingDensity(field, data):
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data["density"]/ef
@@ -704,9 +704,9 @@
.. code-block:: python
def _DensityPerturbation(field, data):
- rho_bar = rho_crit_now * data.pf.omega_matter * \
- data.pf.hubble_constant**2 * \
- (1.0 + data.pf.current_redshift)**3
+ rho_bar = rho_crit_now * data.ds.omega_matter * \
+ data.ds.hubble_constant**2 * \
+ (1.0 + data.ds.current_redshift)**3
return ((data['Matter_Density'] - rho_bar) / rho_bar)
@@ -743,7 +743,7 @@
def _DivV(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -754,11 +754,11 @@
ds = div_fac * data['dx'].flat[0]
f = data["x-velocity"][sl_right,1:-1,1:-1]/ds
f -= data["x-velocity"][sl_left ,1:-1,1:-1]/ds
- if data.pf.dimensionality > 1:
+ if data.ds.dimensionality > 1:
ds = div_fac * data['dy'].flat[0]
f += data["y-velocity"][1:-1,sl_right,1:-1]/ds
f -= data["y-velocity"][1:-1,sl_left ,1:-1]/ds
- if data.pf.dimensionality > 2:
+ if data.ds.dimensionality > 2:
ds = div_fac * data['dz'].flat[0]
f += data["z-velocity"][1:-1,1:-1,sl_right]/ds
f -= data["z-velocity"][1:-1,1:-1,sl_left ]/ds
@@ -814,7 +814,7 @@
else :
mw = mh
try:
- gammam1 = data.pf["Gamma"] - 1.0
+ gammam1 = data.ds["Gamma"] - 1.0
except:
gammam1 = 5./3. - 1.0
return kboltz * data["Temperature"] / \
@@ -1072,8 +1072,8 @@
.. code-block:: python
def _Convert_Overdensity(data):
- return 1.0 / (rho_crit_now * data.pf.hubble_constant**2 *
- (1+data.pf.current_redshift)**3)
+ return 1.0 / (rho_crit_now * data.ds.hubble_constant**2 *
+ (1+data.ds.current_redshift)**3)
ParticleAngularMomentumX
@@ -1534,7 +1534,7 @@
def _Pressure(field, data):
"""M{(Gamma-1.0)*rho*E}"""
- return (data.pf["Gamma"] - 1.0) * \
+ return (data.ds["Gamma"] - 1.0) * \
data["density"] * data["ThermalEnergy"]
@@ -1844,7 +1844,7 @@
of subtracting them)
"""
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -1853,7 +1853,7 @@
sl_right = slice(2,None,None)
div_fac = 2.0
new_field = np.zeros(data["x-velocity"].shape)
- if data.pf.dimensionality > 1:
+ if data.ds.dimensionality > 1:
dvydx = (data["y-velocity"][sl_right,1:-1,1:-1] -
data["y-velocity"][sl_left,1:-1,1:-1]) \
/ (div_fac*data["dx"].flat[0])
@@ -1862,7 +1862,7 @@
/ (div_fac*data["dy"].flat[0])
new_field[1:-1,1:-1,1:-1] += (dvydx + dvxdy)**2.0
del dvydx, dvxdy
- if data.pf.dimensionality > 2:
+ if data.ds.dimensionality > 2:
dvzdy = (data["z-velocity"][1:-1,sl_right,1:-1] -
data["z-velocity"][1:-1,sl_left,1:-1]) \
/ (div_fac*data["dy"].flat[0])
@@ -1916,7 +1916,7 @@
to determine if refinement should occur.
"""
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -1925,7 +1925,7 @@
sl_right = slice(2,None,None)
div_fac = 2.0
new_field = np.zeros(data["x-velocity"].shape)
- if data.pf.dimensionality > 1:
+ if data.ds.dimensionality > 1:
dvydx = (data["y-velocity"][sl_right,1:-1,1:-1] -
data["y-velocity"][sl_left,1:-1,1:-1]) \
/ (div_fac*data["dx"].flat[0])
@@ -1934,7 +1934,7 @@
/ (div_fac*data["dy"].flat[0])
new_field[1:-1,1:-1,1:-1] += (dvydx + dvxdy)**2.0
del dvydx, dvxdy
- if data.pf.dimensionality > 2:
+ if data.ds.dimensionality > 2:
dvzdy = (data["z-velocity"][1:-1,sl_right,1:-1] -
data["z-velocity"][1:-1,sl_left,1:-1]) \
/ (div_fac*data["dy"].flat[0])
@@ -1989,7 +1989,7 @@
(dvx + dvz)^2 ]^(0.5) / c_sound
"""
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -1998,7 +1998,7 @@
sl_right = slice(2,None,None)
div_fac = 2.0
new_field = np.zeros(data["x-velocity"].shape)
- if data.pf.dimensionality > 1:
+ if data.ds.dimensionality > 1:
dvydx = (data["y-velocity"][sl_right,1:-1,1:-1] -
data["y-velocity"][sl_left,1:-1,1:-1]) \
/ (div_fac)
@@ -2007,7 +2007,7 @@
/ (div_fac)
new_field[1:-1,1:-1,1:-1] += (dvydx + dvxdy)**2.0
del dvydx, dvxdy
- if data.pf.dimensionality > 2:
+ if data.ds.dimensionality > 2:
dvzdy = (data["z-velocity"][1:-1,sl_right,1:-1] -
data["z-velocity"][1:-1,sl_left,1:-1]) \
/ (div_fac)
@@ -2045,10 +2045,10 @@
.. code-block:: python
def _SoundSpeed(field, data):
- if data.pf["EOSType"] == 1:
+ if data.ds["EOSType"] == 1:
return np.ones(data["density"].shape, dtype='float64') * \
- data.pf["EOSSoundSpeed"]
- return ( data.pf["Gamma"]*data["Pressure"] / \
+ data.ds["EOSSoundSpeed"]
+ return ( data.ds["Gamma"]*data["Pressure"] / \
data["density"] )**(1.0/2.0)
@@ -2632,7 +2632,7 @@
def _VorticitySquared(field, data):
mylog.debug("Generating vorticity on %s", data)
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -2760,7 +2760,7 @@
def _VorticityX(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -2798,7 +2798,7 @@
def _VorticityY(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -2836,7 +2836,7 @@
def _VorticityZ(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -2872,9 +2872,9 @@
.. code-block:: python
def _DensityPerturbation(field, data):
- rho_bar = rho_crit_now * data.pf.omega_matter * \
- data.pf.hubble_constant**2 * \
- (1.0 + data.pf.current_redshift)**3
+ rho_bar = rho_crit_now * data.ds.omega_matter * \
+ data.ds.hubble_constant**2 * \
+ (1.0 + data.ds.current_redshift)**3
return ((data['Matter_Density'] - rho_bar) / rho_bar)
@@ -2883,22 +2883,22 @@
.. code-block:: python
def _convertConvergence(data):
- if not data.pf.parameters.has_key('cosmology_calculator'):
- data.pf.parameters['cosmology_calculator'] = Cosmology(
- HubbleConstantNow=(100.*data.pf.hubble_constant),
- OmegaMatterNow=data.pf.omega_matter, OmegaLambdaNow=data.pf.omega_lambda)
+ if not data.ds.parameters.has_key('cosmology_calculator'):
+ data.ds.parameters['cosmology_calculator'] = Cosmology(
+ HubbleConstantNow=(100.*data.ds.hubble_constant),
+ OmegaMatterNow=data.ds.omega_matter, OmegaLambdaNow=data.ds.omega_lambda)
# observer to lens
- DL = data.pf.parameters['cosmology_calculator'].AngularDiameterDistance(
- data.pf.parameters['observer_redshift'], data.pf.current_redshift)
+ DL = data.ds.parameters['cosmology_calculator'].AngularDiameterDistance(
+ data.ds.parameters['observer_redshift'], data.ds.current_redshift)
# observer to source
- DS = data.pf.parameters['cosmology_calculator'].AngularDiameterDistance(
- data.pf.parameters['observer_redshift'], data.pf.parameters['lensing_source_redshift'])
+ DS = data.ds.parameters['cosmology_calculator'].AngularDiameterDistance(
+ data.ds.parameters['observer_redshift'], data.ds.parameters['lensing_source_redshift'])
# lens to source
- DLS = data.pf.parameters['cosmology_calculator'].AngularDiameterDistance(
- data.pf.current_redshift, data.pf.parameters['lensing_source_redshift'])
- return (((DL * DLS) / DS) * (1.5e14 * data.pf.omega_matter *
- (data.pf.hubble_constant / speed_of_light_cgs)**2 *
- (1 + data.pf.current_redshift)))
+ DLS = data.ds.parameters['cosmology_calculator'].AngularDiameterDistance(
+ data.ds.current_redshift, data.ds.parameters['lensing_source_redshift'])
+ return (((DL * DLS) / DS) * (1.5e14 * data.ds.omega_matter *
+ (data.ds.hubble_constant / speed_of_light_cgs)**2 *
+ (1 + data.ds.current_redshift)))
XRayEmissivity
@@ -3303,7 +3303,7 @@
def _gradDensityX(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -3338,7 +3338,7 @@
def _gradDensityY(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -3373,7 +3373,7 @@
def _gradDensityZ(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -3428,7 +3428,7 @@
def _gradPressureX(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -3463,7 +3463,7 @@
def _gradPressureY(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -3498,7 +3498,7 @@
def _gradPressureZ(field, data):
# We need to set up stencils
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -3786,7 +3786,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3805,7 +3805,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3824,7 +3824,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3843,7 +3843,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3862,7 +3862,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3881,7 +3881,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3900,7 +3900,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3919,7 +3919,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3938,7 +3938,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3957,7 +3957,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3976,7 +3976,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -3995,7 +3995,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -4014,7 +4014,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -4033,7 +4033,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -4052,7 +4052,7 @@
def _SpeciesComovingDensity(field, data):
sp = field.name.split("_")[0] + "_Density"
- ef = (1.0 + data.pf.current_redshift)**3.0
+ ef = (1.0 + data.ds.current_redshift)**3.0
return data[sp] / ef
@@ -4514,17 +4514,17 @@
def _H_NumberDensity(field, data):
field_data = np.zeros(data["density"].shape,
dtype=data["density"].dtype)
- if data.pf.parameters["MultiSpecies"] == 0:
+ if data.ds.parameters["MultiSpecies"] == 0:
field_data += data["density"] * \
- data.pf.parameters["HydrogenFractionByMass"]
- if data.pf.parameters["MultiSpecies"] > 0:
+ data.ds.parameters["HydrogenFractionByMass"]
+ if data.ds.parameters["MultiSpecies"] > 0:
field_data += data["HI_Density"]
field_data += data["HII_Density"]
- if data.pf.parameters["MultiSpecies"] > 1:
+ if data.ds.parameters["MultiSpecies"] > 1:
field_data += data["HM_Density"]
field_data += data["H2I_Density"]
field_data += data["H2II_Density"]
- if data.pf.parameters["MultiSpecies"] > 2:
+ if data.ds.parameters["MultiSpecies"] > 2:
field_data += data["HDI_Density"] / 2.0
return field_data
@@ -4820,24 +4820,24 @@
# but I am not currently implementing that
fieldData = np.zeros(data["density"].shape,
dtype = data["density"].dtype)
- if data.pf["MultiSpecies"] == 0:
+ if data.ds["MultiSpecies"] == 0:
if data.has_field_parameter("mu"):
mu = data.get_field_parameter("mu")
else:
mu = 0.6
fieldData += data["density"] / mu
- if data.pf["MultiSpecies"] > 0:
+ if data.ds["MultiSpecies"] > 0:
fieldData += data["HI_Density"] / 1.0
fieldData += data["HII_Density"] / 1.0
fieldData += data["HeI_Density"] / 4.0
fieldData += data["HeII_Density"] / 4.0
fieldData += data["HeIII_Density"] / 4.0
fieldData += data["Electron_Density"] / 1.0
- if data.pf["MultiSpecies"] > 1:
+ if data.ds["MultiSpecies"] > 1:
fieldData += data["HM_Density"] / 1.0
fieldData += data["H2I_Density"] / 2.0
fieldData += data["H2II_Density"] / 2.0
- if data.pf["MultiSpecies"] > 2:
+ if data.ds["MultiSpecies"] > 2:
fieldData += data["DI_Density"] / 2.0
fieldData += data["DII_Density"] / 2.0
fieldData += data["HDI_Density"] / 3.0
@@ -4862,7 +4862,7 @@
.. code-block:: python
def _ParticleAge(field, data):
- current_time = data.pf.current_time
+ current_time = data.ds.current_time
return (current_time - data["creation_time"])
@@ -4991,8 +4991,8 @@
def _StarAge(field, data):
star_age = np.zeros(data['StarCreationTimeYears'].shape)
with_stars = data['StarCreationTimeYears'] > 0
- star_age[with_stars] = data.pf.time_units['years'] * \
- data.pf.current_time - \
+ star_age[with_stars] = data.ds.time_units['years'] * \
+ data.ds.current_time - \
data['StarCreationTimeYears'][with_stars]
return star_age
@@ -5020,7 +5020,7 @@
.. code-block:: python
def _ConvertEnzoTimeYears(data):
- return data.pf.time_units['years']
+ return data.ds.time_units['years']
StarDynamicalTimeYears
@@ -5042,7 +5042,7 @@
.. code-block:: python
def _ConvertEnzoTimeYears(data):
- return data.pf.time_units['years']
+ return data.ds.time_units['years']
StarMetallicity
@@ -5078,13 +5078,13 @@
.. code-block:: python
def _ThermalEnergy(field, data):
- if data.pf["HydroMethod"] == 2:
+ if data.ds["HydroMethod"] == 2:
return data["TotalEnergy"]
- if data.pf["DualEnergyFormalism"]:
+ if data.ds["DualEnergyFormalism"]:
return data["GasEnergy"]
- if data.pf["HydroMethod"] in (4,6):
+ if data.ds["HydroMethod"] in (4,6):
return data["TotalEnergy"] - 0.5*(
data["x-velocity"]**2.0
+ data["y-velocity"]**2.0
@@ -5300,7 +5300,7 @@
def _dmpdensity(field, data):
blank = np.zeros(data.ActiveDimensions, dtype='float64')
if data["particle_position_x"].size == 0: return blank
- if 'creation_time' in data.pf.field_info:
+ if 'creation_time' in data.ds.field_info:
filter = data['creation_time'] <= 0.0
if not filter.any(): return blank
num = filter.sum()
@@ -5637,7 +5637,7 @@
"""M{(Gamma-1.0)*e, where e is thermal energy density
NB: this will need to be modified for radiation
"""
- return (data.pf["Gamma"] - 1.0)*data["ThermalEnergy"]
+ return (data.ds["Gamma"] - 1.0)*data["ThermalEnergy"]
**Convert Function Source**
@@ -5655,7 +5655,7 @@
.. code-block:: python
def _Temperature(field,data):
- return (data.pf["Gamma"]-1.0)*data.pf["mu"]*mh*data["ThermalEnergy"]/(kboltz*data["density"])
+ return (data.ds["Gamma"]-1.0)*data.ds["mu"]*mh*data["ThermalEnergy"]/(kboltz*data["density"])
**Convert Function Source**
@@ -5677,7 +5677,7 @@
implemented by Stella, but this isn't how it's called, so I'll
leave that commented out for now.
"""
- #if data.pf["DualEnergyFormalism"]:
+ #if data.ds["DualEnergyFormalism"]:
# return data["GasEnergy"]
#else:
return data["TotalEnergy"] - 0.5 * data["density"] * (
@@ -6205,7 +6205,7 @@
.. code-block:: python
def _Bx(fields, data):
- factor = GetMagRescalingFactor(data.pf)
+ factor = GetMagRescalingFactor(data.ds)
return data['magx']*factor
@@ -6224,7 +6224,7 @@
.. code-block:: python
def _By(fields, data):
- factor = GetMagRescalingFactor(data.pf)
+ factor = GetMagRescalingFactor(data.ds)
return data['magy']*factor
@@ -6243,7 +6243,7 @@
.. code-block:: python
def _Bz(fields, data):
- factor = GetMagRescalingFactor(data.pf)
+ factor = GetMagRescalingFactor(data.ds)
return data['magz']*factor
@@ -6353,7 +6353,7 @@
.. code-block:: python
def _DivB(fields, data):
- factor = GetMagRescalingFactor(data.pf)
+ factor = GetMagRescalingFactor(data.ds)
return data['divb']*factor
@@ -6906,14 +6906,14 @@
except:
pass
try:
- return data["Pressure"] / (data.pf["Gamma"] - 1.0) / data["density"]
+ return data["Pressure"] / (data.ds["Gamma"] - 1.0) / data["density"]
except:
pass
if data.has_field_parameter("mu") :
mu = data.get_field_parameter("mu")
else:
mu = 0.6
- return kboltz*data["density"]*data["Temperature"]/(mu*mh) / (data.pf["Gamma"] - 1.0)
+ return kboltz*data["density"]*data["Temperature"]/(mu*mh) / (data.ds["Gamma"] - 1.0)
**Convert Function Source**
@@ -7306,13 +7306,13 @@
.. code-block:: python
def _gasenergy(field, data) :
- if "pressure" in data.pf.field_info:
- return data["pressure"]/(data.pf["Gamma"]-1.0)/data["density"]
+ if "pressure" in data.ds.field_info:
+ return data["pressure"]/(data.ds["Gamma"]-1.0)/data["density"]
else:
eint = data["total_energy"] - 0.5*(data["momentum_x"]**2 +
data["momentum_y"]**2 +
data["momentum_z"]**2)/data["density"]
- if "cell_centered_B_x" in data.pf.field_info:
+ if "cell_centered_B_x" in data.ds.field_info:
eint -= 0.5*(data["cell_centered_B_x"]**2 +
data["cell_centered_B_y"]**2 +
data["cell_centered_B_z"]**2)
@@ -7339,17 +7339,17 @@
.. code-block:: python
def _pressure(field, data) :
- if "pressure" in data.pf.field_info:
+ if "pressure" in data.ds.field_info:
return data["pressure"]
else:
eint = data["total_energy"] - 0.5*(data["momentum_x"]**2 +
data["momentum_y"]**2 +
data["momentum_z"]**2)/data["density"]
- if "cell_centered_B_x" in data.pf.field_info:
+ if "cell_centered_B_x" in data.ds.field_info:
eint -= 0.5*(data["cell_centered_B_x"]**2 +
data["cell_centered_B_y"]**2 +
data["cell_centered_B_z"]**2)
- return eint*(data.pf["Gamma"]-1.0)
+ return eint*(data.ds["Gamma"]-1.0)
**Convert Function Source**
@@ -7393,7 +7393,7 @@
.. code-block:: python
def _xvelocity(field, data):
- if "velocity_x" in data.pf.field_info:
+ if "velocity_x" in data.ds.field_info:
return data["velocity_x"]
else:
return data["momentum_x"]/data["density"]
@@ -7418,7 +7418,7 @@
.. code-block:: python
def _yvelocity(field, data):
- if "velocity_y" in data.pf.field_info:
+ if "velocity_y" in data.ds.field_info:
return data["velocity_y"]
else:
return data["momentum_y"]/data["density"]
@@ -7443,7 +7443,7 @@
.. code-block:: python
def _zvelocity(field, data):
- if "velocity_z" in data.pf.field_info:
+ if "velocity_z" in data.ds.field_info:
return data["velocity_z"]
else:
return data["momentum_z"]/data["density"]
@@ -7521,7 +7521,7 @@
when radiation is accounted for.
"""
- return (data.pf["Gamma"] - 1.0) * data["ThermalEnergy"]
+ return (data.ds["Gamma"] - 1.0) * data["ThermalEnergy"]
**Convert Function Source**
@@ -7539,7 +7539,7 @@
.. code-block:: python
def _temperature(field, data):
- return ((data.pf["Gamma"] - 1.0) * data.pf["mu"] * mh *
+ return ((data.ds["Gamma"] - 1.0) * data.ds["mu"] * mh *
data["ThermalEnergy"] / (kboltz * data["density"]))
@@ -7564,7 +7564,7 @@
now.
"""
- #if data.pf["DualEnergyFormalism"]:
+ #if data.ds["DualEnergyFormalism"]:
# return data["Gas_Energy"]
#else:
return data["Total_Energy"] - 0.5 * data["density"] * (
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/reference/python_introduction.rst
--- a/doc/source/reference/python_introduction.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/reference/python_introduction.rst Sun Jun 15 19:50:51 2014 -0700
@@ -34,7 +34,7 @@
called on it. ``dir()`` will return the available commands and objects that
can be directly called, and ``dir(something)`` will return information about
all the commands that ``something`` provides. This probably sounds a bit
-opaque, but it will become clearer with time -- it's also probably helpful to
+opaque, but it will become clearer with time -- it's also probably helpful to
call ``help`` on any or all of the objects we create during this orientation.
To start up Python, at your prompt simply type:
@@ -136,7 +136,7 @@
lists and tuples. These two objects are very similar, in that they are
collections of arbitrary data types. We'll only look at collections of strings
and numbers for now, but these can be filled with arbitrary datatypes
-(including objects that yt provides, like spheres, parameter files, grids, and
+(including objects that yt provides, like spheres, datasets, grids, and
so on.) The easiest way to create a list is to simply construct one::
>>> my_list = []
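For instance, a list can mix numbers, strings, and yt objects freely (a small
sketch; the dataset path is just a placeholder):

.. code-block:: python

   >>> from yt.mods import *
   >>> ds = load("DD0010/data0010")   # any dataset path works here
   >>> my_list = [42, "a string", ds, ds.sphere("c", (1.0, "mpc"))]
   >>> len(my_list)
   4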
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/reference/sharing_data.rst
--- a/doc/source/reference/sharing_data.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/reference/sharing_data.rst Sun Jun 15 19:50:51 2014 -0700
@@ -78,8 +78,8 @@
.. code-block:: python
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- proj = pf.proj(0, "density", weight="density")
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ proj = ds.proj(0, "density", weight="density")
proj.hub_upload()
Here is an example of uploading a slice:
@@ -87,8 +87,8 @@
.. code-block:: python
from yt.mods import *
- pf = load("JHK-DD0030/galaxy0030")
- sl = pf.slice(0, 0.5, fields=["density"])
+ ds = load("JHK-DD0030/galaxy0030")
+ sl = ds.slice(0, 0.5, fields=["density"])
sl.hub_upload()
Uploading Notebooks
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
--- a/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb Sun Jun 15 19:50:51 2014 -0700
@@ -48,7 +48,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "pf = load('Enzo_64/DD0043/data0043')"
+ "ds = load('Enzo_64/DD0043/data0043')"
],
"language": "python",
"metadata": {},
@@ -65,7 +65,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "tfh = TransferFunctionHelper(pf)"
+ "tfh = TransferFunctionHelper(ds)"
],
"language": "python",
"metadata": {},
@@ -83,7 +83,7 @@
"collapsed": false,
"input": [
"# Build a transfer function that is a multivariate gaussian in Density\n",
- "tfh = TransferFunctionHelper(pf)\n",
+ "tfh = TransferFunctionHelper(ds)\n",
"tfh.set_field('temperature')\n",
"tfh.set_log(True)\n",
"tfh.set_bounds()\n",
@@ -123,7 +123,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "tfh = TransferFunctionHelper(pf)\n",
+ "tfh = TransferFunctionHelper(ds)\n",
"tfh.set_field('temperature')\n",
"tfh.set_bounds()\n",
"tfh.set_log(True)\n",
@@ -150,10 +150,10 @@
"collapsed": false,
"input": [
"L = [-0.1, -1.0, -0.1]\n",
- "c = pf.domain_center\n",
- "W = 1.5*pf.domain_width\n",
+ "c = ds.domain_center\n",
+ "W = 1.5*ds.domain_width\n",
"Npixels = 512 \n",
- "cam = pf.h.camera(c, L, W, Npixels, tfh.tf, fields=['temperature'],\n",
+ "cam = ds.camera(c, L, W, Npixels, tfh.tf, fields=['temperature'],\n",
" north_vector=[1.,0.,0.], steady_north=True, \n",
" sub_samples=5, no_ghost=False)\n",
"\n",
@@ -179,4 +179,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/_cb_docstrings.inc
--- a/doc/source/visualizing/_cb_docstrings.inc Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/_cb_docstrings.inc Sun Jun 15 19:50:51 2014 -0700
@@ -9,8 +9,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'), center='max')
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'), center='max')
slc.annotate_arrow((0.5, 0.5, 0.5), (1, 'kpc'))
slc.save()
@@ -28,8 +28,8 @@
from yt.mods import *
from yt.analysis_modules.level_sets.api import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- data_source = pf.disk([0.5, 0.5, 0.5], [0., 0., 1.],
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ data_source = ds.disk([0.5, 0.5, 0.5], [0., 0., 1.],
(8., 'kpc'), (1., 'kpc'))
c_min = 10**np.floor(np.log10(data_source['density']).min() )
@@ -40,7 +40,7 @@
find_clumps(master_clump, c_min, c_max, 2.0)
leaf_clumps = get_lowest_clumps(master_clump)
- prj = ProjectionPlot(pf, 2, 'density', center='c', width=(20,'kpc'))
+ prj = ProjectionPlot(ds, 2, 'density', center='c', width=(20,'kpc'))
prj.annotate_clumps(leaf_clumps)
prj.save('clumps')
@@ -59,8 +59,8 @@
.. python-script::
from yt.mods import *
- pf = load("Enzo_64/DD0043/data0043")
- s = SlicePlot(pf, "x", ["density"], center="max")
+ ds = load("Enzo_64/DD0043/data0043")
+ s = SlicePlot(ds, "x", ["density"], center="max")
s.annotate_contour("temperature")
s.save()
@@ -77,8 +77,8 @@
.. python-script::
from yt.mods import *
- pf = load("Enzo_64/DD0043/data0043")
- s = OffAxisSlicePlot(pf, [1,1,0], ["density"], center="c")
+ ds = load("Enzo_64/DD0043/data0043")
+ s = OffAxisSlicePlot(ds, [1,1,0], ["density"], center="c")
s.annotate_cquiver('cutting_plane_velocity_x', 'cutting_plane_velocity_y', 10)
s.zoom(1.5)
s.save()
@@ -97,8 +97,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'), center='max')
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'), center='max')
slc.annotate_grids()
slc.save()
@@ -120,13 +120,13 @@
.. python-script::
from yt.mods import *
- data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
- halos_pf = load('rockstar_halos/halos_0.0.bin')
+ data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+ halos_ds = load('rockstar_halos/halos_0.0.bin')
- hc = HaloCatalog(halos_pf=halos_pf)
+ hc = HaloCatalog(halos_ds=halos_ds)
hc.create()
- prj = ProjectionPlot(data_pf, 'z', 'density')
+ prj = ProjectionPlot(data_ds, 'z', 'density')
prj.annotate_halos(hc)
prj.save()
@@ -142,8 +142,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = ProjectionPlot(pf, 'z', 'density', center='m', width=(10, 'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = ProjectionPlot(ds, 'z', 'density', center='m', width=(10, 'kpc'))
p.annotate_image_line((0.3, 0.4), (0.8, 0.9), plot_args={'linewidth':5})
p.save()
@@ -158,8 +158,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = ProjectionPlot(pf, 'z', 'density', center='m', width=(10, 'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = ProjectionPlot(ds, 'z', 'density', center='m', width=(10, 'kpc'))
p.annotate_line([-6, -4, -2, 0, 2, 4, 6], [3.6, 1.6, 0.4, 0, 0.4, 1.6, 3.6], plot_args={'linewidth':5})
p.save()
@@ -181,10 +181,10 @@
.. python-script::
from yt.mods import *
- pf = load("MHDSloshing/virgo_low_res.0054.vtk",
+ ds = load("MHDSloshing/virgo_low_res.0054.vtk",
parameters={"TimeUnits":3.1557e13, "LengthUnits":3.0856e24,
"DensityUnits":6.770424595218825e-27})
- p = ProjectionPlot(pf, 'z', 'density', center='c', width=(300, 'kpc'))
+ p = ProjectionPlot(ds, 'z', 'density', center='c', width=(300, 'kpc'))
p.annotate_magnetic_field()
p.save()
@@ -201,8 +201,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- s = SlicePlot(pf, 'z', 'density', center='m', width=(10, 'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ s = SlicePlot(ds, 'z', 'density', center='m', width=(10, 'kpc'))
s.annotate_marker([0.5, 0.5, 0.5], plot_args={'s':10000})
s.save()
@@ -224,8 +224,8 @@
.. python-script::
from yt.mods import *
- pf = load("Enzo_64/DD0043/data0043")
- p = ProjectionPlot(pf, "x", "density", center='m', width=(10, 'Mpc'))
+ ds = load("Enzo_64/DD0043/data0043")
+ p = ProjectionPlot(ds, "x", "density", center='m', width=(10, 'Mpc'))
p.annotate_particles((10, 'Mpc'))
p.save()
@@ -242,8 +242,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = ProjectionPlot(pf, 'z', 'density', center='m', width=(10, 'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = ProjectionPlot(ds, 'z', 'density', center='m', width=(10, 'kpc'))
p.annotate_point([0.5, 0.496, 0.5], "What's going on here?", text_args={'size':'xx-large', 'color':'w'})
p.save()
@@ -262,8 +262,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = ProjectionPlot(pf, 'z', 'density', center=[0.5, 0.5, 0.5],
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = ProjectionPlot(ds, 'z', 'density', center=[0.5, 0.5, 0.5],
weight_field='density', width=(20, 'kpc'))
p.annotate_quiver('velocity_x', 'velocity_y', 16)
p.save()
@@ -281,8 +281,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = ProjectionPlot(pf, 'z', 'density', center='c', width=(20, 'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = ProjectionPlot(ds, 'z', 'density', center='c', width=(20, 'kpc'))
p.annotate_sphere([0.5, 0.5, 0.5], (2, 'kpc'), {'fill':True})
p.save()
@@ -303,8 +303,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- s = SlicePlot(pf, 'z', 'density', center='c', width=(20, 'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ s = SlicePlot(ds, 'z', 'density', center='c', width=(20, 'kpc'))
s.annotate_streamlines('velocity_x', 'velocity_y')
s.save()
@@ -322,8 +322,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- s = SlicePlot(pf, 'z', 'density', center='m', width=(10, 'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ s = SlicePlot(ds, 'z', 'density', center='m', width=(10, 'kpc'))
s.annotate_text((0.5, 0.5), 'Sample text', text_args={'size':'xx-large', 'color':'w'})
s.save()
@@ -338,8 +338,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = ProjectionPlot(pf, 'z', 'density', center='c', width=(20, 'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = ProjectionPlot(ds, 'z', 'density', center='c', width=(20, 'kpc'))
p.annotate_title('Density plot')
p.save()
@@ -362,7 +362,7 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = SlicePlot(pf, 'z', 'density', center='m', width=(10, 'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = SlicePlot(ds, 'z', 'density', center='m', width=(10, 'kpc'))
p.annotate_velocity()
p.save()
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/callbacks.rst
--- a/doc/source/visualizing/callbacks.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/callbacks.rst Sun Jun 15 19:50:51 2014 -0700
@@ -23,7 +23,7 @@
.. code-block:: python
- slc = SlicePlot(pf,0,'density')
+ slc = SlicePlot(ds,0,'density')
slc.annotate_title('This is a Density plot')
would add the :func:`title` callback to the plot object. All of the
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/colormaps/cmap_images.py
--- a/doc/source/visualizing/colormaps/cmap_images.py Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/colormaps/cmap_images.py Sun Jun 15 19:50:51 2014 -0700
@@ -2,10 +2,10 @@
import matplotlib.cm as cm
# Load the dataset.
-pf = load(os.path.join(ytcfg.get("yt", "test_data_dir"), "IsolatedGalaxy/galaxy0030/galaxy0030"))
+ds = load(os.path.join(ytcfg.get("yt", "test_data_dir"), "IsolatedGalaxy/galaxy0030/galaxy0030"))
# Create projections using each colormap available.
-p = ProjectionPlot(pf, "z", "density", weight_field = "density", width=0.4)
+p = ProjectionPlot(ds, "z", "density", weight_field = "density", width=0.4)
for cmap in cm.datad:
if cmap.startswith("idl"):
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/colormaps/index.rst
--- a/doc/source/visualizing/colormaps/index.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/colormaps/index.rst Sun Jun 15 19:50:51 2014 -0700
@@ -61,8 +61,8 @@
.. code-block:: python
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = ProjectionPlot(pf, "z", "density")
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = ProjectionPlot(ds, "z", "density")
p.set_cmap(field="density", cmap='jet')
p.save('proj_with_jet_cmap.png')
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/manual_plotting.rst
--- a/doc/source/visualizing/manual_plotting.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/manual_plotting.rst Sun Jun 15 19:50:51 2014 -0700
@@ -36,10 +36,10 @@
import pylab as P
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- c = pf.h.find_max('density')[1]
- proj = pf.proj('density', 0)
+ c = ds.find_max('density')[1]
+ proj = ds.proj('density', 0)
    width = (10, 'kpc') # we want a 10 kpc view
res = [1000, 1000] # create an image with 1000x1000 pixels
@@ -64,16 +64,16 @@
Line Plots
----------
-This is perhaps the simplest thing to do. ``yt`` provides a number of one dimensional objects, and these return a 1-D numpy array of their contents with direct dictionary access. As a simple example, take a :class:`~yt.data_objects.data_containers.AMROrthoRayBase` object, which can be created from a index by calling ``pf.ortho_ray(axis, center)``.
+This is perhaps the simplest thing to do. ``yt`` provides a number of one-dimensional objects, and these return a 1-D numpy array of their contents with direct dictionary access. As a simple example, take a :class:`~yt.data_objects.data_containers.AMROrthoRayBase` object, which can be created from an index by calling ``ds.ortho_ray(axis, center)``.
.. python-script::
from yt.mods import *
import pylab as P
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- c = pf.h.find_max("density")[1]
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ c = ds.find_max("density")[1]
ax = 0 # take a line cut along the x axis
- ray = pf.ortho_ray(ax, (c[1], c[2])) # cutting through the y0,z0 such that we hit the max density
+ ray = ds.ortho_ray(ax, (c[1], c[2])) # cutting through the y0,z0 such that we hit the max density
P.subplot(211)
P.semilogy(np.array(ray['x']), np.array(ray['density']))
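
Condensed into one untested sketch under the renamed ``ds`` handle (again assuming the IsolatedGalaxy sample data), the ray-based line plot above reduces to:

.. code-block:: python

   from yt.mods import *
   import pylab as P

   ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
   c = ds.find_max("density")[1]          # location of the density maximum

   ax = 0                                 # line cut along the x axis
   ray = ds.ortho_ray(ax, (c[1], c[2]))   # pass through (y0, z0) of the max
   P.semilogy(np.array(ray['x']), np.array(ray['density']))
   P.savefig('density_ray.png')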
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/plots.rst
--- a/doc/source/visualizing/plots.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/plots.rst Sun Jun 15 19:50:51 2014 -0700
@@ -25,7 +25,7 @@
If you need to take a quick look at a single simulation output, ``yt``
provides the ``PlotWindow`` interface for generating annotated 2D
visualizations of simulation data. You can create a ``PlotWindow`` plot by
-supplying a parameter file, a list of fields to plot, and a plot center to
+supplying a dataset, a list of fields to plot, and a plot center to
create a :class:`~yt.visualization.plot_window.SlicePlot`,
:class:`~yt.visualization.plot_window.ProjectionPlot`, or
:class:`~yt.visualization.plot_window.OffAxisSlicePlot`.
@@ -51,11 +51,11 @@
:class:`~yt.visualization.plot_window.SlicePlot`. Say we want to visualize a
slice through the Density field along the z-axis centered on the center of the
simulation box in a simulation dataset we've opened and stored in the parameter
-file object ``pf``. This can be accomplished with the following command:
+file object ``ds``. This can be accomplished with the following command:
.. code-block:: python
- >>> slc = SlicePlot(pf, 'z', 'density')
+ >>> slc = SlicePlot(ds, 'z', 'density')
>>> slc.save()
These two commands will create a slice object and store it in a variable we've
@@ -66,7 +66,7 @@
.. code-block:: python
- >>> SlicePlot(pf, 'z', 'density').save()
+ >>> SlicePlot(ds, 'z', 'density').save()
It's nice to keep the slice object around if you want to modify the plot. By
default, the plot width will be set to the size of the simulation box. To zoom
@@ -75,19 +75,19 @@
.. code-block:: python
- >>> slc = SlicePlot(pf, 'z', 'density')
+ >>> slc = SlicePlot(ds, 'z', 'density')
>>> slc.zoom(10)
>>> slc.save('zoom')
This will save a new plot to disk with a different filename - prepended with
-'zoom' instead of the name of the parameter file. If you want to set the width
+'zoom' instead of the name of the dataset. If you want to set the width
manually, you can do that as well. For example, the following sequence of
commands will create a slice, set the width of the plot to 10 kiloparsecs, and
save it to disk.
.. code-block:: python
- >>> slc = SlicePlot(pf, 'z', 'density')
+ >>> slc = SlicePlot(ds, 'z', 'density')
>>> slc.set_width((10,'kpc'))
>>> slc.save('10kpc')
@@ -96,7 +96,7 @@
.. code-block:: python
- >>> SlicePlot(pf, 'z', 'density', center=[0.2, 0.3, 0.8],
+ >>> SlicePlot(ds, 'z', 'density', center=[0.2, 0.3, 0.8],
... width = (10,'kpc')).save()
The center must be given in code units. Optionally, you can supply 'c' or 'm'
@@ -108,8 +108,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', center=[0.5, 0.5, 0.5], width=(20,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', center=[0.5, 0.5, 0.5], width=(20,'kpc'))
slc.save()
The above example will display an annotated plot of a slice of the
@@ -124,8 +124,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'pressure', center='c')
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'pressure', center='c')
slc.save()
slc.zoom(30)
slc.save('zoom')
@@ -146,8 +146,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.annotate_grids()
slc.save()
@@ -175,8 +175,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- prj = ProjectionPlot(pf, 2, 'density', width=(25, 'kpc'),
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ prj = ProjectionPlot(ds, 2, 'density', width=(25, 'kpc'),
weight_field=None)
prj.save()
@@ -200,16 +200,16 @@
:class:`~yt.data_objects.data_containers.AMRCuttingPlaneBase` to slice
through simulation domains at an arbitrary oblique angle. A
:class:`~yt.visualization.plot_window.OffAxisSlicePlot` can be
-instantiated by specifying a parameter file, the normal to the cutting
+instantiated by specifying a dataset, the normal to the cutting
plane, and the name of the fields to plot. For example:
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
L = [1,1,0] # vector normal to cutting plane
north_vector = [-1,1,0]
- cut = OffAxisSlicePlot(pf, L, 'density', width=(25, 'kpc'),
+ cut = OffAxisSlicePlot(ds, L, 'density', width=(25, 'kpc'),
north_vector=north_vector)
cut.save()
@@ -247,14 +247,14 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
L = [1,1,0] # vector normal to cutting plane
north_vector = [-1,1,0]
W = [0.02, 0.02, 0.02]
c = [0.5, 0.5, 0.5]
N = 512
- image = off_axis_projection(pf, c, L, W, N, "density")
- write_image(np.log10(image), "%s_offaxis_projection.png" % pf)
+ image = off_axis_projection(ds, c, L, W, N, "density")
+ write_image(np.log10(image), "%s_offaxis_projection.png" % ds)
Here, ``W`` is the width of the projection in the x, y, *and* z
directions.
@@ -269,10 +269,10 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
L = [1,1,0] # vector normal to cutting plane
north_vector = [-1,1,0]
- prj = OffAxisProjectionPlot(pf,L,'density',width=(25, 'kpc'),
+ prj = OffAxisProjectionPlot(ds,L,'density',width=(25, 'kpc'),
north_vector=north_vector)
prj.save()
@@ -292,8 +292,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.save()
Panning and zooming
@@ -308,8 +308,8 @@
from yt.mods import *
from yt.units import kpc
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.pan((2*kpc, 2*kpc))
slc.save()
@@ -319,8 +319,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.pan_rel((0.1, -0.1))
slc.save()
@@ -329,8 +329,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.zoom(2)
slc.save()
@@ -343,8 +343,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.set_axes_unit('Mpc')
slc.save()
@@ -357,8 +357,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.set_center((0.5, 0.5))
slc.save()
@@ -370,8 +370,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.set_font({'family': 'sans-serif', 'style': 'italic','weight': 'bold', 'size': 24})
slc.save()
@@ -389,8 +389,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.set_cmap('density', 'RdBu_r')
slc.save()
@@ -401,8 +401,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.set_log('density', False)
slc.save()
@@ -412,8 +412,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.set_zlim('density', 1e-30, 1e-25)
slc.save()
@@ -428,8 +428,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.set_window_size(10)
slc.save()
@@ -439,8 +439,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- slc = SlicePlot(pf, 'z', 'density', width=(10,'kpc'))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ slc = SlicePlot(ds, 'z', 'density', width=(10,'kpc'))
slc.set_buff_size(1600)
slc.save()
@@ -465,8 +465,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- my_galaxy = pf.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], 0.01, 0.003)
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ my_galaxy = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], 0.01, 0.003)
plot = ProfilePlot(my_galaxy, "density", ["temperature"])
plot.save()
@@ -484,8 +484,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- my_sphere = pf.sphere([0.5, 0.5, 0.5], (100, "kpc"))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ my_sphere = ds.sphere([0.5, 0.5, 0.5], (100, "kpc"))
plot = ProfilePlot(my_sphere, "temperature", ["cell_mass"],
weight_field=None)
plot.save()
@@ -540,16 +540,16 @@
labels = []
# Loop over each dataset in the time-series.
- for pf in es:
+ for ds in es:
# Create a data container to hold the whole dataset.
- ad = pf.h.all_data()
+ ad = ds.all_data()
# Create a 1d profile of density vs. temperature.
profiles.append(create_profile(ad, ["temperature"],
fields=["cell_mass"],
weight_field=None,
accumulation=True))
# Add labels
- labels.append("z = %.2f" % pf.current_redshift)
+ labels.append("z = %.2f" % ds.current_redshift)
# Create the profile plot from the list of profiles.
plot = ProfilePlot.from_profiles(profiles, labels=labels)
@@ -590,8 +590,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- my_sphere = pf.sphere("c", (50, "kpc"))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ my_sphere = ds.sphere("c", (50, "kpc"))
plot = PhasePlot(my_sphere, "density", "temperature", ["cell_mass"],
weight_field=None)
plot.save()
@@ -603,8 +603,8 @@
.. python-script::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- my_sphere = pf.sphere("c", (50, "kpc"))
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ my_sphere = ds.sphere("c", (50, "kpc"))
plot = PhasePlot(my_sphere, "density", "temperature", ["H_fraction"],
weight_field="cell_mass")
plot.save()
@@ -647,8 +647,8 @@
.. notebook-cell::
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = ProjectionPlot(pf, "x", "density", center='m', width=(10,'kpc'),
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = ProjectionPlot(ds, "x", "density", center='m', width=(10,'kpc'),
weight_field='density')
p.show()
@@ -682,7 +682,7 @@
.. code-block:: python
>>> import yt.visualization.eps_writer as eps
- >>> slc = SlicePlot(pf, 'z', 'density')
+ >>> slc = SlicePlot(ds, 'z', 'density')
>>> slc.set_width(25, 'kpc')
>>> eps_fig = eps.single_plot(slc)
>>> eps_fig.save_fig('zoom', format='eps')
@@ -707,7 +707,7 @@
.. code-block:: python
>>> import yt.visualization.eps_writer as eps
- >>> slc = SlicePlot(pf, 'z', ['density', 'temperature', 'Pressure',
+ >>> slc = SlicePlot(ds, 'z', ['density', 'temperature', 'Pressure',
'VelocityMagnitude'])
>>> slc.set_width(25, 'kpc')
>>> eps_fig = eps.multiplot_yt(2, 2, slc, bare_axes=True)
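
Pulling the 1D and 2D profile machinery from the hunks above into one place, a minimal sketch of the renamed interface (untested; the field and radius choices are illustrative):

.. code-block:: python

   from yt.mods import *

   ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sp = ds.sphere("c", (50, "kpc"))

   # 1D profile: mass-weighted temperature as a function of density.
   prof = ProfilePlot(sp, "density", ["temperature"])
   prof.save()

   # 2D phase diagram: total cell mass binned in density and temperature.
   phase = PhasePlot(sp, "density", "temperature", ["cell_mass"],
                     weight_field=None)
   phase.save()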
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/sketchfab.rst
--- a/doc/source/visualizing/sketchfab.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/sketchfab.rst Sun Jun 15 19:50:51 2014 -0700
@@ -34,9 +34,9 @@
.. code-block:: python
from yt.mods import *
- pf = load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
- sphere = pf.sphere("max", (1.0, "mpc"))
- surface = pf.surface(sphere, "density", 1e-27)
+ ds = load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
+ sphere = ds.sphere("max", (1.0, "mpc"))
+ surface = ds.surface(sphere, "density", 1e-27)
This object, ``surface``, can now be queried for values on the surface. For
instance:
@@ -93,14 +93,14 @@
.. code-block:: python
from yt.mods import *
- pf = load("redshift0058")
- dd = pf.sphere("max", (200, "kpc"))
+ ds = load("redshift0058")
+ dd = ds.sphere("max", (200, "kpc"))
rho = 5e-27
- bounds = [(dd.center[i] - 100.0/pf['kpc'],
- dd.center[i] + 100.0/pf['kpc']) for i in range(3)]
+ bounds = [(dd.center[i] - 100.0/ds['kpc'],
+ dd.center[i] + 100.0/ds['kpc']) for i in range(3)]
- surf = pf.surface(dd, "density", rho)
+ surf = ds.surface(dd, "density", rho)
upload_id = surf.export_sketchfab(
title = "RD0058 - 5e-27",
@@ -146,14 +146,14 @@
from yt.mods import *
- pf = load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
+ ds = load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
rho = [2e-27, 1e-27]
trans = [1.0, 0.5]
filename = './surfaces'
- sphere = pf.sphere("max", (1.0, "mpc"))
+ sphere = ds.sphere("max", (1.0, "mpc"))
for i,r in enumerate(rho):
- surf = pf.surface(sphere, 'density', r)
+ surf = ds.surface(sphere, 'density', r)
surf.export_obj(filename, transparency = trans[i], color_field='temperature', plot_index = i)
The calling sequence is fairly similar to the ``export_ply`` function
@@ -218,7 +218,7 @@
from yt.mods import *
- pf = load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
+ ds = load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
rho = [2e-27, 1e-27]
trans = [1.0, 0.5]
filename = './surfaces'
@@ -227,9 +227,9 @@
return (data['density']*data['density']*np.sqrt(data['temperature']))
add_field("Emissivity", function=_Emissivity, units=r"\rm{g K}/\rm{cm}^{6}")
- sphere = pf.sphere("max", (1.0, "mpc"))
+ sphere = ds.sphere("max", (1.0, "mpc"))
for i,r in enumerate(rho):
- surf = pf.surface(sphere, 'density', r)
+ surf = ds.surface(sphere, 'density', r)
surf.export_obj(filename, transparency = trans[i],
color_field='temperature', emit_field = 'Emissivity',
plot_index = i)
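
The nested-surface export above, restated as a single untested sketch against the sample dataset (the output prefix and density thresholds are just examples):

.. code-block:: python

   from yt.mods import *

   ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sphere = ds.sphere("max", (1.0, "mpc"))

   rho = [2e-27, 1e-27]      # isodensity levels, outermost first
   trans = [1.0, 0.5]        # transparency of each surface

   for i, r in enumerate(rho):
       surf = ds.surface(sphere, 'density', r)
       surf.export_obj('./surfaces', transparency=trans[i],
                       color_field='temperature', plot_index=i)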
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/streamlines.rst
--- a/doc/source/visualizing/streamlines.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/streamlines.rst Sun Jun 15 19:50:51 2014 -0700
@@ -56,14 +56,14 @@
from yt.mods import *
from yt.visualization.api import Streamlines
- pf = load('DD1701') # Load pf
+ ds = load('DD1701') # Load ds
c = np.array([0.5]*3)
N = 100
scale = 1.0
pos_dx = np.random.random((N,3))*scale-scale/2.
pos = c+pos_dx
- streamlines = Streamlines(pf,pos,'velocity_x', 'velocity_y', 'velocity_z', length=1.0)
+ streamlines = Streamlines(ds,pos,'velocity_x', 'velocity_y', 'velocity_z', length=1.0)
streamlines.integrate_through_volume()
import matplotlib.pylab as pl
@@ -97,8 +97,8 @@
from yt.mods import *
from yt.visualization.api import Streamlines
- pf = load('DD1701') # Load pf
- streamlines = Streamlines(pf, [0.5]*3)
+ ds = load('DD1701') # Load ds
+ streamlines = Streamlines(ds, [0.5]*3)
streamlines.integrate_through_volume()
stream = streamlines.path(0)
matplotlib.pylab.semilogy(stream['t'], stream['density'], '-x')
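
A compact, untested sketch of the streamline workflow under the new naming (``DD1701`` is the placeholder dataset name used in this document; the seed count and cube size are illustrative):

.. code-block:: python

   from yt.mods import *
   from yt.visualization.api import Streamlines

   ds = load('DD1701')

   # Seed 100 streamlines in a small cube around the domain center.
   c = np.array([0.5] * 3)
   pos = c + (np.random.random((100, 3)) - 0.5) * 0.2
   streamlines = Streamlines(ds, pos, 'velocity_x', 'velocity_y',
                             'velocity_z', length=1.0)
   streamlines.integrate_through_volume()

   # Sample density along the first streamline.
   stream = streamlines.path(0)
   print stream['density'].max()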
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/visualizing/volume_rendering.rst
--- a/doc/source/visualizing/volume_rendering.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/visualizing/volume_rendering.rst Sun Jun 15 19:50:51 2014 -0700
@@ -42,14 +42,14 @@
from yt.mods import *
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Choose a field
field = 'density'
# Do you want the log of the field?
use_log = True
# Find the bounds in log space of for your field
- dd = pf.h.all_data()
+ dd = ds.all_data()
mi, ma = dd.quantities["Extrema"](field)[0]
if use_log:
@@ -59,13 +59,13 @@
tf = ColorTransferFunction((mi, ma))
# Set up the camera parameters: center, looking direction, width, resolution
- c = (pf.domain_right_edge + pf.domain_left_edge)/2.0
+ c = (ds.domain_right_edge + ds.domain_left_edge)/2.0
L = np.array([1.0, 1.0, 1.0])
- W = 0.3 / pf["unitary"]
+ W = 0.3 / ds["unitary"]
N = 256
# Create a camera object
- cam = pf.h.camera(c, L, W, N, tf, fields = [field], log_fields = [use_log])
+ cam = ds.camera(c, L, W, N, tf, fields = [field], log_fields = [use_log])
# Now let's add some isocontours, and take a snapshot, saving the image
# to a file.
@@ -280,8 +280,8 @@
from yt.mods import *
import yt.visualization.volume_rendering.camera as camera
- pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- image = camera.allsky_projection(pf, [0.5,0.5,0.5], 100.0/pf['kpc'],
+ ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ image = camera.allsky_projection(ds, [0.5,0.5,0.5], 100.0/ds['kpc'],
64, "density")
camera.plot_allsky_healpix(image, 64, "allsky.png", "Column Density [g/cm^2]")
@@ -300,9 +300,9 @@
import yt.visualization.volume_rendering.camera as camera
Nside = 32
- pf = load("DD0008/galaxy0008")
+ ds = load("DD0008/galaxy0008")
cam = camera.HEALpixCamera([0.5,0.5,0.5], 0.2, Nside,
- pf = pf, log_fields = [False])
+ ds = ds, log_fields = [False])
bitmap = cam.snapshot()
The returned bitmap will, as per usual, be an array of integrated values.
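
For reference, the basic camera setup from the volume rendering hunks above, collapsed into one untested sketch (the layer count, colormap, width, and resolution are illustrative choices):

.. code-block:: python

   from yt.mods import *

   ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
   field = 'density'

   # Bounds of the field, in log space.
   dd = ds.all_data()
   mi, ma = dd.quantities["Extrema"](field)[0]
   mi, ma = np.log10(mi), np.log10(ma)

   # Transfer function with a handful of isodensity layers.
   tf = ColorTransferFunction((mi, ma))
   tf.add_layers(5, colormap='algae')

   # Camera centered on the domain, looking down a diagonal.
   c = (ds.domain_right_edge + ds.domain_left_edge) / 2.0
   L = np.array([1.0, 1.0, 1.0])
   W = 0.3 / ds["unitary"]
   cam = ds.camera(c, L, W, 256, tf, fields=[field], log_fields=[True])
   cam.snapshot("volume_render.png")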
diff -r f20d58ca2848 -r 67507b4f8da9 doc/source/yt3differences.rst
--- a/doc/source/yt3differences.rst Sun Jun 15 07:46:08 2014 -0500
+++ b/doc/source/yt3differences.rst Sun Jun 15 19:50:51 2014 -0700
@@ -128,7 +128,7 @@
Field Info
++++++++++
-In the past, the object ``ds`` (or ``pf``) had a ``field_info`` object which
+In the past, the object ``ds`` (or ``pf``) had a ``field_info`` object which
was a dictionary leading to derived field definitions. At the present time,
because of the field naming changes (i.e., access-by-tuple) it is better to
utilize the function ``_get_field_info`` than to directly access the
@@ -143,8 +143,8 @@
++++++++++++++++++++++++++++++++
Wherever possible, we have attempted to replace the term "parameter file"
-(i.e., ``pf``) with the term "dataset." Future revisions will change most of
-the ``pf`` atrributes of objects into ``ds`` or ``dataset`` attributes.
+(i.e., ``pf``) with the term "dataset." Future revisions will change most of
+the ``pf`` attributes of objects into ``ds`` or ``dataset`` attributes.
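
To make the naming shift concrete, a minimal untested sketch of the new spelling (the IsolatedGalaxy path is just the sample dataset used elsewhere in these docs):

.. code-block:: python

   from yt.mods import *

   # What used to be called a "parameter file" (pf) is now a dataset (ds).
   ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")

   # Index-level conveniences are now reached from the dataset itself.
   ad = ds.all_data()
   val, loc = ds.find_max("density")
   print ds.field_list[:5], val, loc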
Projection Argument Order
+++++++++++++++++++++++++
diff -r f20d58ca2848 -r 67507b4f8da9 roman
--- a/roman Sun Jun 15 07:46:08 2014 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,40 +0,0 @@
-diff -r ce576a114f13 yt/data_objects/grid_patch.py
---- a/yt/data_objects/grid_patch.py Mon Jul 08 08:56:00 2013 -0400
-+++ b/yt/data_objects/grid_patch.py Wed Jul 10 09:29:30 2013 -0400
-@@ -51,6 +51,7 @@
- _num_ghost_zones = 0
- _grids = None
- _id_offset = 1
-+ _blanked = False
-
- _type_name = 'grid'
- _skip_add = True
-@@ -268,6 +269,8 @@
- def _get_child_mask(self):
- if self._child_mask == None:
- self.__generate_child_mask()
-+ if self._blanked:
-+ self._child_mask[:] = False
- return self._child_mask
-
- def _get_child_indices(self):
-@@ -491,6 +494,8 @@
- return vals.reshape(self.ActiveDimensions, order="C")
-
- def _get_selector_mask(self, selector):
-+ if self._blanked == True:
-+ return None
- if id(selector) == self._last_selector_id:
- mask = self._last_mask
- else:
-diff -r ce576a114f13 yt/visualization/volume_rendering/camera.py
---- a/yt/visualization/volume_rendering/camera.py Mon Jul 08 08:56:00 2013 -0400
-+++ b/yt/visualization/volume_rendering/camera.py Wed Jul 10 09:29:30 2013 -0400
-@@ -2141,6 +2141,7 @@
- source = pf.region(self.center, mi, ma)
-
- for i, (grid, mask) in enumerate(source.blocks):
-+ mask[:] = 1
- data = [(grid[field] * mask).astype("float64") for field in fields]
- pg = PartitionedGrid(
- grid.id, data,
diff -r f20d58ca2848 -r 67507b4f8da9 scripts/iyt
--- a/scripts/iyt Sun Jun 15 07:46:08 2014 -0500
+++ b/scripts/iyt Sun Jun 15 19:50:51 2014 -0700
@@ -126,7 +126,7 @@
if isinstance(obj, (YTDataContainer, ) ):
#print "COMPLETING ON THIS THING"
all_fields = [f for f in sorted(
- obj.pf.field_list + obj.pf.derived_field_list)]
+ obj.ds.field_list + obj.ds.derived_field_list)]
#matches = self.Completer.python_matches(text)
#print "RETURNING ", all_fields
return all_fields
diff -r f20d58ca2848 -r 67507b4f8da9 sl.py
--- a/sl.py Sun Jun 15 07:46:08 2014 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,6 +0,0 @@
-from yt.mods import *
-pf = load("tests/DD0010/moving7_0010")
-#sl = pf.slice(0, 0.5)
-#print sl["Density"]
-sl = SlicePlot(pf, "x", "Density")
-sl.save()
diff -r f20d58ca2848 -r 67507b4f8da9 tests/hierarchy_consistency.py
--- a/tests/hierarchy_consistency.py Sun Jun 15 07:46:08 2014 -0500
+++ b/tests/hierarchy_consistency.py Sun Jun 15 19:50:51 2014 -0700
@@ -14,7 +14,7 @@
def run(self):
self.result = \
- all(g in ensure_list(c.Parent) for g in self.pf.index.grids
+ all(g in ensure_list(c.Parent) for g in self.ds.index.grids
for c in g.Children)
def compare(self, old_result):
@@ -25,11 +25,11 @@
name = "level_consistency"
def run(self):
- self.result = dict(grid_left_edge=self.pf.grid_left_edge,
- grid_right_edge=self.pf.grid_right_edge,
- grid_levels=self.pf.grid_levels,
- grid_particle_count=self.pf.grid_particle_count,
- grid_dimensions=self.pf.grid_dimensions)
+ self.result = dict(grid_left_edge=self.ds.grid_left_edge,
+ grid_right_edge=self.ds.grid_right_edge,
+ grid_levels=self.ds.grid_levels,
+ grid_particle_count=self.ds.grid_particle_count,
+ grid_dimensions=self.ds.grid_dimensions)
def compare(self, old_result):
        # We allow no difference between these values
@@ -47,7 +47,7 @@
def run(self):
self.result = [[p.id for p in ensure_list(g.Parent) \
if g.Parent is not None]
- for g in self.pf.index.grids]
+ for g in self.ds.index.grids]
def compare(self, old_result):
if len(old_result) != len(self.result):
@@ -63,7 +63,7 @@
def run(self):
self.result = na.array([g.get_global_startindex()
- for g in self.pf.index.grids])
+ for g in self.ds.index.grids])
def compare(self, old_result):
self.compare_array_delta(old_result, self.result, 0.0)
diff -r f20d58ca2848 -r 67507b4f8da9 tests/object_field_values.py
--- a/tests/object_field_values.py Sun Jun 15 07:46:08 2014 -0500
+++ b/tests/object_field_values.py Sun Jun 15 19:50:51 2014 -0700
@@ -20,36 +20,36 @@
@register_object
def centered_sphere(tobj):
- center = 0.5 * (tobj.pf.domain_right_edge + tobj.pf.domain_left_edge)
- width = (tobj.pf.domain_right_edge - tobj.pf.domain_left_edge).max()
- tobj.data_object = tobj.pf.sphere(center, width / 0.25)
+ center = 0.5 * (tobj.ds.domain_right_edge + tobj.ds.domain_left_edge)
+ width = (tobj.ds.domain_right_edge - tobj.ds.domain_left_edge).max()
+ tobj.data_object = tobj.ds.sphere(center, width / 0.25)
@register_object
def off_centered_sphere(tobj):
- center = 0.5 * (tobj.pf.domain_right_edge + tobj.pf.domain_left_edge)
- width = (tobj.pf.domain_right_edge - tobj.pf.domain_left_edge).max()
- tobj.data_object = tobj.pf.sphere(center - 0.25 * width, width / 0.25)
+ center = 0.5 * (tobj.ds.domain_right_edge + tobj.ds.domain_left_edge)
+ width = (tobj.ds.domain_right_edge - tobj.ds.domain_left_edge).max()
+ tobj.data_object = tobj.ds.sphere(center - 0.25 * width, width / 0.25)
@register_object
def corner_sphere(tobj):
- width = (tobj.pf.domain_right_edge - tobj.pf.domain_left_edge).max()
- tobj.data_object = tobj.pf.sphere(tobj.pf.domain_left_edge, width / 0.25)
+ width = (tobj.ds.domain_right_edge - tobj.ds.domain_left_edge).max()
+ tobj.data_object = tobj.ds.sphere(tobj.ds.domain_left_edge, width / 0.25)
@register_object
def disk(self):
- center = (self.pf.domain_right_edge + self.pf.domain_left_edge) / 2.
- radius = (self.pf.domain_right_edge - self.pf.domain_left_edge).max() / 10.
- height = (self.pf.domain_right_edge - self.pf.domain_left_edge).max() / 10.
+ center = (self.ds.domain_right_edge + self.ds.domain_left_edge) / 2.
+ radius = (self.ds.domain_right_edge - self.ds.domain_left_edge).max() / 10.
+ height = (self.ds.domain_right_edge - self.ds.domain_left_edge).max() / 10.
normal = na.array([1.] * 3)
- self.data_object = self.pf.disk(center, normal, radius, height)
+ self.data_object = self.ds.disk(center, normal, radius, height)
@register_object
def all_data(self):
- self.data_object = self.pf.h.all_data()
+ self.data_object = self.ds.h.all_data()
_new_known_objects = {}
for field in ["Density"]: # field_list:
diff -r f20d58ca2848 -r 67507b4f8da9 tests/runall.py
--- a/tests/runall.py Sun Jun 15 07:46:08 2014 -0500
+++ b/tests/runall.py Sun Jun 15 19:50:51 2014 -0700
@@ -90,19 +90,19 @@
print ("\n ".join(tests))
sys.exit(0)
- # Load the test pf and make sure it's good.
- pf = load(opts.parameter_file)
- if pf is None:
+ # Load the test ds and make sure it's good.
+ ds = load(opts.parameter_file)
+ if ds is None:
print "Couldn't load the specified parameter file."
sys.exit(1)
- # Now we modify our compare name and self name to include the pf.
+ # Now we modify our compare name and self name to include the ds.
compare_id = opts.compare_name
watcher = None
if compare_id is not None:
- compare_id += "_%s_%s" % (pf, pf._hash())
+ compare_id += "_%s_%s" % (ds, ds._hash())
watcher = Xunit()
- this_id = opts.this_name + "_%s_%s" % (pf, pf._hash())
+ this_id = opts.this_name + "_%s_%s" % (ds, ds._hash())
rtr = RegressionTestRunner(this_id, compare_id,
results_path=opts.storage_dir,
diff -r f20d58ca2848 -r 67507b4f8da9 tests/volume_rendering.py
--- a/tests/volume_rendering.py Sun Jun 15 07:46:08 2014 -0500
+++ b/tests/volume_rendering.py Sun Jun 15 19:50:51 2014 -0700
@@ -14,14 +14,14 @@
name = "volume_rendering_consistency"
def run(self):
- c = (self.pf.domain_right_edge + self.pf.domain_left_edge) / 2.
- W = na.sqrt(3.) * (self.pf.domain_right_edge - \
- self.pf.domain_left_edge)
+ c = (self.ds.domain_right_edge + self.ds.domain_left_edge) / 2.
+ W = na.sqrt(3.) * (self.ds.domain_right_edge - \
+ self.ds.domain_left_edge)
N = 512
n_contours = 5
cmap = 'algae'
field = 'Density'
- mi, ma = self.pf.h.all_data().quantities['Extrema'](field)[0]
+ mi, ma = self.ds.h.all_data().quantities['Extrema'](field)[0]
mi, ma = na.log10(mi), na.log10(ma)
contour_width = (ma - mi) / 100.
L = na.array([1.] * 3)
@@ -29,7 +29,7 @@
tf.add_layers(n_contours, w=contour_width,
col_bounds=(mi * 1.001, ma * 0.999),
colormap=cmap, alpha=na.logspace(-1, 0, n_contours))
- cam = self.pf.h.camera(c, L, W, (N, N), transfer_function=tf,
+ cam = self.ds.h.camera(c, L, W, (N, N), transfer_function=tf,
no_ghost=True)
image = cam.snapshot()
# image = cam.snapshot('test_rendering_%s.png'%field)
diff -r f20d58ca2848 -r 67507b4f8da9 tr.py
--- a/tr.py Sun Jun 15 07:46:08 2014 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,8 +0,0 @@
-from yt.testing import *
-pf = fake_random_pf(64)
-p1 = [0.1, 0.2, 0.3]
-p2 = [0.8, 0.1, 0.4]
-
-ray = pf.ray(p1, p2)
-dd = ray["dts"].sum()
-print dd
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/coordinate_transformation/transforms.py
--- a/yt/analysis_modules/coordinate_transformation/transforms.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/coordinate_transformation/transforms.py Sun Jun 15 19:50:51 2014 -0700
@@ -19,10 +19,10 @@
from yt.utilities.linear_interpolators import \
TrilinearFieldInterpolator
-def spherical_regrid(pf, nr, ntheta, nphi, rmax, fields,
+def spherical_regrid(ds, nr, ntheta, nphi, rmax, fields,
center=None, smoothed=True):
"""
- This function takes a parameter file (*pf*) along with the *nr*, *ntheta*
+ This function takes a dataset (*ds*) along with the *nr*, *ntheta*
and *nphi* points to generate out to *rmax*, and it grids *fields* onto
those points and returns a dict. *center* if supplied will be the center,
otherwise the most dense point will be chosen. *smoothed* governs whether
@@ -30,7 +30,7 @@
"""
mylog.warning("This code may produce some artifacts of interpolation")
mylog.warning("See yt/extensions/coordinate_transforms.py for plotting information")
- if center is None: center = pf.h.find_max("Density")[1]
+ if center is None: center = ds.find_max("Density")[1]
fields = ensure_list(fields)
r,theta,phi = np.mgrid[0:rmax:nr*1j,
0:np.pi:ntheta*1j,
@@ -39,7 +39,7 @@
new_grid['x'] = r*np.sin(theta)*np.cos(phi) + center[0]
new_grid['y'] = r*np.sin(theta)*np.sin(phi) + center[1]
new_grid['z'] = r*np.cos(theta) + center[2]
- sphere = pf.sphere(center, rmax)
+ sphere = ds.sphere(center, rmax)
return arbitrary_regrid(new_grid, sphere, fields, smoothed)
def arbitrary_regrid(new_grid, data_source, fields, smoothed=True):
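
A possible usage sketch for the regridding helper shown above (untested; the grid dimensions, radius, and reliance on density-maximum centering are illustrative, and the import path simply mirrors the file being patched):

.. code-block:: python

   from yt.mods import *
   from yt.analysis_modules.coordinate_transformation.transforms import \
       spherical_regrid

   ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")

   # Regrid density onto a 64 x 32 x 32 (r, theta, phi) mesh out to
   # r = 0.1 code lengths, centered on the densest point by default.
   spherical = spherical_regrid(ds, 64, 32, 32, 0.1, ["density"])
   print spherical["density"].shape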
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/cosmological_observation/cosmology_splice.py
--- a/yt/analysis_modules/cosmological_observation/cosmology_splice.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/cosmological_observation/cosmology_splice.py Sun Jun 15 19:50:51 2014 -0700
@@ -137,7 +137,7 @@
# For first data dump, choose closest to desired redshift.
if (len(cosmology_splice) == 0):
- # Sort data outputs by proximity to current redsfhit.
+ # Sort data outputs by proximity to current redshift.
self.splice_outputs.sort(key=lambda obj:np.fabs(z - \
obj['redshift']))
cosmology_splice.append(self.splice_outputs[0])
@@ -230,7 +230,7 @@
ensure continuity of the splice. Default: 3.
filename : string
If provided, a file will be written with the redshift outputs in
- the form in which they should be given in the enzo parameter file.
+ the form in which they should be given in the enzo dataset.
Default: None.
start_index : int
The index of the first redshift output. Default: 0.
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
--- a/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py Sun Jun 15 19:50:51 2014 -0700
@@ -82,11 +82,11 @@
datasets for time series.
Default: True.
find_outputs : bool
- Whether or not to search for parameter files in the current
+ Whether or not to search for datasets in the current
directory.
Default: False.
set_parameters : dict
- Dictionary of parameters to attach to pf.parameters.
+ Dictionary of parameters to attach to ds.parameters.
Default: None.
output_dir : string
The directory in which images and data files will be written.
@@ -525,7 +525,7 @@
# Get rid of old halo mask, if one was there.
self.halo_mask = []
- # Clean pf objects out of light cone solution.
+ # Clean ds objects out of light cone solution.
for my_slice in self.light_cone_solution:
if my_slice.has_key('object'):
del my_slice['object']
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
--- a/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py Sun Jun 15 19:50:51 2014 -0700
@@ -47,7 +47,7 @@
Parameters
----------
parameter_filename : string
- The simulation parameter file.
+ The simulation dataset.
simulation_type : string
The simulation type.
near_redshift : float
@@ -83,7 +83,7 @@
datasets for time series.
Default: True.
find_outputs : bool
- Whether or not to search for parameter files in the current
+ Whether or not to search for datasets in the current
directory.
Default: False.
@@ -388,10 +388,10 @@
njobs=njobs, dynamic=dynamic):
# Load dataset for segment.
- pf = load(my_segment['filename'])
+ ds = load(my_segment['filename'])
if self.near_redshift == self.far_redshift:
- h_vel = cm_per_km * pf.units['mpc'] * \
+ h_vel = cm_per_km * ds.units['mpc'] * \
vector_length(my_segment['start'], my_segment['end']) * \
self.cosmology.HubbleConstantNow * \
self.cosmology.ExpansionFactor(my_segment['redshift'])
@@ -419,7 +419,7 @@
for sub_segment in sub_segments:
mylog.info("Getting subsegment: %s to %s." %
(list(sub_segment[0]), list(sub_segment[1])))
- sub_ray = pf.ray(sub_segment[0], sub_segment[1])
+ sub_ray = ds.ray(sub_segment[0], sub_segment[1])
sub_data['dl'] = np.concatenate([sub_data['dl'],
(sub_ray['dts'] *
vector_length(sub_segment[0],
@@ -451,12 +451,12 @@
# Calculate distance to nearest object on halo list for each lixel.
if get_nearest_halo:
- halo_list = self._get_halo_list(pf, fields=nearest_halo_fields,
+ halo_list = self._get_halo_list(ds, fields=nearest_halo_fields,
filename=halo_list_file,
**halo_profiler_parameters)
sub_data.update(self._get_nearest_halo_properties(sub_data, halo_list,
fields=nearest_halo_fields))
- sub_data['nearest_halo'] *= pf.units['mpccm']
+ sub_data['nearest_halo'] *= ds.units['mpccm']
# Remove empty lixels.
sub_dl_nonzero = sub_data['dl'].nonzero()
@@ -465,13 +465,13 @@
del sub_dl_nonzero
# Convert dl to cm.
- sub_data['dl'] *= pf.units['cm']
+ sub_data['dl'] *= ds.units['cm']
# Add to storage.
my_storage.result = sub_data
- pf.h.clear_all_data()
- del pf
+ ds.index.clear_all_data()
+ del ds
# Reconstruct ray data from parallel_objects storage.
all_data = [my_data for my_data in all_ray_storage.values()]
@@ -488,20 +488,20 @@
self._data = all_data
return all_data
- def _get_halo_list(self, pf, fields=None, filename=None,
+ def _get_halo_list(self, ds, fields=None, filename=None,
halo_profiler_kwargs=None,
halo_profiler_actions=None, halo_list='all'):
- "Load a list of halos for the pf."
+ "Load a list of halos for the ds."
- if str(pf) in self.halo_lists:
- return self.halo_lists[str(pf)]
+ if str(ds) in self.halo_lists:
+ return self.halo_lists[str(ds)]
if fields is None: fields = []
if filename is not None and \
- os.path.exists(os.path.join(pf.fullpath, filename)):
+ os.path.exists(os.path.join(ds.fullpath, filename)):
- my_filename = os.path.join(pf.fullpath, filename)
+ my_filename = os.path.join(ds.fullpath, filename)
mylog.info("Loading halo list from %s." % my_filename)
my_list = {}
in_file = h5py.File(my_filename, 'r')
@@ -510,15 +510,15 @@
in_file.close()
else:
- my_list = self._halo_profiler_list(pf, fields=fields,
+ my_list = self._halo_profiler_list(ds, fields=fields,
halo_profiler_kwargs=halo_profiler_kwargs,
halo_profiler_actions=halo_profiler_actions,
halo_list=halo_list)
- self.halo_lists[str(pf)] = my_list
- return self.halo_lists[str(pf)]
+ self.halo_lists[str(ds)] = my_list
+ return self.halo_lists[str(ds)]
- def _halo_profiler_list(self, pf, fields=None,
+ def _halo_profiler_list(self, ds, fields=None,
halo_profiler_kwargs=None,
halo_profiler_actions=None, halo_list='all'):
"Run the HaloProfiler to get the halo list."
@@ -526,7 +526,7 @@
if halo_profiler_kwargs is None: halo_profiler_kwargs = {}
if halo_profiler_actions is None: halo_profiler_actions = []
- hp = HaloProfiler(pf, **halo_profiler_kwargs)
+ hp = HaloProfiler(ds, **halo_profiler_kwargs)
for action in halo_profiler_actions:
if not action.has_key('args'): action['args'] = ()
if not action.has_key('kwargs'): action['kwargs'] = {}
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_analysis/halo_callbacks.py
--- a/yt/analysis_modules/halo_analysis/halo_callbacks.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_analysis/halo_callbacks.py Sun Jun 15 19:50:51 2014 -0700
@@ -78,16 +78,16 @@
"""
- dpf = halo.halo_catalog.data_pf
- hpf = halo.halo_catalog.halos_pf
- center = dpf.arr([halo.quantities["particle_position_%s" % axis] \
+ dds = halo.halo_catalog.data_ds
+ hds = halo.halo_catalog.halos_ds
+ center = dds.arr([halo.quantities["particle_position_%s" % axis] \
for axis in "xyz"])
radius = factor * halo.quantities[radius_field]
if radius <= 0.0:
halo.data_object = None
return
try:
- sphere = dpf.sphere(center, radius)
+ sphere = dds.sphere(center, radius)
except YTSphereTooSmall:
halo.data_object = None
return
@@ -116,11 +116,11 @@
"""
if halo.data_object is None: return
- s_pf = halo.data_object.pf
+ s_ds = halo.data_object.ds
old_sphere = halo.data_object
max_vals = old_sphere.quantities.max_location(field)
- new_center = s_pf.arr(max_vals[2:])
- new_sphere = s_pf.sphere(new_center.in_units("code_length"),
+ new_center = s_ds.arr(max_vals[2:])
+ new_sphere = s_ds.sphere(new_center.in_units("code_length"),
old_sphere.radius.in_units("code_length"))
mylog.info("Moving sphere center from %s to %s." % (old_sphere.center,
new_sphere.center))
@@ -192,10 +192,10 @@
mylog.info("Calculating 1D profile for halo %d." %
halo.quantities["particle_identifier"])
- dpf = halo.halo_catalog.data_pf
+ dds = halo.halo_catalog.data_ds
- if dpf is None:
- raise RuntimeError("Profile callback requires a data pf.")
+ if dds is None:
+ raise RuntimeError("Profile callback requires a data ds.")
if not hasattr(halo, "data_object"):
raise RuntimeError("Profile callback requires a data container.")
@@ -219,8 +219,8 @@
# temporary fix since profiles do not have units at the moment
for field in my_profile.field_data:
- my_profile.field_data[field] = dpf.arr(my_profile[field],
- dpf.field_info[field].units)
+ my_profile.field_data[field] = dds.arr(my_profile[field],
+ dds.field_info[field].units)
# accumulate, if necessary
if accumulation:
@@ -350,7 +350,7 @@
if "units" in out_file[field].attrs:
units = out_file[field].attrs["units"]
if units == "dimensionless": units = ""
- my_profile[field] = halo.halo_catalog.halos_pf.arr(out_file[field].value, units)
+ my_profile[field] = halo.halo_catalog.halos_ds.arr(out_file[field].value, units)
out_file.close()
setattr(halo, storage, my_profile)
@@ -383,7 +383,7 @@
fields = ensure_list(fields)
- dpf = halo.halo_catalog.data_pf
+ dds = halo.halo_catalog.data_ds
profile_data = getattr(halo, profile_storage)
if ("gas", "overdensity") not in profile_data:
@@ -403,7 +403,7 @@
if v_field not in halo.halo_catalog.quantities:
halo.halo_catalog.quantities.append(v_field)
vquantities = dict([("%s_%d" % (v_fields[field], critical_overdensity),
- dpf.quan(0, profile_data[field].units)) \
+ dds.quan(0, profile_data[field].units)) \
for field in fields])
if dfilter.sum() < 2:
@@ -438,7 +438,7 @@
v_prof = profile_data[field][dfilter].to_ndarray()
slope = np.log(v_prof[index + 1] / v_prof[index]) / \
np.log(vod[index + 1] / vod[index])
- value = dpf.quan(np.exp(slope * np.log(critical_overdensity /
+ value = dds.quan(np.exp(slope * np.log(critical_overdensity /
vod[index])) * v_prof[index],
profile_data[field].units).in_cgs()
vquantities["%s_%d" % (v_fields[field], critical_overdensity)] = value
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_analysis/halo_catalog.py
--- a/yt/analysis_modules/halo_analysis/halo_catalog.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_analysis/halo_catalog.py Sun Jun 15 19:50:51 2014 -0700
@@ -47,15 +47,15 @@
Parameters
----------
- halos_pf : str
+ halos_ds : str
Dataset created by a halo finder. If None, a halo finder should be
provided with the finder_method keyword.
- data_pf : str
+ data_ds : str
Dataset created by a simulation.
data_source : data container
- Data container associated with either the halos_pf or the data_pf.
+ Data container associated with either the halos_ds or the data_ds.
finder_method : str
- Halo finder to be used if no halos_pf is given.
+ Halo finder to be used if no halos_ds is given.
output_dir : str
The top level directory into which analysis output will be written.
Default: "."
@@ -66,10 +66,10 @@
# create profiles or overdensity vs. radius for each halo and save to disk
>>> from yt.mods import *
>>> from yt.analysis_modules.halo_analysis.api import *
- >>> data_pf = load("DD0064/DD0064")
- >>> halos_pf = load("rockstar_halos/halos_64.0.bin",
+ >>> data_ds = load("DD0064/DD0064")
+ >>> halos_ds = load("rockstar_halos/halos_64.0.bin",
... output_dir="halo_catalogs/catalog_0064")
- >>> hc = HaloCatalog(data_pf=data_pf, halos_pf=halos_pf)
+ >>> hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)
# filter out halos with mass < 1e13 Msun
>>> hc.add_filter("quantity_value", "particle_mass", ">", 1e13, "Msun")
# create a sphere object with radius of 2 times the virial_radius field
@@ -84,8 +84,8 @@
# load in the saved halo catalog and all the profile data
- >>> halos_pf = load("halo_catalogs/catalog_0064/catalog_0064.0.h5")
- >>> hc = HaloCatalog(halos_pf=halos_pf,
+ >>> halos_ds = load("halo_catalogs/catalog_0064/catalog_0064.0.h5")
+ >>> hc = HaloCatalog(halos_ds=halos_ds,
output_dir="halo_catalogs/catalog_0064")
>>> hc.add_callback("load_profiles", output_dir="profiles")
>>> hc.load()
@@ -96,29 +96,29 @@
"""
- def __init__(self, halos_pf=None, data_pf=None,
+ def __init__(self, halos_ds=None, data_ds=None,
data_source=None, finder_method=None,
output_dir="halo_catalogs/catalog"):
ParallelAnalysisInterface.__init__(self)
- self.halos_pf = halos_pf
- self.data_pf = data_pf
+ self.halos_ds = halos_ds
+ self.data_ds = data_ds
self.output_dir = ensure_dir(output_dir)
if os.path.basename(self.output_dir) != ".":
self.output_prefix = os.path.basename(self.output_dir)
else:
self.output_prefix = "catalog"
- if halos_pf is None:
- if data_pf is None:
- raise RuntimeError("Must specify a halos_pf, data_pf, or both.")
+ if halos_ds is None:
+ if data_ds is None:
+ raise RuntimeError("Must specify a halos_ds, data_ds, or both.")
if finder_method is None:
- raise RuntimeError("Must specify a halos_pf or a finder_method.")
+ raise RuntimeError("Must specify a halos_ds or a finder_method.")
if data_source is None:
- if halos_pf is not None:
- data_source = halos_pf.h.all_data()
+ if halos_ds is not None:
+ data_source = halos_ds.all_data()
else:
- data_source = data_pf.h.all_data()
+ data_source = data_ds.all_data()
self.data_source = data_source
if finder_method is not None:
@@ -129,7 +129,7 @@
self.actions = []
# fields to be written to the halo catalog
self.quantities = []
- if not self.halos_pf is None:
+ if not self.halos_ds is None:
self.add_default_quantities()
def add_callback(self, callback, *args, **kwargs):
@@ -205,7 +205,7 @@
field_type = None
if field_type is None:
quantity = quantity_registry.find(key, *args, **kwargs)
- elif (field_type, key) in self.halos_pf.field_info:
+ elif (field_type, key) in self.halos_ds.field_info:
quantity = (field_type, key)
else:
raise RuntimeError("HaloCatalog quantity must be a registered function or a field of a known type.")
@@ -348,20 +348,20 @@
self.catalog = []
if save_halos: self.halo_list = []
- if self.halos_pf is None:
+ if self.halos_ds is None:
# Find the halos and make a dataset of them
- self.halos_pf = self.finder_method(self.data_pf)
- if self.halos_pf is None:
+ self.halos_ds = self.finder_method(self.data_ds)
+ if self.halos_ds is None:
mylog.warning('No halos were found for {0}'.format(\
- self.data_pf.basename))
+ self.data_ds.basename))
if save_catalog:
- self.halos_pf = self.data_pf
+ self.halos_ds = self.data_ds
self.save_catalog()
- self.halos_pf = None
+ self.halos_ds = None
return
- # Assign pf and data sources appropriately
- self.data_source = self.halos_pf.all_data()
+ # Assign ds and data sources appropriately
+ self.data_source = self.halos_ds.all_data()
# Add all of the default quantities that all halos must have
self.add_default_quantities('all')
@@ -378,7 +378,7 @@
if not halo_filter: break
elif action_type == "quantity":
key, quantity = action
- if quantity in self.halos_pf.field_info:
+ if quantity in self.halos_ds.field_info:
new_halo.quantities[key] = \
self.data_source[quantity][int(i)].in_cgs()
elif callable(quantity):
@@ -412,9 +412,9 @@
"domain_dimensions",
"cosmological_simulation", "omega_lambda",
"omega_matter", "hubble_constant"]:
- out_file.attrs[attr] = getattr(self.halos_pf, attr)
+ out_file.attrs[attr] = getattr(self.halos_ds, attr)
for attr in ["domain_left_edge", "domain_right_edge"]:
- out_file.attrs[attr] = getattr(self.halos_pf, attr).in_cgs()
+ out_file.attrs[attr] = getattr(self.halos_ds, attr).in_cgs()
out_file.attrs["data_type"] = "halo_catalog"
out_file.attrs["num_halos"] = n_halos
if n_halos > 0:
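
The halo catalog workflow from the docstring above, restated as a short untested sketch (the dataset paths, mass cut, and sphere factor are illustrative):

.. code-block:: python

   from yt.mods import *
   from yt.analysis_modules.halo_analysis.api import *

   # Pair a simulation output with a previously generated halo catalog.
   data_ds = load("DD0064/DD0064")
   halos_ds = load("rockstar_halos/halos_64.0.bin")
   hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds,
                    output_dir="halo_catalogs/catalog_0064")

   # Keep halos above 1e13 Msun and attach a sphere to each survivor.
   hc.add_filter("quantity_value", "particle_mass", ">", 1e13, "Msun")
   hc.add_callback("sphere", factor=2.0)
   hc.create()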
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_analysis/halo_finding_methods.py
--- a/yt/analysis_modules/halo_analysis/halo_finding_methods.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_analysis/halo_finding_methods.py Sun Jun 15 19:50:51 2014 -0700
@@ -44,27 +44,27 @@
def __call__(self, ds):
return self.function(ds, *self.args, **self.kwargs)
-def _hop_method(pf):
+def _hop_method(ds):
r"""
Run the Hop halo finding method.
"""
- halo_list = HOPHaloFinder(pf)
- halos_pf = _parse_old_halo_list(pf, halo_list)
- return halos_pf
+ halo_list = HOPHaloFinder(ds)
+ halos_ds = _parse_old_halo_list(ds, halo_list)
+ return halos_ds
add_finding_method("hop", _hop_method)
-def _fof_method(pf):
+def _fof_method(ds):
r"""
Run the FoF halo finding method.
"""
- halo_list = FOFHaloFinder(pf)
- halos_pf = _parse_old_halo_list(pf, halo_list)
- return halos_pf
+ halo_list = FOFHaloFinder(ds)
+ halos_ds = _parse_old_halo_list(ds, halo_list)
+ return halos_ds
add_finding_method("fof", _fof_method)
-def _rockstar_method(pf):
+def _rockstar_method(ds):
r"""
Run the Rockstar halo finding method.
"""
@@ -74,20 +74,20 @@
from yt.analysis_modules.halo_finding.rockstar.api import \
RockstarHaloFinder
- rh = RockstarHaloFinder(pf)
+ rh = RockstarHaloFinder(ds)
rh.run()
- halos_pf = RockstarDataset("rockstar_halos/halos_0.0.bin")
+ halos_ds = RockstarDataset("rockstar_halos/halos_0.0.bin")
try:
- halos_pf.create_field_info()
+ halos_ds.create_field_info()
except ValueError:
return None
- return halos_pf
+ return halos_ds
add_finding_method("rockstar", _rockstar_method)
-def _parse_old_halo_list(data_pf, halo_list):
+def _parse_old_halo_list(data_ds, halo_list):
r"""
Convert the halo list into a loaded dataset.
"""
@@ -119,23 +119,23 @@
halo_properties['particle_position_y'][0][i] = com[1]
halo_properties['particle_position_z'][0][i] = com[2]
- # Define a bounding box based on original data pf
- bbox = np.array([data_pf.domain_left_edge.in_cgs(),
- data_pf.domain_right_edge.in_cgs()]).T
+ # Define a bounding box based on original data ds
+ bbox = np.array([data_ds.domain_left_edge.in_cgs(),
+ data_ds.domain_right_edge.in_cgs()]).T
- # Create a pf with the halos as particles
- particle_pf = load_particles(halo_properties,
+ # Create a ds with the halos as particles
+ particle_ds = load_particles(halo_properties,
bbox=bbox, length_unit = 1, mass_unit=1)
# Create the field info dictionary so we can reference those fields
- particle_pf.create_field_info()
+ particle_ds.create_field_info()
for attr in ["current_redshift", "current_time",
"domain_dimensions",
"cosmological_simulation", "omega_lambda",
"omega_matter", "hubble_constant"]:
- attr_val = getattr(data_pf, attr)
- setattr(particle_pf, attr, attr_val)
- particle_pf.current_time = particle_pf.current_time.in_cgs()
+ attr_val = getattr(data_ds, attr)
+ setattr(particle_ds, attr, attr_val)
+ particle_ds.current_time = particle_ds.current_time.in_cgs()
- return particle_pf
+ return particle_ds
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_finding/halo_objects.py Sun Jun 15 19:50:51 2014 -0700
@@ -72,9 +72,9 @@
self._max_dens = halo_list._max_dens
self.id = id
self.data = halo_list._data_source
- self.pf = self.data.pf
- self.gridsize = (self.pf.domain_right_edge - \
- self.pf.domain_left_edge)
+ self.ds = self.data.ds
+ self.gridsize = (self.ds.domain_right_edge - \
+ self.ds.domain_left_edge)
if indices is not None:
self.indices = halo_list._base_indices[indices]
else:
@@ -110,11 +110,11 @@
pid = self.__getitem__('particle_index')
# This is from the sphere.
if self._name == "RockstarHalo":
- ds = self.pf.sphere(self.CoM, self._radjust * self.max_radius)
+ ds = self.ds.sphere(self.CoM, self._radjust * self.max_radius)
elif self._name == "LoadedHalo":
- ds = self.pf.sphere(self.CoM, np.maximum(self._radjust * \
- self.pf.quan(self.max_radius, 'code_length'), \
- self.pf.index.get_smallest_dx()))
+ ds = self.ds.sphere(self.CoM, np.maximum(self._radjust * \
+ self.ds.quan(self.max_radius, 'code_length'), \
+ self.ds.index.get_smallest_dx()))
sp_pid = ds['particle_index']
self._ds_sort = sp_pid.argsort()
sp_pid = sp_pid[self._ds_sort]
@@ -136,9 +136,9 @@
pm = self["particle_mass"].in_units('Msun')
c = {}
# We shift into a box where the origin is the left edge
- c[0] = self["particle_position_x"] - self.pf.domain_left_edge[0]
- c[1] = self["particle_position_y"] - self.pf.domain_left_edge[1]
- c[2] = self["particle_position_z"] - self.pf.domain_left_edge[2]
+ c[0] = self["particle_position_x"] - self.ds.domain_left_edge[0]
+ c[1] = self["particle_position_y"] - self.ds.domain_left_edge[1]
+ c[2] = self["particle_position_z"] - self.ds.domain_left_edge[2]
com = []
for i in range(3):
# A halo is likely periodic around a boundary if the distance
@@ -147,18 +147,18 @@
# So skip the rest if the converse is true.
# Note we might make a change here when periodicity-handling is
# fully implemented.
- if (c[i].max() - c[i].min()) < (self.pf.domain_width[i] / 2.):
+ if (c[i].max() - c[i].min()) < (self.ds.domain_width[i] / 2.):
com.append(c[i])
continue
# Now we want to flip around only those close to the left boundary.
- sel = (c[i] <= (self.pf.domain_width[i]/2))
- c[i][sel] += self.pf.domain_width[i]
+ sel = (c[i] <= (self.ds.domain_width[i]/2))
+ c[i][sel] += self.ds.domain_width[i]
com.append(c[i])
c = (com * pm).sum(axis=1) / pm.sum()
- c = self.pf.arr(c, 'code_length')
+ c = self.ds.arr(c, 'code_length')
- return c%self.pf.domain_width + self.pf.domain_left_edge
+ return c%self.ds.domain_width + self.ds.domain_left_edge
def maximum_density(self):
r"""Return the HOP-identified maximum density. Not applicable to
@@ -221,7 +221,7 @@
vx = (self["particle_velocity_x"] * pm).sum()
vy = (self["particle_velocity_y"] * pm).sum()
vz = (self["particle_velocity_z"] * pm).sum()
- return self.pf.arr([vx, vy, vz], vx.units) / pm.sum()
+ return self.ds.arr([vx, vy, vz], vx.units) / pm.sum()
def rms_velocity(self):
r"""Returns the mass-weighted RMS velocity for the halo
@@ -277,7 +277,7 @@
rx = np.abs(self["particle_position_x"] - center[0])
ry = np.abs(self["particle_position_y"] - center[1])
rz = np.abs(self["particle_position_z"] - center[2])
- DW = self.data.pf.domain_right_edge - self.data.pf.domain_left_edge
+ DW = self.data.ds.domain_right_edge - self.data.ds.domain_left_edge
r = np.sqrt(np.minimum(rx, DW[0] - rx) ** 2.0
+ np.minimum(ry, DW[1] - ry) ** 2.0
+ np.minimum(rz, DW[2] - rz) ** 2.0)
@@ -339,7 +339,7 @@
handle.create_dataset("/%s/%s" % (gn, field), data=self[field])
handle.create_dataset("/%s/particle_mass" % gn,
data=self["particle_mass"].in_units('Msun'))
- if ('io','creation_time') in self.data.pf.field_list:
+ if ('io','creation_time') in self.data.ds.field_list:
handle.create_dataset("/%s/creation_time" % gn,
data=self['creation_time'])
n = handle["/%s" % gn]
@@ -438,11 +438,11 @@
return None
self.bin_count = bins
# Cosmology
- h = self.pf.hubble_constant
- Om_matter = self.pf.omega_matter
- z = self.pf.current_redshift
- period = self.pf.domain_right_edge - \
- self.pf.domain_left_edge
+ h = self.ds.hubble_constant
+ Om_matter = self.ds.omega_matter
+ z = self.ds.current_redshift
+ period = self.ds.domain_right_edge - \
+ self.ds.domain_left_edge
thissize = self.get_size()
rho_crit = rho_crit_g_cm3_h2 * h ** 2.0 * Om_matter # g cm^-3
Msun2g = mass_sun_cgs
@@ -463,7 +463,7 @@
# Multiply min and max to prevent issues with digitize below.
self.radial_bins = np.logspace(math.log10(min(dist) * .99 + TINY),
math.log10(max(dist) * 1.01 + 2 * TINY), num=self.bin_count + 1)
- self.radial_bins = self.pf.arr(self.radial_bins,'code_length')
+ self.radial_bins = self.ds.arr(self.radial_bins,'code_length')
# Find out which bin each particle goes into, and add the particle
# mass to that bin.
inds = np.digitize(dist, self.radial_bins) - 1
@@ -614,7 +614,7 @@
# We won't store this field below in saved_fields because
# that would mean keeping two copies of it, one in the yt
# machinery and one here.
- ds = self.pf.sphere(self.CoM, 4 * self.max_radius)
+ ds = self.ds.sphere(self.CoM, 4 * self.max_radius)
return np.take(ds[key][self._ds_sort], self.particle_mask)
def _get_particle_data(self, halo, fnames, size, field):
@@ -711,14 +711,14 @@
return None
# Do this for all because all will use it.
self.bin_count = bins
- period = self.data.pf.domain_right_edge - \
- self.data.pf.domain_left_edge
+ period = self.data.ds.domain_right_edge - \
+ self.data.ds.domain_left_edge
self.mass_bins = np.zeros(self.bin_count + 1, dtype='float64')
cen = self.center_of_mass()
# Cosmology
- h = self.data.pf.hubble_constant
- Om_matter = self.data.pf.omega_matter
- z = self.data.pf.current_redshift
+ h = self.data.ds.hubble_constant
+ Om_matter = self.data.ds.omega_matter
+ z = self.data.ds.current_redshift
rho_crit = rho_crit_g_cm3_h2 * h ** 2.0 * Om_matter # g cm^-3
Msun2g = mass_sun_cgs
rho_crit = rho_crit * ((1.0 + z) ** 3.0)
@@ -748,7 +748,7 @@
# Multiply min and max to prevent issues with digitize below.
self.radial_bins = np.logspace(math.log10(dist_min * .99 + TINY),
math.log10(dist_max * 1.01 + 2 * TINY), num=self.bin_count + 1)
- self.radial_bins = pf.arr(self.radial_bins, 'code_length')
+ self.radial_bins = ds.arr(self.radial_bins, 'code_length')
if self.indices is not None and self.indices.size > 1:
# Find out which bin each particle goes into, and add the particle
@@ -817,14 +817,14 @@
# See particle_mask
_radjust = 1.05
- def __init__(self, pf, id, size=None, CoM=None,
+ def __init__(self, ds, id, size=None, CoM=None,
max_dens_point=None, group_total_mass=None, max_radius=None, bulk_vel=None,
rms_vel=None, fnames=None, mag_A=None, mag_B=None, mag_C=None,
e1_vec=None, tilt=None, supp=None):
- self.pf = pf
- self.gridsize = (self.pf.domain_right_edge - \
- self.pf.domain_left_edge)
+ self.ds = ds
+ self.gridsize = (self.ds.domain_right_edge - \
+ self.ds.domain_left_edge)
self.id = id
self.size = size
self.CoM = CoM
@@ -877,18 +877,18 @@
self._pid_sort = field_data.argsort().argsort()
#convert to YTArray using the data from disk
if key == 'particle_mass':
- field_data = self.pf.arr(field_data, 'Msun')
+ field_data = self.ds.arr(field_data, 'Msun')
else:
- field_data = self.pf.arr(field_data,
- self.pf._get_field_info('unknown',key).units)
+ field_data = self.ds.arr(field_data,
+ self.ds._get_field_info('unknown',key).units)
self._saved_fields[key] = field_data
return self._saved_fields[key]
# We won't store this field below in saved_fields because
# that would mean keeping two copies of it, one in the yt
# machinery and one here.
- ds = self.pf.sphere(self.CoM, np.maximum(self._radjust * \
- self.pf.quan(self.max_radius, 'code_length'), \
- self.pf.index.get_smallest_dx()))
+ ds = self.ds.sphere(self.CoM, np.maximum(self._radjust * \
+ self.ds.quan(self.max_radius, 'code_length'), \
+ self.ds.index.get_smallest_dx()))
# If particle_mask hasn't been called once then _ds_sort won't have
# the proper values set yet
if self._particle_mask is None:
@@ -985,7 +985,7 @@
>>> ell = halos[0].get_ellipsoid()
"""
ep = self.get_ellipsoid_parameters()
- ell = self.pf.index.ellipsoid(ep[0], ep[1], ep[2], ep[3],
+ ell = self.ds.index.ellipsoid(ep[0], ep[1], ep[2], ep[3],
ep[4], ep[5])
return ell
@@ -1016,18 +1016,18 @@
"""
cen = self.center_of_mass()
r = self.maximum_radius()
- return self.pf.sphere(cen, r)
+ return self.ds.sphere(cen, r)
class TextHalo(LoadedHalo):
- def __init__(self, pf, id, size=None, CoM=None,
+ def __init__(self, ds, id, size=None, CoM=None,
max_dens_point=None, group_total_mass=None, max_radius=None, bulk_vel=None,
rms_vel=None, fnames=None, mag_A=None, mag_B=None, mag_C=None,
e1_vec=None, tilt=None, supp=None):
- self.pf = pf
- self.gridsize = (self.pf.domain_right_edge - \
- self.pf.domain_left_edge)
+ self.ds = ds
+ self.gridsize = (self.ds.domain_right_edge - \
+ self.ds.domain_left_edge)
self.id = id
self.size = size
self.CoM = CoM
@@ -1273,16 +1273,16 @@
# 4 and 8 byte values in the struct as to not overlap memory registers.
_tocleanup = ['padding2']
- def __init__(self, pf, out_list):
+ def __init__(self, ds, out_list):
ParallelAnalysisInterface.__init__(self)
mylog.info("Initializing Rockstar List")
self._data_source = None
self._groups = []
self._max_dens = -1
- self.pf = pf
- self.redshift = pf.current_redshift
+ self.ds = ds
+ self.redshift = ds.current_redshift
self.out_list = out_list
- self._data_source = pf.h.all_data()
+ self._data_source = ds.all_data()
mylog.info("Parsing Rockstar halo list")
self._parse_output()
mylog.info("Finished %s"%out_list)
@@ -1332,7 +1332,7 @@
Read the out_*.list text file produced
by Rockstar into memory."""
- pf = self.pf
+ ds = self.ds
# In order to read the binary data, we need to figure out which
# binary files belong to this output.
basedir = os.path.dirname(self.out_list)
@@ -1342,13 +1342,13 @@
fglob = path.join(basedir, 'halos_%d.*.bin' % n)
files = glob.glob(fglob)
halos = self._get_halos_binary(files)
- #Jc = mass_sun_cgs/ pf['mpchcm'] * 1e5
+ #Jc = mass_sun_cgs/ ds['mpchcm'] * 1e5
Jc = 1.0
- length = 1.0 / pf['mpchcm']
+ length = 1.0 / ds['mpchcm']
conv = dict(pos = np.array([length, length, length,
1, 1, 1]), # to unitary
- r=1.0/pf['kpchcm'], # to unitary
- rs=1.0/pf['kpchcm'], # to unitary
+ r=1.0/ds['kpchcm'], # to unitary
+ rs=1.0/ds['kpchcm'], # to unitary
)
#convert units
for name in self._halo_dt.names:
@@ -1454,9 +1454,9 @@
class LoadedHaloList(HaloList):
_name = "Loaded"
- def __init__(self, pf, basename):
+ def __init__(self, ds, basename):
ParallelAnalysisInterface.__init__(self)
- self.pf = pf
+ self.ds = ds
self._groups = []
self.basename = basename
self._retrieve_halos()
@@ -1486,7 +1486,7 @@
rms_vel = float(line[14])
if len(line) == 15:
# No ellipsoid information
- self._groups.append(LoadedHalo(self.pf, halo, size = size,
+ self._groups.append(LoadedHalo(self.ds, halo, size = size,
CoM = CoM,
max_dens_point = max_dens_point,
group_total_mass = group_total_mass, max_radius = max_radius,
@@ -1501,7 +1501,7 @@
e1_vec2 = float(line[20])
e1_vec = np.array([e1_vec0, e1_vec1, e1_vec2])
tilt = float(line[21])
- self._groups.append(LoadedHalo(self.pf, halo, size = size,
+ self._groups.append(LoadedHalo(self.ds, halo, size = size,
CoM = CoM,
max_dens_point = max_dens_point,
group_total_mass = group_total_mass, max_radius = max_radius,
@@ -1533,9 +1533,9 @@
class TextHaloList(HaloList):
_name = "Text"
- def __init__(self, pf, fname, columns, comment):
+ def __init__(self, ds, fname, columns, comment):
ParallelAnalysisInterface.__init__(self)
- self.pf = pf
+ self.ds = ds
self._groups = []
self._retrieve_halos(fname, columns, comment)
@@ -1562,7 +1562,7 @@
if key not in base_set:
val = float(line[columns[key]])
temp_dict[key] = val
- self._groups.append(TextHalo(self.pf, halo,
+ self._groups.append(TextHalo(self.ds, halo,
CoM = cen, max_radius = r, supp = temp_dict))
halo += 1
@@ -1815,10 +1815,10 @@
class GenericHaloFinder(HaloList, ParallelAnalysisInterface):
- def __init__(self, pf, ds, dm_only=True, padding=0.0):
+ def __init__(self, ds, data_source, dm_only=True, padding=0.0):
ParallelAnalysisInterface.__init__(self)
- self.pf = pf
- self.index = pf.h
+ self.ds = ds
+ self.index = ds.index
- self.center = (np.array(ds.right_edge) + np.array(ds.left_edge)) / 2.0
+ self.center = (np.array(data_source.right_edge) + np.array(data_source.left_edge)) / 2.0
def _parse_halolist(self, threshold_adjustment):
@@ -1900,7 +1900,7 @@
# This only does periodicity. We do NOT want to deal with anything
# else. The only reason we even do periodicity is the
LE, RE = bounds
- dw = self.pf.domain_right_edge - self.pf.domain_left_edge
+ dw = self.ds.domain_right_edge - self.ds.domain_left_edge
for i, ax in enumerate('xyz'):
arr = self._data_source["particle_position_%s" % ax]
arr[arr < LE[i] - self.padding] += dw[i]
@@ -2029,8 +2029,8 @@
Parameters
----------
- pf : `Dataset`
- The parameter file on which halo finding will be conducted.
+ ds : `Dataset`
+ The dataset on which halo finding will be conducted.
threshold : float
The density threshold used when building halos. Default = 160.0.
dm_only : bool
@@ -2084,18 +2084,18 @@
Examples
-------
- >>> pf = load("RedshiftOutput0000")
- >>> halos = parallelHF(pf)
+ >>> ds = load("RedshiftOutput0000")
+ >>> halos = parallelHF(ds)
"""
- def __init__(self, pf, subvolume=None, threshold=160, dm_only=True, \
+ def __init__(self, ds, subvolume=None, threshold=160, dm_only=True, \
resize=True, rearrange=True,\
fancy_padding=True, safety=1.5, premerge=True, sample=0.03, \
total_mass=None, num_particles=None, tree='F'):
if subvolume is not None:
ds_LE = np.array(subvolume.left_edge)
ds_RE = np.array(subvolume.right_edge)
- self._data_source = pf.h.all_data()
- GenericHaloFinder.__init__(self, pf, self._data_source, dm_only,
+ self._data_source = ds.all_data()
+ GenericHaloFinder.__init__(self, ds, self._data_source, dm_only,
padding=0.0)
self.padding = 0.0
self.num_neighbors = 65
@@ -2104,7 +2104,7 @@
self.tree = tree
if self.tree != 'F' and self.tree != 'C':
mylog.error("No kD Tree specified!")
- period = pf.domain_right_edge - pf.domain_left_edge
+ period = ds.domain_right_edge - ds.domain_left_edge
topbounds = np.array([[0., 0., 0.], period])
# Cut up the volume evenly initially, with no padding.
padded, LE, RE, self._data_source = \
@@ -2141,7 +2141,7 @@
try:
l = self._data_source.right_edge - self._data_source.left_edge
except AttributeError:
- l = pf.domain_right_edge - pf.domain_left_edge
+ l = ds.domain_right_edge - ds.domain_left_edge
vol = l[0] * l[1] * l[2]
full_vol = vol
# We will use symmetric padding when a subvolume is being used.
@@ -2218,7 +2218,7 @@
np.zeros(3, dtype='float64'))
# If we're using a subvolume, we now re-divide.
if subvolume is not None:
- self._data_source = pf.region([0.] * 3, ds_LE, ds_RE)
+ self._data_source = ds.region([0.] * 3, ds_LE, ds_RE)
# Cut up the volume.
padded, LE, RE, self._data_source = \
self.partition_index_3d(ds=self._data_source,
@@ -2353,8 +2353,8 @@
Parameters
----------
- pf : `Dataset`
- The parameter file on which halo finding will be conducted.
+ ds : `Dataset`
+ The dataset on which halo finding will be conducted.
subvolume : `yt.data_objects.api.AMRData`, optional
A region over which HOP will be run, which can be used to run HOP
on a subvolume of the full volume. Default = None, which defaults
@@ -2383,17 +2383,17 @@
Examples
--------
- >>> pf = load("RedshiftOutput0000")
- >>> halos = HaloFinder(pf)
+ >>> ds = load("RedshiftOutput0000")
+ >>> halos = HaloFinder(ds)
"""
- def __init__(self, pf, subvolume=None, threshold=160, dm_only=True,
+ def __init__(self, ds, subvolume=None, threshold=160, dm_only=True,
padding=0.02, total_mass=None):
if subvolume is not None:
ds_LE = np.array(subvolume.left_edge)
ds_RE = np.array(subvolume.right_edge)
- self.period = pf.domain_right_edge - pf.domain_left_edge
- self._data_source = pf.h.all_data()
- GenericHaloFinder.__init__(self, pf, self._data_source, dm_only,
+ self.period = ds.domain_right_edge - ds.domain_left_edge
+ self._data_source = ds.all_data()
+ GenericHaloFinder.__init__(self, ds, self._data_source, dm_only,
padding)
# do it once with no padding so the total_mass is correct
# (no duplicated particles), and on the entire volume, even if only
@@ -2415,10 +2415,10 @@
# object representing the entire domain and sum it "lazily" with
# Derived Quantities.
if subvolume is not None:
- self._data_source = pf.region([0.] * 3, ds_LE, ds_RE)
+ self._data_source = ds.region([0.] * 3, ds_LE, ds_RE)
else:
- self._data_source = pf.h.all_data()
- self.padding = padding # * pf["unitary"] # This should be clevererer
+ self._data_source = ds.all_data()
+ self.padding = padding # * ds["unitary"] # This should be clevererer
padded, LE, RE, self._data_source = \
self.partition_index_3d(ds=self._data_source,
padding=self.padding)
@@ -2455,8 +2455,8 @@
Parameters
----------
- pf : `Dataset`
- The parameter file on which halo finding will be conducted.
+ ds : `Dataset`
+ The dataset on which halo finding will be conducted.
subvolume : `yt.data_objects.api.AMRData`, optional
A region over which HOP will be run, which can be used to run HOP
on a subvolume of the full volume. Default = None, which defaults
@@ -2477,22 +2477,22 @@
Examples
--------
- >>> pf = load("RedshiftOutput0000")
- >>> halos = FOFHaloFinder(pf)
+ >>> ds = load("RedshiftOutput0000")
+ >>> halos = FOFHaloFinder(ds)
"""
- def __init__(self, pf, subvolume=None, link=0.2, dm_only=True,
+ def __init__(self, ds, subvolume=None, link=0.2, dm_only=True,
padding=0.02):
if subvolume is not None:
ds_LE = np.array(subvolume.left_edge)
ds_RE = np.array(subvolume.right_edge)
- self.period = pf.domain_right_edge - pf.domain_left_edge
- self.pf = pf
- self.index = pf.h
- self.redshift = pf.current_redshift
- self._data_source = pf.h.all_data()
- GenericHaloFinder.__init__(self, pf, self._data_source, dm_only,
+ self.period = ds.domain_right_edge - ds.domain_left_edge
+ self.ds = ds
+ self.index = ds.index
+ self.redshift = ds.current_redshift
+ self._data_source = ds.all_data()
+ GenericHaloFinder.__init__(self, ds, self._data_source, dm_only,
padding)
- self.padding = 0.0 # * pf["unitary"] # This should be clevererer
+ self.padding = 0.0 # * ds["unitary"] # This should be clevererer
# get the total number of particles across all procs, with no padding
padded, LE, RE, self._data_source = \
self.partition_index_3d(ds=self._data_source,
@@ -2500,7 +2500,7 @@
if link > 0.0:
n_parts = self.comm.mpi_allreduce(self._data_source["particle_position_x"].size, op='sum')
# get the average spacing between particles
- #l = pf.domain_right_edge - pf.domain_left_edge
+ #l = ds.domain_right_edge - ds.domain_left_edge
#vol = l[0] * l[1] * l[2]
# Because we are now allowing for datasets with non 1-periodicity,
# but symmetric, vol is always 1.
@@ -2511,10 +2511,10 @@
linking_length = np.abs(link)
self.padding = padding
if subvolume is not None:
- self._data_source = pf.region([0.] * 3, ds_LE,
+ self._data_source = ds.region([0.] * 3, ds_LE,
ds_RE)
else:
- self._data_source = pf.h.all_data()
+ self._data_source = ds.all_data()
padded, LE, RE, self._data_source = \
self.partition_index_3d(ds=self._data_source,
padding=self.padding)
@@ -2550,12 +2550,12 @@
Examples
--------
- >>> pf = load("data0005")
- >>> halos = LoadHaloes(pf, "HopAnalysis")
+ >>> ds = load("data0005")
+ >>> halos = LoadHaloes(ds, "HopAnalysis")
"""
- def __init__(self, pf, basename):
+ def __init__(self, ds, basename):
self.basename = basename
- LoadedHaloList.__init__(self, pf, self.basename)
+ LoadedHaloList.__init__(self, ds, self.basename)
class LoadTextHaloes(GenericHaloFinder, TextHaloList):
r"""Load a text file of halos.
@@ -2583,15 +2583,15 @@
Examples
--------
- >>> pf = load("data0005")
- >>> halos = LoadTextHaloes(pf, "list.txt",
+ >>> ds = load("data0005")
+ >>> halos = LoadTextHaloes(ds, "list.txt",
{'x':0, 'y':1, 'z':2, 'r':3, 'm':4},
comment = ";")
>>> halos[0].supp['m']
3.28392048e14
"""
- def __init__(self, pf, filename, columns, comment = "#"):
- TextHaloList.__init__(self, pf, filename, columns, comment)
+ def __init__(self, ds, filename, columns, comment = "#"):
+ TextHaloList.__init__(self, ds, filename, columns, comment)
LoadTextHalos = LoadTextHaloes
@@ -2606,10 +2606,10 @@
Examples
--------
- >>> pf = load("data0005")
- >>> halos = LoadRockstarHalos(pf, "other_name.out")
+ >>> ds = load("data0005")
+ >>> halos = LoadRockstarHalos(ds, "other_name.out")
"""
- def __init__(self, pf, filename = None):
+ def __init__(self, ds, filename = None):
if filename is None:
filename = 'rockstar_halos/out_0.list'
- RockstarHaloList.__init__(self, pf, filename)
+ RockstarHaloList.__init__(self, ds, filename)
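For orientation, the halo-finder entry points renamed above are driven from user code roughly as follows. This is a minimal sketch, not part of the changeset: the dataset path and output basename are illustrative, and the import paths assume the yt-3.0 module layout this diff targets.

    from yt.mods import load
    from yt.analysis_modules.halo_finding.api import HaloFinder, LoadHaloes

    ds = load("RedshiftOutput0000")          # formerly: pf = load(...)
    halos = HaloFinder(ds)                   # finders now take the dataset as ``ds``
    halos.write_out("HopAnalysis.out")       # save the catalog to disk
    halos = LoadHaloes(ds, "HopAnalysis")    # reload it later without re-running HOP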
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_finding/rockstar/rockstar.py
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar.py Sun Jun 15 19:50:51 2014 -0700
@@ -146,7 +146,7 @@
the width of the smallest grid element in the simulation from the
last data snapshot (i.e. the one where time has evolved the
longest) in the time series:
- ``pf_last.index.get_smallest_dx().in_units("Mpc/h")``.
+ ``ds_last.index.get_smallest_dx().in_units("Mpc/h")``.
total_particles : int
If supplied, this is a pre-calculated total number of particles present
in the simulation. For example, this is useful when analyzing a series
@@ -184,11 +184,11 @@
def _dm_filter(pfilter, data):
return data["creation_time"] <= 0.0
- def setup_pf(pf):
- pf.add_particle_filter("dark_matter")
+ def setup_ds(ds):
+ ds.add_particle_filter("dark_matter")
es = simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
- es.get_time_series(setup_function=setup_pf, redshift_data=False)
+ es.get_time_series(setup_function=setup_ds, redshift_data=False)
rh = RockstarHaloFinder(es, num_readers=1, num_writers=2,
particle_type="dark_matter")
@@ -221,10 +221,10 @@
self.particle_type = particle_type
self.outbase = outbase
if force_res is None:
- tpf = ts[-1] # Cache a reference
- self.force_res = tpf.index.get_smallest_dx().in_units("Mpc/h")
+ tds = ts[-1] # Cache a reference
+ self.force_res = tds.index.get_smallest_dx().in_units("Mpc/h")
# We have to delete now to wipe the index
- del tpf
+ del tds
else:
self.force_res = force_res
self.total_particles = total_particles
@@ -239,16 +239,16 @@
def _setup_parameters(self, ts):
if self.workgroup.name != "readers": return None
- tpf = ts[0]
+ tds = ts[0]
ptype = self.particle_type
- if ptype not in tpf.particle_types and ptype != 'all':
- has_particle_filter = tpf.add_particle_filter(ptype)
+ if ptype not in tds.particle_types and ptype != 'all':
+ has_particle_filter = tds.add_particle_filter(ptype)
if not has_particle_filter:
raise RuntimeError("Particle type (filter) %s not found." % (ptype))
- dd = tpf.h.all_data()
+ dd = tds.all_data()
# Get DM particle mass.
- all_fields = set(tpf.derived_field_list + tpf.field_list)
+ all_fields = set(tds.derived_field_list + tds.field_list)
has_particle_type = ("particle_type" in all_fields)
particle_mass = self.particle_mass
@@ -266,12 +266,12 @@
tp = dd.quantities.total_quantity((ptype, "particle_ones"))
p['total_particles'] = int(tp)
mylog.warning("Total Particle Count: %0.3e", int(tp))
- p['left_edge'] = tpf.domain_left_edge
- p['right_edge'] = tpf.domain_right_edge
- p['center'] = (tpf.domain_right_edge + tpf.domain_left_edge)/2.0
+ p['left_edge'] = tds.domain_left_edge
+ p['right_edge'] = tds.domain_right_edge
+ p['center'] = (tds.domain_right_edge + tds.domain_left_edge)/2.0
p['particle_mass'] = self.particle_mass = particle_mass
p['particle_mass'].convert_to_units("Msun / h")
- del tpf
+ del tds
return p
def __del__(self):
@@ -354,11 +354,11 @@
os.makedirs(self.outbase)
# Make a record of which dataset corresponds to which set of
# output files because it will be easy to lose this connection.
- fp = open(self.outbase + '/pfs.txt', 'w')
- fp.write("# pfname\tindex\n")
- for i, pf in enumerate(self.ts):
- pfloc = path.join(path.relpath(pf.fullpath), pf.basename)
- line = "%s\t%d\n" % (pfloc, i)
+ fp = open(self.outbase + '/datasets.txt', 'w')
+ fp.write("# dsname\tindex\n")
+ for i, ds in enumerate(self.ts):
+ dsloc = path.join(path.relpath(ds.fullpath), ds.basename)
+ line = "%s\t%d\n" % (dsloc, i)
fp.write(line)
fp.close()
# This barrier makes sure the directory exists before it might be used.
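The dark-matter particle filter shown in the docstring hunk above is registered and used end to end roughly like this. A minimal sketch only: it assumes the enzo_tiny_cosmology sample simulation, and the add_particle_filter registration call follows yt-3.0's particle-filter API rather than anything in the hunk itself.

    from yt.mods import simulation
    from yt.data_objects.particle_filters import add_particle_filter
    from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder

    def _dm_filter(pfilter, data):
        # Enzo tags star particles with creation_time > 0
        return data["creation_time"] <= 0.0

    add_particle_filter("dark_matter", function=_dm_filter,
                        filtered_type="all", requires=["creation_time"])

    def setup_ds(ds):
        ds.add_particle_filter("dark_matter")

    es = simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
    es.get_time_series(setup_function=setup_ds, redshift_data=False)
    rh = RockstarHaloFinder(es, num_readers=1, num_writers=2,
                            particle_type="dark_matter")
    rh.run()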
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_finding/rockstar/rockstar_groupies.pyx
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar_groupies.pyx Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar_groupies.pyx Sun Jun 15 19:50:51 2014 -0700
@@ -171,12 +171,12 @@
cdef class RockstarGroupiesInterface:
- cdef public object pf
+ cdef public object ds
cdef public object fof
# For future use/consistency
- def __cinit__(self,pf):
- self.pf = pf
+ def __cinit__(self,ds):
+ self.ds = ds
def setup_rockstar(self,
particle_mass,
@@ -200,13 +200,13 @@
OUTPUT_FORMAT = "ASCII"
MIN_HALO_OUTPUT_SIZE=min_halo_size
- pf = self.pf
+ ds = self.ds
- h0 = pf.hubble_constant
- Ol = pf.omega_lambda
- Om = pf.omega_matter
+ h0 = ds.hubble_constant
+ Ol = ds.omega_lambda
+ Om = ds.omega_matter
- SCALE_NOW = 1.0/(pf.current_redshift+1.0)
+ SCALE_NOW = 1.0/(ds.current_redshift+1.0)
if not outbase =='None'.decode('UTF-8'):
#output directory. since we can't change the output filenames
@@ -216,7 +216,7 @@
PARTICLE_MASS = particle_mass.in_units('Msun/h')
PERIODIC = periodic
- BOX_SIZE = pf.domain_width.in_units('Mpccm/h')[0]
+ BOX_SIZE = ds.domain_width.in_units('Mpccm/h')[0]
# Set up the configuration options
setup_config()
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx Sun Jun 15 19:50:51 2014 -0700
@@ -167,7 +167,7 @@
pslice = <particleflat[:h.num_p]> (<particleflat *>hp)
parray = np.asarray(pslice)
for cb in rh.callbacks:
- cb(rh.pf, parray)
+ cb(rh.ds, parray)
# This is where we call our functions
cdef void rh_read_particles(char *filename, particle **p, np.int64_t *num_p):
@@ -177,21 +177,21 @@
cdef np.ndarray[np.float64_t, ndim=1] arr
cdef unsigned long long pi,fi,i
cdef np.int64_t local_parts = 0
- pf = rh.pf = rh.tsl.next()
+ ds = rh.ds = rh.tsl.next()
block = int(str(filename).rsplit(".")[-1])
n = rh.block_ratio
- SCALE_NOW = 1.0/(pf.current_redshift+1.0)
+ SCALE_NOW = 1.0/(ds.current_redshift+1.0)
# Now we want to grab data from only a subset of the grids for each reader.
- all_fields = set(pf.derived_field_list + pf.field_list)
+ all_fields = set(ds.derived_field_list + ds.field_list)
# First we need to find out how many this reader is going to read in
# if the number of readers > 1.
- dd = pf.h.all_data()
+ dd = ds.all_data()
# Add particle type filter if not defined
- if rh.particle_type not in pf.particle_types and rh.particle_type != 'all':
- pf.add_particle_filter(rh.particle_type)
+ if rh.particle_type not in ds.particle_types and rh.particle_type != 'all':
+ ds.add_particle_filter(rh.particle_type)
if NUM_BLOCKS > 1:
local_parts = 0
@@ -203,11 +203,11 @@
p[0] = <particle *> malloc(sizeof(particle) * local_parts)
- conv[0] = conv[1] = conv[2] = pf.length_unit.in_units("Mpccm/h")
- conv[3] = conv[4] = conv[5] = pf.velocity_unit.in_units("km/s")
- left_edge[0] = pf.domain_left_edge[0]
- left_edge[1] = pf.domain_left_edge[1]
- left_edge[2] = pf.domain_left_edge[2]
+ conv[0] = conv[1] = conv[2] = ds.length_unit.in_units("Mpccm/h")
+ conv[3] = conv[4] = conv[5] = ds.velocity_unit.in_units("km/s")
+ left_edge[0] = ds.domain_left_edge[0]
+ left_edge[1] = ds.domain_left_edge[1]
+ left_edge[2] = ds.domain_left_edge[2]
left_edge[3] = left_edge[4] = left_edge[5] = 0.0
pi = 0
fields = [ (rh.particle_type, f) for f in
@@ -231,15 +231,15 @@
fi += 1
pi += npart
num_p[0] = local_parts
- del pf._instantiated_hierarchy
- del pf
+ del ds._instantiated_hierarchy
+ del ds
cdef class RockstarInterface:
cdef public object data_source
cdef public object ts
cdef public object tsl
- cdef public object pf
+ cdef public object ds
cdef int rank
cdef int size
cdef public int block_ratio
@@ -294,11 +294,11 @@
self.block_ratio = block_ratio
self.particle_type = particle_type
- tpf = self.ts[0]
- h0 = tpf.hubble_constant
- Ol = tpf.omega_lambda
- Om = tpf.omega_matter
- SCALE_NOW = 1.0/(tpf.current_redshift+1.0)
+ tds = self.ts[0]
+ h0 = tds.hubble_constant
+ Ol = tds.omega_lambda
+ Om = tds.omega_matter
+ SCALE_NOW = 1.0/(tds.current_redshift+1.0)
if callbacks is None: callbacks = []
self.callbacks = callbacks
if not outbase =='None'.decode('UTF-8'):
@@ -308,8 +308,8 @@
PARTICLE_MASS = particle_mass
PERIODIC = periodic
- BOX_SIZE = (tpf.domain_right_edge[0] -
- tpf.domain_left_edge[0]).in_units("Mpccm/h")
+ BOX_SIZE = (tds.domain_right_edge[0] -
+ tds.domain_left_edge[0]).in_units("Mpccm/h")
setup_config()
rh = self
cdef LPG func = rh_read_particles
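For reference, the unit-aware attributes the Rockstar interface now reads from the renamed dataset handle can be inspected directly from user code. A minimal sketch; the dataset path is illustrative.

    from yt.mods import load

    ds = load("RedshiftOutput0000")
    box_size = ds.domain_width.in_units("Mpccm/h")[0]   # BOX_SIZE
    pos_conv = ds.length_unit.in_units("Mpccm/h")       # position conversion factor
    vel_conv = ds.velocity_unit.in_units("km/s")        # velocity conversion factor
    scale_now = 1.0 / (ds.current_redshift + 1.0)       # SCALE_NOW
    h0, om, ol = ds.hubble_constant, ds.omega_matter, ds.omega_lambda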
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_mass_function/halo_mass_function.py
--- a/yt/analysis_modules/halo_mass_function/halo_mass_function.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_mass_function/halo_mass_function.py Sun Jun 15 19:50:51 2014 -0700
@@ -70,12 +70,12 @@
:param mass_column (int): The column of halo_file that contains the
masses of the haloes. Default=4.
"""
- def __init__(self, pf, halo_file=None, omega_matter0=None, omega_lambda0=None,
+ def __init__(self, ds, halo_file=None, omega_matter0=None, omega_lambda0=None,
omega_baryon0=0.05, hubble0=None, sigma8input=0.86, primordial_index=1.0,
this_redshift=None, log_mass_min=None, log_mass_max=None, num_sigma_bins=360,
fitting_function=4, mass_column=5):
ParallelAnalysisInterface.__init__(self)
- self.pf = pf
+ self.ds = ds
self.halo_file = halo_file
self.omega_matter0 = omega_matter0
self.omega_lambda0 = omega_lambda0
@@ -97,10 +97,10 @@
else:
# Make the fit using the same cosmological parameters as the dataset.
self.mode = 'haloes'
- self.omega_matter0 = self.pf.omega_matter
- self.omega_lambda0 = self.pf.omega_lambda
- self.hubble0 = self.pf.hubble_constant
- self.this_redshift = self.pf.current_redshift
+ self.omega_matter0 = self.ds.omega_matter
+ self.omega_lambda0 = self.ds.omega_lambda
+ self.hubble0 = self.ds.hubble_constant
+ self.this_redshift = self.ds.current_redshift
self.read_haloes()
if self.log_mass_min == None:
self.log_mass_min = math.log10(min(self.haloes))
@@ -207,7 +207,7 @@
dis[self.num_sigma_bins-i-3] += dis[self.num_sigma_bins-i-2]
if i == (self.num_sigma_bins - 3): break
- self.dis = dis / (self.pf.domain_width * self.pf.units["mpccm"]).prod()
+ self.dis = dis / (self.ds.domain_width * self.ds.units["mpccm"]).prod()
def sigmaM(self):
"""
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py
--- a/yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py Sun Jun 15 19:50:51 2014 -0700
@@ -137,8 +137,8 @@
Examples
--------
- pf = load("DD0000/DD0000")
- halo_list = FOFHaloFinder(pf)
+ ds = load("DD0000/DD0000")
+ halo_list = FOFHaloFinder(ds)
halo_list.write_out("FOF/groups_00000.txt")
halos_COM = parse_halo_catalog_internal()
"""
@@ -743,9 +743,9 @@
>>> # generates mass history plots for the 20 most massive halos at t_fin.
>>> ts = DatasetSeries.from_filenames("DD????/DD????")
>>> # long step--must run FOF on each DD, but saves outputs for later use
- >>> for pf in ts:
- ... halo_list = FOFHaloFinder(pf)
- ... i = int(pf.basename[2:])
+ >>> for ds in ts:
+ ... halo_list = FOFHaloFinder(ds)
+ ... i = int(ds.basename[2:])
... halo_list.write_out("FOF/groups_%05i.txt" % i)
... halo_list.write_particle_lists("FOF/particles_%05i" % i)
...
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_merger_tree/merger_tree.py
--- a/yt/analysis_modules/halo_merger_tree/merger_tree.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_merger_tree/merger_tree.py Sun Jun 15 19:50:51 2014 -0700
@@ -233,9 +233,9 @@
def _run_halo_finder_add_to_db(self):
for cycle, file in enumerate(self.restart_files):
gc.collect()
- pf = load(file)
- self.zs[file] = pf.current_redshift
- self.period = pf.domain_right_edge - pf.domain_left_edge
+ ds = load(file)
+ self.zs[file] = ds.current_redshift
+ self.period = ds.domain_right_edge - ds.domain_left_edge
# If the halos are already found, skip this data step, unless
# refresh is True.
dir = os.path.dirname(file)
@@ -247,10 +247,10 @@
else:
# Run the halo finder.
if self.halo_finder_function == FOFHaloFinder:
- halos = self.halo_finder_function(pf,
+ halos = self.halo_finder_function(ds,
link=self.FOF_link_length, dm_only=self.dm_only)
else:
- halos = self.halo_finder_function(pf,
+ halos = self.halo_finder_function(ds,
threshold=self.halo_finder_threshold, dm_only=self.dm_only)
halos.write_out(os.path.join(dir, 'MergerHalos.out'))
halos.write_particle_lists(os.path.join(dir, 'MergerHalos'))
@@ -264,7 +264,7 @@
# checking the first halo.
continue_check = False
if self.comm.rank == 0:
- currt = pf.unique_identifier
+ currt = ds.unique_identifier
line = "SELECT GlobalHaloID from Halos where SnapHaloID=0\
and SnapCurrentTimeIdentifier=%d;" % currt
self.cursor.execute(line)
@@ -274,7 +274,7 @@
continue_check = self.comm.mpi_bcast(continue_check)
if continue_check:
continue
- red = pf.current_redshift
+ red = ds.current_redshift
# Read the halos off the disk using the Halo Profiler tools.
hp = HaloProfiler(file, halo_list_file='MergerHalos.out',
halo_list_format={'id':0, 'mass':1, 'numpart':2, 'center':[7, 8, 9], 'velocity':[10, 11, 12], 'r_max':13})
@@ -290,7 +290,7 @@
values = (None, currt, red, ID, halo['mass'], numpart,
halo['center'][0], halo['center'][1], halo['center'][2],
halo['velocity'][0], halo['velocity'][1], halo['velocity'][2],
- halo['r_max'] / pf['mpc'],
+ halo['r_max'] / ds['mpc'],
-1,0.,-1,0.,-1,0.,-1,0.,-1,0.)
# 23 question marks for 23 data columns.
line = ''
@@ -322,15 +322,15 @@
# list of children.
# First, read in the locations of the child halos.
- child_pf = load(childfile)
- child_t = child_pf.unique_identifier
+ child_ds = load(childfile)
+ child_t = child_ds.unique_identifier
if self.comm.rank == 0:
line = "SELECT SnapHaloID, CenMassX, CenMassY, CenMassZ FROM \
Halos WHERE SnapCurrentTimeIdentifier = %d" % child_t
self.cursor.execute(line)
mylog.info("Finding likely parents for z=%1.5f child halos." % \
- child_pf.current_redshift)
+ child_ds.current_redshift)
# Build the kdtree for the children by looping over the fetched rows.
# Normalize the points for use only within the kdtree.
@@ -343,8 +343,8 @@
kdtree = cKDTree(child_points, leafsize = 10)
# Find the parent points from the database.
- parent_pf = load(parentfile)
- parent_t = parent_pf.unique_identifier
+ parent_ds = load(parentfile)
+ parent_t = parent_ds.unique_identifier
if self.comm.rank == 0:
line = "SELECT SnapHaloID, CenMassX, CenMassY, CenMassZ FROM \
Halos WHERE SnapCurrentTimeIdentifier = %d" % parent_t
@@ -399,8 +399,8 @@
self.h5files = defaultdict(dict)
if not hasattr(self, 'names'):
self.names = defaultdict(set)
- file_pf = load(filename)
- currt = file_pf.unique_identifier
+ file_ds = load(filename)
+ currt = file_ds.unique_identifier
dir = os.path.dirname(filename)
h5txt = os.path.join(dir, 'MergerHalos.txt')
lines = file(h5txt)
@@ -417,13 +417,13 @@
# Given a parent and child snapshot, and a list of child candidates,
# compute what fraction of the parent halo goes to each of the children.
- parent_pf = load(parentfile)
- child_pf = load(childfile)
- parent_currt = parent_pf.unique_identifier
- child_currt = child_pf.unique_identifier
+ parent_ds = load(parentfile)
+ child_ds = load(childfile)
+ parent_currt = parent_ds.unique_identifier
+ child_currt = child_ds.unique_identifier
mylog.info("Computing fractional contribututions of particles to z=%1.5f halos." % \
- child_pf.current_redshift)
+ child_ds.current_redshift)
if last == None:
# First we're going to read in the particles, haloIDs and masses from
@@ -1039,9 +1039,9 @@
while result:
res = list(result)
pID = result[0]
pfracs = res[1:6]
cIDs = res[6:11]
for pair in zip(cIDs, pfracs):
if pair[1] <= self.link_min or pair[0] != halo:
continue
else:
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_profiler/multi_halo_profiler.py
--- a/yt/analysis_modules/halo_profiler/multi_halo_profiler.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_profiler/multi_halo_profiler.py Sun Jun 15 19:50:51 2014 -0700
@@ -57,14 +57,14 @@
r"""Initialize a Halo Profiler object.
In order to run the halo profiler, the Halo Profiler object must be
- instantiated. At the minimum, the path to a parameter file
+ instantiated. At the minimum, the path to a dataset
must be provided as the first term.
Parameters
----------
dataset : string, required
- The path to the parameter file for the dataset to be analyzed.
+ The path to the dataset to be analyzed.
output_dir : string, optional
If specified, all output will be put into this path instead of
in the dataset directories. Default: None.
@@ -272,25 +272,25 @@
# Create dataset object.
if isinstance(self.dataset, Dataset):
- self.pf = self.dataset
+ self.ds = self.dataset
else:
- self.pf = load(self.dataset)
- self.pf.h
+ self.ds = load(self.dataset)
+ self.ds.index
# Create output directories.
self.output_dir = output_dir
if output_dir is None:
- self.output_dir = self.pf.fullpath
+ self.output_dir = self.ds.fullpath
else:
self.__check_directory(output_dir)
- self.output_dir = os.path.join(output_dir, self.pf.directory)
+ self.output_dir = os.path.join(output_dir, self.ds.directory)
self.__check_directory(self.output_dir)
self.profile_output_dir = os.path.join(self.output_dir, profile_output_dir)
self.projection_output_dir = os.path.join(self.output_dir, projection_output_dir)
# Figure out what max radius to use for profiling.
if halo_radius is not None:
- self.halo_radius = halo_radius / self.pf[radius_units]
+ self.halo_radius = halo_radius / self.ds[radius_units]
elif self.halos is 'single' or not 'r_max' in self.halo_list_format:
self.halo_radius = 0.1
else:
@@ -298,10 +298,10 @@
# Get halo(s).
if self.halos is 'single':
- v, center = self.pf.h.find_max('Density')
+ v, center = self.ds.find_max('density')
singleHalo = {}
singleHalo['center'] = center
- singleHalo['r_max'] = self.halo_radius * self.pf.units['mpc']
+ singleHalo['r_max'] = self.halo_radius * self.ds.units['mpc']
singleHalo['id'] = 0
self.all_halos.append(singleHalo)
elif self.halos is 'multiple':
@@ -465,7 +465,7 @@
if not profile_format in extension_map:
mylog.error("Invalid profile_format: %s. Valid options are %s." %
(profile_format, ", ".join(extension_map.keys())))
- raise YTException(pf=self.pf)
+ raise YTException(ds=self.ds)
if len(self.all_halos) == 0:
mylog.error("Halo list is empty, returning.")
@@ -576,7 +576,7 @@
newProfile = profile is None
if newProfile:
- r_min = 2 * self.pf.index.get_smallest_dx() * self.pf['mpc']
+ r_min = 2 * self.ds.index.get_smallest_dx() * self.ds['mpc']
if (halo['r_max'] / r_min < PROFILE_RADIUS_THRESHOLD):
mylog.debug("Skipping halo with r_max / r_min = %f." % (halo['r_max']/r_min))
return None
@@ -626,7 +626,7 @@
and calculates bulk velocities.
"""
- sphere = self.pf.sphere(halo['center'], halo['r_max']/self.pf.units['mpc'])
+ sphere = self.ds.sphere(halo['center'], halo['r_max']/self.ds.units['mpc'])
new_sphere = False
if self.recenter:
@@ -637,22 +637,22 @@
else:
# user supplied function
new_x, new_y, new_z = self.recenter(sphere)
- if new_x < self.pf.domain_left_edge[0] or \
- new_y < self.pf.domain_left_edge[1] or \
- new_z < self.pf.domain_left_edge[2]:
+ if new_x < self.ds.domain_left_edge[0] or \
+ new_y < self.ds.domain_left_edge[1] or \
+ new_z < self.ds.domain_left_edge[2]:
mylog.info("Recentering rejected, skipping halo %d" % \
halo['id'])
return None
halo['center'] = [new_x, new_y, new_z]
- d = self.pf['kpc'] * periodic_dist(old, halo['center'],
- self.pf.domain_right_edge - self.pf.domain_left_edge)
+ d = self.ds['kpc'] * periodic_dist(old, halo['center'],
+ self.ds.domain_right_edge - self.ds.domain_left_edge)
mylog.info("Recentered halo %d %1.3e kpc away." % (halo['id'], d))
# Expand the halo to account for recentering.
halo['r_max'] += d / 1000. # d is in kpc -> want mpc
new_sphere = True
if new_sphere:
- sphere = self.pf.sphere(halo['center'], halo['r_max']/self.pf.units['mpc'])
+ sphere = self.ds.sphere(halo['center'], halo['r_max']/self.ds.units['mpc'])
if self._need_bulk_velocity:
# Set bulk velocity to zero out radial velocity profiles.
@@ -670,7 +670,7 @@
mylog.info('Setting bulk velocity with value at max %s.' % self.velocity_center[1])
max_val, maxi, mx, my, mz, mg = sphere.quantities['MaxLocation'](self.velocity_center[1])
- max_grid = self.pf.index.grids[mg]
+ max_grid = self.ds.index.grids[mg]
max_cell = np.unravel_index(maxi, max_grid.ActiveDimensions)
sphere.set_field_parameter('bulk_velocity', [max_grid['x-velocity'][max_cell],
max_grid['y-velocity'][max_cell],
@@ -747,20 +747,20 @@
# Set resolution for fixed resolution output.
if self.project_at_level == 'max':
- proj_level = self.pf.h.max_level
+ proj_level = self.ds.index.max_level
else:
proj_level = int(self.project_at_level)
- proj_dx = self.pf.units[self.projection_width_units] / \
- self.pf.parameters['TopGridDimensions'][0] / \
- (self.pf.parameters['RefineBy']**proj_level)
+ proj_dx = self.ds.units[self.projection_width_units] / \
+ self.ds.parameters['TopGridDimensions'][0] / \
+ (self.ds.parameters['RefineBy']**proj_level)
projectionResolution = int(self.projection_width / proj_dx)
# Create output directory.
self.__check_directory(self.projection_output_dir)
- center = [0.5 * (self.pf.parameters['DomainLeftEdge'][w] +
- self.pf.parameters['DomainRightEdge'][w])
- for w in range(self.pf.parameters['TopGridRank'])]
+ center = [0.5 * (self.ds.parameters['DomainLeftEdge'][w] +
+ self.ds.parameters['DomainRightEdge'][w])
+ for w in range(self.ds.parameters['TopGridRank'])]
for halo in parallel_objects(halo_projection_list, njobs=njobs, dynamic=dynamic):
if halo is None:
@@ -768,10 +768,10 @@
# Check if region will overlap domain edge.
# Using non-periodic regions is faster than using periodic ones.
leftEdge = [(halo['center'][w] -
- 0.5 * self.projection_width/self.pf.units[self.projection_width_units])
+ 0.5 * self.projection_width/self.ds.units[self.projection_width_units])
for w in range(len(halo['center']))]
rightEdge = [(halo['center'][w] +
- 0.5 * self.projection_width/self.pf.units[self.projection_width_units])
+ 0.5 * self.projection_width/self.ds.units[self.projection_width_units])
for w in range(len(halo['center']))]
mylog.info("Projecting halo %04d in region: [%f, %f, %f] to [%f, %f, %f]." %
@@ -780,15 +780,15 @@
need_per = False
for w in range(len(halo['center'])):
- if ((leftEdge[w] < self.pf.parameters['DomainLeftEdge'][w]) or
- (rightEdge[w] > self.pf.parameters['DomainRightEdge'][w])):
+ if ((leftEdge[w] < self.ds.parameters['DomainLeftEdge'][w]) or
+ (rightEdge[w] > self.ds.parameters['DomainRightEdge'][w])):
need_per = True
break
# We use the same type of region regardless. The selection will be
# correct, but we need the need_per variable for projection
# shifting.
- region = self.pf.region(halo['center'], leftEdge, rightEdge)
+ region = self.ds.region(halo['center'], leftEdge, rightEdge)
# Make projections.
if not isinstance(axes, types.ListType): axes = list([axes])
@@ -801,15 +801,15 @@
y_axis = coords[1]
for hp in self.projection_fields:
- projections.append(self.pf.proj(hp['field'], w,
+ projections.append(self.ds.proj(hp['field'], w,
weight_field=hp['weight_field'],
data_source=region,
center=halo['center']))
# Set x and y limits, shift image if it overlaps domain boundary.
if need_per:
- pw = self.projection_width/self.pf.units[self.projection_width_units]
- _shift_projections(self.pf, projections, halo['center'], center, w)
+ pw = self.projection_width/self.ds.units[self.projection_width_units]
+ _shift_projections(self.ds, projections, halo['center'], center, w)
# Projection has now been shifted to center of box.
proj_left = [center[x_axis]-0.5*pw, center[y_axis]-0.5*pw]
proj_right = [center[x_axis]+0.5*pw, center[y_axis]+0.5*pw]
@@ -933,9 +933,9 @@
if 'ActualOverdensity' in profile.keys():
return
- rhocritnow = rho_crit_g_cm3_h2 * self.pf.hubble_constant**2 # g cm^-3
- rho_crit = rhocritnow * ((1.0 + self.pf.current_redshift)**3.0)
- if not self.use_critical_density: rho_crit *= self.pf.omega_matter
+ rhocritnow = rho_crit_g_cm3_h2 * self.ds.hubble_constant**2 # g cm^-3
+ rho_crit = rhocritnow * ((1.0 + self.ds.current_redshift)**3.0)
+ if not self.use_critical_density: rho_crit *= self.ds.omega_matter
profile['ActualOverdensity'] = (mass_sun_cgs * profile['TotalMassMsun']) / \
profile['CellVolume'] / rho_crit
@@ -1007,11 +1007,11 @@
halo[field] = __get_num(onLine[self.halo_list_format[field]])
if getID: halo['id'] = len(haloList)
if self.halo_radius is not None:
- halo['r_max'] = self.halo_radius * self.pf.units['mpc']
+ halo['r_max'] = self.halo_radius * self.ds.units['mpc']
elif has_rmax:
- halo['r_max'] *= self.pf.units['mpc']
+ halo['r_max'] *= self.ds.units['mpc']
elif has_r200kpc:
# If P-Groupfinder used, r_200 [kpc] is calculated.
# set r_max as 50% past r_200.
halo['r_max'] = 1.5 * halo['r200kpc'] / 1000.
else:
@@ -1052,7 +1052,7 @@
fields.pop(0)
profile = {}
- profile_obj = FakeProfile(self.pf)
+ profile_obj = FakeProfile(self.ds)
for field in fields:
profile[field] = []
@@ -1112,7 +1112,7 @@
profile[field] = my_group[field][:]
in_file.close()
- profile_obj = FakeProfile(self.pf)
+ profile_obj = FakeProfile(self.ds)
profile_obj._data = profile
return profile_obj
@@ -1120,12 +1120,12 @@
def _run_hop(self, hop_file):
"Run hop to get halos."
- hop_results = self.halo_finder_function(self.pf, *self.halo_finder_args,
+ hop_results = self.halo_finder_function(self.ds, *self.halo_finder_args,
**self.halo_finder_kwargs)
hop_results.write_out(hop_file)
del hop_results
- self.pf.h.clear_all_data()
+ self.ds.index.clear_all_data()
@parallel_root_only
def _write_filtered_halo_list(self, filename, format="%s"):
@@ -1230,13 +1230,13 @@
else:
os.makedirs(my_output_dir)
-def _shift_projections(pf, projections, oldCenter, newCenter, axis):
+def _shift_projections(ds, projections, oldCenter, newCenter, axis):
"""
Shift projection data around.
This is necessary when projecting a periodic region.
"""
offset = [newCenter[q]-oldCenter[q] for q in range(len(oldCenter))]
- width = [pf.parameters['DomainRightEdge'][q]-pf.parameters['DomainLeftEdge'][q] \
+ width = [ds.parameters['DomainRightEdge'][q]-ds.parameters['DomainLeftEdge'][q] \
for q in range(len(oldCenter))]
del offset[axis]
@@ -1326,9 +1326,9 @@
"""
This is used to mimic a profile object when reading profile data from disk.
"""
- def __init__(self, pf):
+ def __init__(self, ds):
ParallelAnalysisInterface.__init__(self)
- self.pf = pf
+ self.ds = ds
self._data = {}
def __getitem__(self, key):
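As a usage reminder for the renamed attributes above, the HaloProfiler is typically driven like this. A minimal sketch: the dataset path is illustrative, and the add_profile/make_profiles calls follow the HaloProfiler documentation of this era, so treat their exact signatures as assumptions rather than part of the hunks above.

    from yt.analysis_modules.halo_profiler.api import HaloProfiler

    hp = HaloProfiler("enzo_tiny_cosmology/DD0046/DD0046")   # illustrative path
    hp.add_profile("CellVolume", weight_field=None, accumulation=True)
    hp.add_profile("TotalMassMsun", weight_field=None, accumulation=True)
    hp.add_profile("Density", weight_field="CellMassMsun", accumulation=False)
    hp.make_profiles(filename="VirialQuantities.h5")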
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/halo_profiler/standard_analysis.py
--- a/yt/analysis_modules/halo_profiler/standard_analysis.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/halo_profiler/standard_analysis.py Sun Jun 15 19:50:51 2014 -0700
@@ -19,18 +19,18 @@
from yt.funcs import *
class StandardRadialAnalysis(object):
- def __init__(self, pf, center, radius, n_bins = 128, inner_radius = None):
+ def __init__(self, ds, center, radius, n_bins = 128, inner_radius = None):
raise NotImplementedError # see TODO
- self.pf = pf
+ self.ds = ds
# We actually don't want to replicate the handling of setting the
# center here, so we will pass it to the sphere creator.
# Note also that the sphere can handle (val, unit) for radius, so we
# will grab that from the sphere as well
- self.obj = pf.sphere(center, radius)
+ self.obj = ds.sphere(center, radius)
if inner_radius is None:
- inner_radius = pf.index.get_smallest_dx() * pf['cm']
+ inner_radius = ds.index.get_smallest_dx() * ds['cm']
self.inner_radius = inner_radius
- self.outer_radius = self.obj.radius * pf['cm']
+ self.outer_radius = self.obj.radius * ds['cm']
self.n_bins = n_bins
def setup_field_parameters(self):
@@ -51,7 +51,7 @@
else:
field, weight = fspec, "CellMassMsun"
by_weights[weight].append(field)
- known_fields = set(self.pf.field_list + self.pf.derived_field_list)
+ known_fields = set(self.ds.field_list + self.ds.derived_field_list)
for weight, fields in by_weights.items():
fields = set(fields)
fields.intersection_update(known_fields)
@@ -60,7 +60,7 @@
def plot_everything(self, dirname = None):
if not dirname:
- dirname = "%s_profile_plots/" % (self.pf)
+ dirname = "%s_profile_plots/" % (self.ds)
if not os.path.isdir(dirname):
os.makedirs(dirname)
import matplotlib; matplotlib.use("Agg")
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/level_sets/clump_handling.py
--- a/yt/analysis_modules/level_sets/clump_handling.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/level_sets/clump_handling.py Sun Jun 15 19:50:51 2014 -0700
@@ -285,7 +285,7 @@
master.pass_down(pass_command)
master.pass_down("self.com = self.data.quantities['CenterOfMass']()")
- quantity = "((self.com[0]-self.masterCOM[0])**2 + (self.com[1]-self.masterCOM[1])**2 + (self.com[2]-self.masterCOM[2])**2)**(0.5)*self.data.pf.units['%s']" % units
+ quantity = "((self.com[0]-self.masterCOM[0])**2 + (self.com[1]-self.masterCOM[1])**2 + (self.com[2]-self.masterCOM[2])**2)**(0.5)*self.data.ds.units['%s']" % units
format = "%s%s%s" % ("'Distance from center: %.6e ",units,"' % value")
master.add_info_item(quantity,format)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/level_sets/contour_finder.py
--- a/yt/analysis_modules/level_sets/contour_finder.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/level_sets/contour_finder.py Sun Jun 15 19:50:51 2014 -0700
@@ -32,7 +32,7 @@
contours = {}
empty_mask = np.ones((1,1,1), dtype="uint8")
node_ids = []
- DLE = data_source.pf.domain_left_edge
+ DLE = data_source.ds.domain_left_edge
for (g, node, (sl, dims, gi)) in data_source.tiles.slice_traverse():
node.node_ind = len(node_ids)
nid = node.node_id
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/particle_trajectories/particle_trajectories.py
--- a/yt/analysis_modules/particle_trajectories/particle_trajectories.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/particle_trajectories/particle_trajectories.py Sun Jun 15 19:50:51 2014 -0700
@@ -25,7 +25,7 @@
class ParticleTrajectories(object):
r"""A collection of particle trajectories in time over a series of
- parameter files.
+ datasets.
The ParticleTrajectories object contains a collection of
particle trajectories for a specified set of particle indices.
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/photon_simulator/photon_models.py
--- a/yt/analysis_modules/photon_simulator/photon_models.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/photon_simulator/photon_models.py Sun Jun 15 19:50:51 2014 -0700
@@ -67,7 +67,7 @@
def __call__(self, data_source, parameters):
- pf = data_source.pf
+ ds = data_source.ds
exp_time = parameters["FiducialExposureTime"]
area = parameters["FiducialArea"]
@@ -75,7 +75,7 @@
D_A = parameters["FiducialAngularDiameterDistance"].in_cgs()
dist_fac = 1.0/(4.*np.pi*D_A.value*D_A.value*(1.+redshift)**3)
- vol_scale = 1.0/np.prod(pf.domain_width.in_cgs().to_ndarray())
+ vol_scale = 1.0/np.prod(ds.domain_width.in_cgs().to_ndarray())
num_cells = data_source["temperature"].shape[0]
start_c = comm.rank*num_cells/comm.size
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/photon_simulator/photon_simulator.py
--- a/yt/analysis_modules/photon_simulator/photon_simulator.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/photon_simulator/photon_simulator.py Sun Jun 15 19:50:51 2014 -0700
@@ -172,7 +172,7 @@
instead of it being determined from the *redshift* and given *cosmology*.
cosmology : `yt.utilities.cosmology.Cosmology`, optional
Cosmological information. If not supplied, we try to get
- the cosmology from the parameter file. Otherwise, \LambdaCDM with
+ the cosmology from the dataset. Otherwise, \LambdaCDM with
the default yt parameters is assumed.
Examples
@@ -184,7 +184,7 @@
>>> redshift = 0.05
>>> area = 6000.0
>>> time = 2.0e5
- >>> sp = pf.sphere("c", (500., "kpc"))
+ >>> sp = ds.sphere("c", (500., "kpc"))
>>> my_photons = PhotonList.from_user_model(sp, redshift, area,
... time, thermal_model)
@@ -214,7 +214,7 @@
>>> from scipy.stats import powerlaw
>>> def line_func(source, parameters):
...
- ... pf = source.pf
+ ... ds = source.ds
...
... num_photons = parameters["num_photons"]
... E0 = parameters["line_energy"] # Energies are in keV
@@ -228,7 +228,7 @@
... photons["vx"] = np.zeros((1))
... photons["vy"] = np.zeros((1))
... photons["vz"] = 100.*np.ones((1))
- ... photons["dx"] = source["dx"][0]*pf.units["kpc"]*np.ones((1))
+ ... photons["dx"] = source["dx"][0]*ds.units["kpc"]*np.ones((1))
... photons["NumberOfPhotons"] = num_photons*np.ones((1))
... photons["Energy"] = np.array(energies)
>>>
@@ -239,21 +239,21 @@
... "line_sigma" : 0.1}
>>> ddims = (128,128,128)
>>> random_data = {"Density":np.random.random(ddims)}
- >>> pf = load_uniform_grid(random_data, ddims)
- >>> dd = pf.h.all_data
+ >>> ds = load_uniform_grid(random_data, ddims)
+ >>> dd = ds.all_data
>>> my_photons = PhotonList.from_user_model(dd, redshift, area,
... time, line_func)
"""
- pf = data_source.pf
+ ds = data_source.ds
if parameters is None:
parameters = {}
if cosmology is None:
- hubble = getattr(pf, "hubble_constant", None)
- omega_m = getattr(pf, "omega_matter", None)
- omega_l = getattr(pf, "omega_lambda", None)
+ hubble = getattr(ds, "hubble_constant", None)
+ omega_m = getattr(ds, "omega_matter", None)
+ omega_l = getattr(ds, "omega_lambda", None)
if hubble == 0: hubble = None
if hubble is not None and \
omega_m is not None and \
@@ -274,14 +274,14 @@
redshift = 0.0
if center == "c":
- parameters["center"] = pf.domain_center
+ parameters["center"] = ds.domain_center
elif center == "max":
- parameters["center"] = pf.h.find_max("Density")[-1]
+ parameters["center"] = ds.find_max("density")[-1]
elif iterable(center):
if isinstance(center, YTArray):
parameters["center"] = center.in_units("code_length")
else:
- parameters["center"] = pf.arr(center, "code_length")
+ parameters["center"] = ds.arr(center, "code_length")
elif center is None:
parameters["center"] = data_source.get_field_parameter("center")
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/photon_simulator/tests/test_cluster.py
--- a/yt/analysis_modules/photon_simulator/tests/test_cluster.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/photon_simulator/tests/test_cluster.py Sun Jun 15 19:50:51 2014 -0700
@@ -13,7 +13,7 @@
from yt.testing import *
from yt.config import ytcfg
from yt.analysis_modules.photon_simulator.api import *
-from yt.utilities.answer_testing.framework import requires_pf, \
+from yt.utilities.answer_testing.framework import requires_ds, \
GenericArrayTest, data_dir_load
import numpy as np
@@ -29,7 +29,7 @@
ARF = test_dir+"/xray_data/chandra_ACIS-S3_onaxis_arf.fits"
RMF = test_dir+"/xray_data/chandra_ACIS-S3_onaxis_rmf.fits"
-@requires_pf(ETC)
+@requires_ds(ETC)
@requires_file(APEC)
@requires_file(TBABS)
@requires_file(ARF)
@@ -38,7 +38,7 @@
np.random.seed(seed=0x4d3d3d3)
- pf = data_dir_load(ETC)
+ ds = data_dir_load(ETC)
A = 3000.
exp_time = 1.0e5
redshift = 0.1
@@ -46,7 +46,7 @@
apec_model = TableApecModel(APEC, 0.1, 20.0, 2000)
tbabs_model = TableAbsorbModel(TBABS, 0.1)
- sphere = pf.sphere("max", (0.5, "mpc"))
+ sphere = ds.sphere("max", (0.5, "mpc"))
thermal_model = ThermalPhotonModel(apec_model, Zmet=0.3)
photons = PhotonList.from_scratch(sphere, redshift, A, exp_time,
@@ -59,7 +59,7 @@
def photons_test(): return photons.photons
def events_test(): return events.events
- for test in [GenericArrayTest(pf, photons_test),
- GenericArrayTest(pf, events_test)]:
+ for test in [GenericArrayTest(ds, photons_test),
+ GenericArrayTest(ds, events_test)]:
test_etc.__name__ = test.description
yield test
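Outside the answer-testing harness, the same pipeline is assembled directly from a dataset handle. A minimal sketch mirroring the test above; the dataset path and the AtomDB table location are illustrative.

    from yt.mods import load
    from yt.analysis_modules.photon_simulator.api import \
        TableApecModel, ThermalPhotonModel, PhotonList

    ds = load("enzo_tiny_cosmology/DD0046/DD0046")        # hypothetical snapshot
    sphere = ds.sphere("max", (0.5, "mpc"))               # was pf.sphere(...)
    apec_model = TableApecModel("atomdb_v2.0.2", 0.1, 20.0, 2000)
    thermal_model = ThermalPhotonModel(apec_model, Zmet=0.3)
    photons = PhotonList.from_scratch(sphere, 0.1, 3000., 1.0e5, thermal_model)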
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/radial_column_density/radial_column_density.py
--- a/yt/analysis_modules/radial_column_density/radial_column_density.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/radial_column_density/radial_column_density.py Sun Jun 15 19:50:51 2014 -0700
@@ -50,7 +50,7 @@
Parameters
----------
- pf : `Dataset`
+ ds : `Dataset`
The dataset to operate on.
field : string
The name of the basis field over which to
@@ -86,15 +86,15 @@
Examples
--------
- >>> rcdnumdens = RadialColumnDensity(pf, 'NumberDensity', [0.5, 0.5, 0.5])
+ >>> rcdnumdens = RadialColumnDensity(ds, 'NumberDensity', [0.5, 0.5, 0.5])
>>> def _RCDNumberDensity(field, data, rcd = rcdnumdens):
return rcd._build_derived_field(data)
>>> add_field('RCDNumberDensity', _RCDNumberDensity, units=r'1/\rm{cm}^2')
"""
- def __init__(self, pf, field, center, max_radius = 0.5, steps = 10,
+ def __init__(self, ds, field, center, max_radius = 0.5, steps = 10,
base='lin', Nside = 32, ang_divs = 800j):
ParallelAnalysisInterface.__init__(self)
- self.pf = pf
+ self.ds = ds
self.center = np.asarray(center)
self.max_radius = max_radius
self.steps = steps
@@ -109,7 +109,7 @@
self.dtheta = self.theta1d[1] - self.theta1d[0]
self.pixi = au.arr_ang2pix_nest(Nside, self.theta.ravel(), self.
phi.ravel())
- self.dw = pf.domain_right_edge - pf.domain_left_edge
+ self.dw = ds.domain_right_edge - ds.domain_left_edge
# Here's where we actually do stuff.
self._fix_max_radius()
self._make_bins()
@@ -123,8 +123,8 @@
# It may be possible to fix this in
# the future, and allow these calculations in the whole volume,
# but this will work for now.
- right = self.pf.domain_right_edge - self.center
- left = self.center - self.pf.domain_left_edge
+ right = self.ds.domain_right_edge - self.center
+ left = self.center - self.ds.domain_left_edge
min_r = np.min(right)
min_l = np.min(left)
self.max_radius = np.min([self.max_radius, min_r, min_l])
@@ -134,10 +134,10 @@
# specified radius. Column density inside the same cell as our
# center is kind of ill-defined, anyway.
if self.base == 'lin':
- self.bins = np.linspace(self.pf.index.get_smallest_dx(), self.max_radius,
+ self.bins = np.linspace(self.ds.index.get_smallest_dx(), self.max_radius,
self.steps)
elif self.base == 'log':
- self.bins = np.logspace(np.log10(self.pf.index.get_smallest_dx()),
+ self.bins = np.logspace(np.log10(self.ds.index.get_smallest_dx()),
np.log10(self.max_radius), self.steps)
def _build_surfaces(self, field):
@@ -145,9 +145,9 @@
self.surfaces = {}
for i, radius in enumerate(self.bins):
cam = camera.HEALpixCamera(self.center, radius, self.Nside,
- pf = self.pf, log_fields = [False], fields = field)
+ ds = self.ds, log_fields = [False], fields = field)
bitmap = cam.snapshot()
- self.surfaces[i] = radius * self.pf['cm'] * \
+ self.surfaces[i] = radius * self.ds['cm'] * \
bitmap[:,0,0][self.pixi].reshape((self.real_ang_divs,self.real_ang_divs))
self.surfaces[i] = self.comm.mpi_allreduce(self.surfaces[i], op='max')
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/radmc3d_export/RadMC3DInterface.py
--- a/yt/analysis_modules/radmc3d_export/RadMC3DInterface.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/radmc3d_export/RadMC3DInterface.py Sun Jun 15 19:50:51 2014 -0700
@@ -69,8 +69,8 @@
Parameters
----------
- pf : `Dataset`
- This is the parameter file object corresponding to the
+ ds : `Dataset`
+ This is the dataset object corresponding to the
simulation output to be written out.
max_level : int
@@ -98,8 +98,8 @@
... return 10.0*data["Ones"]
>>> add_field("DustTemperature", function=_DustTemperature)
- >>> pf = load("galaxy0030/galaxy0030")
- >>> writer = RadMC3DWriter(pf)
+ >>> ds = load("galaxy0030/galaxy0030")
+ >>> writer = RadMC3DWriter(ds)
>>> writer.write_amr_grid()
>>> writer.write_dust_file("DustDensity", "dust_density.inp")
@@ -119,8 +119,8 @@
... return (x_co/mu_h)*data["Density"]
>>> add_field("NumberDensityCO", function=_NumberDensityCO)
- >>> pf = load("galaxy0030/galaxy0030")
- >>> writer = RadMC3DWriter(pf)
+ >>> ds = load("galaxy0030/galaxy0030")
+ >>> writer = RadMC3DWriter(ds)
>>> writer.write_amr_grid()
>>> writer.write_line_file("NumberDensityCO", "numberdens_co.inp")
@@ -129,15 +129,15 @@
'''
- def __init__(self, pf, max_level=2):
+ def __init__(self, ds, max_level=2):
self.max_level = max_level
self.cell_count = 0
self.layers = []
- self.domain_dimensions = pf.domain_dimensions
- self.domain_left_edge = pf.domain_left_edge
- self.domain_right_edge = pf.domain_right_edge
+ self.domain_dimensions = ds.domain_dimensions
+ self.domain_left_edge = ds.domain_left_edge
+ self.domain_right_edge = ds.domain_right_edge
self.grid_filename = "amr_grid.inp"
- self.pf = pf
+ self.ds = ds
base_layer = RadMC3DLayer(0, None, 0, \
self.domain_left_edge, \
@@ -145,9 +145,9 @@
self.domain_dimensions)
self.layers.append(base_layer)
- self.cell_count += np.product(pf.domain_dimensions)
+ self.cell_count += np.product(ds.domain_dimensions)
- sorted_grids = sorted(pf.index.grids, key=lambda x: x.Level)
+ sorted_grids = sorted(ds.index.grids, key=lambda x: x.Level)
for grid in sorted_grids:
if grid.Level <= self.max_level:
self._add_grid_to_layers(grid)
@@ -183,8 +183,8 @@
RE = self.domain_right_edge
# Radmc3D wants the cell wall positions in cgs. Convert here:
- LE_cgs = LE * self.pf.units['cm']
- RE_cgs = RE * self.pf.units['cm']
+ LE_cgs = LE * self.ds.units['cm']
+ RE_cgs = RE * self.ds.units['cm']
# calculate cell wall positions
xs = [str(x) for x in np.linspace(LE_cgs[0], RE_cgs[0], dims[0]+1)]
@@ -242,7 +242,7 @@
grid_file.close()
def _write_layer_data_to_file(self, fhandle, field, level, LE, dim):
- cg = self.pf.covering_grid(level, LE, dim, num_ghost_zones=1)
+ cg = self.ds.covering_grid(level, LE, dim, num_ghost_zones=1)
if isinstance(field, list):
data_x = cg[field[0]]
data_y = cg[field[1]]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
--- a/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py Sun Jun 15 19:50:51 2014 -0700
@@ -183,8 +183,8 @@
>>> from yt.mods import *
>>> from yt.analysis_modules.spectral_integrator.api import *
>>> add_xray_emissivity_field(0.5, 2)
- >>> pf = load(dataset)
- >>> p = ProjectionPlot(pf, 'x', "Xray_Emissivity_0.5_2keV")
+ >>> ds = load(dataset)
+ >>> p = ProjectionPlot(ds, 'x', "Xray_Emissivity_0.5_2keV")
>>> p.save()
"""
@@ -256,8 +256,8 @@
>>> from yt.mods import *
>>> from yt.analysis_modules.spectral_integrator.api import *
>>> add_xray_luminosity_field(0.5, 2)
- >>> pf = load(dataset)
- >>> sp = pf.sphere('max', (2., 'mpc'))
+ >>> ds = load(dataset)
+ >>> sp = ds.sphere('max', (2., 'mpc'))
>>> print sp.quantities['TotalQuantity']('Xray_Luminosity_0.5_2keV')
"""
@@ -313,8 +313,8 @@
>>> from yt.mods import *
>>> from yt.analysis_modules.spectral_integrator.api import *
>>> add_xray_emissivity_field(0.5, 2)
- >>> pf = load(dataset)
- >>> p = ProjectionPlot(pf, 'x', "Xray_Emissivity_0.5_2keV")
+ >>> ds = load(dataset)
+ >>> p = ProjectionPlot(ds, 'x', "Xray_Emissivity_0.5_2keV")
>>> p.save()
"""
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/star_analysis/sfr_spectrum.py
--- a/yt/analysis_modules/star_analysis/sfr_spectrum.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/star_analysis/sfr_spectrum.py Sun Jun 15 19:50:51 2014 -0700
@@ -34,7 +34,7 @@
Parameters
----------
- pf : EnzoDataset object
+ ds : EnzoDataset object
data_source : AMRRegion object, optional
The region from which stars are extracted for analysis. If this
is not supplied, the next three must be, otherwise the next
@@ -51,13 +51,13 @@
Examples
--------
- >>> pf = load("RedshiftOutput0000")
- >>> sp = pf.sphere([0.5,0.5,0.5], [.1])
- >>> sfr = StarFormationRate(pf, sp)
+ >>> ds = load("RedshiftOutput0000")
+ >>> sp = ds.sphere([0.5,0.5,0.5], [.1])
+ >>> sfr = StarFormationRate(ds, sp)
"""
- def __init__(self, pf, data_source=None, star_mass=None,
+ def __init__(self, ds, data_source=None, star_mass=None,
star_creation_time=None, volume=None, bins=300):
- self._pf = pf
+ self._ds = ds
self._data_source = data_source
self.star_mass = np.array(star_mass)
self.star_creation_time = np.array(star_creation_time)
@@ -80,12 +80,12 @@
self.mode = 'data_source'
# Set up for time conversion.
self.cosm = Cosmology(
- hubble_constant = self._pf.hubble_constant,
- omega_matter = self._pf.omega_matter,
- omega_lambda = self._pf.omega_lambda)
+ hubble_constant = self._ds.hubble_constant,
+ omega_matter = self._ds.omega_matter,
+ omega_lambda = self._ds.omega_lambda)
# Find the time right now.
self.time_now = self.cosm.t_from_z(
- self._pf.current_redshift) # seconds
+ self._ds.current_redshift) # seconds
# Build the distribution.
self.build_dist()
# Attach some convenience arrays.
@@ -109,7 +109,7 @@
# Find the oldest stars in units of code time.
tmin= min(ct_stars)
# Multiply the end to prevent numerical issues.
- self.time_bins = np.linspace(tmin*1.01, self._pf.current_time,
+ self.time_bins = np.linspace(tmin*1.01, self._ds.current_time,
num = self.bin_count + 1)
# Figure out which bins the stars go into.
inds = np.digitize(ct_stars, self.time_bins) - 1
@@ -138,7 +138,7 @@
vol = ds.volume('mpccm')
elif self.mode == 'provided':
vol = self.volume('mpccm')
- tc = self._pf["Time"]
+ tc = self._ds["Time"]
self.time = []
self.lookback_time = []
self.redshift = []
@@ -249,7 +249,7 @@
Parameters
----------
- pf : EnzoDataset object
+ ds : EnzoDataset object
bcdir : String
Path to directory containing Bruzual & Charlot h5 fit files.
model : String
@@ -258,11 +258,11 @@
Examples
--------
- >>> pf = load("RedshiftOutput0000")
- >>> spec = SpectrumBuilder(pf, "/home/user/bc/", model="salpeter")
+ >>> ds = load("RedshiftOutput0000")
+ >>> spec = SpectrumBuilder(ds, "/home/user/bc/", model="salpeter")
"""
- def __init__(self, pf, bcdir="", model="chabrier", time_now=None):
- self._pf = pf
+ def __init__(self, ds, bcdir="", model="chabrier", time_now=None):
+ self._ds = ds
self.bcdir = bcdir
if model == "chabrier":
@@ -271,14 +271,14 @@
self.model = SALPETER
# Set up for time conversion.
self.cosm = Cosmology(
- hubble_constant = self._pf.hubble_constant,
- omega_matter = self._pf.omega_matter,
- omega_lambda = self._pf.omega_lambda)
+ hubble_constant = self._ds.hubble_constant,
+ omega_matter = self._ds.omega_matter,
+ omega_lambda = self._ds.omega_lambda)
# Find the time right now.
if time_now is None:
self.time_now = self.cosm.t_from_z(
- self._pf.current_redshift) # seconds
+ self._ds.current_redshift) # seconds
else:
self.time_now = time_now
@@ -331,7 +331,7 @@
Examples
--------
- >>> sp = pf.sphere([0.5,0.5,0.5], [.1])
+ >>> sp = ds.sphere([0.5,0.5,0.5], [.1])
>>> spec.calculate_spectrum(data_source=sp, min_age = 1.e6)
"""
# Initialize values
@@ -384,7 +384,7 @@
# Fix metallicity to units of Zsun.
self.star_metal /= Zsun
# Age of star in years.
- dt = (self.time_now - self.star_creation_time * self._pf['Time']) / YEAR
+ dt = (self.time_now - self.star_creation_time * self._ds['Time']) / YEAR
dt = np.maximum(dt, 0.0)
# Remove young stars
sub = dt >= self.min_age
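
Both StarFormationRate and SpectrumBuilder now pull their cosmology parameters straight from dataset attributes rather than from a parameter file handle. A minimal sketch of that pattern, using only attributes that appear in the hunks above (the dataset name is a placeholder and Cosmology is the helper this module already imports):

>>> ds = load("RedshiftOutput0000")
>>> cosm = Cosmology(hubble_constant=ds.hubble_constant,
...                  omega_matter=ds.omega_matter,
...                  omega_lambda=ds.omega_lambda)
>>> t_now = cosm.t_from_z(ds.current_redshift)  # seconds
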
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/sunrise_export/sunrise_exporter.py
--- a/yt/analysis_modules/sunrise_export/sunrise_exporter.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/sunrise_export/sunrise_exporter.py Sun Jun 15 19:50:51 2014 -0700
@@ -27,12 +27,12 @@
sec_per_year
from yt.mods import *
-def export_to_sunrise(pf, fn, star_particle_type, fc, fwidth, ncells_wide=None,
+def export_to_sunrise(ds, fn, star_particle_type, fc, fwidth, ncells_wide=None,
debug=False,dd=None,**kwargs):
r"""Convert the contents of a dataset to a FITS file format that Sunrise
understands.
- This function will accept a parameter file, and from that parameter file
+ This function will accept a dataset, and from that dataset
construct a depth-first octree containing all of the data in the parameter
file. This octree will be written to a FITS file. It will probably be
quite big, so use this function with caution! Sunrise is a tool for
@@ -41,8 +41,8 @@
Parameters
----------
- pf : `Dataset`
- The parameter file to convert.
+ ds : `Dataset`
+ The dataset to convert.
fn : string
The filename of the output FITS file.
fc : array
@@ -66,19 +66,19 @@
#we must round the dle,dre to the nearest root grid cells
ile,ire,super_level,ncells_wide= \
- round_ncells_wide(pf.domain_dimensions,fc-fwidth,fc+fwidth,nwide=ncells_wide)
+ round_ncells_wide(ds.domain_dimensions,fc-fwidth,fc+fwidth,nwide=ncells_wide)
assert np.all((ile-ire)==(ile-ire)[0])
mylog.info("rounding specified region:")
mylog.info("from [%1.5f %1.5f %1.5f]-[%1.5f %1.5f %1.5f]"%(tuple(fc-fwidth)+tuple(fc+fwidth)))
mylog.info("to [%07i %07i %07i]-[%07i %07i %07i]"%(tuple(ile)+tuple(ire)))
- fle,fre = ile*1.0/pf.domain_dimensions, ire*1.0/pf.domain_dimensions
+ fle,fre = ile*1.0/ds.domain_dimensions, ire*1.0/ds.domain_dimensions
mylog.info("to [%1.5f %1.5f %1.5f]-[%1.5f %1.5f %1.5f]"%(tuple(fle)+tuple(fre)))
#Create a list of the star particle properties in PARTICLE_DATA
#Include ID, parent-ID, position, velocity, creation_mass,
#formation_time, mass, age_m, age_l, metallicity, L_bol
- particle_data,nstars = prepare_star_particles(pf,star_particle_type,fle=fle,fre=fre,
+ particle_data,nstars = prepare_star_particles(ds,star_particle_type,fle=fle,fre=fre,
dd=dd,**kwargs)
#Create the refinement hilbert octree in GRIDSTRUCTURE
@@ -87,14 +87,14 @@
#since the octree always starts with one cell, an our 0-level mesh
#may have many cells, we must create the octree region sitting
#ontop of the first mesh by providing a negative level
- output, refinement,dd,nleaf = prepare_octree(pf,ile,start_level=super_level,
+ output, refinement,dd,nleaf = prepare_octree(ds,ile,start_level=super_level,
debug=debug,dd=dd,center=fc)
- create_fits_file(pf,fn, refinement,output,particle_data,fle,fre)
+ create_fits_file(ds,fn, refinement,output,particle_data,fle,fre)
return fle,fre,ile,ire,dd,nleaf,nstars
-def export_to_sunrise_from_halolist(pf,fni,star_particle_type,
+def export_to_sunrise_from_halolist(ds,fni,star_particle_type,
halo_list,domains_list=None,**kwargs):
"""
Using the center of mass and the virial radius
@@ -106,8 +106,8 @@
Parameters
----------
- pf : `Dataset`
- The parameter file to convert. We use the root grid to specify the domain.
+ ds : `Dataset`
+ The dataset to convert. We use the root grid to specify the domain.
fni : string
The filename of the output FITS file, but depends on the domain. The
dle and dre are appended to the name.
@@ -122,9 +122,9 @@
Organize halos into a dict of domains. Keys are DLE/DRE tuple
values are a list of halos
"""
- dn = pf.domain_dimensions
+ dn = ds.domain_dimensions
if domains_list is None:
- domains_list = domains_from_halos(pf,halo_list,**kwargs)
+ domains_list = domains_from_halos(ds,halo_list,**kwargs)
if fni.endswith('.fits'):
fni = fni.replace('.fits','')
@@ -147,18 +147,18 @@
fh = open(fnt,'w')
for halo in halos:
fh.write("%i "%halo.ID)
- fh.write("%6.6e "%(halo.CoM[0]*pf['kpc']))
- fh.write("%6.6e "%(halo.CoM[1]*pf['kpc']))
- fh.write("%6.6e "%(halo.CoM[2]*pf['kpc']))
+ fh.write("%6.6e "%(halo.CoM[0]*ds['kpc']))
+ fh.write("%6.6e "%(halo.CoM[1]*ds['kpc']))
+ fh.write("%6.6e "%(halo.CoM[2]*ds['kpc']))
fh.write("%6.6e "%(halo.Mvir))
- fh.write("%6.6e \n"%(halo.Rvir*pf['kpc']))
+ fh.write("%6.6e \n"%(halo.Rvir*ds['kpc']))
fh.close()
- export_to_sunrise(pf, fnf, star_particle_type, dle*1.0/dn, dre*1.0/dn)
+ export_to_sunrise(ds, fnf, star_particle_type, dle*1.0/dn, dre*1.0/dn)
ndomains_finished +=1
-def domains_from_halos(pf,halo_list,frvir=0.15):
+def domains_from_halos(ds,halo_list,frvir=0.15):
domains = {}
- dn = pf.domain_dimensions
+ dn = ds.domain_dimensions
for halo in halo_list:
fle, fre = halo.CoM-frvir*halo.Rvir,halo.CoM+frvir*halo.Rvir
dle,dre = np.floor(fle*dn), np.ceil(fre*dn)
@@ -176,10 +176,10 @@
domains_halos = [d[2] for d in domains_list]
return domains_list
-def prepare_octree(pf,ile,start_level=0,debug=True,dd=None,center=None):
+def prepare_octree(ds,ile,start_level=0,debug=True,dd=None,center=None):
if dd is None:
#we keep passing dd around to not regenerate the data all the time
- dd = pf.h.all_data()
+ dd = ds.all_data()
try:
dd['MetalMass']
except KeyError:
@@ -200,15 +200,15 @@
del field_data
#first we cast every cell as an oct
- #ngrids = np.max([g.id for g in pf._grids])
+ #ngrids = np.max([g.id for g in ds._grids])
grids = {}
levels_all = {}
levels_finest = {}
for l in range(100):
levels_finest[l]=0
levels_all[l]=0
- pbar = get_pbar("Initializing octs ",len(pf.index.grids))
- for gi,g in enumerate(pf.index.grids):
+ pbar = get_pbar("Initializing octs ",len(ds.index.grids))
+ for gi,g in enumerate(ds.index.grids):
ff = np.array([g[f] for f in fields])
og = amr_utils.OctreeGrid(
g.child_index_mask.astype('int32'),
@@ -246,9 +246,9 @@
start_time = time.time()
if debug:
if center is not None:
- c = center*pf['kpc']
+ c = center*ds['kpc']
else:
- c = ile*1.0/pf.domain_dimensions*pf['kpc']
+ c = ile*1.0/ds.domain_dimensions*ds['kpc']
printing = lambda x: print_oct(x)
else:
printing = None
@@ -351,7 +351,7 @@
-def create_fits_file(pf,fn, refined,output,particle_data,fle,fre):
+def create_fits_file(ds,fn, refined,output,particle_data,fle,fre):
#first create the grid structure
structure = pyfits.Column("structure", format="B", array=refined.astype("bool"))
cols = pyfits.ColDefs([structure])
@@ -360,8 +360,8 @@
st_table.header.update("hierarch lengthunit", "kpc", comment="Length unit for grid")
fdx = fre-fle
for i,a in enumerate('xyz'):
- st_table.header.update("min%s" % a, fle[i] * pf['kpc'])
- st_table.header.update("max%s" % a, fre[i] * pf['kpc'])
+ st_table.header.update("min%s" % a, fle[i] * ds['kpc'])
+ st_table.header.update("max%s" % a, fre[i] * ds['kpc'])
st_table.header.update("n%s" % a, fdx[i])
st_table.header.update("subdiv%s" % a, 2)
st_table.header.update("subdivtp", "OCTREE", "Type of grid subdivision")
@@ -399,7 +399,7 @@
col_list.append(pyfits.Column("gas_teff_m", format='D',
array=fd['TemperatureTimesCellMassMsun'], unit="K*Msun"))
col_list.append(pyfits.Column("cell_volume", format='D',
- array=fd['CellVolumeCode'].astype('float64')*pf['kpc']**3.0,
+ array=fd['CellVolumeCode'].astype('float64')*ds['kpc']**3.0,
unit="kpc^3"))
col_list.append(pyfits.Column("SFR", format='D',
array=np.zeros(size, dtype='D')))
@@ -414,7 +414,7 @@
col_list = [pyfits.Column("dummy", format="F", array=np.zeros(1, dtype='float32'))]
cols = pyfits.ColDefs(col_list)
md_table = pyfits.new_table(cols)
- md_table.header.update("snaptime", pf.current_time*pf['years'])
+ md_table.header.update("snaptime", ds.current_time*ds['years'])
md_table.name = "YT"
phdu = pyfits.PrimaryHDU()
@@ -475,8 +475,8 @@
assert abs(np.log2(nwide)-np.rint(np.log2(nwide)))<1e-5 #nwide should be a power of 2
return ile,ire,maxlevel,nwide
-def round_nearest_edge(pf,fle,fre):
- dds = pf.domain_dimensions
+def round_nearest_edge(ds,fle,fre):
+ dds = ds.domain_dimensions
ile = np.floor(fle*dds).astype('int')
ire = np.ceil(fre*dds).astype('int')
@@ -488,14 +488,14 @@
maxlevel = -np.rint(np.log2(width)).astype('int')
return ile,ire,maxlevel
-def prepare_star_particles(pf,star_type,pos=None,vel=None, age=None,
+def prepare_star_particles(ds,star_type,pos=None,vel=None, age=None,
creation_time=None,initial_mass=None,
current_mass=None,metallicity=None,
radius = None,
fle=[0.,0.,0.],fre=[1.,1.,1.],
dd=None):
if dd is None:
- dd = pf.h.all_data()
+ dd = ds.all_data()
idxst = dd["particle_type"] == star_type
#make sure we select more than a single particle
@@ -505,28 +505,28 @@
for ax in 'xyz']).transpose()
idx = idxst & np.all(pos>fle,axis=1) & np.all(pos<fre,axis=1)
assert np.sum(idx)>0
- pos = pos[idx]*pf['kpc'] #unitary units -> kpc
+ pos = pos[idx]*ds['kpc'] #unitary units -> kpc
if age is None:
- age = dd["particle_age"][idx]*pf['years'] # seconds->years
+ age = dd["particle_age"][idx]*ds['years'] # seconds->years
if vel is None:
vel = np.array([dd["particle_velocity_%s" % ax][idx]
for ax in 'xyz']).transpose()
# Velocity is cm/s, we want it to be kpc/yr
- #vel *= (pf["kpc"]/pf["cm"]) / (365*24*3600.)
+ #vel *= (ds["kpc"]/ds["cm"]) / (365*24*3600.)
vel *= kpc_per_cm * sec_per_year
if initial_mass is None:
#in solar masses
- initial_mass = dd["particle_mass_initial"][idx]*pf['Msun']
+ initial_mass = dd["particle_mass_initial"][idx]*ds['Msun']
if current_mass is None:
#in solar masses
- current_mass = dd["particle_mass"][idx]*pf['Msun']
+ current_mass = dd["particle_mass"][idx]*ds['Msun']
if metallicity is None:
#this should be in dimensionless units, metals mass / particle mass
metallicity = dd["particle_metallicity"][idx]
assert np.all(metallicity>0.0)
if radius is None:
radius = initial_mass*0.0+10.0/1000.0 #10pc radius
- formation_time = pf.current_time*pf['years']-age
+ formation_time = ds.current_time*ds['years']-age
#create every column
col_list = []
col_list.append(pyfits.Column("ID", format="J", array=np.arange(current_mass.size).astype('int32')))
@@ -565,12 +565,12 @@
def _initial_mass_cen_ostriker(field, data):
# SFR in a cell. This assumes stars were created by the Cen & Ostriker algorithm
# Check Grid_AddToDiskProfile.C and star_maker7.src
- star_mass_ejection_fraction = data.pf.get_parameter("StarMassEjectionFraction",float)
+ star_mass_ejection_fraction = data.ds.get_parameter("StarMassEjectionFraction",float)
star_maker_minimum_dynamical_time = 3e6 # years, which will get divided out
- dtForSFR = star_maker_minimum_dynamical_time / data.pf["years"]
- xv1 = ((data.pf["InitialTime"] - data["creation_time"])
+ dtForSFR = star_maker_minimum_dynamical_time / data.ds["years"]
+ xv1 = ((data.ds["InitialTime"] - data["creation_time"])
/ data["dynamical_time"])
- xv2 = ((data.pf["InitialTime"] + dtForSFR - data["creation_time"])
+ xv2 = ((data.ds["InitialTime"] + dtForSFR - data["creation_time"])
/ data["dynamical_time"])
denom = (1.0 - star_mass_ejection_fraction * (1.0 - (1.0 + xv1)*np.exp(-xv1)))
minitial = data["ParticleMassMsun"] / denom
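
After the rename, export_to_sunrise takes the dataset as its first argument, followed by the output FITS filename, the star particle type, and the center and half-width of the region of interest. A hedged sketch of the call order only; every value below is a placeholder, and np is the numpy alias pulled in by the module's "from yt.mods import *":

>>> ds = load("my_dataset")
>>> # star_particle_type is whatever particle_type value identifies stars here
>>> fc = np.array([0.5, 0.5, 0.5])         # region center (placeholder)
>>> fwidth = np.array([0.05, 0.05, 0.05])  # region half-width (placeholder)
>>> export_to_sunrise(ds, "sunrise_input.fits", star_particle_type, fc, fwidth)
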
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/sunyaev_zeldovich/projection.py
--- a/yt/analysis_modules/sunyaev_zeldovich/projection.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/sunyaev_zeldovich/projection.py Sun Jun 15 19:50:51 2014 -0700
@@ -132,7 +132,7 @@
if center == "c":
ctr = self.ds.domain_center
elif center == "max":
- v, ctr = self.ds.h.find_max("density")
+ v, ctr = self.ds.find_max("density")
else:
ctr = center
@@ -199,7 +199,7 @@
if center == "c":
ctr = self.ds.domain_center
elif center == "max":
- v, ctr = self.ds.h.find_max("density")
+ v, ctr = self.ds.find_max("density")
else:
ctr = center
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py
--- a/yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py Sun Jun 15 19:50:51 2014 -0700
@@ -15,7 +15,7 @@
from yt.utilities.physical_constants import cm_per_kpc, K_per_keV, \
mh, cm_per_km, kboltz, Tcmb, hcgs, clight, sigma_thompson
from yt.testing import *
-from yt.utilities.answer_testing.framework import requires_pf, \
+from yt.utilities.answer_testing.framework import requires_ds, \
GenericArrayTest, data_dir_load, GenericImageTest
try:
from yt.analysis_modules.sunyaev_zeldovich.projection import SZProjection, I0
@@ -35,9 +35,9 @@
from yt.config import ytcfg
ytcfg["yt", "__withintesting"] = "True"
-def full_szpack3d(pf, xo):
- data = pf.index.grids[0]
- dz = pf.index.get_smallest_dx().in_units("cm")
+def full_szpack3d(ds, xo):
+ data = ds.index.grids[0]
+ dz = ds.index.get_smallest_dx().in_units("cm")
nx,ny,nz = data["density"].shape
dn = np.zeros((nx,ny,nz))
Dtau = np.array(sigma_thompson*data["density"]/(mh*mue)*dz)
@@ -91,50 +91,50 @@
dl = L/nz
- pf = load_uniform_grid(data, ddims, length_unit='cm', bbox=bbox)
- pf.h
+ ds = load_uniform_grid(data, ddims, length_unit='cm', bbox=bbox)
+ ds.index
- return pf
+ return ds
@requires_module("SZpack")
def test_projection():
- pf = setup_cluster()
- nx,ny,nz = pf.domain_dimensions
+ ds = setup_cluster()
+ nx,ny,nz = ds.domain_dimensions
xinit = np.array(1.0e9*hcgs*freqs/(kboltz*Tcmb))
- szprj = SZProjection(pf, freqs, mue=mue, high_order=True)
+ szprj = SZProjection(ds, freqs, mue=mue, high_order=True)
szprj.on_axis(2, nx=nx)
deltaI = np.zeros((3,nx,ny))
for i in xrange(3):
- deltaI[i,:,:] = full_szpack3d(pf, xinit[i])
+ deltaI[i,:,:] = full_szpack3d(ds, xinit[i])
yield assert_almost_equal, deltaI[i,:,:], np.array(szprj["%d_GHz" % int(freqs[i])]), 6
M7 = "DD0010/moving7_0010"
@requires_module("SZpack")
-@requires_pf(M7)
+@requires_ds(M7)
def test_M7_onaxis():
- pf = data_dir_load(M7)
- szprj = SZProjection(pf, freqs)
+ ds = data_dir_load(M7)
+ szprj = SZProjection(ds, freqs)
szprj.on_axis(2, nx=100)
def onaxis_array_func():
return szprj.data
def onaxis_image_func(filename_prefix):
szprj.write_png(filename_prefix)
- for test in [GenericArrayTest(pf, onaxis_array_func),
- GenericImageTest(pf, onaxis_image_func, 3)]:
+ for test in [GenericArrayTest(ds, onaxis_array_func),
+ GenericImageTest(ds, onaxis_image_func, 3)]:
test_M7_onaxis.__name__ = test.description
yield test
@requires_module("SZpack")
-@requires_pf(M7)
+@requires_ds(M7)
def test_M7_offaxis():
- pf = data_dir_load(M7)
- szprj = SZProjection(pf, freqs)
+ ds = data_dir_load(M7)
+ szprj = SZProjection(ds, freqs)
szprj.off_axis(np.array([0.1,-0.2,0.4]), nx=100)
def offaxis_array_func():
return szprj.data
def offaxis_image_func(filename_prefix):
szprj.write_png(filename_prefix)
- for test in [GenericArrayTest(pf, offaxis_array_func),
- GenericImageTest(pf, offaxis_image_func, 3)]:
+ for test in [GenericArrayTest(ds, offaxis_array_func),
+ GenericImageTest(ds, offaxis_image_func, 3)]:
test_M7_offaxis.__name__ = test.description
yield test
diff -r f20d58ca2848 -r 67507b4f8da9 yt/analysis_modules/two_point_functions/two_point_functions.py
--- a/yt/analysis_modules/two_point_functions/two_point_functions.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/analysis_modules/two_point_functions/two_point_functions.py Sun Jun 15 19:50:51 2014 -0700
@@ -80,12 +80,12 @@
Examples
--------
- >>> tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z"],
+ >>> tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z"],
... total_values=1e5, comm_size=10000,
... length_number=10, length_range=[1./128, .5],
... length_type="log")
"""
- def __init__(self, pf, fields, left_edge=None, right_edge=None,
+ def __init__(self, ds, fields, left_edge=None, right_edge=None,
total_values=1000000, comm_size=10000, length_type="lin",
length_number=10, length_range=None, vol_ratio = 1,
salt=0, theta=None, phi=None):
@@ -110,15 +110,15 @@
self.send_hooks = []
self.done_hooks = []
self.comm_size = min(int(comm_size), self.total_values)
- self.pf = pf
- self.nlevels = pf.h.max_level
- self.period = self.pf.domain_right_edge - self.pf.domain_left_edge
+ self.ds = ds
+ self.nlevels = ds.index.max_level
+ self.period = self.ds.domain_right_edge - self.ds.domain_left_edge
self.min_edge = min(self.period)
- self.index = pf.h
- self.center = (pf.domain_right_edge + pf.domain_left_edge)/2.0
+ self.index = ds.index
+ self.center = (ds.domain_right_edge + ds.domain_left_edge)/2.0
# Figure out the range of ruler lengths.
if length_range == None:
- length_range = [math.sqrt(3) * self.pf.index.get_smallest_dx(),
+ length_range = [math.sqrt(3) * self.ds.index.get_smallest_dx(),
self.min_edge/2.]
else:
if len(length_range) != 2:
@@ -130,8 +130,8 @@
mylog.info("Automatically adjusting length_range[1] to half the shortest box edge.")
if length_range[0] == -1 or length_range[0] == -1.:
mylog.info("Automatically adjusting length_range[0] to %1.5e." % \
- (math.sqrt(3) * self.pf.index.get_smallest_dx()))
- length_range[0] = math.sqrt(3) * self.pf.index.get_smallest_dx()
+ (math.sqrt(3) * self.ds.index.get_smallest_dx()))
+ length_range[0] = math.sqrt(3) * self.ds.index.get_smallest_dx()
# Make the list of ruler lengths.
if length_type == "lin":
self.lengths = np.linspace(length_range[0], length_range[1],
@@ -144,12 +144,12 @@
raise SyntaxError("length_type is either \"lin\" or \"log\".")
# Subdivide the volume.
if not left_edge or not right_edge:
- self.left_edge = self.pf.domain_left_edge
- self.right_edge = self.pf.domain_right_edge
+ self.left_edge = self.ds.domain_left_edge
+ self.right_edge = self.ds.domain_right_edge
# This ds business below has to do with changes made for halo
# finding on subvolumes and serves no purpose here except
# compatibility. This is not the best policy, if I'm honest.
- ds = pf.region([0.]*3, self.left_edge, self.right_edge)
+ ds = ds.region([0.]*3, self.left_edge, self.right_edge)
padded, self.LE, self.RE, self.ds = \
self.partition_index_3d(ds = ds, padding=0.,
rank_ratio = self.vol_ratio)
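
Besides the pf-to-ds rename, these hunks replace the old hierarchy handle (pf.h) with the dataset's index attribute. A minimal sketch of the renamed attribute access used above (the dataset name is a placeholder):

>>> ds = load("my_dataset")
>>> ds.index.max_level          # formerly pf.h.max_level
>>> ds.index.get_smallest_dx()
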
diff -r f20d58ca2848 -r 67507b4f8da9 yt/config.py
--- a/yt/config.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/config.py Sun Jun 15 19:50:51 2014 -0700
@@ -38,7 +38,7 @@
__command_line = 'False',
storeparameterfiles = 'False',
parameterfilestore = 'parameter_files.csv',
- maximumstoredpfs = '500',
+ maximumstoreddatasets = '500',
loadfieldplugins = 'True',
pluginfilename = 'my_plugins.py',
parallel_traceback = 'False',
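
The option that caps how many stored datasets yt remembers is renamed to match, keeping its default of '500'. A sketch of setting it through ytcfg, mirroring the set-item pattern used in the Sunyaev-Zel'dovich test hunk above; treat this as illustrative rather than a documented recipe:

>>> from yt.config import ytcfg
>>> ytcfg["yt", "maximumstoreddatasets"] = "500"
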
diff -r f20d58ca2848 -r 67507b4f8da9 yt/convenience.py
--- a/yt/convenience.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/convenience.py Sun Jun 15 19:50:51 2014 -0700
@@ -95,13 +95,13 @@
mylog.error(" Possible: %s", c)
raise YTOutputNotIdentified(args, kwargs)
-def projload(pf, axis, weight_field = None):
+def projload(ds, axis, weight_field = None):
# This is something of a hack, so that we can just get back a projection
# and not utilize any of the intermediate index objects.
class ProjMock(dict):
pass
import h5py
- f = h5py.File(os.path.join(pf.fullpath, pf.parameter_filename + ".yt"))
+ f = h5py.File(os.path.join(ds.fullpath, ds.parameter_filename + ".yt"))
b = f["/Projections/%s/" % (axis)]
wf = "weight_field_%s" % weight_field
if wf not in b: raise KeyError(wf)
@@ -117,7 +117,7 @@
new_name = f[:-(len(weight_field) + 1)]
proj[new_name] = b[f][:]
proj.axis = axis
- proj.pf = pf
+ proj.ds = ds
f.close()
return proj
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/analyzer_objects.py
--- a/yt/data_objects/analyzer_objects.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/analyzer_objects.py Sun Jun 15 19:50:51 2014 -0700
@@ -60,8 +60,8 @@
return v
@analysis_task()
-def CurrentTimeYears(params, pf):
- return pf.current_time * pf["years"]
+def CurrentTimeYears(params, ds):
+ return ds.current_time * ds["years"]
class SlicePlotDataset(AnalysisTask):
_params = ['field', 'axis', 'center']
@@ -71,8 +71,8 @@
self.SlicePlot = SlicePlot
AnalysisTask.__init__(self, *args, **kwargs)
- def eval(self, pf):
- slc = self.SlicePlot(pf, self.axis, self.field, center = self.center)
+ def eval(self, ds):
+ slc = self.SlicePlot(ds, self.axis, self.field, center = self.center)
return slc.save()
class QuantityProxy(AnalysisTask):
@@ -104,8 +104,8 @@
cast = lambda a: a
self.cast = cast
- def eval(self, pf):
- return self.cast(pf.get_parameter(self.parameter))
+ def eval(self, ds):
+ return self.cast(ds.get_parameter(self.parameter))
def create_quantity_proxy(quantity_object):
args, varargs, kwargs, defaults = inspect.getargspec(quantity_object[1])
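
The analysis_task decorator and the eval methods now receive the dataset directly, so new tasks are written against ds. A minimal sketch of a hypothetical task in the renamed convention; CurrentRedshift is an illustrative name, not part of the changeset:

>>> @analysis_task()
... def CurrentRedshift(params, ds):
...     return ds.current_redshift
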
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/construction_data_containers.py Sun Jun 15 19:50:51 2014 -0700
@@ -74,7 +74,7 @@
fields : list of strings, optional
If you want the object to pre-retrieve a set of fields, supply them
here. This is not necessary.
- pf : Parameter file object
+ ds : dataset object
Passed in to access the index
kwargs : dict of items
Any additional values are passed as field parameters that can be
@@ -84,7 +84,7 @@
--------
>>> from yt.visualization.api import Streamlines
- >>> streamlines = Streamlines(pf, [0.5]*3)
+ >>> streamlines = Streamlines(ds, [0.5]*3)
>>> streamlines.integrate_through_volume()
>>> stream = streamlines.path(0)
>>> matplotlib.pylab.semilogy(stream['t'], stream['Density'], '-x')
@@ -93,8 +93,8 @@
_type_name = "streamline"
_con_args = ('positions')
sort_by = 't'
- def __init__(self, positions, length = 1.0, fields=None, pf=None, **kwargs):
- YTSelectionContainer1D.__init__(self, pf, fields, **kwargs)
+ def __init__(self, positions, length = 1.0, fields=None, ds=None, **kwargs):
+ YTSelectionContainer1D.__init__(self, ds, fields, **kwargs)
self.positions = positions
self.dts = np.empty_like(positions[:,0])
self.dts[:-1] = np.sqrt(np.sum((self.positions[1:]-
@@ -110,8 +110,8 @@
def _get_list_of_grids(self):
# Get the value of the line at each LeftEdge and RightEdge
- LE = self.pf.grid_left_edge
- RE = self.pf.grid_right_edge
+ LE = self.ds.grid_left_edge
+ RE = self.ds.grid_right_edge
# Check left faces first
min_streampoint = np.min(self.positions, axis=0)
max_streampoint = np.max(self.positions, axis=0)
@@ -206,9 +206,9 @@
_con_args = ('axis', 'field', 'weight_field')
_container_fields = ('px', 'py', 'pdx', 'pdy', 'weight_field')
def __init__(self, field, axis, weight_field = None,
- center = None, pf = None, data_source = None,
+ center = None, ds = None, data_source = None,
style = "integrate", field_parameters = None):
- YTSelectionContainer2D.__init__(self, axis, pf, field_parameters)
+ YTSelectionContainer2D.__init__(self, axis, ds, field_parameters)
if style == "sum":
self.proj_style = "integrate"
self._sum_only = True
@@ -223,7 +223,7 @@
raise NotImplementedError(style)
self.weight_field = weight_field
self._set_center(center)
- if data_source is None: data_source = self.pf.h.all_data()
+ if data_source is None: data_source = self.ds.all_data()
self.data_source = data_source
self.weight_field = weight_field
self.get_data(field)
@@ -244,14 +244,14 @@
self._mrep.upload()
def _get_tree(self, nvals):
- xax = self.pf.coordinates.x_axis[self.axis]
- yax = self.pf.coordinates.y_axis[self.axis]
- xd = self.pf.domain_dimensions[xax]
- yd = self.pf.domain_dimensions[yax]
- bounds = (self.pf.domain_left_edge[xax],
- self.pf.domain_right_edge[yax],
- self.pf.domain_left_edge[xax],
- self.pf.domain_right_edge[yax])
+ xax = self.ds.coordinates.x_axis[self.axis]
+ yax = self.ds.coordinates.y_axis[self.axis]
+ xd = self.ds.domain_dimensions[xax]
+ yd = self.ds.domain_dimensions[yax]
+ bounds = (self.ds.domain_left_edge[xax],
+ self.ds.domain_right_edge[yax],
+ self.ds.domain_left_edge[xax],
+ self.ds.domain_right_edge[yax])
return QuadTree(np.array([xd,yd], dtype='int64'), nvals,
bounds, style = self.proj_style)
@@ -284,37 +284,37 @@
else:
raise NotImplementedError
# TODO: Add the combine operation
- xax = self.pf.coordinates.x_axis[self.axis]
- yax = self.pf.coordinates.y_axis[self.axis]
- ox = self.pf.domain_left_edge[xax]
- oy = self.pf.domain_left_edge[yax]
+ xax = self.ds.coordinates.x_axis[self.axis]
+ yax = self.ds.coordinates.y_axis[self.axis]
+ ox = self.ds.domain_left_edge[xax]
+ oy = self.ds.domain_left_edge[yax]
px, py, pdx, pdy, nvals, nwvals = tree.get_all(False, merge_style)
nvals = self.comm.mpi_allreduce(nvals, op=op)
nwvals = self.comm.mpi_allreduce(nwvals, op=op)
- np.multiply(px, self.pf.domain_width[xax], px)
+ np.multiply(px, self.ds.domain_width[xax], px)
np.add(px, ox, px)
- np.multiply(pdx, self.pf.domain_width[xax], pdx)
+ np.multiply(pdx, self.ds.domain_width[xax], pdx)
- np.multiply(py, self.pf.domain_width[yax], py)
+ np.multiply(py, self.ds.domain_width[yax], py)
np.add(py, oy, py)
- np.multiply(pdy, self.pf.domain_width[yax], pdy)
+ np.multiply(pdy, self.ds.domain_width[yax], pdy)
if self.weight_field is not None:
np.divide(nvals, nwvals[:,None], nvals)
# We now convert to half-widths and center-points
data = {}
#non_nan = ~np.any(np.isnan(nvals), axis=-1)
- code_length = self.pf.domain_width.units
- data['px'] = self.pf.arr(px, code_length)
- data['py'] = self.pf.arr(py, code_length)
+ code_length = self.ds.domain_width.units
+ data['px'] = self.ds.arr(px, code_length)
+ data['py'] = self.ds.arr(py, code_length)
data['weight_field'] = nwvals
- data['pdx'] = self.pf.arr(pdx, code_length)
- data['pdy'] = self.pf.arr(pdy, code_length)
+ data['pdx'] = self.ds.arr(pdx, code_length)
+ data['pdy'] = self.ds.arr(pdy, code_length)
data['fields'] = nvals
# Now we run the finalizer, which is ignored if we don't need it
fd = data['fields']
field_data = np.hsplit(data.pop('fields'), len(fields))
for fi, field in enumerate(fields):
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
mylog.debug("Setting field %s", field)
units = finfo.units
if self.weight_field is None and not self._sum_only:
@@ -328,11 +328,11 @@
# Don't forget [non_nan] somewhere here.
self[field] = YTArray(field_data[fi].ravel(),
input_units=input_units,
- registry=self.pf.unit_registry)
+ registry=self.ds.unit_registry)
if self.weight_field is None and not self._sum_only:
- u_obj = Unit(units, registry=self.pf.unit_registry)
+ u_obj = Unit(units, registry=self.ds.unit_registry)
if u_obj.is_code_unit and input_units != units \
- or self.pf.no_cgs_equiv_length:
+ or self.ds.no_cgs_equiv_length:
if units is '':
final_unit = "code_length"
else:
@@ -343,11 +343,11 @@
def _initialize_chunk(self, chunk, tree):
icoords = chunk.icoords
- xax = self.pf.coordinates.x_axis[self.axis]
- yax = self.pf.coordinates.y_axis[self.axis]
+ xax = self.ds.coordinates.x_axis[self.axis]
+ yax = self.ds.coordinates.y_axis[self.axis]
i1 = icoords[:,xax]
i2 = icoords[:,yax]
- ilevel = chunk.ires * self.pf.ires_factor
+ ilevel = chunk.ires * self.ds.ires_factor
tree.initialize_chunk(i1, i2, ilevel)
def _handle_chunk(self, chunk, fields, tree):
@@ -367,11 +367,11 @@
else:
w = np.ones(chunk.ires.size, dtype="float64")
icoords = chunk.icoords
- xax = self.pf.coordinates.x_axis[self.axis]
- yax = self.pf.coordinates.y_axis[self.axis]
+ xax = self.ds.coordinates.x_axis[self.axis]
+ yax = self.ds.coordinates.y_axis[self.axis]
i1 = icoords[:,xax]
i2 = icoords[:,yax]
- ilevel = chunk.ires * self.pf.ires_factor
+ ilevel = chunk.ires * self.ds.ires_factor
tree.add_chunk_to_tree(i1, i2, ilevel, v, w)
def to_pw(self, fields=None, center='c', width=None, origin='center-window'):
@@ -402,7 +402,7 @@
Examples
--------
- >>> cube = pf.covering_grid(2, left_edge=[0.0, 0.0, 0.0], \
+ >>> cube = ds.covering_grid(2, left_edge=[0.0, 0.0, 0.0], \
... dims=[128, 128, 128])
"""
_spatial = True
@@ -416,28 +416,28 @@
("index", "z"))
_base_grid = None
def __init__(self, level, left_edge, dims, fields = None,
- pf = None, num_ghost_zones = 0, use_pbar = True,
+ ds = None, num_ghost_zones = 0, use_pbar = True,
field_parameters = None):
if field_parameters is None:
center = None
else:
center = field_parameters.get("center", None)
YTSelectionContainer3D.__init__(self,
- center, pf, field_parameters)
- self.left_edge = self.pf.arr(left_edge, 'code_length')
+ center, ds, field_parameters)
+ self.left_edge = self.ds.arr(left_edge, 'code_length')
self.level = level
self.ActiveDimensions = np.array(dims, dtype='int32')
- rdx = self.pf.domain_dimensions*self.pf.relative_refinement(0, level)
+ rdx = self.ds.domain_dimensions*self.ds.relative_refinement(0, level)
rdx[np.where(np.array(dims) - 2 * num_ghost_zones <= 1)] = 1 # issue 602
- self.base_dds = self.pf.domain_width / self.pf.domain_dimensions
- self.dds = self.pf.domain_width / rdx.astype("float64")
+ self.base_dds = self.ds.domain_width / self.ds.domain_dimensions
+ self.dds = self.ds.domain_width / rdx.astype("float64")
self.right_edge = self.left_edge + self.ActiveDimensions*self.dds
self._num_ghost_zones = num_ghost_zones
self._use_pbar = use_pbar
- self.global_startindex = np.rint((self.left_edge-self.pf.domain_left_edge)/self.dds).astype('int64')
- self.domain_width = np.rint((self.pf.domain_right_edge -
- self.pf.domain_left_edge)/self.dds).astype('int64')
+ self.global_startindex = np.rint((self.left_edge-self.ds.domain_left_edge)/self.dds).astype('int64')
+ self.domain_width = np.rint((self.ds.domain_right_edge -
+ self.ds.domain_left_edge)/self.dds).astype('int64')
self._setup_data_source()
@property
@@ -479,15 +479,15 @@
def _setup_data_source(self):
LE = self.left_edge - self.base_dds
RE = self.right_edge + self.base_dds
- if not all(self.pf.periodicity):
+ if not all(self.ds.periodicity):
for i in range(3):
- if self.pf.periodicity[i]: continue
- LE[i] = max(LE[i], self.pf.domain_left_edge[i])
- RE[i] = min(RE[i], self.pf.domain_right_edge[i])
- self._data_source = self.pf.region(self.center, LE, RE)
+ if self.ds.periodicity[i]: continue
+ LE[i] = max(LE[i], self.ds.domain_left_edge[i])
+ RE[i] = min(RE[i], self.ds.domain_right_edge[i])
+ self._data_source = self.ds.region(self.center, LE, RE)
self._data_source.min_level = 0
self._data_source.max_level = self.level
- self._pdata_source = self.pf.region(self.center,
+ self._pdata_source = self.ds.region(self.center,
self.left_edge, self.right_edge)
self._pdata_source.min_level = 0
self._pdata_source.max_level = self.level
@@ -507,13 +507,13 @@
fill, gen = self.index._split_fields(fields_to_get)
particles = []
for field in gen:
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
try:
finfo.check_available(self)
except NeedsOriginalGrid:
fill.append(field)
for field in fill:
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
if finfo.particle_type:
particles.append(field)
gen = [f for f in gen if f not in fill]
@@ -527,21 +527,21 @@
def _fill_fields(self, fields):
output_fields = [np.zeros(self.ActiveDimensions, dtype="float64")
for field in fields]
- domain_dims = self.pf.domain_dimensions.astype("int64") \
- * self.pf.relative_refinement(0, self.level)
+ domain_dims = self.ds.domain_dimensions.astype("int64") \
+ * self.ds.relative_refinement(0, self.level)
for chunk in self._data_source.chunks(fields, "io"):
input_fields = [chunk[field] for field in fields]
# NOTE: This usage of "refine_by" is actually *okay*, because it's
# being used with respect to iref, which is *already* scaled!
fill_region(input_fields, output_fields, self.level,
self.global_startindex, chunk.icoords, chunk.ires,
- domain_dims, self.pf.refine_by)
+ domain_dims, self.ds.refine_by)
for name, v in zip(fields, output_fields):
- fi = self.pf._get_field_info(*name)
- self[name] = self.pf.arr(v, fi.units)
+ fi = self.ds._get_field_info(*name)
+ self[name] = self.ds.arr(v, fi.units)
def _generate_container_field(self, field):
- rv = self.pf.arr(np.ones(self.ActiveDimensions, dtype="float64"),
+ rv = self.ds.arr(np.ones(self.ActiveDimensions, dtype="float64"),
"")
if field == ("index", "dx"):
np.multiply(rv, self.dds[0], rv)
@@ -603,7 +603,7 @@
Examples
--------
- >>> obj = pf.arbitrary_grid([0.0, 0.0, 0.0], [0.99, 0.99, 0.99],
+ >>> obj = ds.arbitrary_grid([0.0, 0.0, 0.0], [0.99, 0.99, 0.99],
... dims=[128, 128, 128])
"""
_spatial = True
@@ -616,12 +616,12 @@
("index", "y"),
("index", "z"))
def __init__(self, left_edge, right_edge, dims,
- pf = None, field_parameters = None):
+ ds = None, field_parameters = None):
if field_parameters is None:
center = None
else:
center = field_parameters.get("center", None)
- YTSelectionContainer3D.__init__(self, center, pf, field_parameters)
+ YTSelectionContainer3D.__init__(self, center, ds, field_parameters)
self.left_edge = np.array(left_edge)
self.right_edge = np.array(right_edge)
self.ActiveDimensions = np.array(dims, dtype='int32')
@@ -667,7 +667,7 @@
Example
-------
- cube = pf.smoothed_covering_grid(2, left_edge=[0.0, 0.0, 0.0], \
+ cube = ds.smoothed_covering_grid(2, left_edge=[0.0, 0.0, 0.0], \
dims=[128, 128, 128])
"""
_type_name = "smoothed_covering_grid"
@@ -675,8 +675,8 @@
@wraps(YTCoveringGridBase.__init__)
def __init__(self, *args, **kwargs):
self._base_dx = (
- (self.pf.domain_right_edge - self.pf.domain_left_edge) /
- self.pf.domain_dimensions.astype("float64"))
+ (self.ds.domain_right_edge - self.ds.domain_left_edge) /
+ self.ds.domain_dimensions.astype("float64"))
self.global_endindex = None
YTCoveringGridBase.__init__(self, *args, **kwargs)
self._final_start_index = self.global_startindex
@@ -685,7 +685,7 @@
if level_state is None: return
# We need a buffer region to allow for zones that contribute to the
# interpolation but are not directly inside our bounds
- level_state.data_source = self.pf.region(
+ level_state.data_source = self.ds.region(
self.center,
self.left_edge - level_state.current_dx,
self.right_edge + level_state.current_dx)
@@ -695,8 +695,8 @@
def _fill_fields(self, fields):
ls = self._initialize_level_state(fields)
for level in range(self.level + 1):
- domain_dims = self.pf.domain_dimensions.astype("int64") \
- * self.pf.relative_refinement(0, self.level)
+ domain_dims = self.ds.domain_dimensions.astype("int64") \
+ * self.ds.relative_refinement(0, self.level)
for chunk in ls.data_source.chunks(fields, "io"):
chunk[fields[0]]
input_fields = [chunk[field] for field in fields]
@@ -704,21 +704,21 @@
# being used with respect to iref, which is *already* scaled!
fill_region(input_fields, ls.fields, ls.current_level,
ls.global_startindex, chunk.icoords,
- chunk.ires, domain_dims, self.pf.refine_by)
+ chunk.ires, domain_dims, self.ds.refine_by)
self._update_level_state(ls)
for name, v in zip(fields, ls.fields):
if self.level > 0: v = v[1:-1,1:-1,1:-1]
- fi = self.pf._get_field_info(*name)
- self[name] = self.pf.arr(v, fi.units)
+ fi = self.ds._get_field_info(*name)
+ self[name] = self.ds.arr(v, fi.units)
def _initialize_level_state(self, fields):
ls = LevelState()
ls.current_dx = self._base_dx
ls.current_level = 0
- LL = self.left_edge - self.pf.domain_left_edge
+ LL = self.left_edge - self.ds.domain_left_edge
ls.global_startindex = np.rint(LL / ls.current_dx).astype('int64') - 1
- ls.domain_iwidth = np.rint((self.pf.domain_right_edge -
- self.pf.domain_left_edge)/ls.current_dx).astype('int64')
+ ls.domain_iwidth = np.rint((self.ds.domain_right_edge -
+ self.ds.domain_left_edge)/ls.current_dx).astype('int64')
if self.level > 0:
# We use one grid cell at LEAST, plus one buffer on all sides
width = self.right_edge-self.left_edge
@@ -734,16 +734,16 @@
def _update_level_state(self, level_state):
ls = level_state
if ls.current_level >= self.level: return
- rf = float(self.pf.relative_refinement(
+ rf = float(self.ds.relative_refinement(
ls.current_level, ls.current_level + 1))
ls.current_level += 1
ls.current_dx = self._base_dx / \
- self.pf.relative_refinement(0, ls.current_level)
+ self.ds.relative_refinement(0, ls.current_level)
self._setup_data_source(ls)
- LL = self.left_edge - self.pf.domain_left_edge
+ LL = self.left_edge - self.ds.domain_left_edge
ls.old_global_startindex = ls.global_startindex
ls.global_startindex = np.rint(LL / ls.current_dx).astype('int64') - 1
- ls.domain_iwidth = np.rint(self.pf.domain_width/ls.current_dx).astype('int64')
+ ls.domain_iwidth = np.rint(self.ds.domain_width/ls.current_dx).astype('int64')
input_left = (level_state.old_global_startindex + 0.5) * rf
width = (self.ActiveDimensions*self.dds)
output_dims = np.rint(width/level_state.current_dx+0.5).astype("int32") + 2
@@ -792,12 +792,12 @@
This will create a data object, find a nice value in the center, and
output the vertices to "triangles.obj" after rescaling them.
- >>> sp = pf.sphere("max", (10, "kpc")
- >>> surf = pf.surface(sp, "Density", 5e-27)
+ >>> sp = ds.sphere("max", (10, "kpc")
+ >>> surf = ds.surface(sp, "Density", 5e-27)
>>> print surf["Temperature"]
>>> print surf.vertices
- >>> bounds = [(sp.center[i] - 5.0/pf['kpc'],
- ... sp.center[i] + 5.0/pf['kpc']) for i in range(3)]
+ >>> bounds = [(sp.center[i] - 5.0/ds['kpc'],
+ ... sp.center[i] + 5.0/ds['kpc']) for i in range(3)]
>>> surf.export_ply("my_galaxy.ply", bounds = bounds)
"""
_type_name = "surface"
@@ -816,8 +816,8 @@
self.field_value = field_value
self.vertex_samples = YTFieldData()
center = data_source.get_field_parameter("center")
- super(YTSurfaceBase, self).__init__(center = center, pf =
- data_source.pf )
+ super(YTSurfaceBase, self).__init__(center = center, ds =
+ data_source.ds )
def _generate_container_field(self, field):
self.get_data(field)
@@ -916,8 +916,8 @@
This will create a data object, find a nice value in the center, and
calculate the metal flux over it.
- >>> sp = pf.sphere("max", (10, "kpc")
- >>> surf = pf.surface(sp, "Density", 5e-27)
+ >>> sp = ds.sphere("max", (10, "kpc")
+ >>> surf = ds.surface(sp, "Density", 5e-27)
>>> flux = surf.calculate_flux(
... "velocity_x", "velocity_y", "velocity_z", "Metal_Density")
"""
@@ -1000,25 +1000,25 @@
Examples
--------
- >>> sp = pf.sphere("max", (10, "kpc"))
+ >>> sp = ds.sphere("max", (10, "kpc"))
>>> trans = 1.0
>>> distf = 3.1e18*1e3 # distances into kpc
- >>> surf = pf.surface(sp, "Density", 5e-27)
+ >>> surf = ds.surface(sp, "Density", 5e-27)
>>> surf.export_obj("my_galaxy", transparency=trans, dist_fac = distf)
- >>> sp = pf.sphere("max", (10, "kpc"))
+ >>> sp = ds.sphere("max", (10, "kpc"))
>>> mi, ma = sp.quantities['Extrema']('Temperature')[0]
>>> rhos = [1e-24, 1e-25]
>>> trans = [0.5, 1.0]
>>> distf = 3.1e18*1e3 # distances into kpc
>>> for i, r in enumerate(rhos):
- ... surf = pf.surface(sp,'Density',r)
+ ... surf = ds.surface(sp,'Density',r)
... surf.export_obj("my_galaxy", transparency=trans[i],
... color_field='Temperature', dist_fac = distf,
... plot_index = i, color_field_max = ma,
... color_field_min = mi)
- >>> sp = pf.sphere("max", (10, "kpc"))
+ >>> sp = ds.sphere("max", (10, "kpc"))
>>> rhos = [1e-24, 1e-25]
>>> trans = [0.5, 1.0]
>>> distf = 3.1e18*1e3 # distances into kpc
@@ -1026,7 +1026,7 @@
... return (data['Density']*data['Density']*np.sqrt(data['Temperature']))
>>> add_field("Emissivity", function=_Emissivity, units=r"\rm{g K}/\rm{cm}^{6}")
>>> for i, r in enumerate(rhos):
- ... surf = pf.surface(sp,'Density',r)
+ ... surf = ds.surface(sp,'Density',r)
... surf.export_obj("my_galaxy", transparency=trans[i],
... color_field='Temperature', emit_field = 'Emissivity',
... dist_fac = distf, plot_index = i)
@@ -1134,8 +1134,8 @@
# interpolate emissivity to enumerated colors
emiss = np.interp(np.mgrid[0:lut[0].shape[0]],np.mgrid[0:len(cs)],f["emit"][:])
if dist_fac is None: # then normalize by bounds
- DLE = self.pf.domain_left_edge
- DRE = self.pf.domain_right_edge
+ DLE = self.ds.domain_left_edge
+ DRE = self.ds.domain_right_edge
bounds = [(DLE[i], DRE[i]) for i in range(3)]
for i, ax in enumerate("xyz"):
# Do the bounds first since we cast to f32
@@ -1198,12 +1198,12 @@
Examples
--------
- >>> sp = pf.sphere("max", (10, "kpc")
- >>> surf = pf.surface(sp, "Density", 5e-27)
+ >>> sp = ds.sphere("max", (10, "kpc")
+ >>> surf = ds.surface(sp, "Density", 5e-27)
>>> print surf["Temperature"]
>>> print surf.vertices
- >>> bounds = [(sp.center[i] - 5.0/pf['kpc'],
- ... sp.center[i] + 5.0/pf['kpc']) for i in range(3)]
+ >>> bounds = [(sp.center[i] - 5.0/ds['kpc'],
+ ... sp.center[i] + 5.0/ds['kpc']) for i in range(3)]
>>> surf.export_ply("my_galaxy.ply", bounds = bounds)
"""
if self.vertices is None:
@@ -1236,8 +1236,8 @@
else:
f = open(filename, "wb")
if bounds is None:
- DLE = self.pf.domain_left_edge
- DRE = self.pf.domain_right_edge
+ DLE = self.ds.domain_left_edge
+ DRE = self.ds.domain_right_edge
bounds = [(DLE[i], DRE[i]) for i in range(3)]
nv = self.vertices.shape[1]
vs = [("x", "<f"), ("y", "<f"), ("z", "<f"),
@@ -1334,13 +1334,13 @@
--------
>>> from yt.mods import *
- >>> pf = load("redshift0058")
- >>> dd = pf.sphere("max", (200, "kpc"))
+ >>> ds = load("redshift0058")
+ >>> dd = ds.sphere("max", (200, "kpc"))
>>> rho = 5e-27
- >>> bounds = [(dd.center[i] - 100.0/pf['kpc'],
- ... dd.center[i] + 100.0/pf['kpc']) for i in range(3)]
+ >>> bounds = [(dd.center[i] - 100.0/ds['kpc'],
+ ... dd.center[i] + 100.0/ds['kpc']) for i in range(3)]
...
- >>> surf = pf.surface(dd, "Density", rho)
+ >>> surf = ds.surface(dd, "Density", rho)
>>> rv = surf.export_sketchfab(
... title = "Testing Upload",
... description = "A simple test of the uploader",
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/data_containers.py Sun Jun 15 19:50:51 2014 -0700
@@ -93,19 +93,19 @@
_field_cache = None
_index = None
- def __init__(self, pf, field_parameters):
+ def __init__(self, ds, field_parameters):
"""
Typically this is never called directly, but only due to inheritance.
It associates a :class:`~yt.data_objects.api.Dataset` with the class,
sets its initial set of fields, and the remainder of the arguments
are passed as field_parameters.
"""
- if pf != None:
- self.pf = pf
+ if ds != None:
+ self.ds = ds
self._current_particle_type = "all"
- self._current_fluid_type = self.pf.default_fluid_type
- self.pf.objects.append(weakref.proxy(self))
- mylog.debug("Appending object to %s (type: %s)", self.pf, type(self))
+ self._current_fluid_type = self.ds.default_fluid_type
+ self.ds.objects.append(weakref.proxy(self))
+ mylog.debug("Appending object to %s (type: %s)", self.ds, type(self))
self.field_data = YTFieldData()
if field_parameters is None: field_parameters = {}
self._set_default_field_parameters()
@@ -117,20 +117,20 @@
def index(self):
if self._index is not None:
return self._index
- self._index = self.pf.index
+ self._index = self.ds.index
return self._index
def _set_default_field_parameters(self):
self.field_parameters = {}
self.set_field_parameter(
- "center",self.pf.arr(np.zeros(3,dtype='float64'),'cm'))
+ "center",self.ds.arr(np.zeros(3,dtype='float64'),'cm'))
self.set_field_parameter(
- "bulk_velocity",self.pf.arr(np.zeros(3,dtype='float64'),'cm/s'))
+ "bulk_velocity",self.ds.arr(np.zeros(3,dtype='float64'),'cm/s'))
self.set_field_parameter(
"normal",np.array([0,0,1],dtype='float64'))
def apply_units(self, arr, units):
- return self.pf.arr(arr, input_units = units)
+ return self.ds.arr(arr, input_units = units)
def _set_center(self, center):
if center is None:
@@ -138,24 +138,24 @@
self.set_field_parameter('center', self.center)
return
elif isinstance(center, YTArray):
- self.center = self.pf.arr(center.in_cgs())
+ self.center = self.ds.arr(center.in_cgs())
self.center.convert_to_units('code_length')
elif isinstance(center, (types.ListType, types.TupleType, np.ndarray)):
if isinstance(center[0], YTQuantity):
- self.center = self.pf.arr([c.in_cgs() for c in center])
+ self.center = self.ds.arr([c.in_cgs() for c in center])
self.center.convert_to_units('code_length')
else:
- self.center = self.pf.arr(center, 'code_length')
+ self.center = self.ds.arr(center, 'code_length')
elif isinstance(center, basestring):
if center.lower() in ("c", "center"):
- self.center = self.pf.domain_center
+ self.center = self.ds.domain_center
# is this dangerous for race conditions?
elif center.lower() in ("max", "m"):
- self.center = self.pf.h.find_max(("gas", "density"))[1]
+ self.center = self.ds.find_max(("gas", "density"))[1]
elif center.startswith("max_"):
- self.center = self.pf.h.find_max(center[4:])[1]
+ self.center = self.ds.find_max(center[4:])[1]
else:
- self.center = self.pf.arr(center, 'code_length', dtype='float64')
+ self.center = self.ds.arr(center, 'code_length', dtype='float64')
self.set_field_parameter('center', self.center)
def get_field_parameter(self, name, default=None):
@@ -186,7 +186,7 @@
This will attempt to convert a given unit to cgs from code units.
It either returns the multiplicative factor or throws a KeyError.
"""
- return self.pf[datatype]
+ return self.ds[datatype]
def clear_data(self):
"""
@@ -214,7 +214,7 @@
if f not in self.field_data and key not in self.field_data:
if f in self._container_fields:
self.field_data[f] = \
- self.pf.arr(self._generate_container_field(f))
+ self.ds.arr(self._generate_container_field(f))
return self.field_data[f]
else:
self.get_data(f)
@@ -225,10 +225,10 @@
rv = self.field_data.get(f, None)
if rv is None:
if isinstance(f, types.TupleType):
- fi = self.pf._get_field_info(*f)
+ fi = self.ds._get_field_info(*f)
elif isinstance(f, types.StringType):
- fi = self.pf._get_field_info("unknown", f)
- rv = self.pf.arr(self.field_data[key], fi.units)
+ fi = self.ds._get_field_info("unknown", f)
+ rv = self.ds.arr(self.field_data[key], fi.units)
return rv
def __setitem__(self, key, val):
@@ -247,7 +247,7 @@
def _generate_field(self, field):
ftype, fname = field
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
with self._field_type_state(ftype, finfo):
if fname in self._container_fields:
tr = self._generate_container_field(field)
@@ -256,13 +256,13 @@
else:
tr = self._generate_fluid_field(field)
if tr is None:
- raise YTCouldNotGenerateField(field, self.pf)
+ raise YTCouldNotGenerateField(field, self.ds)
return tr
def _generate_fluid_field(self, field):
# First we check the validator
ftype, fname = field
- finfo = self.pf._get_field_info(ftype, fname)
+ finfo = self.ds._get_field_info(ftype, fname)
if self._current_chunk is None or \
self._current_chunk.chunk_type != "spatial":
gen_obj = self
@@ -310,7 +310,7 @@
else:
gen_obj = self._current_chunk.objs[0]
try:
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
finfo.check_available(gen_obj)
except NeedsGridType as ngt_exception:
if ngt_exception.ghost_zones != 0:
@@ -332,7 +332,7 @@
ind += data.size
else:
with self._field_type_state(ftype, finfo, gen_obj):
- rv = self.pf._get_field_info(*field)(gen_obj)
+ rv = self.ds._get_field_info(*field)(gen_obj)
return rv
def _count_particles(self, ftype):
@@ -413,14 +413,14 @@
data_collection.append(gdata)
def __reduce__(self):
- args = tuple([self.pf._hash(), self._type_name] +
+ args = tuple([self.ds._hash(), self._type_name] +
[getattr(self, n) for n in self._con_args] +
[self.field_parameters])
return (_reconstruct_object, args)
def __repr__(self):
# We'll do this the slow way to be clear what's going on
- s = "%s (%s): " % (self.__class__.__name__, self.pf)
+ s = "%s (%s): " % (self.__class__.__name__, self.ds)
s += ", ".join(["%s=%s" % (i, getattr(self,i))
for i in self._con_args])
return s
@@ -458,19 +458,19 @@
not isinstance(field[1], types.StringTypes):
raise YTFieldNotParseable(field)
ftype, fname = field
- finfo = self.pf._get_field_info(ftype, fname)
+ finfo = self.ds._get_field_info(ftype, fname)
else:
fname = field
- finfo = self.pf._get_field_info("unknown", fname)
+ finfo = self.ds._get_field_info("unknown", fname)
if finfo.particle_type:
ftype = self._current_particle_type
else:
ftype = self._current_fluid_type
- if (ftype, fname) not in self.pf.field_info:
- ftype = self.pf._last_freq[0]
- if finfo.particle_type and ftype not in self.pf.particle_types:
+ if (ftype, fname) not in self.ds.field_info:
+ ftype = self.ds._last_freq[0]
+ if finfo.particle_type and ftype not in self.ds.particle_types:
raise YTFieldTypeNotFound(ftype)
- elif not finfo.particle_type and ftype not in self.pf.fluid_types:
+ elif not finfo.particle_type and ftype not in self.ds.fluid_types:
raise YTFieldTypeNotFound(ftype)
explicit_fields.append((ftype, fname))
return explicit_fields
@@ -480,7 +480,7 @@
@property
def tiles(self):
if self._tree is not None: return self._tree
- self._tree = AMRKDTree(self.pf, data_source=self)
+ self._tree = AMRKDTree(self.ds, data_source=self)
return self._tree
@property
@@ -536,22 +536,22 @@
for field in itertools.cycle(fields_to_get):
if inspected >= len(fields_to_get): break
inspected += 1
- fi = self.pf._get_field_info(*field)
+ fi = self.ds._get_field_info(*field)
if not spatial and any(
isinstance(v, ValidateSpatial) for v in fi.validators):
# We don't want to pre-fetch anything that's spatial, as that
# will be done later.
continue
- fd = self.pf.field_dependencies.get(field, None) or \
- self.pf.field_dependencies.get(field[1], None)
+ fd = self.ds.field_dependencies.get(field, None) or \
+ self.ds.field_dependencies.get(field[1], None)
# This is long overdue. Any time we *can't* find a field
# dependency -- for instance, if the derived field has been added
- # after parameter file instantiation -- let's just try to
+ # after dataset instantiation -- let's just try to
# recalculate it.
if fd is None:
try:
- fd = fi.get_dependencies(pf = self.pf)
- self.pf.field_dependencies[field] = fd
+ fd = fi.get_dependencies(ds = self.ds)
+ self.ds.field_dependencies[field] = fd
except:
continue
requested = self._determine_fields(list(set(fd.requested)))
@@ -566,14 +566,14 @@
nfields = []
apply_fields = defaultdict(list)
for field in self._determine_fields(fields):
- if field[0] in self.pf.h.filtered_particle_types:
- f = self.pf.known_filters[field[0]]
+ if field[0] in self.ds.filtered_particle_types:
+ f = self.ds.known_filters[field[0]]
apply_fields[field[0]].append(
(f.filtered_type, field[1]))
else:
nfields.append(field)
for filter_type in apply_fields:
- f = self.pf.known_filters[filter_type]
+ f = self.ds.known_filters[filter_type]
with f.apply(self):
self.get_data(apply_fields[filter_type])
fields = nfields
@@ -588,7 +588,7 @@
fields_to_generate = []
for field in self._determine_fields(fields):
if field in self.field_data: continue
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
try:
finfo.check_available(self)
except NeedsGridType:
@@ -610,7 +610,7 @@
fluids, particles = [], []
finfos = {}
for ftype, fname in fields_to_get:
- finfo = self.pf._get_field_info(ftype, fname)
+ finfo = self.ds._get_field_info(ftype, fname)
finfos[ftype, fname] = finfo
if finfo.particle_type:
particles.append((ftype, fname))
@@ -622,12 +622,12 @@
read_fluids, gen_fluids = self.index._read_fluid_fields(
fluids, self, self._current_chunk)
for f, v in read_fluids.items():
- self.field_data[f] = self.pf.arr(v, input_units = finfos[f].units)
+ self.field_data[f] = self.ds.arr(v, input_units = finfos[f].units)
read_particles, gen_particles = self.index._read_particle_fields(
particles, self, self._current_chunk)
for f, v in read_particles.items():
- self.field_data[f] = self.pf.arr(v, input_units = finfos[f].units)
+ self.field_data[f] = self.ds.arr(v, input_units = finfos[f].units)
fields_to_generate += gen_fluids + gen_particles
self._generate_fields(fields_to_generate)
@@ -648,11 +648,11 @@
field = fields_to_generate[index % len(fields_to_generate)]
index += 1
if field in self.field_data: continue
- fi = self.pf._get_field_info(*field)
+ fi = self.ds._get_field_info(*field)
try:
fd = self._generate_field(field)
if type(fd) == np.ndarray:
- fd = self.pf.arr(fd, fi.units)
+ fd = self.ds.arr(fd, fi.units)
if fd is None:
raise RuntimeError
self.field_data[field] = fd
@@ -725,9 +725,9 @@
class YTSelectionContainer1D(YTSelectionContainer):
_spatial = False
- def __init__(self, pf, field_parameters):
+ def __init__(self, ds, field_parameters):
super(YTSelectionContainer1D, self).__init__(
- pf, field_parameters)
+ ds, field_parameters)
self._grids = None
self._sortkey = None
self._sorted = {}
@@ -739,12 +739,12 @@
aligned with any axis.
"""
_spatial = False
- def __init__(self, axis, pf, field_parameters):
+ def __init__(self, axis, ds, field_parameters):
ParallelAnalysisInterface.__init__(self)
super(YTSelectionContainer2D, self).__init__(
- pf, field_parameters)
- # We need the pf, which will exist by now, for fix_axis.
- self.axis = fix_axis(axis, self.pf)
+ ds, field_parameters)
+ # We need the ds, which will exist by now, for fix_axis.
+ self.axis = fix_axis(axis, self.ds)
self.set_field_parameter("axis", axis)
def _convert_field_name(self, field):
@@ -757,7 +757,7 @@
from yt.visualization.plot_window import \
get_window_parameters, PWViewerMPL
from yt.visualization.fixed_resolution import FixedResolutionBuffer
- (bounds, center) = get_window_parameters(axis, center, width, self.pf)
+ (bounds, center) = get_window_parameters(axis, center, width, self.ds)
pw = PWViewerMPL(self, bounds, fields=list(self.fields), origin=origin,
frb_generator=FixedResolutionBuffer,
plot_type=plot_type)
@@ -802,13 +802,13 @@
Examples
--------
- >>> proj = pf.proj("Density", 0)
+ >>> proj = ds.proj("Density", 0)
>>> frb = proj.to_frb( (100.0, 'kpc'), 1024)
>>> write_image(np.log10(frb["Density"]), 'density_100kpc.png')
"""
- if (self.pf.geometry == "cylindrical" and self.axis == 1) or \
- (self.pf.geometry == "polar" and self.axis == 2):
+ if (self.ds.geometry == "cylindrical" and self.axis == 1) or \
+ (self.ds.geometry == "polar" and self.axis == 2):
if center is not None and center != (0.0, 0.0):
raise NotImplementedError(
"Currently we only support images centered at R=0. " +
@@ -825,23 +825,23 @@
if center is None:
center = self.get_field_parameter("center")
if center is None:
- center = (self.pf.domain_right_edge
- + self.pf.domain_left_edge)/2.0
+ center = (self.ds.domain_right_edge
+ + self.ds.domain_left_edge)/2.0
elif iterable(center) and not isinstance(center, YTArray):
- center = self.pf.arr(center, 'code_length')
+ center = self.ds.arr(center, 'code_length')
if iterable(width):
w, u = width
- width = self.pf.quan(w, input_units = u)
+ width = self.ds.quan(w, input_units = u)
if height is None:
height = width
elif iterable(height):
h, u = height
- height = self.pf.quan(w, input_units = u)
+ height = self.ds.quan(w, input_units = u)
if not iterable(resolution):
resolution = (resolution, resolution)
from yt.visualization.fixed_resolution import FixedResolutionBuffer
- xax = self.pf.coordinates.x_axis[self.axis]
- yax = self.pf.coordinates.y_axis[self.axis]
+ xax = self.ds.coordinates.x_axis[self.axis]
+ yax = self.ds.coordinates.y_axis[self.axis]
bounds = (center[xax] - width*0.5, center[xax] + width*0.5,
center[yax] - height*0.5, center[yax] + height*0.5)
frb = FixedResolutionBuffer(self, bounds, resolution,
@@ -857,9 +857,9 @@
_key_fields = ['x','y','z','dx','dy','dz']
_spatial = False
_num_ghost_zones = 0
- def __init__(self, center, pf = None, field_parameters = None):
+ def __init__(self, center, ds = None, field_parameters = None):
ParallelAnalysisInterface.__init__(self)
- super(YTSelectionContainer3D, self).__init__(pf, field_parameters)
+ super(YTSelectionContainer3D, self).__init__(ds, field_parameters)
self._set_center(center)
self.coords = None
self._grids = None
@@ -877,12 +877,12 @@
--------
To find the total mass of gas above 10^6 K in your volume:
- >>> pf = load("RedshiftOutput0005")
- >>> ad = pf.h.all_data()
+ >>> ds = load("RedshiftOutput0005")
+ >>> ad = ds.all_data()
>>> cr = ad.cut_region(["obj['Temperature'] > 1e6"])
>>> print cr.quantities["TotalQuantity"]("CellMassMsun")
"""
- cr = self.pf.cut_region(self, field_cuts,
+ cr = self.ds.cut_region(self, field_cuts,
field_parameters = field_parameters)
return cr
@@ -936,7 +936,7 @@
This will create a data object, find a nice value in the center, and
output the vertices to "triangles.obj" after rescaling them.
- >>> dd = pf.h.all_data()
+ >>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> verts = dd.extract_isocontours("Density", rho,
@@ -1048,7 +1048,7 @@
This will create a data object, find a nice value in the center, and
calculate the metal flux over it.
- >>> dd = pf.h.all_data()
+ >>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> flux = dd.calculate_isocontour_flux("Density", rho,
@@ -1126,7 +1126,7 @@
def particles(self):
if self._particle_handler is None:
self._particle_handler = \
- particle_handler_registry[self._type_name](self.pf, self)
+ particle_handler_registry[self._type_name](self.ds, self)
return self._particle_handler
@@ -1139,7 +1139,7 @@
from what might be expected from the geometric volume.
"""
return self.quantities["TotalQuantity"]("CellVolume")[0] * \
- (self.pf[unit] / self.pf['cm']) ** 3.0
+ (self.ds[unit] / self.ds['cm']) ** 3.0
# Many of these items are set up specifically to ensure that
# we are not breaking old pickle files. This means we must only call the
@@ -1149,41 +1149,41 @@
# In the future, this would be better off being set up to more directly
# reference objects or retain state, perhaps with a context manager.
#
-# One final detail: time series or multiple parameter files in a single pickle
+# One final detail: time series or multiple datasets in a single pickle
# seems problematic.
class ReconstructedObject(tuple):
pass
-def _check_nested_args(arg, ref_pf):
+def _check_nested_args(arg, ref_ds):
if not isinstance(arg, (tuple, list, ReconstructedObject)):
return arg
- elif isinstance(arg, ReconstructedObject) and ref_pf == arg[0]:
+ elif isinstance(arg, ReconstructedObject) and ref_ds == arg[0]:
return arg[1]
- narg = [_check_nested_args(a, ref_pf) for a in arg]
+ narg = [_check_nested_args(a, ref_ds) for a in arg]
return narg
-def _get_pf_by_hash(hash):
- from yt.data_objects.static_output import _cached_pfs
- for pf in _cached_pfs.values():
- if pf._hash() == hash: return pf
+def _get_ds_by_hash(hash):
+ from yt.data_objects.static_output import _cached_datasets
+ for ds in _cached_datasets.values():
+ if ds._hash() == hash: return ds
return None
def _reconstruct_object(*args, **kwargs):
- pfid = args[0]
+ dsid = args[0]
dtype = args[1]
- pf = _get_pf_by_hash(pfid)
- if not pf:
- pfs = ParameterFileStore()
- pf = pfs.get_pf_hash(pfid)
+ ds = _get_ds_by_hash(dsid)
+ if not ds:
+ datasets = ParameterFileStore()
+ ds = datasets.get_ds_hash(dsid)
field_parameters = args[-1]
- # will be much nicer when we can do pfid, *a, fp = args
+ # will be much nicer when we can do dsid, *a, fp = args
args = args[2:-1]
- new_args = [_check_nested_args(a, pf) for a in args]
- cls = getattr(pf.h, dtype)
+ new_args = [_check_nested_args(a, ds) for a in args]
+ cls = getattr(ds, dtype)
obj = cls(*new_args)
obj.field_parameters.update(field_parameters)
- return ReconstructedObject((pf, obj))
+ return ReconstructedObject((ds, obj))
class YTBooleanRegionBase(YTSelectionContainer3D):
"""
@@ -1199,20 +1199,20 @@
Examples
--------
- >>> re1 = pf.region([0.5, 0.5, 0.5], [0.4, 0.4, 0.4],
+ >>> re1 = ds.region([0.5, 0.5, 0.5], [0.4, 0.4, 0.4],
[0.6, 0.6, 0.6])
- >>> re2 = pf.region([0.5, 0.5, 0.5], [0.45, 0.45, 0.45],
+ >>> re2 = ds.region([0.5, 0.5, 0.5], [0.45, 0.45, 0.45],
[0.55, 0.55, 0.55])
- >>> sp1 = pf.sphere([0.575, 0.575, 0.575], .03)
- >>> toroid_shape = pf.boolean([re1, "NOT", re2])
- >>> toroid_shape_with_hole = pf.boolean([re1, "NOT", "(", re2, "OR",
+ >>> sp1 = ds.sphere([0.575, 0.575, 0.575], .03)
+ >>> toroid_shape = ds.boolean([re1, "NOT", re2])
+ >>> toroid_shape_with_hole = ds.boolean([re1, "NOT", "(", re2, "OR",
sp1, ")"])
"""
_type_name = "boolean"
_con_args = ("regions",)
- def __init__(self, regions, fields = None, pf = None, **kwargs):
+ def __init__(self, regions, fields = None, ds = None, **kwargs):
# Center is meaningless, but we'll define it all the same.
- YTSelectionContainer3D.__init__(self, [0.5]*3, fields, pf, **kwargs)
+ YTSelectionContainer3D.__init__(self, [0.5]*3, fields, ds, **kwargs)
self.regions = regions
self._all_regions = []
self._some_overlap = []
@@ -1268,7 +1268,7 @@
def __repr__(self):
# We'll do this the slow way to be clear what's going on
- s = "%s (%s): " % (self.__class__.__name__, self.pf)
+ s = "%s (%s): " % (self.__class__.__name__, self.ds)
s += "["
for i, region in enumerate(self.regions):
if region in ["OR", "AND", "NOT", "(", ")"]:
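
The user-facing effect of the data-container renames above is that scripts address the dataset handle directly instead of going through "pf" and its ".h" index attribute. A minimal, hypothetical sketch of that calling pattern, assuming a yt-3.0-era load() and the IsolatedGalaxy sample dataset already used in the docstrings:

from yt.mods import load   # yt-3.0-era import; later releases also expose yt.load

ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")   # formerly: pf = load(...)
ad = ds.all_data()                                   # formerly: pf.h.all_data()
sp = ds.sphere("max", (100.0, 'kpc'))                # containers now hang off the dataset
prj = ds.proj("density", 0)                          # formerly: pf.h.proj(...)
frb = prj.to_frb((100.0, 'kpc'), 1024)               # widths are (value, unit) tuples
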
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/derived_quantities.py Sun Jun 15 19:50:51 2014 -0700
@@ -63,7 +63,7 @@
for i in range(self.num_vals):
values[i].append(storage[key][i])
# These will be YTArrays
- values = [self.data_source.pf.arr(values[i]) for i in range(self.num_vals)]
+ values = [self.data_source.ds.arr(values[i]) for i in range(self.num_vals)]
values = self.reduce_intermediate(values)
return values
@@ -108,8 +108,8 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.weighted_average_quantity([("gas", "density"),
... ("gas", "temperature")],
... ("gas", "cell_mass"))
@@ -147,8 +147,8 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.total_quantity([("gas", "cell_mass")])
"""
@@ -177,14 +177,14 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.total_mass()
"""
def __call__(self):
- self.data_source.pf.index
- fi = self.data_source.pf.field_info
+ self.data_source.ds.index
+ fi = self.data_source.ds.field_info
fields = []
if ("gas", "cell_mass") in fi:
fields.append(("gas", "cell_mass"))
@@ -213,16 +213,16 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.center_of_mass()
"""
def count_values(self, use_gas = True, use_particles = False):
use_gas &= \
- (("gas", "cell_mass") in self.data_source.pf.field_info)
+ (("gas", "cell_mass") in self.data_source.ds.field_info)
use_particles &= \
- (("all", "particle_mass") in self.data_source.pf.field_info)
+ (("all", "particle_mass") in self.data_source.ds.field_info)
self.num_vals = 0
if use_gas:
self.num_vals += 4
@@ -231,9 +231,9 @@
def process_chunk(self, data, use_gas = True, use_particles = False):
use_gas &= \
- (("gas", "cell_mass") in self.data_source.pf.field_info)
+ (("gas", "cell_mass") in self.data_source.ds.field_info)
use_particles &= \
- (("all", "particle_mass") in self.data_source.pf.field_info)
+ (("all", "particle_mass") in self.data_source.ds.field_info)
vals = []
if use_gas:
vals += [(data[ax] * data["cell_mass"]).sum(dtype=np.float64)
@@ -282,8 +282,8 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.bulk_velocity()
"""
@@ -344,8 +344,8 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.weighted_variance([("gas", "density"),
... ("gas", "temperature")],
... ("gas", "cell_mass"))
@@ -407,16 +407,16 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.angular_momentum_vector()
"""
def count_values(self, use_gas=True, use_particles=True):
use_gas &= \
- (("gas", "cell_mass") in self.data_source.pf.field_info)
+ (("gas", "cell_mass") in self.data_source.ds.field_info)
use_particles &= \
- (("all", "particle_mass") in self.data_source.pf.field_info)
+ (("all", "particle_mass") in self.data_source.ds.field_info)
num_vals = 0
if use_gas: num_vals += 4
if use_particles: num_vals += 4
@@ -424,9 +424,9 @@
def process_chunk(self, data, use_gas=True, use_particles=True):
use_gas &= \
- (("gas", "cell_mass") in self.data_source.pf.field_info)
+ (("gas", "cell_mass") in self.data_source.ds.field_info)
use_particles &= \
- (("all", "particle_mass") in self.data_source.pf.field_info)
+ (("all", "particle_mass") in self.data_source.ds.field_info)
rvals = []
if use_gas:
rvals.extend([(data["gas", "specific_angular_momentum_%s" % axis] *
@@ -467,8 +467,8 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.extrema([("gas", "density"),
... ("gas", "temperature")])
@@ -513,8 +513,8 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.max_location(("gas", "density"))
"""
@@ -556,8 +556,8 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.min_location(("gas", "density"))
"""
@@ -612,8 +612,8 @@
Examples
--------
- >>> pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- >>> ad = pf.all_data()
+ >>> ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ >>> ad = ds.all_data()
>>> print ad.quantities.center_of_mass()
"""
@@ -622,12 +622,12 @@
def process_chunk(self, data, use_gas=True, use_particles=True):
use_gas &= \
- (("gas", "cell_mass") in self.data_source.pf.field_info)
+ (("gas", "cell_mass") in self.data_source.ds.field_info)
use_particles &= \
- (("all", "particle_mass") in self.data_source.pf.field_info)
- e = data.pf.quan(0., "erg")
- j = data.pf.quan(0., "g*cm**2/s")
- m = data.pf.quan(0., "g")
+ (("all", "particle_mass") in self.data_source.ds.field_info)
+ e = data.ds.quan(0., "erg")
+ j = data.ds.quan(0., "g*cm**2/s")
+ m = data.ds.quan(0., "g")
if use_gas:
e += (data["gas", "kinetic_energy"] *
data["index", "cell_volume"]).sum(dtype=np.float64)
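
The derived-quantity hunks above only swap the internal pf reference for ds; the calling convention is unchanged. A hedged sketch combining the quantities named in the docstrings with the use_gas/use_particles keywords visible in the count_values/process_chunk hunks (dataset path is the docstrings' example):

from yt.mods import load

ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
sp = ds.sphere("max", (10.0, 'kpc'))
# terms are only included when the corresponding fields exist in ds.field_info
com = sp.quantities.center_of_mass(use_gas=True, use_particles=True)
L = sp.quantities.angular_momentum_vector()
print com, L
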
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/grid_patch.py
--- a/yt/data_objects/grid_patch.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/grid_patch.py Sun Jun 15 19:50:51 2014 -0700
@@ -55,7 +55,7 @@
self.field_parameters = {}
self.id = id
self._child_mask = self._child_indices = self._child_index_mask = None
- self.pf = index.parameter_file
+ self.ds = index.dataset
self._index = index
self.start_index = None
self.filename = filename
@@ -63,7 +63,7 @@
self._last_count = -1
self._last_selector_id = None
self._current_particle_type = 'all'
- self._current_fluid_type = self.pf.default_fluid_type
+ self._current_fluid_type = self.ds.default_fluid_type
def get_global_startindex(self):
"""
@@ -74,7 +74,7 @@
if self.start_index is not None:
return self.start_index
if self.Parent is None:
- left = self.LeftEdge - self.pf.domain_left_edge
+ left = self.LeftEdge - self.ds.domain_left_edge
start_index = left / self.dds
return np.rint(start_index).astype('int64').ravel().view(np.ndarray)
@@ -82,7 +82,7 @@
di = np.rint( (self.LeftEdge.ndarray_view() -
self.Parent.LeftEdge.ndarray_view()) / pdx)
start_index = self.Parent.get_global_startindex() + di
- self.start_index = (start_index * self.pf.refine_by).astype('int64').ravel()
+ self.start_index = (start_index * self.ds.refine_by).astype('int64').ravel()
return self.start_index
def __getitem__(self, key):
@@ -91,7 +91,7 @@
fields = self._determine_fields(key)
except YTFieldTypeNotFound:
return tr
- finfo = self.pf._get_field_info(*fields[0])
+ finfo = self.ds._get_field_info(*fields[0])
if not finfo.particle_type:
return tr.reshape(self.ActiveDimensions)
return tr
@@ -102,7 +102,7 @@
either returns the multiplicative factor or throws a KeyError.
"""
- return self.pf[datatype]
+ return self.ds[datatype]
@property
def shape(self):
@@ -134,16 +134,16 @@
# that dx=dy=dz, at least here. We probably do elsewhere.
id = self.id - self._id_offset
if self.Parent is not None:
- self.dds = self.Parent.dds.ndarray_view() / self.pf.refine_by
+ self.dds = self.Parent.dds.ndarray_view() / self.ds.refine_by
else:
LE, RE = self.index.grid_left_edge[id,:], \
self.index.grid_right_edge[id,:]
self.dds = (RE - LE) / self.ActiveDimensions
- if self.pf.dimensionality < 2:
- self.dds[1] = self.pf.domain_right_edge[1] - self.pf.domain_left_edge[1]
- if self.pf.dimensionality < 3:
- self.dds[2] = self.pf.domain_right_edge[2] - self.pf.domain_left_edge[2]
- self.dds = self.pf.arr(self.dds, "code_length")
+ if self.ds.dimensionality < 2:
+ self.dds[1] = self.ds.domain_right_edge[1] - self.ds.domain_left_edge[1]
+ if self.ds.dimensionality < 3:
+ self.dds[2] = self.ds.domain_right_edge[2] - self.ds.domain_left_edge[2]
+ self.dds = self.ds.arr(self.dds, "code_length")
def __repr__(self):
return "AMRGridPatch_%04i" % (self.id)
@@ -181,7 +181,7 @@
return pos
def _fill_child_mask(self, child, mask, tofill, dlevel = 1):
- rf = self.pf.refine_by
+ rf = self.ds.refine_by
if dlevel != 1:
rf = rf**dlevel
gi, cgi = self.get_global_startindex(), child.get_global_startindex()
@@ -231,8 +231,8 @@
# than the grid by nZones*dx in each direction
nl = self.get_global_startindex() - n_zones
nr = nl + self.ActiveDimensions + 2 * n_zones
- new_left_edge = nl * self.dds + self.pf.domain_left_edge
- new_right_edge = nr * self.dds + self.pf.domain_left_edge
+ new_left_edge = nl * self.dds + self.ds.domain_left_edge
+ new_right_edge = nr * self.dds + self.ds.domain_left_edge
# Something different needs to be done for the root grid, though
level = self.Level
@@ -247,12 +247,12 @@
field_parameters = {}
field_parameters.update(self.field_parameters)
if smoothed:
- cube = self.pf.smoothed_covering_grid(
+ cube = self.ds.smoothed_covering_grid(
level, new_left_edge,
field_parameters = field_parameters,
**kwargs)
else:
- cube = self.pf.covering_grid(level, new_left_edge,
+ cube = self.ds.covering_grid(level, new_left_edge,
field_parameters = field_parameters,
**kwargs)
cube._base_grid = self
@@ -272,7 +272,7 @@
new_field[1:,1:,:-1] += of
new_field[1:,1:,1:] += of
np.multiply(new_field, 0.125, new_field)
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
if finfo.take_log:
new_field = np.log10(new_field)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/image_array.py
--- a/yt/data_objects/image_array.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/image_array.py Sun Jun 15 19:50:51 2014 -0700
@@ -53,7 +53,7 @@
Examples
--------
These are written in doctest format, and should illustrate how to
- use the function. Use the variables 'pf' for the parameter file, 'pc' for
+ use the function. Use the variables 'ds' for the dataset, 'pc' for
a plot collection, 'c' for a center, and 'L' for a vector.
>>> im = np.zeros([64,128,3])
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/octree_subset.py Sun Jun 15 19:50:51 2014 -0700
@@ -51,24 +51,24 @@
_num_ghost_zones = 0
_type_name = 'octree_subset'
_skip_add = True
- _con_args = ('base_region', 'domain', 'pf')
+ _con_args = ('base_region', 'domain', 'ds')
_domain_offset = 0
_cell_count = -1
- def __init__(self, base_region, domain, pf, over_refine_factor = 1):
+ def __init__(self, base_region, domain, ds, over_refine_factor = 1):
self._num_zones = 1 << (over_refine_factor)
self._oref = over_refine_factor
self.field_data = YTFieldData()
self.field_parameters = {}
self.domain = domain
self.domain_id = domain.domain_id
- self.pf = domain.pf
- self._index = self.pf.index
+ self.ds = domain.ds
+ self._index = self.ds.index
self.oct_handler = domain.oct_handler
self._last_mask = None
self._last_selector_id = None
self._current_particle_type = 'all'
- self._current_fluid_type = self.pf.default_fluid_type
+ self._current_fluid_type = self.ds.default_fluid_type
self.base_region = base_region
self.base_selector = base_region.selector
@@ -78,7 +78,7 @@
fields = self._determine_fields(key)
except YTFieldTypeNotFound:
return tr
- finfo = self.pf._get_field_info(*fields[0])
+ finfo = self.ds._get_field_info(*fields[0])
if not finfo.particle_type:
# We may need to reshape the field, if it is being queried from
# field_data. If it's already cached, it just passes through.
@@ -170,12 +170,12 @@
if create_octree:
morton = compute_morton(
positions[:,0], positions[:,1], positions[:,2],
- self.pf.domain_left_edge,
- self.pf.domain_right_edge)
+ self.ds.domain_left_edge,
+ self.ds.domain_right_edge)
morton.sort()
particle_octree = ParticleOctreeContainer([1, 1, 1],
- self.pf.domain_left_edge,
- self.pf.domain_right_edge,
+ self.ds.domain_left_edge,
+ self.ds.domain_right_edge,
over_refine = self._oref)
particle_octree.n_ref = nneighbors / 2
particle_octree.add(morton)
@@ -198,7 +198,7 @@
positions.shape[0], nvals[-1])
op.process_octree(self.oct_handler, mdom_ind, positions,
self.fcoords, fields,
- self.domain_id, self._domain_offset, self.pf.periodicity,
+ self.domain_id, self._domain_offset, self.ds.periodicity,
index_fields, particle_octree, pdom_ind)
vals = op.finalize()
if vals is None: return
@@ -251,9 +251,9 @@
# octree may multiply include data files. While we can attempt to mitigate
# this, it's unavoidable for many types of data storage on disk.
_type_name = 'indexed_octree_subset'
- _con_args = ('data_files', 'pf', 'min_ind', 'max_ind')
+ _con_args = ('data_files', 'ds', 'min_ind', 'max_ind')
domain_id = -1
- def __init__(self, base_region, data_files, pf, min_ind = 0, max_ind = 0,
+ def __init__(self, base_region, data_files, ds, min_ind = 0, max_ind = 0,
over_refine_factor = 1):
# The first attempt at this will not work in parallel.
self._num_zones = 1 << (over_refine_factor)
@@ -261,16 +261,16 @@
self.data_files = data_files
self.field_data = YTFieldData()
self.field_parameters = {}
- self.pf = pf
- self._index = self.pf.index
- self.oct_handler = pf.index.oct_handler
+ self.ds = ds
+ self._index = self.ds.index
+ self.oct_handler = ds.index.oct_handler
self.min_ind = min_ind
if max_ind == 0: max_ind = (1 << 63)
self.max_ind = max_ind
self._last_mask = None
self._last_selector_id = None
self._current_particle_type = 'all'
- self._current_fluid_type = self.pf.default_fluid_type
+ self._current_fluid_type = self.ds.default_fluid_type
self.base_region = base_region
self.base_selector = base_region.selector
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/particle_io.py
--- a/yt/data_objects/particle_io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/particle_io.py Sun Jun 15 19:50:51 2014 -0700
@@ -41,8 +41,8 @@
class ParticleIOHandler(object):
_source_type = None
- def __init__(self, pf, source):
- self.pf = pf
+ def __init__(self, ds, source):
+ self.ds = ds
self.source = source
def __getitem__(self, key):
@@ -61,8 +61,8 @@
def get_data(self, fields):
mylog.info("Getting %s using ParticleIO" % str(fields))
fields = ensure_list(fields)
- if not self.pf.h.io._particle_reader:
- mylog.info("not self.pf.h.io._particle_reader")
+ if not self.ds.index.io._particle_reader:
+ mylog.info("not self.ds.index.io._particle_reader")
return self.source.get_data(fields)
rtype, args = self._get_args()
count_list, grid_list = [], []
@@ -77,8 +77,8 @@
fields_to_read = []
conv_factors = []
for field in fields:
- f = self.pf.field_info[field]
- to_add = f.get_dependencies(pf = self.pf).requested
+ f = self.ds.field_info[field]
+ to_add = f.get_dependencies(ds = self.ds).requested
to_add = list(np.unique(to_add))
if len(to_add) != 1: raise KeyError
fields_to_read += to_add
@@ -92,7 +92,7 @@
count=len(grid_list), dtype='float64'))
conv_factors = np.array(conv_factors).transpose()
self.conv_factors = conv_factors
- rvs = self.pf.h.io._read_particles(
+ rvs = self.ds.index.io._read_particles(
fields_to_read, rtype, args, grid_list, count_list,
conv_factors)
for [n, v] in zip(fields, rvs):
@@ -102,14 +102,14 @@
periodic = False
_source_type = "region"
- def __init__(self, pf, source):
+ def __init__(self, ds, source):
self.left_edge = source.left_edge
self.right_edge = source.right_edge
- ParticleIOHandler.__init__(self, pf, source)
+ ParticleIOHandler.__init__(self, ds, source)
def _get_args(self):
- DLE = np.array(self.pf.domain_left_edge, dtype='float64')
- DRE = np.array(self.pf.domain_right_edge, dtype='float64')
+ DLE = np.array(self.ds.domain_left_edge, dtype='float64')
+ DRE = np.array(self.ds.domain_right_edge, dtype='float64')
args = (np.array(self.left_edge), np.array(self.right_edge),
int(self.periodic), DLE, DRE)
return (0, args)
@@ -117,26 +117,26 @@
class ParticleIOHandlerSphere(ParticleIOHandlerImplemented):
_source_type = "sphere"
- def __init__(self, pf, source):
+ def __init__(self, ds, source):
self.center = source.center
self.radius = source.radius
- ParticleIOHandler.__init__(self, pf, source)
+ ParticleIOHandler.__init__(self, ds, source)
def _get_args(self):
- DLE = np.array(self.pf.domain_left_edge, dtype='float64')
- DRE = np.array(self.pf.domain_right_edge, dtype='float64')
+ DLE = np.array(self.ds.domain_left_edge, dtype='float64')
+ DRE = np.array(self.ds.domain_right_edge, dtype='float64')
return (1, (np.array(self.center, dtype='float64'), self.radius,
1, DLE, DRE))
class ParticleIOHandlerDisk(ParticleIOHandlerImplemented):
_source_type = "disk"
- def __init__(self, pf, source):
+ def __init__(self, ds, source):
self.center = source.center
self.normal = source._norm_vec
self.radius = source._radius
self.height = source._height
- ParticleIOHandler.__init__(self, pf, source)
+ ParticleIOHandler.__init__(self, ds, source)
def _get_args(self):
args = (np.array(self.center, dtype='float64'),
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/profiles.py
--- a/yt/data_objects/profiles.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/profiles.py Sun Jun 15 19:50:51 2014 -0700
@@ -51,12 +51,12 @@
def __init__(self, data_source):
ParallelAnalysisInterface.__init__(self)
self._data_source = data_source
- self.pf = data_source.pf
+ self.ds = data_source.ds
self.field_data = YTFieldData()
@property
def index(self):
- return self.pf.index
+ return self.ds.index
def _get_dependencies(self, fields):
return ParallelAnalysisInterface._get_dependencies(
@@ -756,7 +756,7 @@
class ProfileND(ParallelAnalysisInterface):
def __init__(self, data_source, weight_field = None):
self.data_source = data_source
- self.pf = data_source.pf
+ self.ds = data_source.ds
self.field_data = YTFieldData()
self.weight_field = weight_field
self.field_units = {}
@@ -784,12 +784,12 @@
"""
if field in self.field_units:
self.field_units[field] = \
- Unit(new_unit, registry=self.pf.unit_registry)
+ Unit(new_unit, registry=self.ds.unit_registry)
else:
fd = self.field_map[field]
if fd in self.field_units:
self.field_units[fd] = \
- Unit(new_unit, registry=self.pf.unit_registry)
+ Unit(new_unit, registry=self.ds.unit_registry)
else:
raise KeyError("%s not in profile!" % (field))
@@ -1114,8 +1114,8 @@
Create a 1d profile. Access bin field from profile.x and field
data from profile.field_data.
- >>> pf = load("DD0046/DD0046")
- >>> ad = pf.h.all_data()
+ >>> ds = load("DD0046/DD0046")
+ >>> ad = ds.all_data()
>>> extrema = {"density": (1.0e-30, 1.0e-25)}
>>> profile = create_profile(ad, ["density"], extrema=extrema,
... fields=["temperature", "velocity_x"]))
@@ -1142,7 +1142,7 @@
if not iterable(accumulation):
accumulation = [accumulation] * len(bin_fields)
if logs is None:
- logs = [data_source.pf._get_field_info(f[0],f[1]).take_log
+ logs = [data_source.ds._get_field_info(f[0],f[1]).take_log
for f in bin_fields]
else:
logs = [logs[bin_field[-1]] for bin_field in bin_fields]
@@ -1152,17 +1152,17 @@
else:
ex = []
for bin_field in bin_fields:
- bf_units = data_source.pf._get_field_info(bin_field[0],
+ bf_units = data_source.ds._get_field_info(bin_field[0],
bin_field[1]).units
try:
field_ex = list(extrema[bin_field[-1]])
except KeyError:
field_ex = list(extrema[bin_field])
if iterable(field_ex[0]):
- field_ex[0] = data_source.pf.quan(field_ex[0][0], field_ex[0][1])
+ field_ex[0] = data_source.ds.quan(field_ex[0][0], field_ex[0][1])
field_ex[0] = field_ex[0].in_units(bf_units)
if iterable(field_ex[1]):
- field_ex[1] = data_source.pf.quan(field_ex[1][0], field_ex[1][1])
+ field_ex[1] = data_source.ds.quan(field_ex[1][0], field_ex[1][1])
field_ex[1] = field_ex[1].in_units(bf_units)
ex.append(field_ex)
args = [data_source]
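
The profile hunks route unit handling through data_source.ds: extrema entries given as (value, unit) tuples are converted with ds.quan into the bin field's units before binning. A hedged sketch of that call pattern (the create_profile import path is assumed from this file's location):

from yt.mods import load
from yt.data_objects.profiles import create_profile

ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
ad = ds.all_data()
# (value, unit) tuples are converted via ds.quan; bare floats are taken as-is
extrema = {"density": ((1.0e-30, 'g/cm**3'), (1.0e-25, 'g/cm**3'))}
profile = create_profile(ad, ["density"], fields=["temperature"], extrema=extrema)
print profile.x, profile["temperature"]
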
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/selection_data_containers.py Sun Jun 15 19:50:51 2014 -0700
@@ -60,18 +60,18 @@
Examples
--------
- >>> pf = load("RedshiftOutput0005")
- >>> oray = pf.ortho_ray(0, (0.2, 0.74))
+ >>> ds = load("RedshiftOutput0005")
+ >>> oray = ds.ortho_ray(0, (0.2, 0.74))
>>> print oray["Density"]
"""
_key_fields = ['x','y','z','dx','dy','dz']
_type_name = "ortho_ray"
_con_args = ('axis', 'coords')
- def __init__(self, axis, coords, pf=None, field_parameters=None):
- super(YTOrthoRayBase, self).__init__(pf, field_parameters)
+ def __init__(self, axis, coords, ds=None, field_parameters=None):
+ super(YTOrthoRayBase, self).__init__(ds, field_parameters)
self.axis = axis
- xax = self.pf.coordinates.x_axis[self.axis]
- yax = self.pf.coordinates.y_axis[self.axis]
+ xax = self.ds.coordinates.x_axis[self.axis]
+ yax = self.ds.coordinates.y_axis[self.axis]
self.px_ax = xax
self.py_ax = yax
# Even though we may not be using x,y,z we use them here.
@@ -109,18 +109,18 @@
Examples
--------
- >>> pf = load("RedshiftOutput0005")
- >>> ray = pf.ray((0.2, 0.74, 0.11), (0.4, 0.91, 0.31))
+ >>> ds = load("RedshiftOutput0005")
+ >>> ray = ds.ray((0.2, 0.74, 0.11), (0.4, 0.91, 0.31))
>>> print ray["Density"], ray["t"], ray["dts"]
"""
_type_name = "ray"
_con_args = ('start_point', 'end_point')
_container_fields = ("t", "dts")
- def __init__(self, start_point, end_point, pf=None, field_parameters=None):
- super(YTRayBase, self).__init__(pf, field_parameters)
- self.start_point = self.pf.arr(start_point,
+ def __init__(self, start_point, end_point, ds=None, field_parameters=None):
+ super(YTRayBase, self).__init__(ds, field_parameters)
+ self.start_point = self.ds.arr(start_point,
'code_length', dtype='float64')
- self.end_point = self.pf.arr(end_point,
+ self.end_point = self.ds.arr(end_point,
'code_length', dtype='float64')
self.vec = self.end_point - self.start_point
#self.vec /= np.sqrt(np.dot(self.vec, self.vec))
@@ -160,8 +160,8 @@
center : array_like, optional
The 'center' supplied to fields that use it. Note that this does
not have to have `coord` as one value. Strictly optional.
- pf: Dataset, optional
- An optional dataset to use rather than self.pf
+ ds: Dataset, optional
+ An optional dataset to use rather than self.ds
field_parameters : dictionary
A dictionary of field parameters than can be accessed by derived
fields.
@@ -169,8 +169,8 @@
Examples
--------
- >>> pf = load("RedshiftOutput0005")
- >>> slice = pf.slice(0, 0.25)
+ >>> ds = load("RedshiftOutput0005")
+ >>> slice = ds.slice(0, 0.25)
>>> print slice["Density"]
"""
_top_node = "/Slices"
@@ -178,15 +178,15 @@
_con_args = ('axis', 'coord')
_container_fields = ("px", "py", "pdx", "pdy")
- def __init__(self, axis, coord, center=None, pf=None,
+ def __init__(self, axis, coord, center=None, ds=None,
field_parameters = None):
- YTSelectionContainer2D.__init__(self, axis, pf, field_parameters)
+ YTSelectionContainer2D.__init__(self, axis, ds, field_parameters)
self._set_center(center)
self.coord = coord
def _generate_container_field(self, field):
- xax = self.pf.coordinates.x_axis[self.axis]
- yax = self.pf.coordinates.y_axis[self.axis]
+ xax = self.ds.coordinates.x_axis[self.axis]
+ yax = self.ds.coordinates.y_axis[self.axis]
if self._current_chunk is None:
self.index._identify_base_chunk(self)
if field == "px":
@@ -255,8 +255,8 @@
Examples
--------
- >>> pf = load("RedshiftOutput0005")
- >>> cp = pf.cutting([0.1, 0.2, -0.9], [0.5, 0.42, 0.6])
+ >>> ds = load("RedshiftOutput0005")
+ >>> cp = ds.cutting([0.1, 0.2, -0.9], [0.5, 0.42, 0.6])
>>> print cp["Density"]
"""
_plane = None
@@ -266,9 +266,9 @@
_con_args = ('normal', 'center')
_container_fields = ("px", "py", "pz", "pdx", "pdy", "pdz")
- def __init__(self, normal, center, pf = None,
+ def __init__(self, normal, center, ds = None,
north_vector = None, field_parameters = None):
- YTSelectionContainer2D.__init__(self, 4, pf, field_parameters)
+ YTSelectionContainer2D.__init__(self, 4, ds, field_parameters)
self._set_center(center)
self.set_field_parameter('center',center)
# Let's set up our plane equation
@@ -322,21 +322,21 @@
Examples
--------
- >>> v, c = pf.h.find_max("Density")
- >>> sp = pf.sphere(c, (100.0, 'au'))
- >>> L = sp.quantities["AngularMomentumVector"]()
- >>> cutting = pf.cutting(L, c)
+ >>> v, c = ds.find_max("density")
+ >>> sp = ds.sphere(c, (100.0, 'au'))
+ >>> L = sp.quantities.angular_momentum_vector()
+ >>> cutting = ds.cutting(L, c)
>>> frb = cutting.to_frb( (1.0, 'pc'), 1024)
>>> write_image(np.log10(frb["Density"]), 'density_1pc.png')
"""
if iterable(width):
w, u = width
- width = self.pf.quan(w, input_units = u)
+ width = self.ds.quan(w, input_units = u)
if height is None:
height = width
elif iterable(height):
h, u = height
- height = self.pf.quan(w, input_units = u)
+ height = self.ds.quan(w, input_units = u)
if not iterable(resolution):
resolution = (resolution, resolution)
from yt.visualization.fixed_resolution import ObliqueFixedResolutionBuffer
@@ -353,7 +353,7 @@
y = self._current_chunk.fcoords[:,1] - self.center[1]
z = self._current_chunk.fcoords[:,2] - self.center[2]
tr = np.zeros(x.size, dtype='float64')
- tr = self.pf.arr(tr, "code_length")
+ tr = self.ds.arr(tr, "code_length")
tr += x * self._x_vec[0]
tr += y * self._x_vec[1]
tr += z * self._x_vec[2]
@@ -363,7 +363,7 @@
y = self._current_chunk.fcoords[:,1] - self.center[1]
z = self._current_chunk.fcoords[:,2] - self.center[2]
tr = np.zeros(x.size, dtype='float64')
- tr = self.pf.arr(tr, "code_length")
+ tr = self.ds.arr(tr, "code_length")
tr += x * self._y_vec[0]
tr += y * self._y_vec[1]
tr += z * self._y_vec[2]
@@ -373,7 +373,7 @@
y = self._current_chunk.fcoords[:,1] - self.center[1]
z = self._current_chunk.fcoords[:,2] - self.center[2]
tr = np.zeros(x.size, dtype='float64')
- tr = self.pf.arr(tr, "code_length")
+ tr = self.ds.arr(tr, "code_length")
tr += x * self._norm_vec[0]
tr += y * self._norm_vec[1]
tr += z * self._norm_vec[2]
@@ -401,7 +401,7 @@
if k not in self._key_fields]
from yt.visualization.plot_window import get_oblique_window_parameters, PWViewerMPL
from yt.visualization.fixed_resolution import ObliqueFixedResolutionBuffer
- (bounds, center_rot) = get_oblique_window_parameters(normal, center, width, self.pf)
+ (bounds, center_rot) = get_oblique_window_parameters(normal, center, width, self.ds)
pw = PWViewerMPL(
self, bounds, fields=self.fields, origin='center-window',
periodic=False, oblique=True,
@@ -442,21 +442,21 @@
Examples
--------
- >>> v, c = pf.h.find_max("Density")
- >>> sp = pf.sphere(c, (100.0, 'au'))
- >>> L = sp.quantities["AngularMomentumVector"]()
- >>> cutting = pf.cutting(L, c)
+ >>> v, c = ds.find_max("density")
+ >>> sp = ds.sphere(c, (100.0, 'au'))
+ >>> L = sp.quantities.angular_momentum_vector()
+ >>> cutting = ds.cutting(L, c)
>>> frb = cutting.to_frb( (1.0, 'pc'), 1024)
>>> write_image(np.log10(frb["Density"]), 'density_1pc.png')
"""
if iterable(width):
validate_width_tuple(width)
- width = self.pf.quan(width[0], width[1])
+ width = self.ds.quan(width[0], width[1])
if height is None:
height = width
elif iterable(height):
validate_width_tuple(height)
- height = self.pf.quan(height[0], height[1])
+ height = self.ds.quan(height[0], height[1])
if not iterable(resolution):
resolution = (resolution, resolution)
from yt.visualization.fixed_resolution import ObliqueFixedResolutionBuffer
@@ -473,12 +473,12 @@
_type_name = "disk"
_con_args = ('center', '_norm_vec', '_radius', '_height')
def __init__(self, center, normal, radius, height, fields=None,
- pf=None, **kwargs):
- YTSelectionContainer3D.__init__(self, center, fields, pf, **kwargs)
+ ds=None, **kwargs):
+ YTSelectionContainer3D.__init__(self, center, fields, ds, **kwargs)
self._norm_vec = np.array(normal)/np.sqrt(np.dot(normal,normal))
self.set_field_parameter("normal", self._norm_vec)
- self._height = fix_length(height, self.pf)
- self._radius = fix_length(radius, self.pf)
+ self._height = fix_length(height, self.ds)
+ self._radius = fix_length(radius, self.ds)
self._d = -1.0 * np.dot(self._norm_vec, self.center)
@@ -503,14 +503,14 @@
_type_name = "region"
_con_args = ('center', 'left_edge', 'right_edge')
def __init__(self, center, left_edge, right_edge, fields = None,
- pf = None, **kwargs):
- YTSelectionContainer3D.__init__(self, center, fields, pf, **kwargs)
+ ds = None, **kwargs):
+ YTSelectionContainer3D.__init__(self, center, fields, ds, **kwargs)
if not isinstance(left_edge, YTArray):
- self.left_edge = self.pf.arr(left_edge, 'code_length')
+ self.left_edge = self.ds.arr(left_edge, 'code_length')
else:
self.left_edge = left_edge
if not isinstance(right_edge, YTArray):
- self.right_edge = self.pf.arr(right_edge, 'code_length')
+ self.right_edge = self.ds.arr(right_edge, 'code_length')
else:
self.right_edge = right_edge
@@ -521,8 +521,8 @@
"""
_type_name = "data_collection"
_con_args = ("_obj_list",)
- def __init__(self, center, obj_list, pf = None, field_parameters = None):
- YTSelectionContainer3D.__init__(self, center, pf, field_parameters)
+ def __init__(self, center, obj_list, ds = None, field_parameters = None):
+ YTSelectionContainer3D.__init__(self, center, ds, field_parameters)
self._obj_ids = np.array([o.id - o._id_offset for o in obj_list],
dtype="int64")
self._obj_list = obj_list
@@ -540,18 +540,18 @@
Examples
--------
- >>> pf = load("DD0010/moving7_0010")
+ >>> ds = load("DD0010/moving7_0010")
>>> c = [0.5,0.5,0.5]
- >>> sphere = pf.sphere(c,1.*pf['kpc'])
+ >>> sphere = ds.sphere(c,1.*ds['kpc'])
"""
_type_name = "sphere"
_con_args = ('center', 'radius')
- def __init__(self, center, radius, pf = None, field_parameters = None):
- super(YTSphereBase, self).__init__(center, pf, field_parameters)
+ def __init__(self, center, radius, ds = None, field_parameters = None):
+ super(YTSphereBase, self).__init__(center, ds, field_parameters)
# Unpack the radius, if necessary
- radius = fix_length(radius, self.pf)
+ radius = fix_length(radius, self.ds)
if radius < self.index.get_smallest_dx():
- raise YTSphereTooSmall(pf, radius.in_units("code_length"),
+ raise YTSphereTooSmall(ds, radius.in_units("code_length"),
self.index.get_smallest_dx().in_units("code_length"))
self.set_field_parameter('radius',radius)
self.radius = radius
@@ -582,24 +582,24 @@
the z-axis.
Examples
--------
- >>> pf = load("DD####/DD####")
+ >>> ds = load("DD####/DD####")
>>> c = [0.5,0.5,0.5]
- >>> ell = pf.ellipsoid(c, 0.1, 0.1, 0.1, np.array([0.1, 0.1, 0.1]), 0.2)
+ >>> ell = ds.ellipsoid(c, 0.1, 0.1, 0.1, np.array([0.1, 0.1, 0.1]), 0.2)
"""
_type_name = "ellipsoid"
_con_args = ('center', '_A', '_B', '_C', '_e0', '_tilt')
def __init__(self, center, A, B, C, e0, tilt, fields=None,
- pf=None, field_parameters = None):
- YTSelectionContainer3D.__init__(self, center, pf, field_parameters)
+ ds=None, field_parameters = None):
+ YTSelectionContainer3D.__init__(self, center, ds, field_parameters)
# make sure the magnitudes of semi-major axes are in order
if A<B or B<C:
- raise YTEllipsoidOrdering(pf, A, B, C)
+ raise YTEllipsoidOrdering(ds, A, B, C)
# make sure the smallest side is not smaller than dx
- self._A = self.pf.quan(A, 'code_length')
- self._B = self.pf.quan(B, 'code_length')
- self._C = self.pf.quan(C, 'code_length')
+ self._A = self.ds.quan(A, 'code_length')
+ self._B = self.ds.quan(B, 'code_length')
+ self._C = self.ds.quan(C, 'code_length')
if self._C < self.index.get_smallest_dx():
- raise YTSphereTooSmall(pf, self._C, self.index.get_smallest_dx())
+ raise YTSphereTooSmall(ds, self._C, self.index.get_smallest_dx())
self._e0 = e0 = e0 / (e0**2.0).sum()**0.5
self._tilt = tilt
@@ -651,15 +651,15 @@
Examples
--------
- >>> pf = load("DD0010/moving7_0010")
- >>> sp = pf.sphere("max", (1.0, 'mpc'))
- >>> cr = pf.cut_region(sp, ["obj['temperature'] < 1e3"])
+ >>> ds = load("DD0010/moving7_0010")
+ >>> sp = ds.sphere("max", (1.0, 'mpc'))
+ >>> cr = ds.cut_region(sp, ["obj['temperature'] < 1e3"])
"""
_type_name = "cut_region"
_con_args = ("base_object", "conditionals")
- def __init__(self, base_object, conditionals, pf = None,
+ def __init__(self, base_object, conditionals, ds = None,
field_parameters = None):
- super(YTCutRegionBase, self).__init__(base_object.center, pf, field_parameters)
+ super(YTCutRegionBase, self).__init__(base_object.center, ds, field_parameters)
self.conditionals = ensure_list(conditionals)
self.base_object = base_object
self._selector = None
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/static_output.py Sun Jun 15 19:50:51 2014 -0700
@@ -62,12 +62,12 @@
# When such a thing comes to pass, I'll move all the stuff that is contant up
# to here, and then have it instantiate EnzoDatasets as appropriate.
-_cached_pfs = weakref.WeakValueDictionary()
-_pf_store = ParameterFileStore()
+_cached_datasets = weakref.WeakValueDictionary()
+_ds_store = ParameterFileStore()
-def _unsupported_object(pf, obj_name):
+def _unsupported_object(ds, obj_name):
def _raise_unsupp(*args, **kwargs):
- raise YTObjectNotImplemented(pf, obj_name)
+ raise YTObjectNotImplemented(ds, obj_name)
return _raise_unsupp
class RegisteredDataset(type):
@@ -139,12 +139,12 @@
return obj
apath = os.path.abspath(filename)
#if not os.path.exists(apath): raise IOError(filename)
- if apath not in _cached_pfs:
+ if apath not in _cached_datasets:
obj = object.__new__(cls)
if obj._skip_cache is False:
- _cached_pfs[apath] = obj
+ _cached_datasets[apath] = obj
else:
- obj = _cached_pfs[apath]
+ obj = _cached_datasets[apath]
return obj
def __init__(self, filename, dataset_type=None, file_style=None):
@@ -186,11 +186,11 @@
self.set_units()
self._setup_coordinate_handler()
- # Because we need an instantiated class to check the pf's existence in
+ # Because we need an instantiated class to check the ds's existence in
# the cache, we move that check to here from __new__. This avoids
# double-instantiation.
try:
- _pf_store.check_pf(self)
+ _ds_store.check_ds(self)
except NoParameterShelf:
pass
self.print_key_parameters()
@@ -215,7 +215,7 @@
def __reduce__(self):
args = (self._hash(),)
- return (_reconstruct_pf, args)
+ return (_reconstruct_ds, args)
def __repr__(self):
return self.basename
@@ -436,7 +436,7 @@
def _setup_particle_types(self, ptypes = None):
df = []
- if ptypes is None: ptypes = self.pf.particle_types_raw
+ if ptypes is None: ptypes = self.ds.particle_types_raw
for ptype in set(ptypes):
df += self._setup_particle_type(ptype)
return df
@@ -498,7 +498,7 @@
continue
cname = cls.__name__
if cname.endswith("Base"): cname = cname[:-4]
- self._add_object_class(name, cname, cls, {'pf':self})
+ self._add_object_class(name, cname, cls, {'ds':self})
if self.refine_by != 2 and hasattr(self, 'proj') and \
hasattr(self, 'overlap_proj'):
mylog.warning("Refine by something other than two: reverting to"
@@ -667,14 +667,14 @@
deps, _ = self.field_info.check_derived_fields([name])
self.field_dependencies.update(deps)
-def _reconstruct_pf(*args, **kwargs):
- pfs = ParameterFileStore()
- pf = pfs.get_pf_hash(*args)
- return pf
+def _reconstruct_ds(*args, **kwargs):
+ datasets = ParameterFileStore()
+ ds = datasets.get_ds_hash(*args)
+ return ds
class ParticleFile(object):
- def __init__(self, pf, io, filename, file_id):
- self.pf = pf
+ def __init__(self, ds, io, filename, file_id):
+ self.ds = ds
self.io = weakref.proxy(io)
self.filename = filename
self.file_id = file_id
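
The static_output hunks rename the module-level cache (_cached_pfs to _cached_datasets) that makes load() idempotent within a session, and point __reduce__ at _reconstruct_ds so pickled datasets are stored by hash. A hedged sketch of the caching behaviour (the cache holds weak references, so the first handle must still be alive):

from yt.mods import load

ds_a = load("IsolatedGalaxy/galaxy0030/galaxy0030")
ds_b = load("IsolatedGalaxy/galaxy0030/galaxy0030")
# __new__ finds the absolute path in _cached_datasets and returns the same object,
# unless the dataset class sets _skip_cache
assert ds_a is ds_b
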
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_boolean_regions.py
--- a/yt/data_objects/tests/test_boolean_regions.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_boolean_regions.py Sun Jun 15 19:50:51 2014 -0700
@@ -7,9 +7,9 @@
from yt.config import ytcfg
ytcfg["yt","__withintesting"] = "True"
def _ID(field, data):
- width = data.pf.domain_right_edge - data.pf.domain_left_edge
+ width = data.ds.domain_right_edge - data.ds.domain_left_edge
min_dx = YTArray(1.0/8192, input_units='code_length',
- registry=data.pf.unit_registry)
+ registry=data.ds.unit_registry)
delta = width / min_dx
x = data['x'] - min_dx / 2.
y = data['y'] - min_dx / 2.
@@ -32,10 +32,10 @@
"""
return
for n in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=n)
- pf.h
- sp1 = pf.sphere([0.25, 0.25, 0.25], 0.15)
- sp2 = pf.sphere([0.75, 0.75, 0.75], 0.15)
+ ds = fake_random_ds(64, nprocs=n)
+ ds.index
+ sp1 = ds.sphere([0.25, 0.25, 0.25], 0.15)
+ sp2 = ds.sphere([0.75, 0.75, 0.75], 0.15)
# Store the original indices
i1 = sp1['ID']
i1.sort()
@@ -44,9 +44,9 @@
ii = np.concatenate((i1, i2))
ii.sort()
# Make some booleans
- bo1 = pf.boolean([sp1, "AND", sp2]) # empty
- bo2 = pf.boolean([sp1, "NOT", sp2]) # only sp1
- bo3 = pf.boolean([sp1, "OR", sp2]) # combination
+ bo1 = ds.boolean([sp1, "AND", sp2]) # empty
+ bo2 = ds.boolean([sp1, "NOT", sp2]) # only sp1
+ bo3 = ds.boolean([sp1, "OR", sp2]) # combination
# This makes sure the original containers didn't change.
new_i1 = sp1['ID']
new_i1.sort()
@@ -72,17 +72,17 @@
"""
return
for n in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=n)
- pf.h
- sp1 = pf.sphere([0.45, 0.45, 0.45], 0.15)
- sp2 = pf.sphere([0.55, 0.55, 0.55], 0.15)
+ ds = fake_random_ds(64, nprocs=n)
+ ds.index
+ sp1 = ds.sphere([0.45, 0.45, 0.45], 0.15)
+ sp2 = ds.sphere([0.55, 0.55, 0.55], 0.15)
# Get indices of both.
i1 = sp1['ID']
i2 = sp2['ID']
# Make some booleans
- bo1 = pf.boolean([sp1, "AND", sp2]) # overlap (a lens)
- bo2 = pf.boolean([sp1, "NOT", sp2]) # sp1 - sp2 (sphere with bite)
- bo3 = pf.boolean([sp1, "OR", sp2]) # combination (H2)
+ bo1 = ds.boolean([sp1, "AND", sp2]) # overlap (a lens)
+ bo2 = ds.boolean([sp1, "NOT", sp2]) # sp1 - sp2 (sphere with bite)
+ bo3 = ds.boolean([sp1, "OR", sp2]) # combination (H2)
# Now make sure the indices also behave as we expect.
lens = np.intersect1d(i1, i2)
apple = np.setdiff1d(i1, i2)
@@ -106,10 +106,10 @@
"""
return
for n in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=n)
- pf.h
- re1 = pf.region([0.25]*3, [0.2]*3, [0.3]*3)
- re2 = pf.region([0.65]*3, [0.6]*3, [0.7]*3)
+ ds = fake_random_ds(64, nprocs=n)
+ ds.index
+ re1 = ds.region([0.25]*3, [0.2]*3, [0.3]*3)
+ re2 = ds.region([0.65]*3, [0.6]*3, [0.7]*3)
# Store the original indices
i1 = re1['ID']
i1.sort()
@@ -118,9 +118,9 @@
ii = np.concatenate((i1, i2))
ii.sort()
# Make some booleans
- bo1 = pf.boolean([re1, "AND", re2]) # empty
- bo2 = pf.boolean([re1, "NOT", re2]) # only re1
- bo3 = pf.boolean([re1, "OR", re2]) # combination
+ bo1 = ds.boolean([re1, "AND", re2]) # empty
+ bo2 = ds.boolean([re1, "NOT", re2]) # only re1
+ bo3 = ds.boolean([re1, "OR", re2]) # combination
# This makes sure the original containers didn't change.
new_i1 = re1['ID']
new_i1.sort()
@@ -146,17 +146,17 @@
"""
return
for n in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=n)
- pf.h
- re1 = pf.region([0.55]*3, [0.5]*3, [0.6]*3)
- re2 = pf.region([0.6]*3, [0.55]*3, [0.65]*3)
+ ds = fake_random_ds(64, nprocs=n)
+ ds.index
+ re1 = ds.region([0.55]*3, [0.5]*3, [0.6]*3)
+ re2 = ds.region([0.6]*3, [0.55]*3, [0.65]*3)
# Get indices of both.
i1 = re1['ID']
i2 = re2['ID']
# Make some booleans
- bo1 = pf.boolean([re1, "AND", re2]) # overlap (small cube)
- bo2 = pf.boolean([re1, "NOT", re2]) # sp1 - sp2 (large cube with bite)
- bo3 = pf.boolean([re1, "OR", re2]) # combination (merged large cubes)
+ bo1 = ds.boolean([re1, "AND", re2]) # overlap (small cube)
+ bo2 = ds.boolean([re1, "NOT", re2]) # sp1 - sp2 (large cube with bite)
+ bo3 = ds.boolean([re1, "OR", re2]) # combination (merged large cubes)
# Now make sure the indices also behave as we expect.
cube = np.intersect1d(i1, i2)
bite_cube = np.setdiff1d(i1, i2)
@@ -180,10 +180,10 @@
"""
return
for n in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=n)
- pf.h
- cyl1 = pf.disk([0.25]*3, [1, 0, 0], 0.1, 0.1)
- cyl2 = pf.disk([0.75]*3, [1, 0, 0], 0.1, 0.1)
+ ds = fake_random_ds(64, nprocs=n)
+ ds.index
+ cyl1 = ds.disk([0.25]*3, [1, 0, 0], 0.1, 0.1)
+ cyl2 = ds.disk([0.75]*3, [1, 0, 0], 0.1, 0.1)
# Store the original indices
i1 = cyl1['ID']
i1.sort()
@@ -192,9 +192,9 @@
ii = np.concatenate((i1, i2))
ii.sort()
# Make some booleans
- bo1 = pf.boolean([cyl1, "AND", cyl2]) # empty
- bo2 = pf.boolean([cyl1, "NOT", cyl2]) # only cyl1
- bo3 = pf.boolean([cyl1, "OR", cyl2]) # combination
+ bo1 = ds.boolean([cyl1, "AND", cyl2]) # empty
+ bo2 = ds.boolean([cyl1, "NOT", cyl2]) # only cyl1
+ bo3 = ds.boolean([cyl1, "OR", cyl2]) # combination
# This makes sure the original containers didn't change.
new_i1 = cyl1['ID']
new_i1.sort()
@@ -220,17 +220,17 @@
"""
return
for n in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=n)
- pf.h
- cyl1 = pf.disk([0.45]*3, [1, 0, 0], 0.2, 0.2)
- cyl2 = pf.disk([0.55]*3, [1, 0, 0], 0.2, 0.2)
+ ds = fake_random_ds(64, nprocs=n)
+ ds.index
+ cyl1 = ds.disk([0.45]*3, [1, 0, 0], 0.2, 0.2)
+ cyl2 = ds.disk([0.55]*3, [1, 0, 0], 0.2, 0.2)
# Get indices of both.
i1 = cyl1['ID']
i2 = cyl2['ID']
# Make some booleans
- bo1 = pf.boolean([cyl1, "AND", cyl2]) # overlap (vertically extened lens)
- bo2 = pf.boolean([cyl1, "NOT", cyl2]) # sp1 - sp2 (disk minus a bite)
- bo3 = pf.boolean([cyl1, "OR", cyl2]) # combination (merged disks)
+ bo1 = ds.boolean([cyl1, "AND", cyl2]) # overlap (vertically extened lens)
+ bo2 = ds.boolean([cyl1, "NOT", cyl2]) # sp1 - sp2 (disk minus a bite)
+ bo3 = ds.boolean([cyl1, "OR", cyl2]) # combination (merged disks)
# Now make sure the indices also behave as we expect.
vlens = np.intersect1d(i1, i2)
bite_disk = np.setdiff1d(i1, i2)
@@ -254,11 +254,11 @@
"""
return
for n in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=n)
- pf.h
- ell1 = pf.ellipsoid([0.25]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
+ ds = fake_random_ds(64, nprocs=n)
+ ds.index
+ ell1 = ds.ellipsoid([0.25]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
np.array([0.1]*3))
- ell2 = pf.ellipsoid([0.75]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
+ ell2 = ds.ellipsoid([0.75]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
np.array([0.1]*3))
# Store the original indices
i1 = ell1['ID']
@@ -268,9 +268,9 @@
ii = np.concatenate((i1, i2))
ii.sort()
# Make some booleans
- bo1 = pf.boolean([ell1, "AND", ell2]) # empty
- bo2 = pf.boolean([ell1, "NOT", ell2]) # only cyl1
- bo3 = pf.boolean([ell1, "OR", ell2]) # combination
+ bo1 = ds.boolean([ell1, "AND", ell2]) # empty
+ bo2 = ds.boolean([ell1, "NOT", ell2]) # only cyl1
+ bo3 = ds.boolean([ell1, "OR", ell2]) # combination
# This makes sure the original containers didn't change.
new_i1 = ell1['ID']
new_i1.sort()
@@ -296,19 +296,19 @@
"""
return
for n in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=n)
- pf.h
- ell1 = pf.ellipsoid([0.45]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
+ ds = fake_random_ds(64, nprocs=n)
+ ds.index
+ ell1 = ds.ellipsoid([0.45]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
np.array([0.1]*3))
- ell2 = pf.ellipsoid([0.55]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
+ ell2 = ds.ellipsoid([0.55]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
np.array([0.1]*3))
# Get indices of both.
i1 = ell1['ID']
i2 = ell2['ID']
# Make some booleans
- bo1 = pf.boolean([ell1, "AND", ell2]) # overlap
- bo2 = pf.boolean([ell1, "NOT", ell2]) # ell1 - ell2
- bo3 = pf.boolean([ell1, "OR", ell2]) # combination
+ bo1 = ds.boolean([ell1, "AND", ell2]) # overlap
+ bo2 = ds.boolean([ell1, "NOT", ell2]) # ell1 - ell2
+ bo3 = ds.boolean([ell1, "OR", ell2]) # combination
# Now make sure the indices also behave as we expect.
overlap = np.intersect1d(i1, i2)
diff = np.setdiff1d(i1, i2)
@@ -330,22 +330,22 @@
"""
return
for n in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=n)
- pf.h
- re = pf.region([0.5]*3, [0.0]*3, [1]*3) # whole thing
- sp = pf.sphere([0.95]*3, 0.3) # wraps around
- cyl = pf.disk([0.05]*3, [1,1,1], 0.1, 0.4) # wraps around
+ ds = fake_random_ds(64, nprocs=n)
+ ds.index
+ re = ds.region([0.5]*3, [0.0]*3, [1]*3) # whole thing
+ sp = ds.sphere([0.95]*3, 0.3) # wraps around
+ cyl = ds.disk([0.05]*3, [1,1,1], 0.1, 0.4) # wraps around
# Get original indices
rei = re['ID']
spi = sp['ID']
cyli = cyl['ID']
# Make some booleans
# whole box minux spherical bites at corners
- bo1 = pf.boolean([re, "NOT", sp])
+ bo1 = ds.boolean([re, "NOT", sp])
# sphere plus cylinder
- bo2 = pf.boolean([sp, "OR", cyl])
+ bo2 = ds.boolean([sp, "OR", cyl])
# a jumble, the region minus the sp+cyl
- bo3 = pf.boolean([re, "NOT", "(", sp, "OR", cyl, ")"])
+ bo3 = ds.boolean([re, "NOT", "(", sp, "OR", cyl, ")"])
# Now make sure the indices also behave as we expect.
expect = np.setdiff1d(rei, spi)
ii = bo1['ID']
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_chunking.py
--- a/yt/data_objects/tests/test_chunking.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_chunking.py Sun Jun 15 19:50:51 2014 -0700
@@ -14,11 +14,11 @@
def test_chunking():
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs = nprocs)
- c = (pf.domain_right_edge + pf.domain_left_edge)/2.0
- c += pf.arr(0.5/pf.domain_dimensions, "code_length")
+ ds = fake_random_ds(64, nprocs = nprocs)
+ c = (ds.domain_right_edge + ds.domain_left_edge)/2.0
+ c += ds.arr(0.5/ds.domain_dimensions, "code_length")
for dobj in _get_dobjs(c):
- obj = getattr(pf.h, dobj[0])(*dobj[1])
+ obj = getattr(ds, dobj[0])(*dobj[1])
coords = {'f':{}, 'i':{}}
for t in ["io", "all", "spatial"]:
coords['i'][t] = []
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_covering_grid.py
--- a/yt/data_objects/tests/test_covering_grid.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_covering_grid.py Sun Jun 15 19:50:51 2014 -0700
@@ -10,16 +10,16 @@
# We decompose in different ways
for level in [0, 1, 2]:
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(16, nprocs = nprocs)
- dn = pf.refine_by**level
- cg = pf.covering_grid(level, [0.0, 0.0, 0.0],
- dn * pf.domain_dimensions)
+ ds = fake_random_ds(16, nprocs = nprocs)
+ dn = ds.refine_by**level
+ cg = ds.covering_grid(level, [0.0, 0.0, 0.0],
+ dn * ds.domain_dimensions)
# Test coordinate generation
yield assert_equal, np.unique(cg["dx"]).size, 1
xmi = cg["x"].min()
xma = cg["x"].max()
dx = cg["dx"].flat[0:1]
- edges = pf.arr([[0,1],[0,1],[0,1]], 'code_length')
+ edges = ds.arr([[0,1],[0,1],[0,1]], 'code_length')
yield assert_equal, xmi, edges[0,0] + dx/2.0
yield assert_equal, xmi, cg["x"][0,0,0]
yield assert_equal, xmi, cg["x"][0,1,1]
@@ -50,8 +50,8 @@
yield assert_equal, cg["ones"].max(), 1.0
yield assert_equal, cg["ones"].min(), 1.0
yield assert_equal, cg["grid_level"], 0
- yield assert_equal, cg["cell_volume"].sum(), pf.domain_width.prod()
- for g in pf.index.grids:
+ yield assert_equal, cg["cell_volume"].sum(), ds.domain_width.prod()
+ for g in ds.index.grids:
di = g.get_global_startindex()
dd = g.ActiveDimensions
for i in range(dn):
@@ -64,14 +64,14 @@
# We decompose in different ways
for level in [0, 1, 2]:
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(16, nprocs = nprocs)
- dn = pf.refine_by**level
- cg = pf.smoothed_covering_grid(level, [0.0, 0.0, 0.0],
- dn * pf.domain_dimensions)
+ ds = fake_random_ds(16, nprocs = nprocs)
+ dn = ds.refine_by**level
+ cg = ds.smoothed_covering_grid(level, [0.0, 0.0, 0.0],
+ dn * ds.domain_dimensions)
yield assert_equal, cg["ones"].max(), 1.0
yield assert_equal, cg["ones"].min(), 1.0
- yield assert_equal, cg["cell_volume"].sum(), pf.domain_width.prod()
- for g in pf.index.grids:
+ yield assert_equal, cg["cell_volume"].sum(), ds.domain_width.prod()
+ for g in ds.index.grids:
if level != g.Level: continue
di = g.get_global_startindex()
dd = g.ActiveDimensions
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_cutting_plane.py
--- a/yt/data_objects/tests/test_cutting_plane.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_cutting_plane.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,4 +1,4 @@
-from yt.testing import assert_equal, fake_random_pf
+from yt.testing import assert_equal, fake_random_ds
from yt.units.unit_object import Unit
import os
import tempfile
@@ -22,10 +22,10 @@
for nprocs in [8, 1]:
# We want to test both 1 proc and 8 procs, to make sure that
# parallelism isn't broken
- pf = fake_random_pf(64, nprocs=nprocs)
+ ds = fake_random_ds(64, nprocs=nprocs)
center = [0.5, 0.5, 0.5]
normal = [1, 1, 1]
- cut = pf.cutting(normal, center)
+ cut = ds.cutting(normal, center)
yield assert_equal, cut["ones"].sum(), cut["ones"].size
yield assert_equal, cut["ones"].min(), 1.0
yield assert_equal, cut["ones"].max(), 1.0
@@ -36,7 +36,7 @@
fns.append(tmpname)
frb = cut.to_frb((1.0, 'unitary'), 64)
for cut_field in ['ones', 'density']:
- fi = pf._get_field_info("unknown", cut_field)
+ fi = ds._get_field_info("unknown", cut_field)
yield assert_equal, frb[cut_field].info['data_source'], \
cut.__str__()
yield assert_equal, frb[cut_field].info['axis'], \
@@ -50,7 +50,7 @@
yield assert_equal, frb[cut_field].info['ylim'], \
frb.bounds[2:]
yield assert_equal, frb[cut_field].info['length_to_cm'], \
- pf.length_unit.in_cgs()
+ ds.length_unit.in_cgs()
yield assert_equal, frb[cut_field].info['center'], \
cut.center
teardown_func(fns)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_data_collection.py
--- a/yt/data_objects/tests/test_data_collection.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_data_collection.py Sun Jun 15 19:50:51 2014 -0700
@@ -7,16 +7,16 @@
def test_data_collection():
# We decompose in different ways
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(16, nprocs = nprocs)
- coll = pf.data_collection(pf.domain_center, pf.index.grids)
+ ds = fake_random_ds(16, nprocs = nprocs)
+ coll = ds.data_collection(ds.domain_center, ds.index.grids)
crho = coll["density"].sum(dtype="float64").to_ndarray()
- grho = np.sum([g["density"].sum(dtype="float64") for g in pf.index.grids],
+ grho = np.sum([g["density"].sum(dtype="float64") for g in ds.index.grids],
dtype="float64")
yield assert_rel_equal, np.array([crho]), np.array([grho]), 12
- yield assert_equal, coll.size, pf.domain_dimensions.prod()
- for gi in range(pf.index.num_grids):
- grids = pf.index.grids[:gi+1]
- coll = pf.data_collection(pf.domain_center, grids)
+ yield assert_equal, coll.size, ds.domain_dimensions.prod()
+ for gi in range(ds.index.num_grids):
+ grids = ds.index.grids[:gi+1]
+ coll = ds.data_collection(ds.domain_center, grids)
crho = coll["density"].sum(dtype="float64")
grho = np.sum([g["density"].sum(dtype="float64") for g in grids],
dtype="float64")
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_derived_quantities.py
--- a/yt/data_objects/tests/test_derived_quantities.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_derived_quantities.py Sun Jun 15 19:50:51 2014 -0700
@@ -7,17 +7,17 @@
def test_extrema():
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(16, nprocs = nprocs, fields = ("density",
+ ds = fake_random_ds(16, nprocs = nprocs, fields = ("density",
"velocity_x", "velocity_y", "velocity_z"))
- sp = pf.sphere("c", (0.25, 'unitary'))
+ sp = ds.sphere("c", (0.25, 'unitary'))
mi, ma = sp.quantities["Extrema"]("density")
yield assert_equal, mi, np.nanmin(sp["density"])
yield assert_equal, ma, np.nanmax(sp["density"])
- dd = pf.h.all_data()
+ dd = ds.all_data()
mi, ma = dd.quantities["Extrema"]("density")
yield assert_equal, mi, np.nanmin(dd["density"])
yield assert_equal, ma, np.nanmax(dd["density"])
- sp = pf.sphere("max", (0.25, 'unitary'))
+ sp = ds.sphere("max", (0.25, 'unitary'))
yield assert_equal, np.any(np.isnan(sp["radial_velocity"])), False
mi, ma = dd.quantities["Extrema"]("radial_velocity")
yield assert_equal, mi, np.nanmin(dd["radial_velocity"])
@@ -25,8 +25,8 @@
def test_average():
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(16, nprocs = nprocs, fields = ("density",))
- ad = pf.h.all_data()
+ ds = fake_random_ds(16, nprocs = nprocs, fields = ("density",))
+ ad = ds.all_data()
my_mean = ad.quantities["WeightedAverageQuantity"]("density", "ones")
yield assert_rel_equal, my_mean, ad["density"].mean(), 12
@@ -37,8 +37,8 @@
def test_variance():
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(16, nprocs = nprocs, fields = ("density", ))
- ad = pf.h.all_data()
+ ds = fake_random_ds(16, nprocs = nprocs, fields = ("density", ))
+ ad = ds.all_data()
my_std, my_mean = ad.quantities["WeightedVariance"]("density", "ones")
yield assert_rel_equal, my_mean, ad["density"].mean(), 12
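A minimal sketch (not part of the changeset) of the derived-quantity interface these tests exercise, using only the calls visible in the hunks above:
    from yt.testing import fake_random_ds
    ds = fake_random_ds(16)
    ad = ds.all_data()
    mi, ma = ad.quantities["Extrema"]("density")
    mean = ad.quantities["WeightedAverageQuantity"]("density", "ones")
    std, wmean = ad.quantities["WeightedVariance"]("density", "ones")
    sp = ds.sphere("c", (0.25, 'unitary'))   # the same quantities work on any container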
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_ellipsoid.py
--- a/yt/data_objects/tests/test_ellipsoid.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_ellipsoid.py Sun Jun 15 19:50:51 2014 -0700
@@ -20,9 +20,9 @@
]
np.random.seed(int(0x4d3d3d3))
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs = nprocs)
- DW = pf.domain_right_edge - pf.domain_left_edge
- min_dx = 2.0/pf.domain_dimensions
+ ds = fake_random_ds(64, nprocs = nprocs)
+ DW = ds.domain_right_edge - ds.domain_left_edge
+ min_dx = 2.0/ds.domain_dimensions
ABC = np.random.random((3, 12)) * 0.1
e0s = np.random.random((3, 12))
tilts = np.random.random(12)
@@ -35,7 +35,7 @@
C = max(C, min_dx[2])
e0 = e0s[:,i]
tilt = tilts[i]
- ell = pf.ellipsoid(c, A, B, C, e0, tilt)
+ ell = ds.ellipsoid(c, A, B, C, e0, tilt)
yield assert_array_less, ell["radius"], A
p = np.array([ell[ax] for ax in 'xyz'])
dot_evec = [np.zeros_like(ell["radius"]) for i in range(3)]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_extract_regions.py
--- a/yt/data_objects/tests/test_extract_regions.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_extract_regions.py Sun Jun 15 19:50:51 2014 -0700
@@ -7,10 +7,10 @@
def test_cut_region():
# We decompose in different ways
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs = nprocs,
+ ds = fake_random_ds(64, nprocs = nprocs,
fields = ("density", "temperature", "velocity_x"))
# We'll test two objects
- dd = pf.h.all_data()
+ dd = ds.all_data()
r = dd.cut_region( [ "obj['temperature'] > 0.5",
"obj['density'] < 0.75",
"obj['velocity_x'] > 0.25" ])
@@ -28,15 +28,15 @@
yield assert_equal, np.all(r2["temperature"] < 0.75), True
# Now we can test some projections
- dd = pf.h.all_data()
+ dd = ds.all_data()
cr = dd.cut_region(["obj['ones'] > 0"])
for weight in [None, "density"]:
- p1 = pf.proj("density", 0, data_source=dd, weight_field=weight)
- p2 = pf.proj("density", 0, data_source=cr, weight_field=weight)
+ p1 = ds.proj("density", 0, data_source=dd, weight_field=weight)
+ p2 = ds.proj("density", 0, data_source=cr, weight_field=weight)
for f in p1.field_data:
yield assert_almost_equal, p1[f], p2[f]
cr = dd.cut_region(["obj['density'] > 0.25"])
- p2 = pf.proj("density", 2, data_source=cr)
+ p2 = ds.proj("density", 2, data_source=cr)
yield assert_equal, p2["density"].max() > 0.25, True
- p2 = pf.proj("density", 2, data_source=cr, weight_field = "density")
+ p2 = ds.proj("density", 2, data_source=cr, weight_field = "density")
yield assert_equal, p2["density"].max() > 0.25, True
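A minimal sketch (not part of the changeset) of the cut_region / projection combination tested above, using the default density field of fake_random_ds:
    from yt.testing import fake_random_ds
    ds = fake_random_ds(64)
    dd = ds.all_data()
    # cut regions take strings evaluated against "obj", the enclosing container
    cr = dd.cut_region(["obj['density'] > 0.25"])
    # and a cut region can feed a projection directly as its data_source
    p = ds.proj("density", 2, data_source=cr, weight_field="density")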
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_fluxes.py
--- a/yt/data_objects/tests/test_fluxes.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_fluxes.py Sun Jun 15 19:50:51 2014 -0700
@@ -5,17 +5,17 @@
ytcfg["yt","__withintesting"] = "True"
def test_flux_calculation():
- pf = fake_random_pf(64, nprocs = 4)
- dd = pf.h.all_data()
- surf = pf.surface(dd, "x", 0.51)
+ ds = fake_random_ds(64, nprocs = 4)
+ dd = ds.all_data()
+ surf = ds.surface(dd, "x", 0.51)
yield assert_equal, surf["x"], 0.51
flux = surf.calculate_flux("ones", "zeros", "zeros", "ones")
yield assert_almost_equal, flux, 1.0, 12
def test_sampling():
- pf = fake_random_pf(64, nprocs = 4)
- dd = pf.h.all_data()
+ ds = fake_random_ds(64, nprocs = 4)
+ dd = ds.all_data()
for i, ax in enumerate('xyz'):
- surf = pf.surface(dd, ax, 0.51)
+ surf = ds.surface(dd, ax, 0.51)
surf.get_data(ax, "vertex")
yield assert_equal, surf.vertex_samples[ax], surf.vertices[i,:]
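A minimal sketch (not part of the changeset) of the surface and flux calls tested above:
    from yt.testing import fake_random_ds
    ds = fake_random_ds(64, nprocs=4)
    dd = ds.all_data()
    surf = ds.surface(dd, "x", 0.51)   # isosurface of the "x" coordinate field at 0.51
    # flux of the vector field (ones, zeros, zeros), weighted by "ones", through the surface
    flux = surf.calculate_flux("ones", "zeros", "zeros", "ones")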
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_ortho_rays.py
--- a/yt/data_objects/tests/test_ortho_rays.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_ortho_rays.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,20 +1,20 @@
from yt.testing import *
def test_ortho_ray():
- pf = fake_random_pf(64, nprocs=8)
- dx = (pf.domain_right_edge - pf.domain_left_edge) / \
- pf.domain_dimensions
+ ds = fake_random_ds(64, nprocs=8)
+ dx = (ds.domain_right_edge - ds.domain_left_edge) / \
+ ds.domain_dimensions
axes = ['x', 'y', 'z']
for ax, an in enumerate(axes):
- ocoord = pf.arr(np.random.random(2), 'code_length')
+ ocoord = ds.arr(np.random.random(2), 'code_length')
- my_oray = pf.ortho_ray(ax, ocoord)
+ my_oray = ds.ortho_ray(ax, ocoord)
- my_axes = pf.coordinates.x_axis[ax], pf.coordinates.y_axis[ax]
+ my_axes = ds.coordinates.x_axis[ax], ds.coordinates.y_axis[ax]
# find the cells intersected by the ortho ray
- my_all = pf.h.all_data()
+ my_all = ds.all_data()
my_cells = (np.abs(my_all[axes[my_axes[0]]] - ocoord[0]) <=
0.5 * dx[my_axes[0]]) & \
(np.abs(my_all[axes[my_axes[1]]] - ocoord[1]) <=
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_pickle.py
--- a/yt/data_objects/tests/test_pickle.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_pickle.py Sun Jun 15 19:50:51 2014 -0700
@@ -16,7 +16,7 @@
import os
import tempfile
from yt.testing \
- import fake_random_pf, assert_equal
+ import fake_random_ds, assert_equal
def setup():
@@ -28,13 +28,13 @@
def test_save_load_pickle():
"""Main test for loading pickled objects"""
return # Until boolean regions are implemented we can't test this
- test_pf = fake_random_pf(64)
+ test_ds = fake_random_ds(64)
# create extracted region from boolean (fairly complex object)
- center = (test_pf.domain_left_edge + test_pf.domain_right_edge) / 2
- sp_outer = test_pf.sphere(center, test_pf.domain_width[0])
- sp_inner = test_pf.sphere(center, test_pf.domain_width[0] / 10.0)
- sp_boolean = test_pf.boolean([sp_outer, "NOT", sp_inner])
+ center = (test_ds.domain_left_edge + test_ds.domain_right_edge) / 2
+ sp_outer = test_ds.sphere(center, test_ds.domain_width[0])
+ sp_inner = test_ds.sphere(center, test_ds.domain_width[0] / 10.0)
+ sp_boolean = test_ds.boolean([sp_outer, "NOT", sp_inner])
minv, maxv = sp_boolean.quantities["Extrema"]("density")[0]
contour_threshold = min(minv * 10.0, 0.9 * maxv)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_profiles.py
--- a/yt/data_objects/tests/test_profiles.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_profiles.py Sun Jun 15 19:50:51 2014 -0700
@@ -8,9 +8,9 @@
def test_binned_profiles():
return
- pf = fake_random_pf(64, nprocs = 8, fields = _fields, units = _units)
- nv = pf.domain_dimensions.prod()
- dd = pf.h.all_data()
+ ds = fake_random_ds(64, nprocs = 8, fields = _fields, units = _units)
+ nv = ds.domain_dimensions.prod()
+ dd = ds.all_data()
(rmi, rma), (tmi, tma), (dmi, dma) = dd.quantities["Extrema"](
["density", "temperature", "dinosaurs"])
rt, tt, dt = dd.quantities["TotalQuantity"](
@@ -75,9 +75,9 @@
yield assert_equal, p3d["ones"][:-1,:-1,:-1], np.ones((nb,nb,nb))
def test_profiles():
- pf = fake_random_pf(64, nprocs = 8, fields = _fields, units = _units)
- nv = pf.domain_dimensions.prod()
- dd = pf.h.all_data()
+ ds = fake_random_ds(64, nprocs = 8, fields = _fields, units = _units)
+ nv = ds.domain_dimensions.prod()
+ dd = ds.all_data()
(rmi, rma), (tmi, tma), (dmi, dma) = dd.quantities["Extrema"](
["density", "temperature", "dinosaurs"])
rt, tt, dt = dd.quantities["TotalQuantity"](
@@ -156,8 +156,8 @@
def test_particle_profiles():
for nproc in [1, 2, 4, 8]:
- pf = fake_random_pf(32, nprocs=nproc, particles = 32**3)
- dd = pf.h.all_data()
+ ds = fake_random_ds(32, nprocs=nproc, particles = 32**3)
+ dd = ds.all_data()
p1d = Profile1D(dd, "particle_position_x", 128,
0.0, 1.0, False, weight_field = None)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_projection.py
--- a/yt/data_objects/tests/test_projection.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_projection.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,6 +1,6 @@
import numpy as np
from yt.testing import \
- fake_random_pf, assert_equal, assert_rel_equal
+ fake_random_ds, assert_equal, assert_rel_equal
from yt.units.unit_object import Unit
import os
import tempfile
@@ -24,23 +24,23 @@
for nprocs in [8, 1]:
# We want to test both 1 proc and 8 procs, to make sure that
# parallelism isn't broken
- pf = fake_random_pf(64, nprocs=nprocs)
- dims = pf.domain_dimensions
- xn, yn, zn = pf.domain_dimensions
- xi, yi, zi = pf.domain_left_edge.to_ndarray() + \
- 1.0 / (pf.domain_dimensions * 2)
- xf, yf, zf = pf.domain_right_edge.to_ndarray() - \
- 1.0 / (pf.domain_dimensions * 2)
- dd = pf.h.all_data()
+ ds = fake_random_ds(64, nprocs=nprocs)
+ dims = ds.domain_dimensions
+ xn, yn, zn = ds.domain_dimensions
+ xi, yi, zi = ds.domain_left_edge.to_ndarray() + \
+ 1.0 / (ds.domain_dimensions * 2)
+ xf, yf, zf = ds.domain_right_edge.to_ndarray() - \
+ 1.0 / (ds.domain_dimensions * 2)
+ dd = ds.all_data()
rho_tot = dd.quantities["TotalQuantity"]("density")
coords = np.mgrid[xi:xf:xn*1j, yi:yf:yn*1j, zi:zf:zn*1j]
uc = [np.unique(c) for c in coords]
# Some simple projection tests with single grids
for ax, an in enumerate("xyz"):
- xax = pf.coordinates.x_axis[ax]
- yax = pf.coordinates.y_axis[ax]
+ xax = ds.coordinates.x_axis[ax]
+ yax = ds.coordinates.y_axis[ax]
for wf in ["density", None]:
- proj = pf.proj(["ones", "density"], ax, weight_field=wf)
+ proj = ds.proj(["ones", "density"], ax, weight_field=wf)
yield assert_equal, proj["ones"].sum(), proj["ones"].size
yield assert_equal, proj["ones"].min(), 1.0
yield assert_equal, proj["ones"].max(), 1.0
@@ -55,7 +55,7 @@
fns.append(tmpname)
frb = proj.to_frb((1.0, 'unitary'), 64)
for proj_field in ['ones', 'density']:
- fi = pf._get_field_info(proj_field)
+ fi = ds._get_field_info(proj_field)
yield assert_equal, frb[proj_field].info['data_source'], \
proj.__str__()
yield assert_equal, frb[proj_field].info['axis'], \
@@ -65,7 +65,7 @@
field_unit = Unit(fi.units)
if wf is not None:
yield assert_equal, frb[proj_field].units, \
- Unit(field_unit, registry=pf.unit_registry)
+ Unit(field_unit, registry=ds.unit_registry)
else:
if frb[proj_field].units.is_code_unit:
proj_unit = "code_length"
@@ -75,7 +75,7 @@
proj_unit = \
"({0}) * {1}".format(field_unit, proj_unit)
yield assert_equal, frb[proj_field].units, \
- Unit(proj_unit, registry=pf.unit_registry)
+ Unit(proj_unit, registry=ds.unit_registry)
yield assert_equal, frb[proj_field].info['xlim'], \
frb.bounds[:2]
yield assert_equal, frb[proj_field].info['ylim'], \
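A minimal sketch (not part of the changeset) of the projection-to-FRB path tested above, passing a single field rather than the list used in the test:
    from yt.testing import fake_random_ds
    ds = fake_random_ds(64)
    proj = ds.proj("density", 0, weight_field="density")   # weighted projection along x
    frb = proj.to_frb((1.0, 'unitary'), 64)                # resample onto a 64x64 buffer
    image = frb["density"]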
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_rays.py
--- a/yt/data_objects/tests/test_rays.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_rays.py Sun Jun 15 19:50:51 2014 -0700
@@ -2,9 +2,9 @@
def test_ray():
for nproc in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs=nproc)
- dx = (pf.domain_right_edge - pf.domain_left_edge) / \
- pf.domain_dimensions
+ ds = fake_random_ds(64, nprocs=nproc)
+ dx = (ds.domain_right_edge - ds.domain_left_edge) / \
+ ds.domain_dimensions
# Three we choose, to get varying vectors, and ten random
pp1 = np.random.random((3, 13))
pp2 = np.random.random((3, 13))
@@ -14,17 +14,17 @@
pp2[:,1] = [0.8, 0.1, 0.4]
pp1[:,2] = [0.9, 0.2, 0.9]
pp2[:,2] = [0.8, 0.1, 0.4]
- unitary = pf.arr(1.0, '')
+ unitary = ds.arr(1.0, '')
for i in range(pp1.shape[1]):
- p1 = pf.arr(pp1[:,i] + 1e-8 * np.random.random(3), 'code_length')
- p2 = pf.arr(pp2[:,i] + 1e-8 * np.random.random(3), 'code_length')
+ p1 = ds.arr(pp1[:,i] + 1e-8 * np.random.random(3), 'code_length')
+ p2 = ds.arr(pp2[:,i] + 1e-8 * np.random.random(3), 'code_length')
- my_ray = pf.ray(p1, p2)
+ my_ray = ds.ray(p1, p2)
yield assert_rel_equal, my_ray['dts'].sum(), unitary, 14
ray_cells = my_ray['dts'] > 0
# find cells intersected by the ray
- my_all = pf.h.all_data()
+ my_all = ds.all_data()
dt = np.abs(dx / (p2 - p1))
tin = uconcatenate([[(my_all['x'] - p1[0]) / (p2 - p1)[0] - 0.5 * dt[0]],
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_slice.py
--- a/yt/data_objects/tests/test_slice.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_slice.py Sun Jun 15 19:50:51 2014 -0700
@@ -15,7 +15,7 @@
import numpy as np
import tempfile
from yt.testing import \
- fake_random_pf, assert_equal
+ fake_random_ds, assert_equal
from yt.units.unit_object import Unit
@@ -37,21 +37,21 @@
for nprocs in [8, 1]:
# We want to test both 1 proc and 8 procs, to make sure that
# parallelism isn't broken
- pf = fake_random_pf(64, nprocs=nprocs)
- dims = pf.domain_dimensions
- xn, yn, zn = pf.domain_dimensions
- dx = pf.arr(1.0 / (pf.domain_dimensions * 2), 'code_length')
- xi, yi, zi = pf.domain_left_edge + dx
- xf, yf, zf = pf.domain_right_edge - dx
+ ds = fake_random_ds(64, nprocs=nprocs)
+ dims = ds.domain_dimensions
+ xn, yn, zn = ds.domain_dimensions
+ dx = ds.arr(1.0 / (ds.domain_dimensions * 2), 'code_length')
+ xi, yi, zi = ds.domain_left_edge + dx
+ xf, yf, zf = ds.domain_right_edge - dx
coords = np.mgrid[xi:xf:xn * 1j, yi:yf:yn * 1j, zi:zf:zn * 1j]
uc = [np.unique(c) for c in coords]
slc_pos = 0.5
# Some simple slice tests with single grids
for ax, an in enumerate("xyz"):
- xax = pf.coordinates.x_axis[ax]
- yax = pf.coordinates.y_axis[ax]
+ xax = ds.coordinates.x_axis[ax]
+ yax = ds.coordinates.y_axis[ax]
for wf in ["density", None]:
- slc = pf.slice(ax, slc_pos)
+ slc = ds.slice(ax, slc_pos)
yield assert_equal, slc["ones"].sum(), slc["ones"].size
yield assert_equal, slc["ones"].min(), 1.0
yield assert_equal, slc["ones"].max(), 1.0
@@ -66,7 +66,7 @@
fns.append(tmpname)
frb = slc.to_frb((1.0, 'unitary'), 64)
for slc_field in ['ones', 'density']:
- fi = pf._get_field_info(slc_field)
+ fi = ds._get_field_info(slc_field)
yield assert_equal, frb[slc_field].info['data_source'], \
slc.__str__()
yield assert_equal, frb[slc_field].info['axis'], \
@@ -89,15 +89,15 @@
def test_slice_over_edges():
- pf = fake_random_pf(64, nprocs=8, fields=["density"], negative=[False])
- slc = pf.slice(0, 0.0)
+ ds = fake_random_ds(64, nprocs=8, fields=["density"], negative=[False])
+ slc = ds.slice(0, 0.0)
slc["density"]
- slc = pf.slice(1, 0.5)
+ slc = ds.slice(1, 0.5)
slc["density"]
def test_slice_over_outer_boundary():
- pf = fake_random_pf(64, nprocs=8, fields=["density"], negative=[False])
- slc = pf.slice(2, 1.0)
+ ds = fake_random_ds(64, nprocs=8, fields=["density"], negative=[False])
+ slc = ds.slice(2, 1.0)
slc["density"]
yield assert_equal, slc["density"].size, 0
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_spheres.py
--- a/yt/data_objects/tests/test_spheres.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_spheres.py Sun Jun 15 19:50:51 2014 -0700
@@ -6,5 +6,5 @@
ytcfg["yt","__withintesting"] = "True"
def test_domain_sphere():
- pf = fake_random_pf(16, fields = ("density"))
- sp = pf.sphere(pf.domain_center, pf.domain_width[0])
+ ds = fake_random_ds(16, fields = ("density"))
+ sp = ds.sphere(ds.domain_center, ds.domain_width[0])
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/tests/test_streamlines.py
--- a/yt/data_objects/tests/test_streamlines.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/tests/test_streamlines.py Sun Jun 15 19:50:51 2014 -0700
@@ -14,8 +14,8 @@
cs = np.array([a.ravel() for a in cs]).T
length = (1.0/128) * 16 # 16 half-widths of a cell
for nprocs in [1, 2, 4, 8]:
- pf = fake_random_pf(64, nprocs = nprocs, fields = _fields)
- streams = Streamlines(pf, cs, length=length)
+ ds = fake_random_ds(64, nprocs = nprocs, fields = _fields)
+ streams = Streamlines(ds, cs, length=length)
streams.integrate_through_volume()
for path in (streams.path(i) for i in range(8)):
yield assert_rel_equal, path['dts'].sum(), 1.0, 14
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/time_series.py
--- a/yt/data_objects/time_series.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/time_series.py Sun Jun 15 19:50:51 2014 -0700
@@ -47,9 +47,9 @@
def __contains__(self, key):
return key in analysis_task_registry
-def get_pf_prop(propname):
- def _eval(params, pf):
- return getattr(pf, propname)
+def get_ds_prop(propname):
+ def _eval(params, ds):
+ return getattr(ds, propname)
cls = type(propname, (AnalysisTask,),
dict(eval = _eval, _params = tuple()))
return cls
@@ -78,7 +78,7 @@
def __getattr__(self, attr):
if attr in attrs:
- return self.data_object.eval(get_pf_prop(attr)())
+ return self.data_object.eval(get_ds_prop(attr)())
raise AttributeError(attr)
class DatasetSeries(object):
@@ -105,26 +105,26 @@
this is set to either True or an integer, it will be iterated with
1 or that integer number of processors assigned to each parameter
file provided to the loop.
- setup_function : callable, accepts a pf
- This function will be called whenever a parameter file is loaded.
+ setup_function : callable, accepts a ds
+ This function will be called whenever a dataset is loaded.
Examples
--------
>>> ts = DatasetSeries(
"GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0[0-6][0-9]0")
- >>> for pf in ts:
- ... SlicePlot(pf, "x", "Density").save()
+ >>> for ds in ts:
+ ... SlicePlot(ds, "x", "Density").save()
...
- >>> def print_time(pf):
- ... print pf.current_time
+ >>> def print_time(ds):
+ ... print ds.current_time
...
>>> ts = DatasetSeries(
... "GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0[0-6][0-9]0",
... setup_function = print_time)
...
- >>> for pf in ts:
- ... SlicePlot(pf, "x", "Density").save()
+ >>> for ds in ts:
+ ... SlicePlot(ds, "x", "Density").save()
"""
def __new__(cls, outputs, *args, **kwargs):
@@ -157,9 +157,9 @@
# We can make this fancier, but this works
for o in self._pre_outputs:
if isinstance(o, types.StringTypes):
- pf = load(o, **self.kwargs)
- self._setup_function(pf)
- yield pf
+ ds = load(o, **self.kwargs)
+ self._setup_function(ds)
+ yield ds
else:
yield o
@@ -207,31 +207,31 @@
----------
storage : dict
This is a dictionary, which will be filled with results during the
- course of the iteration. The keys will be the parameter file
+ course of the iteration. The keys will be the dataset
indices and the values will be whatever is assigned to the *result*
attribute on the storage during iteration.
Examples
--------
Here is an example of iteration when the results do not need to be
- stored. One processor will be assigned to each parameter file.
+ stored. One processor will be assigned to each dataset.
>>> ts = DatasetSeries("DD*/DD*.index")
- >>> for pf in ts.piter():
- ... SlicePlot(pf, "x", "Density").save()
+ >>> for ds in ts.piter():
+ ... SlicePlot(ds, "x", "Density").save()
...
This demonstrates how one might store results:
- >>> def print_time(pf):
- ... print pf.current_time
+ >>> def print_time(ds):
+ ... print ds.current_time
...
>>> ts = DatasetSeries("DD*/DD*.index",
... setup_function = print_time )
...
>>> my_storage = {}
- >>> for sto, pf in ts.piter(storage=my_storage):
- ... v, c = pf.h.find_max("Density")
+ >>> for sto, ds in ts.piter(storage=my_storage):
+ ... v, c = ds.find_max("density")
... sto.result = (v, c)
...
>>> for i, (v, c) in sorted(my_storage.items()):
@@ -242,8 +242,8 @@
>>> ts = DatasetSeries("DD*/DD*.index",
... parallel = 4)
- >>> for pf in ts.piter():
- ... ProjectionPlot(pf, "x", "Density").save()
+ >>> for ds in ts.piter():
+ ... ProjectionPlot(ds, "x", "Density").save()
...
"""
@@ -259,17 +259,17 @@
def eval(self, tasks, obj=None):
tasks = ensure_list(tasks)
return_values = {}
- for store, pf in self.piter(return_values):
+ for store, ds in self.piter(return_values):
store.result = []
for task in tasks:
try:
style = inspect.getargspec(task.eval)[0][1]
- if style == 'pf':
- arg = pf
+ if style == 'ds':
+ arg = ds
elif style == 'data_object':
if obj == None:
obj = DatasetSeriesObject(self, "all_data")
- arg = obj.get(pf)
+ arg = obj.get(ds)
rv = task.eval(arg)
# We catch and store YT-originating exceptions
# This fixes the standard problem of having a sphere that's too
@@ -305,21 +305,21 @@
this is set to either True or an integer, it will be iterated with
1 or that integer number of processors assigned to each parameter
file provided to the loop.
- setup_function : callable, accepts a pf
- This function will be called whenever a parameter file is loaded.
+ setup_function : callable, accepts a ds
+ This function will be called whenever a dataset is loaded.
Examples
--------
- >>> def print_time(pf):
- ... print pf.current_time
+ >>> def print_time(ds):
+ ... print ds.current_time
...
>>> ts = DatasetSeries.from_filenames(
... "GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0[0-6][0-9]0",
... setup_function = print_time)
...
- >>> for pf in ts:
- ... SlicePlot(pf, "x", "Density").save()
+ >>> for ds in ts:
+ ... SlicePlot(ds, "x", "Density").save()
"""
@@ -370,10 +370,10 @@
def eval(self, tasks):
return self.time_series.eval(tasks, self)
- def get(self, pf):
+ def get(self, ds):
# We get the type name, which corresponds to an attribute of the
# index
- cls = getattr(pf.h, self.data_object_name)
+ cls = getattr(ds, self.data_object_name)
return cls(*self._args, **self._kwargs)
class RegisteredSimulationTimeSeries(type):
@@ -401,7 +401,7 @@
# Set some parameter defaults.
self._set_parameter_defaults()
- # Read the simulation parameter file.
+ # Read the simulation dataset.
self._parse_parameter_file()
# Set units
self._set_units()
@@ -441,7 +441,7 @@
"domain_right_edge", "initial_time", "final_time",
"stop_cycle", "cosmological_simulation"]:
if not hasattr(self, a):
- mylog.error("Missing %s in parameter file definition!", a)
+ mylog.error("Missing %s in dataset definition!", a)
continue
v = getattr(self, a)
mylog.info("Parameters: %-25s = %s", a, v)
@@ -451,7 +451,7 @@
"omega_matter", "hubble_constant",
"initial_redshift", "final_redshift"]:
if not hasattr(self, a):
- mylog.error("Missing %s in parameter file definition!", a)
+ mylog.error("Missing %s in dataset definition!", a)
continue
v = getattr(self, a)
mylog.info("Parameters: %-25s = %s", a, v)
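A minimal sketch (not part of the changeset) of the parallel iteration pattern described in the docstrings above, importing DatasetSeries from the module being edited:
    from yt.data_objects.time_series import DatasetSeries
    ts = DatasetSeries("DD*/DD*.index")
    my_storage = {}
    for sto, ds in ts.piter(storage=my_storage):
        # each dataset is loaded in turn (setup_function, if given, is called on it);
        # whatever is assigned to sto.result is collected under the dataset's index
        sto.result = ds.find_max("density")
    results = sorted(my_storage.items())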
diff -r f20d58ca2848 -r 67507b4f8da9 yt/data_objects/unstructured_mesh.py
--- a/yt/data_objects/unstructured_mesh.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/data_objects/unstructured_mesh.py Sun Jun 15 19:50:51 2014 -0700
@@ -48,13 +48,13 @@
# This is where we set up the connectivity information
self.connectivity_indices = connectivity_indices
self.connectivity_coords = connectivity_coords
- self.pf = index.parameter_file
+ self.ds = index.dataset
self._index = index
self._last_mask = None
self._last_count = -1
self._last_selector_id = None
self._current_particle_type = 'all'
- self._current_fluid_type = self.pf.default_fluid_type
+ self._current_fluid_type = self.ds.default_fluid_type
def _check_consistency(self):
for gi in range(self.connectivity_indices.shape[0]):
@@ -81,7 +81,7 @@
either returns the multiplicative factor or throws a KeyError.
"""
- return self.pf[datatype]
+ return self.ds[datatype]
@property
def shape(self):
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/angular_momentum.py
--- a/yt/fields/angular_momentum.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/angular_momentum.py Sun Jun 15 19:50:51 2014 -0700
@@ -37,7 +37,7 @@
def obtain_velocities(data, ftype="gas"):
rv_vec = obtain_rv_vec(data)
# We know that obtain_rv_vec will always access velocity_x
- rv_vec = data.pf.arr(rv_vec, input_units = data[ftype, "velocity_x"].units)
+ rv_vec = data.ds.arr(rv_vec, input_units = data[ftype, "velocity_x"].units)
return rv_vec
@register_field_plugin
@@ -47,7 +47,7 @@
center = data.get_field_parameter('center')
v_vec = obtain_rvec(data)
v_vec = np.rollaxis(v_vec, 0, len(v_vec.shape))
- v_vec = data.pf.arr(v_vec, input_units = data["index", "x"].units)
+ v_vec = data.ds.arr(v_vec, input_units = data["index", "x"].units)
rv = v_vec - center
return yv * rv[...,2] - zv * rv[...,1]
@@ -56,7 +56,7 @@
center = data.get_field_parameter('center')
v_vec = obtain_rvec(data)
v_vec = np.rollaxis(v_vec, 0, len(v_vec.shape))
- v_vec = data.pf.arr(v_vec, input_units = data["index", "x"].units)
+ v_vec = data.ds.arr(v_vec, input_units = data["index", "x"].units)
rv = v_vec - center
return - (xv * rv[...,2] - zv * rv[...,0])
@@ -65,7 +65,7 @@
center = data.get_field_parameter('center')
v_vec = obtain_rvec(data)
v_vec = np.rollaxis(v_vec, 0, len(v_vec.shape))
- v_vec = data.pf.arr(v_vec, input_units = data["index", "x"].units)
+ v_vec = data.ds.arr(v_vec, input_units = data["index", "x"].units)
rv = v_vec - center
return xv * rv[...,1] - yv * rv[...,0]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/astro_fields.py
--- a/yt/fields/astro_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/astro_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -73,7 +73,7 @@
logT0 = np.log10(data[ftype, "temperature"].to_ndarray().astype(np.float64)) - 7
# we get rid of the units here since this is a fit and not an
# analytical expression
- return data.pf.arr(data[ftype, "number_density"].to_ndarray().astype(np.float64)**2
+ return data.ds.arr(data[ftype, "number_density"].to_ndarray().astype(np.float64)**2
* (10**(- 0.0103 * logT0**8 + 0.0417 * logT0**7
- 0.0636 * logT0**6 + 0.1149 * logT0**5
- 0.3151 * logT0**4 + 0.6655 * logT0**3
@@ -94,7 +94,7 @@
def _xray_emissivity(field, data):
# old scaling coefficient was 2.168e60
- return data.pf.arr(data[ftype, "density"].to_ndarray().astype(np.float64)**2
+ return data.ds.arr(data[ftype, "density"].to_ndarray().astype(np.float64)**2
* data[ftype, "temperature"].to_ndarray()**0.5,
"") # add correct units here
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/cosmology_fields.py
--- a/yt/fields/cosmology_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/cosmology_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -61,12 +61,12 @@
# rho_total / rho_cr(z).
def _overdensity(field, data):
- if not hasattr(data.pf, "cosmological_simulation") or \
- not data.pf.cosmological_simulation:
+ if not hasattr(data.ds, "cosmological_simulation") or \
+ not data.ds.cosmological_simulation:
raise NeedsConfiguration("cosmological_simulation", 1)
- co = data.pf.cosmology
+ co = data.ds.cosmology
return data[ftype, "matter_density"] / \
- co.critical_density(data.pf.current_redshift)
+ co.critical_density(data.ds.current_redshift)
registry.add_field((ftype, "overdensity"),
function=_overdensity,
@@ -74,17 +74,17 @@
# rho_baryon / <rho_baryon>
def _baryon_overdensity(field, data):
- if not hasattr(data.pf, "cosmological_simulation") or \
- not data.pf.cosmological_simulation:
+ if not hasattr(data.ds, "cosmological_simulation") or \
+ not data.ds.cosmological_simulation:
raise NeedsConfiguration("cosmological_simulation", 1)
omega_baryon = data.get_field_parameter("omega_baryon")
if omega_baryon is None:
raise NeedsParameter("omega_baryon")
- co = data.pf.cosmology
+ co = data.ds.cosmology
# critical_density(z) ~ omega_lambda + omega_matter * (1 + z)^3
# mean density(z) ~ omega_matter * (1 + z)^3
return data[ftype, "density"] / omega_baryon / co.critical_density(0.0) / \
- (1.0 + data.pf.hubble_constant)**3
+ (1.0 + data.ds.hubble_constant)**3
registry.add_field((ftype, "baryon_overdensity"),
function=_baryon_overdensity,
@@ -93,15 +93,15 @@
# rho_matter / <rho_matter>
def _matter_overdensity(field, data):
- if not hasattr(data.pf, "cosmological_simulation") or \
- not data.pf.cosmological_simulation:
+ if not hasattr(data.ds, "cosmological_simulation") or \
+ not data.ds.cosmological_simulation:
raise NeedsConfiguration("cosmological_simulation", 1)
- co = data.pf.cosmology
+ co = data.ds.cosmology
# critical_density(z) ~ omega_lambda + omega_matter * (1 + z)^3
# mean density(z) ~ omega_matter * (1 + z)^3
- return data[ftype, "density"] / data.pf.omega_matter / \
+ return data[ftype, "density"] / data.ds.omega_matter / \
co.critical_density(0.0) / \
- (1.0 + data.pf.hubble_constant)**3
+ (1.0 + data.ds.hubble_constant)**3
registry.add_field((ftype, "matter_overdensity"),
function=_matter_overdensity,
@@ -111,19 +111,19 @@
# Eqn 4 of Metzler, White, & Loken (2001, ApJ, 547, 560).
# This needs to be checked for accuracy.
def _weak_lensing_convergence(field, data):
- if not hasattr(data.pf, "cosmological_simulation") or \
- not data.pf.cosmological_simulation:
+ if not hasattr(data.ds, "cosmological_simulation") or \
+ not data.ds.cosmological_simulation:
raise NeedsConfiguration("cosmological_simulation", 1)
- co = data.pf.cosmology
+ co = data.ds.cosmology
observer_redshift = data.get_field_parameter('observer_redshift')
source_redshift = data.get_field_parameter('source_redshift')
# observer to lens
- dl = co.angular_diameter_distance(observer_redshift, data.pf.current_redshift)
+ dl = co.angular_diameter_distance(observer_redshift, data.ds.current_redshift)
# observer to source
ds = co.angular_diameter_distance(observer_redshift, source_redshift)
# lens to source
- dls = co.angular_diameter_distance(data.pf.current_redshift, source_redshift)
+ dls = co.angular_diameter_distance(data.ds.current_redshift, source_redshift)
# removed the factor of 1 / a to account for the fact that we are projecting
# with a proper distance.
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/derived_field.py
--- a/yt/fields/derived_field.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/derived_field.py Sun Jun 15 19:50:51 2014 -0700
@@ -158,8 +158,8 @@
old_registry = self._unit_registry
if hasattr(data, 'unit_registry'):
ur = data.unit_registry
- elif hasattr(data, 'pf'):
- ur = data.pf.unit_registry
+ elif hasattr(data, 'ds'):
+ ur = data.ds.unit_registry
else:
ur = None
self._unit_registry = ur
@@ -216,7 +216,7 @@
class ValidateParameter(FieldValidator):
def __init__(self, parameters):
"""
- This validator ensures that the parameter file has a given parameter.
+ This validator ensures that the dataset has a given parameter.
"""
FieldValidator.__init__(self)
self.parameters = ensure_list(parameters)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/domain_context.py
--- a/yt/fields/domain_context.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/domain_context.py Sun Jun 15 19:50:51 2014 -0700
@@ -27,6 +27,6 @@
_known_fluid_fields = ()
_known_particle_fields = ()
- def __init__(self, pf):
- self.pf = pf
+ def __init__(self, ds):
+ self.ds = ds
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/field_detector.py
--- a/yt/fields/field_detector.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/field_detector.py Sun Jun 15 19:50:51 2014 -0700
@@ -33,7 +33,7 @@
_id_offset = 0
domain_id = 0
- def __init__(self, nd = 16, pf = None, flat = False):
+ def __init__(self, nd = 16, ds = None, flat = False):
self.nd = nd
self.flat = flat
self._spatial = not flat
@@ -43,22 +43,22 @@
self.LeftEdge = [0.0, 0.0, 0.0]
self.RightEdge = [1.0, 1.0, 1.0]
self.dds = np.ones(3, "float64")
- class fake_parameter_file(defaultdict):
+ class fake_dataset(defaultdict):
pass
- if pf is None:
+ if ds is None:
# required attrs
- pf = fake_parameter_file(lambda: 1)
- pf["Massarr"] = np.ones(6)
- pf.current_redshift = pf.omega_lambda = pf.omega_matter = \
- pf.cosmological_simulation = 0.0
- pf.gamma = 5./3.0
- pf.hubble_constant = 0.7
- pf.domain_left_edge = np.zeros(3, 'float64')
- pf.domain_right_edge = np.ones(3, 'float64')
- pf.dimensionality = 3
- pf.periodicity = (True, True, True)
- self.pf = pf
+ ds = fake_dataset(lambda: 1)
+ ds["Massarr"] = np.ones(6)
+ ds.current_redshift = ds.omega_lambda = ds.omega_matter = \
+ ds.cosmological_simulation = 0.0
+ ds.gamma = 5./3.0
+ ds.hubble_constant = 0.7
+ ds.domain_left_edge = np.zeros(3, 'float64')
+ ds.domain_right_edge = np.ones(3, 'float64')
+ ds.dimensionality = 3
+ ds.periodicity = (True, True, True)
+ self.ds = ds
class fake_index(object):
class fake_io(object):
@@ -87,23 +87,23 @@
return arr.reshape(self.ActiveDimensions, order="C")
def __missing__(self, item):
- if hasattr(self.pf, "field_info"):
+ if hasattr(self.ds, "field_info"):
if not isinstance(item, tuple):
field = ("unknown", item)
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
#mylog.debug("Guessing field %s is %s", item, finfo.name)
else:
field = item
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
# For those cases where we are guessing the field type, we will
# need to re-update -- otherwise, our item will always not have the
# field type. This can lead to, for instance, "unknown" particle
# types not getting correctly identified.
# Note that the *only* way this works is if we also fix our field
# dependencies during checking. Bug #627 talks about this.
- item = self.pf._last_freq
+ item = self.ds._last_freq
else:
- FI = getattr(self.pf, "field_info", FieldInfo)
+ FI = getattr(self.ds, "field_info", FieldInfo)
if item in FI:
finfo = FI[item]
else:
@@ -113,7 +113,7 @@
vv = finfo(self)
except NeedsGridType as exc:
ngz = exc.ghost_zones
- nfd = FieldDetector(self.nd + ngz * 2, pf = self.pf)
+ nfd = FieldDetector(self.nd + ngz * 2, ds = self.ds)
nfd._num_ghost_zones = ngz
vv = finfo(nfd)
if ngz > 0: vv = vv[ngz:-ngz, ngz:-ngz, ngz:-ngz]
@@ -134,12 +134,12 @@
# A vector
self[item] = \
YTArray(np.ones((self.NumberOfParticles, 3)),
- finfo.units, registry=self.pf.unit_registry)
+ finfo.units, registry=self.ds.unit_registry)
else:
# Not a vector
self[item] = \
YTArray(np.ones(self.NumberOfParticles),
- finfo.units, registry=self.pf.unit_registry)
+ finfo.units, registry=self.ds.unit_registry)
self.requested.append(item)
return self[item]
self.requested.append(item)
@@ -155,8 +155,8 @@
def _read_data(self, field_name):
self.requested.append(field_name)
- if hasattr(self.pf, "field_info"):
- finfo = self.pf._get_field_info(*field_name)
+ if hasattr(self.ds, "field_info"):
+ finfo = self.ds._get_field_info(*field_name)
else:
finfo = FieldInfo[field_name]
if finfo.particle_type:
@@ -164,7 +164,7 @@
return np.ones(self.NumberOfParticles)
return YTArray(defaultdict.__missing__(self, field_name),
input_units=finfo.units,
- registry=self.pf.unit_registry)
+ registry=self.ds.unit_registry)
fp_units = {
'bulk_velocity' : 'cm/s',
@@ -181,12 +181,12 @@
def get_field_parameter(self, param, default = None):
self.requested_parameters.append(param)
if param in ['bulk_velocity', 'center', 'normal']:
- return self.pf.arr(np.random.random(3) * 1e-2, self.fp_units[param])
+ return self.ds.arr(np.random.random(3) * 1e-2, self.fp_units[param])
elif param in ['axis']:
return 0
elif param.startswith("cp_"):
ax = param[3]
- rv = self.pf.arr((0.0, 0.0, 0.0), self.fp_units[param])
+ rv = self.ds.arr((0.0, 0.0, 0.0), self.fp_units[param])
rv['xyz'.index(ax)] = 1.0
return rv
elif param.endswith("_hat"):
@@ -205,7 +205,7 @@
id = 1
def apply_units(self, arr, units):
- return self.pf.arr(arr, input_units = units)
+ return self.ds.arr(arr, input_units = units)
def has_field_parameter(self, param):
return True
@@ -219,7 +219,7 @@
fc.shape = (self.nd*self.nd*self.nd, 3)
else:
fc = fc.transpose()
- return self.pf.arr(fc, input_units = "code_length")
+ return self.ds.arr(fc, input_units = "code_length")
@property
def icoords(self):
@@ -244,5 +244,5 @@
fw = np.ones((self.nd**3, 3), dtype="float64") / self.nd
if not self.flat:
fw.shape = (self.nd, self.nd, self.nd, 3)
- return self.pf.arr(fw, input_units = "code_length")
+ return self.ds.arr(fw, input_units = "code_length")
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/field_functions.py
--- a/yt/fields/field_functions.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/field_functions.py Sun Jun 15 19:50:51 2014 -0700
@@ -17,26 +17,26 @@
def get_radius(data, field_prefix):
center = data.get_field_parameter("center").in_units("cm")
- DW = (data.pf.domain_right_edge - data.pf.domain_left_edge).in_units("cm")
+ DW = (data.ds.domain_right_edge - data.ds.domain_left_edge).in_units("cm")
# This is in cm**2 so it can be the destination for our r later.
- radius2 = data.pf.arr(np.zeros(data[field_prefix+"x"].shape,
+ radius2 = data.ds.arr(np.zeros(data[field_prefix+"x"].shape,
dtype='float64'), 'cm**2')
r = radius2.copy()
- if any(data.pf.periodicity):
+ if any(data.ds.periodicity):
rdw = radius2.copy()
for i, ax in enumerate('xyz'):
# This will coerce the units, so we don't need to worry that we copied
# it from a cm**2 array.
np.subtract(data["%s%s" % (field_prefix, ax)].in_units("cm"),
center[i], r)
- if data.pf.periodicity[i] == True:
+ if data.ds.periodicity[i] == True:
np.abs(r, r)
np.subtract(r, DW[i], rdw)
np.abs(rdw, rdw)
np.minimum(r, rdw, r)
np.power(r, 2.0, r)
np.add(radius2, r, radius2)
- if data.pf.dimensionality < i+1:
+ if data.ds.dimensionality < i+1:
break
# Now it's cm.
np.sqrt(radius2, radius2)
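A minimal numpy sketch (not part of the changeset, with a hypothetical helper name) of the per-axis periodic wrap that get_radius applies above: the direct separation |x - c| is compared against the separation wrapped through the domain width DW, and the smaller is kept before squaring:
    import numpy as np
    def periodic_sep(x, center, DW):
        # direct separation along one axis
        r = np.abs(x - center)
        # wrapped separation through the periodic domain; keep the smaller
        return np.minimum(r, np.abs(r - DW))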
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/field_info_container.py
--- a/yt/fields/field_info_container.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/field_info_container.py Sun Jun 15 19:50:51 2014 -0700
@@ -52,9 +52,9 @@
known_other_fields = ()
known_particle_fields = ()
- def __init__(self, pf, field_list, slice_info = None):
+ def __init__(self, ds, field_list, slice_info = None):
self._show_field_errors = []
- self.pf = pf
+ self.ds = ds
# Now we start setting things up.
self.field_list = field_list
self.slice_info = slice_info
@@ -67,7 +67,7 @@
def setup_particle_fields(self, ptype, ftype='gas', num_neighbors=64 ):
for f, (units, aliases, dn) in sorted(self.known_particle_fields):
- units = self.pf.field_units.get((ptype, f), units)
+ units = self.ds.field_units.get((ptype, f), units)
self.add_output_field((ptype, f),
units = units, particle_type = True, display_name = dn)
if (ptype, f) not in self.field_list:
@@ -94,10 +94,10 @@
if field in self: continue
if not isinstance(field, tuple):
raise RuntimeError
- if field[0] not in self.pf.particle_types:
+ if field[0] not in self.ds.particle_types:
continue
self.add_output_field(field,
- units = self.pf.field_units.get(field, ""),
+ units = self.ds.field_units.get(field, ""),
particle_type = True)
self.setup_smoothed_fields(ptype,
num_neighbors=num_neighbors,
@@ -129,15 +129,15 @@
for field in sorted(self.field_list):
if not isinstance(field, tuple):
raise RuntimeError
- if field[0] in self.pf.particle_types:
+ if field[0] in self.ds.particle_types:
continue
args = known_other_fields.get(
field[1], ("", [], None))
units, aliases, display_name = args
# We allow field_units to override this. First we check if the
# field *name* is in there, then the field *tuple*.
- units = self.pf.field_units.get(field[1], units)
- units = self.pf.field_units.get(field, units)
+ units = self.ds.field_units.get(field[1], units)
+ units = self.ds.field_units.get(field, units)
if not isinstance(units, types.StringTypes) and args[0] != "":
units = "((%s)*%s)" % (args[0], units)
if isinstance(units, (numeric_type, np.number, np.ndarray)) and \
@@ -189,10 +189,10 @@
def find_dependencies(self, loaded):
deps, unavailable = self.check_derived_fields(loaded)
- self.pf.field_dependencies.update(deps)
+ self.ds.field_dependencies.update(deps)
# Note we may have duplicated
- dfl = set(self.pf.derived_field_list).union(deps.keys())
- self.pf.derived_field_list = list(sorted(dfl))
+ dfl = set(self.ds.derived_field_list).union(deps.keys())
+ self.ds.derived_field_list = list(sorted(dfl))
return loaded, unavailable
def add_output_field(self, name, **kwargs):
@@ -204,7 +204,7 @@
# We default to CGS here, but in principle, this can be pluggable
# as well.
u = Unit(self[original_name].units,
- registry = self.pf.unit_registry)
+ registry = self.ds.unit_registry)
units = str(u.get_cgs_equivalent())
self.field_aliases[alias_name] = original_name
self.add_field(alias_name,
@@ -224,21 +224,21 @@
def _gradx(f, data):
grad = data[field][sl,1:-1,1:-1] - data[field][sr,1:-1,1:-1]
- grad /= 2.0*data["dx"].flat[0]*data.pf.units["cm"]
+ grad /= 2.0*data["dx"].flat[0]*data.ds.units["cm"]
g = np.zeros(data[field].shape, dtype='float64')
g[1:-1,1:-1,1:-1] = grad
return g
def _grady(f, data):
grad = data[field][1:-1,sl,1:-1] - data[field][1:-1,sr,1:-1]
- grad /= 2.0*data["dy"].flat[0]*data.pf.units["cm"]
+ grad /= 2.0*data["dy"].flat[0]*data.ds.units["cm"]
g = np.zeros(data[field].shape, dtype='float64')
g[1:-1,1:-1,1:-1] = grad
return g
def _gradz(f, data):
grad = data[field][1:-1,1:-1,sl] - data[field][1:-1,1:-1,sr]
- grad /= 2.0*data["dz"].flat[0]*data.pf.units["cm"]
+ grad /= 2.0*data["dz"].flat[0]*data.ds.units["cm"]
g = np.zeros(data[field].shape, dtype='float64')
g[1:-1,1:-1,1:-1] = grad
return g
@@ -317,7 +317,7 @@
if field not in self: raise RuntimeError
fi = self[field]
try:
- fd = fi.get_dependencies(pf = self.pf)
+ fd = fi.get_dependencies(ds = self.ds)
except Exception as e:
if field in self._show_field_errors:
raise
@@ -335,6 +335,6 @@
fd.requested = set(fd.requested)
deps[field] = fd
mylog.debug("Succeeded with %s (needs %s)", field, fd.requested)
- dfl = set(self.pf.derived_field_list).union(deps.keys())
- self.pf.derived_field_list = list(sorted(dfl))
+ dfl = set(self.ds.derived_field_list).union(deps.keys())
+ self.ds.derived_field_list = list(sorted(dfl))
return deps, unavailable
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/fluid_fields.py
--- a/yt/fields/fluid_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/fluid_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -77,7 +77,7 @@
units="g")
def _sound_speed(field, data):
- tr = data.pf.gamma * data[ftype, "pressure"] / data[ftype, "density"]
+ tr = data.ds.gamma * data[ftype, "pressure"] / data[ftype, "density"]
return np.sqrt(tr)
registry.add_field((ftype, "sound_speed"),
function=_sound_speed,
@@ -122,7 +122,7 @@
def _pressure(field, data):
""" M{(Gamma-1.0)*rho*E} """
- tr = (data.pf.gamma - 1.0) \
+ tr = (data.ds.gamma - 1.0) \
* (data[ftype, "density"] * data[ftype, "thermal_energy"])
return tr
@@ -165,8 +165,8 @@
def _number_density(field, data):
field_data = np.zeros_like(data["gas", "%s_number_density" % \
- data.pf.field_info.species_names[0]])
- for species in data.pf.field_info.species_names:
+ data.ds.field_info.species_names[0]])
+ for species in data.ds.field_info.species_names:
field_data += data["gas", "%s_number_density" % species]
return field_data
registry.add_field((ftype, "number_density"),
@@ -209,7 +209,7 @@
ds = div_fac * data["index", "dx"]
f = data[grad_field][slice_3dr]/ds[slice_3d]
f -= data[grad_field][slice_3dl]/ds[slice_3d]
- new_field = data.pf.arr(np.zeros_like(data[grad_field], dtype=np.float64),
+ new_field = data.ds.arr(np.zeros_like(data[grad_field], dtype=np.float64),
f.units)
new_field[slice_3d] = f
return new_field
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/fluid_vector_fields.py
--- a/yt/fields/fluid_vector_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/fluid_vector_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -84,7 +84,7 @@
f -= (data[ftype, "velocity_y"][sl_center,sl_center,sl_right] -
data[ftype, "velocity_y"][sl_center,sl_center,sl_left]) \
/ (div_fac*just_one(data["index", "dz"].in_cgs()))
- new_field = data.pf.arr(np.zeros_like(data[ftype, "velocity_z"],
+ new_field = data.ds.arr(np.zeros_like(data[ftype, "velocity_z"],
dtype=np.float64),
f.units)
new_field[sl_center, sl_center, sl_center] = f
@@ -96,7 +96,7 @@
f -= (data[ftype, "velocity_z"][sl_right,sl_center,sl_center] -
data[ftype, "velocity_z"][sl_left,sl_center,sl_center]) \
/ (div_fac*just_one(data["index", "dx"]))
- new_field = data.pf.arr(np.zeros_like(data[ftype, "velocity_z"],
+ new_field = data.ds.arr(np.zeros_like(data[ftype, "velocity_z"],
dtype=np.float64),
f.units)
new_field[sl_center, sl_center, sl_center] = f
@@ -108,7 +108,7 @@
f -= (data[ftype, "velocity_x"][sl_center,sl_right,sl_center] -
data[ftype, "velocity_x"][sl_center,sl_left,sl_center]) \
/ (div_fac*just_one(data["index", "dy"]))
- new_field = data.pf.arr(np.zeros_like(data[ftype, "velocity_z"],
+ new_field = data.ds.arr(np.zeros_like(data[ftype, "velocity_z"],
dtype=np.float64),
f.units)
new_field[sl_center, sl_center, sl_center] = f
@@ -168,7 +168,7 @@
result = np.sqrt(data[ftype, "vorticity_growth_x"]**2 +
data[ftype, "vorticity_growth_y"]**2 +
data[ftype, "vorticity_growth_z"]**2)
- dot = data.pf.arr(np.zeros(result.shape), "")
+ dot = data.ds.arr(np.zeros(result.shape), "")
for ax in "xyz":
dot += (data[ftype, "vorticity_%s" % ax] *
data[ftype, "vorticity_growth_%s" % ax]).to_ndarray()
@@ -265,7 +265,7 @@
result = np.sqrt(data[ftype, "vorticity_radiation_pressure_growth_x"]**2 +
data[ftype, "vorticity_radiation_pressure_growth_y"]**2 +
data[ftype, "vorticity_radiation_pressure_growth_z"]**2)
- dot = data.pf.arr(np.zeros(result.shape), "")
+ dot = data.ds.arr(np.zeros(result.shape), "")
for ax in "xyz":
dot += (data[ftype, "vorticity_%s" % ax] *
data[ftype, "vorticity_growth_%s" % ax]).to_ndarray()
@@ -312,7 +312,7 @@
of subtracting them)
"""
- if data.pf.dimensionality > 1:
+ if data.ds.dimensionality > 1:
dvydx = (data[ftype, "velocity_y"][sl_right,sl_center,sl_center] -
data[ftype, "velocity_y"][sl_left,sl_center,sl_center]) \
/ (div_fac*just_one(data["index", "dx"]))
@@ -321,7 +321,7 @@
/ (div_fac*just_one(data["index", "dy"]))
f = (dvydx + dvxdy)**2.0
del dvydx, dvxdy
- if data.pf.dimensionality > 2:
+ if data.ds.dimensionality > 2:
dvzdy = (data[ftype, "velocity_z"][sl_center,sl_right,sl_center] -
data[ftype, "velocity_z"][sl_center,sl_left,sl_center]) \
/ (div_fac*just_one(data["index", "dy"]))
@@ -339,7 +339,7 @@
f += (dvxdz + dvzdx)**2.0
del dvxdz, dvzdx
np.sqrt(f, out=f)
- new_field = data.pf.arr(np.zeros_like(data[ftype, "velocity_x"]), f.units)
+ new_field = data.ds.arr(np.zeros_like(data[ftype, "velocity_x"]), f.units)
new_field[sl_center, sl_center, sl_center] = f
return new_field
@@ -383,7 +383,7 @@
(dvx + dvz)^2 ]^(0.5) / c_sound
"""
- if data.pf.dimensionality > 1:
+ if data.ds.dimensionality > 1:
dvydx = (data[ftype, "velocity_y"][sl_right,sl_center,sl_center] -
data[ftype, "velocity_y"][sl_left,sl_center,sl_center]) \
/ div_fac
@@ -392,7 +392,7 @@
/ div_fac
f = (dvydx + dvxdy)**2.0
del dvydx, dvxdy
- if data.pf.dimensionality > 2:
+ if data.ds.dimensionality > 2:
dvzdy = (data[ftype, "velocity_z"][sl_center,sl_right,sl_center] -
data[ftype, "velocity_z"][sl_center,sl_left,sl_center]) \
/ div_fac
@@ -412,7 +412,7 @@
f *= (2.0**data["index", "grid_level"][sl_center, sl_center, sl_center] /
data[ftype, "sound_speed"][sl_center, sl_center, sl_center])**2.0
np.sqrt(f, out=f)
- new_field = data.pf.arr(np.zeros_like(data[ftype, "velocity_x"]), f.units)
+ new_field = data.ds.arr(np.zeros_like(data[ftype, "velocity_x"]), f.units)
new_field[sl_center, sl_center, sl_center] = f
return new_field
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/geometric_fields.py
--- a/yt/fields/geometric_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/geometric_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -93,7 +93,7 @@
### spherical coordinates: r (radius)
def _spherical_r(field, data):
center = data.get_field_parameter("center")
- coords = data.pf.arr(obtain_rvec(data), "code_length")
+ coords = data.ds.arr(obtain_rvec(data), "code_length")
coords[0,...] -= center[0]
coords[1,...] -= center[1]
coords[2,...] -= center[2]
@@ -142,7 +142,7 @@
coords[0,...] -= center[0]
coords[1,...] -= center[1]
coords[2,...] -= center[2]
- return data.pf.arr(get_cyl_r(coords, normal), "code_length").in_cgs()
+ return data.ds.arr(get_cyl_r(coords, normal), "code_length").in_cgs()
registry.add_field(("index", "cylindrical_r"),
function=_cylindrical_r,
@@ -154,7 +154,7 @@
def _cylindrical_z(field, data):
center = data.get_field_parameter("center")
normal = data.get_field_parameter("normal")
- coords = data.pf.arr(obtain_rvec(data), "code_length")
+ coords = data.ds.arr(obtain_rvec(data), "code_length")
coords[0,...] -= center[0]
coords[1,...] -= center[1]
coords[2,...] -= center[2]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/magnetic_field.py
--- a/yt/fields/magnetic_field.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/magnetic_field.py Sun Jun 15 19:50:51 2014 -0700
@@ -77,7 +77,7 @@
def _magnetic_field_poloidal(field,data):
normal = data.get_field_parameter("normal")
d = data[ftype,'magnetic_field_x']
- Bfields = data.pf.arr(
+ Bfields = data.ds.arr(
[data[ftype,'magnetic_field_x'],
data[ftype,'magnetic_field_y'],
data[ftype,'magnetic_field_z']],
@@ -96,7 +96,7 @@
def _magnetic_field_toroidal(field,data):
normal = data.get_field_parameter("normal")
d = data[ftype,'magnetic_field_x']
- Bfields = data.pf.arr(
+ Bfields = data.ds.arr(
[data[ftype,'magnetic_field_x'],
data[ftype,'magnetic_field_y'],
data[ftype,'magnetic_field_z']],
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/particle_fields.py
--- a/yt/fields/particle_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/particle_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -47,10 +47,10 @@
def _field_concat(fname):
def _AllFields(field, data):
v = []
- for ptype in data.pf.particle_types:
- data.pf._last_freq = (ptype, None)
+ for ptype in data.ds.particle_types:
+ data.ds._last_freq = (ptype, None)
if ptype == "all" or \
- ptype in data.pf.known_filters:
+ ptype in data.ds.known_filters:
continue
v.append(data[ptype, fname].copy())
rv = uconcatenate(v, axis=0)
@@ -60,10 +60,10 @@
def _field_concat_slice(fname, axi):
def _AllFields(field, data):
v = []
- for ptype in data.pf.particle_types:
- data.pf._last_freq = (ptype, None)
+ for ptype in data.ds.particle_types:
+ data.ds._last_freq = (ptype, None)
if ptype == "all" or \
- ptype in data.pf.known_filters:
+ ptype in data.ds.known_filters:
continue
v.append(data[ptype, fname][:,axi])
rv = uconcatenate(v, axis=0)
@@ -75,7 +75,7 @@
def particle_count(field, data):
pos = data[ptype, coord_name]
d = data.deposit(pos, method = "count")
- d = data.pf.arr(d, input_units = "cm**-3")
+ d = data.ds.arr(d, input_units = "cm**-3")
return data.apply_units(d, field.units)
registry.add_field(("deposit", "%s_count" % ptype),
@@ -101,7 +101,7 @@
pos.convert_to_units("code_length")
mass.convert_to_units("code_mass")
d = data.deposit(pos, [data[ptype, mass_name]], method = "sum")
- d = data.pf.arr(d, "code_mass")
+ d = data.ds.arr(d, "code_mass")
d /= data["index", "cell_volume"]
return d
@@ -466,7 +466,7 @@
top[bottom == 0] = 0.0
bnz = bottom.nonzero()
top[bnz] /= bottom[bnz]
- d = data.pf.arr(top, input_units = units)
+ d = data.ds.arr(top, input_units = units)
return top
for ax in 'xyz':
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/species_fields.py
--- a/yt/fields/species_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/species_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -135,8 +135,8 @@
def _nuclei_density(field, data):
element = field.name[1][:field.name[1].find("_")]
field_data = np.zeros_like(data["gas", "%s_number_density" %
- data.pf.field_info.species_names[0]])
- for species in data.pf.field_info.species_names:
+ data.ds.field_info.species_names[0]])
+ for species in data.ds.field_info.species_names:
nucleus = species
if "_" in species:
nucleus = species[:species.find("_")]
@@ -166,7 +166,7 @@
def setup_species_fields(registry, ftype = "gas", slice_info = None):
# We have to check what type of field this is -- if it's particles, then we
# set particle_type to True.
- particle_type = ftype not in registry.pf.fluid_types
+ particle_type = ftype not in registry.ds.fluid_types
for species in registry.species_names:
# These are all the species we should be looking for fractions or
# densities of.
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/tests/test_fields.py
--- a/yt/fields/tests/test_fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/tests/test_fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -9,36 +9,36 @@
from yt.units.yt_array import YTArray
def setup():
- global base_pf
+ global base_ds
# Make this super teeny tiny
fields, units = [], []
for fname, (code_units, aliases, dn) in StreamFieldInfo.known_other_fields:
fields.append(("gas", fname))
units.append(code_units)
- base_pf = fake_random_pf(4, fields = fields, units = units)
- base_pf.h
- base_pf.cosmological_simulation = 1
- base_pf.cosmology = Cosmology()
+ base_ds = fake_random_ds(4, fields = fields, units = units)
+ base_ds.index
+ base_ds.cosmological_simulation = 1
+ base_ds.cosmology = Cosmology()
from yt.config import ytcfg
ytcfg["yt","__withintesting"] = "True"
np.seterr(all = 'ignore')
-def get_params(pf):
+def get_params(ds):
return dict(
axis = 0,
center = YTArray((0.0, 0.0, 0.0), "cm",
- registry = pf.unit_registry),
+ registry = ds.unit_registry),
bulk_velocity = YTArray((0.0, 0.0, 0.0),
- "cm/s", registry = pf.unit_registry),
+ "cm/s", registry = ds.unit_registry),
normal = YTArray((0.0, 0.0, 1.0),
- "", registry = pf.unit_registry),
+ "", registry = ds.unit_registry),
cp_x_vec = YTArray((1.0, 0.0, 0.0),
- "", registry = pf.unit_registry),
+ "", registry = ds.unit_registry),
cp_y_vec = YTArray((0.0, 1.0, 0.0),
- "", registry = pf.unit_registry),
+ "", registry = ds.unit_registry),
cp_z_vec = YTArray((0.0, 0.0, 1.0),
- "", registry = pf.unit_registry),
+ "", registry = ds.unit_registry),
omega_baryon = 0.04,
observer_redshift = 0.0,
source_redshift = 3.0,
@@ -49,28 +49,28 @@
("gas", "velocity_y"),
("gas", "velocity_z"))
-def realistic_pf(fields, nprocs):
+def realistic_ds(fields, nprocs):
np.random.seed(int(0x4d3d3d3))
- units = [base_pf._get_field_info(*f).units for f in fields]
+ units = [base_ds._get_field_info(*f).units for f in fields]
fields = [_strip_ftype(f) for f in fields]
- pf = fake_random_pf(16, fields = fields, units = units,
+ ds = fake_random_ds(16, fields = fields, units = units,
nprocs = nprocs)
- pf.parameters["HydroMethod"] = "streaming"
- pf.parameters["EOSType"] = 1.0
- pf.parameters["EOSSoundSpeed"] = 1.0
- pf.conversion_factors["Time"] = 1.0
- pf.conversion_factors.update( dict((f, 1.0) for f in fields) )
- pf.gamma = 5.0/3.0
- pf.current_redshift = 0.0001
- pf.cosmological_simulation = 1
- pf.hubble_constant = 0.7
- pf.omega_matter = 0.27
- pf.omega_lambda = 0.73
- pf.cosmology = Cosmology(hubble_constant=pf.hubble_constant,
- omega_matter=pf.omega_matter,
- omega_lambda=pf.omega_lambda,
- unit_registry=pf.unit_registry)
- return pf
+ ds.parameters["HydroMethod"] = "streaming"
+ ds.parameters["EOSType"] = 1.0
+ ds.parameters["EOSSoundSpeed"] = 1.0
+ ds.conversion_factors["Time"] = 1.0
+ ds.conversion_factors.update( dict((f, 1.0) for f in fields) )
+ ds.gamma = 5.0/3.0
+ ds.current_redshift = 0.0001
+ ds.cosmological_simulation = 1
+ ds.hubble_constant = 0.7
+ ds.omega_matter = 0.27
+ ds.omega_lambda = 0.73
+ ds.cosmology = Cosmology(hubble_constant=ds.hubble_constant,
+ omega_matter=ds.omega_matter,
+ omega_lambda=ds.omega_lambda,
+ unit_registry=ds.unit_registry)
+ return ds
def _strip_ftype(field):
if not isinstance(field, tuple):
@@ -103,12 +103,12 @@
self.nproc = nproc
def __call__(self):
- if self.field_name in base_pf.field_list:
+ if self.field_name in base_ds.field_list:
# Don't know how to test this. We need some way of having fields
# that are fallbacks be tested, but we don't have that now.
return
- field = base_pf._get_field_info(*self.field_name)
- deps = field.get_dependencies(pf = base_pf)
+ field = base_ds._get_field_info(*self.field_name)
+ deps = field.get_dependencies(ds = base_ds)
fields = deps.requested + list(_base_fields)
skip_grids = False
needs_spatial = False
@@ -119,11 +119,11 @@
skip_grids = True
if hasattr(v, "ghost_zones"):
needs_spatial = True
- pf = realistic_pf(fields, self.nproc)
+ ds = realistic_ds(fields, self.nproc)
# This gives unequal sized grids as well as subgrids
- dd1 = pf.h.all_data()
- dd2 = pf.h.all_data()
- sp = get_params(pf)
+ dd1 = ds.all_data()
+ dd2 = ds.all_data()
+ sp = get_params(ds)
dd1.field_parameters.update(sp)
dd2.field_parameters.update(sp)
v1 = dd1[self.field_name]
@@ -136,7 +136,7 @@
res = dd2.apply_units(res, field.units)
assert_array_almost_equal_nulp(v1, res, 4)
if not skip_grids:
- for g in pf.index.grids:
+ for g in ds.index.grids:
g.field_parameters.update(sp)
v1 = g[self.field_name]
g.clear_data()
@@ -152,10 +152,10 @@
assert_array_almost_equal_nulp(v1, res, 4)
def test_all_fields():
- for field in sorted(base_pf.field_info):
+ for field in sorted(base_ds.field_info):
if not isinstance(field, types.TupleType):
field = ("unknown", field)
- finfo = base_pf._get_field_info(*field)
+ finfo = base_ds._get_field_info(*field)
if isinstance(field, types.TupleType):
fname = field[0]
else:
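
As a rough illustration of the renamed test helpers touched above, here is a minimal sketch of driving fake_random_ds by hand, assuming the yt-3.0 yt.testing module; the field name, grid size, and nprocs value below are arbitrary:

from yt.testing import fake_random_ds

# build a small throwaway stream dataset, as the test setup above does
ds = fake_random_ds(16, fields=("density",), units=("g/cm**3",), nprocs=4)
ds.index                      # instantiate the index (formerly spelled pf.h)
ad = ds.all_data()
print(ds.field_list)          # fields detected on the stream dataset
print(ad["density"].units)    # selections come back as unit-aware YTArrays
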
diff -r f20d58ca2848 -r 67507b4f8da9 yt/fields/vector_operations.py
--- a/yt/fields/vector_operations.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/fields/vector_operations.py Sun Jun 15 19:50:51 2014 -0700
@@ -46,9 +46,9 @@
xn, yn, zn = [(ftype, "%s_%s" % (basename, ax)) for ax in 'xyz']
# Is this safe?
- if registry.pf.dimensionality < 3:
+ if registry.ds.dimensionality < 3:
zn = ("index", "zeros")
- if registry.pf.dimensionality < 2:
+ if registry.ds.dimensionality < 2:
yn = ("index", "zeros")
def _magnitude(field, data):
@@ -68,9 +68,9 @@
xn, yn, zn = [(ftype, "%s_%s" % (basename, ax)) for ax in 'xyz']
# Is this safe?
- if registry.pf.dimensionality < 3:
+ if registry.ds.dimensionality < 3:
zn = ("index", "zeros")
- if registry.pf.dimensionality < 2:
+ if registry.ds.dimensionality < 2:
yn = ("index", "zeros")
def _squared(field, data):
@@ -102,9 +102,9 @@
xn, yn, zn = [(ftype, "%s_%s" % (basename, ax)) for ax in 'xyz']
# Is this safe?
- if registry.pf.dimensionality < 3:
+ if registry.ds.dimensionality < 3:
zn = ("index", "zeros")
- if registry.pf.dimensionality < 2:
+ if registry.ds.dimensionality < 2:
yn = ("index", "zeros")
create_magnitude_field(registry, basename, field_units,
@@ -158,7 +158,7 @@
ds = div_fac * just_one(data["index", "dz"])
f += data[zn][1:-1,1:-1,sl_right]/ds
f -= data[zn][1:-1,1:-1,sl_left ]/ds
- new_field = data.pf.arr(np.zeros(data[xn].shape, dtype=np.float64),
+ new_field = data.ds.arr(np.zeros(data[xn].shape, dtype=np.float64),
f.units)
new_field[1:-1,1:-1,1:-1] = f
return new_field
@@ -231,10 +231,10 @@
def _averaged_field(field, data):
nx, ny, nz = data[(ftype, basename)].shape
- new_field = data.pf.arr(np.zeros((nx-2, ny-2, nz-2), dtype=np.float64),
+ new_field = data.ds.arr(np.zeros((nx-2, ny-2, nz-2), dtype=np.float64),
(just_one(data[(ftype, basename)]) *
just_one(data[(ftype, weight)])).units)
- weight_field = data.pf.arr(np.zeros((nx-2, ny-2, nz-2), dtype=np.float64),
+ weight_field = data.ds.arr(np.zeros((nx-2, ny-2, nz-2), dtype=np.float64),
data[(ftype, weight)].units)
i_i, j_i, k_i = np.mgrid[0:3, 0:3, 0:3]
@@ -245,7 +245,7 @@
weight_field += data[(ftype, weight)][sl]
# Now some fancy footwork
- new_field2 = data.pf.arr(np.zeros((nx, ny, nz)),
+ new_field2 = data.ds.arr(np.zeros((nx, ny, nz)),
data[(ftype, basename)].units)
new_field2[1:-1, 1:-1, 1:-1] = new_field / weight_field
return new_field2
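
The magnitude and averaging helpers modified above surface as ordinary derived fields; a minimal usage sketch, assuming fake_random_ds's default gas velocity fields are present:

from yt.testing import fake_random_ds

ds = fake_random_ds(8)                     # defaults include gas velocity fields
ad = ds.all_data()
vmag = ad["velocity_magnitude"]            # sqrt(vx**2 + vy**2 + vz**2)
assert vmag.shape == ad["velocity_x"].shape
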
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/_skeleton/data_structures.py
--- a/yt/frontends/_skeleton/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/_skeleton/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -45,13 +45,13 @@
grid = SkeletonGrid
- def __init__(self, pf, dataset_type='skeleton'):
+ def __init__(self, ds, dataset_type='skeleton'):
self.dataset_type = dataset_type
- self.parameter_file = weakref.proxy(pf)
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.parameter_filename
+ self.dataset = weakref.proxy(ds)
+ # for now, the index file is the dataset!
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.path.dirname(self.index_filename)
- AMRHierarchy.__init__(self, pf, dataset_type)
+ AMRHierarchy.__init__(self, ds, dataset_type)
def _initialize_data_storage(self):
pass
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/_skeleton/fields.py
--- a/yt/frontends/_skeleton/fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/_skeleton/fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -33,12 +33,12 @@
# ( "name", ("units", ["fields", "to", "alias"], # "display_name")),
)
- def __init__(self, pf):
- super(SkeletonFieldInfo, self).__init__(pf)
+ def __init__(self, ds):
+ super(SkeletonFieldInfo, self).__init__(ds)
# If you want, you can check self.field_list
def setup_fluid_fields(self):
- # Here we do anything that might need info about the parameter file.
+ # Here we do anything that might need info about the dataset.
# You can use self.alias, self.add_output_field and self.add_field .
pass
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/art/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -58,22 +58,22 @@
class ARTIndex(OctreeIndex):
- def __init__(self, pf, dataset_type="art"):
+ def __init__(self, ds, dataset_type="art"):
self.fluid_field_list = fluid_fields
self.dataset_type = dataset_type
- self.parameter_file = weakref.proxy(pf)
- self.index_filename = self.parameter_file.parameter_filename
+ self.dataset = weakref.proxy(ds)
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.path.dirname(self.index_filename)
- self.max_level = pf.max_level
+ self.max_level = ds.max_level
self.float_type = np.float64
- super(ARTIndex, self).__init__(pf, dataset_type)
+ super(ARTIndex, self).__init__(ds, dataset_type)
def get_smallest_dx(self):
"""
Returns (in code units) the smallest cell size in the simulation.
"""
# Overloaded
- pf = self.parameter_file
+ ds = self.dataset
return (1.0/pf.domain_dimensions.astype('f8') /
(2**self.max_level)).min()
@@ -84,12 +84,12 @@
"""
nv = len(self.fluid_field_list)
self.oct_handler = ARTOctreeContainer(
- self.parameter_file.domain_dimensions/2, # dd is # of root cells
- self.parameter_file.domain_left_edge,
- self.parameter_file.domain_right_edge,
+ self.dataset.domain_dimensions/2, # dd is # of root cells
+ self.dataset.domain_left_edge,
+ self.dataset.domain_right_edge,
1)
# The 1 here refers to domain_id == 1 always for ARTIO.
- self.domains = [ARTDomainFile(self.parameter_file, nv,
+ self.domains = [ARTDomainFile(self.dataset, nv,
self.oct_handler, 1)]
self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
self.total_octs = sum(self.octs_per_domain)
@@ -104,17 +104,17 @@
self.particle_field_list = [f for f in particle_fields]
self.field_list = [("gas", f) for f in fluid_fields]
# now generate all of the possible particle fields
- if "wspecies" in self.parameter_file.parameters.keys():
- wspecies = self.parameter_file.parameters['wspecies']
+ if "wspecies" in self.dataset.parameters.keys():
+ wspecies = self.dataset.parameters['wspecies']
nspecies = len(wspecies)
- self.parameter_file.particle_types = ["darkmatter", "stars"]
+ self.dataset.particle_types = ["darkmatter", "stars"]
for specie in range(nspecies):
- self.parameter_file.particle_types.append("specie%i" % specie)
- self.parameter_file.particle_types_raw = tuple(
- self.parameter_file.particle_types)
+ self.dataset.particle_types.append("specie%i" % specie)
+ self.dataset.particle_types_raw = tuple(
+ self.dataset.particle_types)
else:
- self.parameter_file.particle_types = []
- for ptype in self.parameter_file.particle_types:
+ self.dataset.particle_types = []
+ for ptype in self.dataset.particle_types:
for pfield in self.particle_field_list:
pfn = (ptype, pfield)
self.field_list.append(pfn)
@@ -132,7 +132,7 @@
base_region = getattr(dobj, "base_region", dobj)
if len(domains) > 1:
mylog.debug("Identified %s intersecting domains", len(domains))
- subsets = [ARTDomainSubset(base_region, domain, self.parameter_file)
+ subsets = [ARTDomainSubset(base_region, domain, self.dataset)
for domain in domains]
dobj._chunk_info = subsets
dobj._current_chunk = list(self._chunk_all(dobj))[0]
@@ -420,7 +420,7 @@
        the order they are in the octhandler.
"""
oct_handler = self.oct_handler
- all_fields = self.domain.pf.index.fluid_field_list
+ all_fields = self.domain.ds.index.fluid_field_list
fields = [f for ft, f in ftfields]
field_idxs = [all_fields.index(f) for f in fields]
source, tr = {}, {}
@@ -431,10 +431,10 @@
tr[field] = np.zeros(cell_count, 'float64')
data = _read_root_level(content, self.domain.level_child_offsets,
self.domain.level_count)
- ns = (self.domain.pf.domain_dimensions.prod() / 8, 8)
+ ns = (self.domain.ds.domain_dimensions.prod() / 8, 8)
for field, fi in zip(fields, field_idxs):
source[field] = np.empty(ns, dtype="float64", order="C")
- dt = data[fi,:].reshape(self.domain.pf.domain_dimensions,
+ dt = data[fi,:].reshape(self.domain.ds.domain_dimensions,
order="F")
for i in range(2):
for j in range(2):
@@ -446,15 +446,15 @@
oct_handler.fill_level(0, levels, cell_inds, file_inds, tr, source)
del source
# Now we continue with the additional levels.
- for level in range(1, self.pf.max_level + 1):
+ for level in range(1, self.ds.max_level + 1):
no = self.domain.level_count[level]
noct_range = [0, no]
source = _read_child_level(
content, self.domain.level_child_offsets,
self.domain.level_offsets,
self.domain.level_count, level, fields,
- self.domain.pf.domain_dimensions,
- self.domain.pf.parameters['ncell0'],
+ self.domain.ds.domain_dimensions,
+ self.domain.ds.parameters['ncell0'],
noct_range=noct_range)
oct_handler.fill_level(level, levels, cell_inds, file_inds, tr,
source)
@@ -470,9 +470,9 @@
_last_mask = None
_last_seletor_id = None
- def __init__(self, pf, nvar, oct_handler, domain_id):
+ def __init__(self, ds, nvar, oct_handler, domain_id):
self.nvar = nvar
- self.pf = pf
+ self.ds = ds
self.domain_id = domain_id
self._level_count = None
self._level_oct_offsets = None
@@ -502,13 +502,13 @@
if self._level_oct_offsets is not None:
return self._level_oct_offsets
# We now have to open the file and calculate it
- f = open(self.pf._file_amr, "rb")
+ f = open(self.ds._file_amr, "rb")
nhydrovars, inoll, _level_oct_offsets, _level_child_offsets = \
- self._count_art_octs(f, self.pf.child_grid_offset,
- self.pf.min_level, self.pf.max_level)
+ self._count_art_octs(f, self.ds.child_grid_offset,
+ self.ds.min_level, self.ds.max_level)
# remember that the root grid is by itself; manually add it back in
- inoll[0] = self.pf.domain_dimensions.prod()/8
- _level_child_offsets[0] = self.pf.root_grid_offset
+ inoll[0] = self.ds.domain_dimensions.prod()/8
+ _level_child_offsets[0] = self.ds.root_grid_offset
self.nhydrovars = nhydrovars
self.inoll = inoll # number of octs
self._level_oct_offsets = _level_oct_offsets
@@ -563,13 +563,13 @@
oct_handler.add
"""
self.level_offsets
- f = open(self.pf._file_amr, "rb")
- for level in range(1, self.pf.max_level + 1):
+ f = open(self.ds._file_amr, "rb")
+ for level in range(1, self.ds.max_level + 1):
unitary_center, fl, iocts, nocts, root_level = \
_read_art_level_info( f,
self._level_oct_offsets, level,
- coarse_grid=self.pf.domain_dimensions[0],
- root_level=self.pf.root_level)
+ coarse_grid=self.ds.domain_dimensions[0],
+ root_level=self.ds.root_level)
nocts_check = oct_handler.add(self.domain_id, level,
unitary_center)
assert(nocts_check == nocts)
@@ -578,9 +578,9 @@
def _read_amr_root(self, oct_handler):
self.level_offsets
- f = open(self.pf._file_amr, "rb")
+ f = open(self.ds._file_amr, "rb")
# add the root *cell* not *oct* mesh
- root_octs_side = self.pf.domain_dimensions[0]/2
+ root_octs_side = self.ds.domain_dimensions[0]/2
NX = np.ones(3)*root_octs_side
octs_side = NX*2 # Level == 0
LE = np.array([0.0, 0.0, 0.0], dtype='float64')
@@ -602,5 +602,5 @@
return True
if getattr(selector, "domain_id", None) is not None:
return selector.domain_id == self.domain_id
- domain_ids = self.pf.index.oct_handler.domain_identify(selector)
+ domain_ids = self.ds.index.oct_handler.domain_identify(selector)
return self.domain_id in domain_ids
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/art/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -49,7 +49,7 @@
for chunk in chunks:
for subset in chunk.objs:
# Now we read the entire thing
- f = open(subset.domain.pf._file_amr, "rb")
+ f = open(subset.domain.ds._file_amr, "rb")
# This contains the boundary information, so we skim through
# and pick off the right vectors
rv = subset.fill(f, fields, selector)
@@ -69,9 +69,9 @@
key = (selector, ftype)
if key in self.masks.keys() and self.caching:
return self.masks[key]
- pf = self.pf
+ ds = self.ds
ptmax = self.ws[-1]
- pbool, idxa, idxb = _determine_field_size(pf, ftype, self.ls, ptmax)
+ pbool, idxa, idxb = _determine_field_size(ds, ftype, self.ls, ptmax)
pstr = 'particle_position_%s'
x,y,z = [self._get_field((ftype, pstr % ax)) for ax in 'xyz']
mask = selector.select_points(x, y, z, 0.0)
@@ -89,7 +89,7 @@
tr = {}
ftype, fname = field
ptmax = self.ws[-1]
- pbool, idxa, idxb = _determine_field_size(self.pf, ftype,
+ pbool, idxa, idxb = _determine_field_size(self.ds, ftype,
self.ls, ptmax)
npa = idxb - idxa
sizes = np.diff(np.concatenate(([0], self.ls)))
@@ -98,7 +98,7 @@
idxb=idxb, fields=ax)
for i, ax in enumerate('xyz'):
if fname.startswith("particle_position_%s" % ax):
- dd = self.pf.domain_dimensions[0]
+ dd = self.ds.domain_dimensions[0]
off = 1.0/dd
tr[field] = rp([ax])[0]/dd - off
if fname.startswith("particle_velocity_%s" % ax):
@@ -134,7 +134,7 @@
self.file_stars,
self.tb,
self.ages,
- self.pf.current_time)
+ self.ds.current_time)
temp = tr.get(field, np.zeros(npa, 'f8'))
temp[-nstars:] = data
tr[field] = temp
@@ -149,12 +149,12 @@
def _read_particle_selection(self, chunks, selector, fields):
chunk = chunks.next()
- self.pf = chunk.objs[0].domain.pf
- self.ws = self.pf.parameters["wspecies"]
- self.ls = self.pf.parameters["lspecies"]
- self.file_particle = self.pf._file_particle_data
- self.file_stars = self.pf._file_particle_stars
- self.Nrow = self.pf.parameters["Nrow"]
+ self.ds = chunk.objs[0].domain.ds
+ self.ws = self.ds.parameters["wspecies"]
+ self.ls = self.ds.parameters["lspecies"]
+ self.file_particle = self.ds._file_particle_data
+ self.file_stars = self.ds._file_particle_stars
+ self.Nrow = self.ds.parameters["Nrow"]
data = {f:np.array([]) for f in fields}
for f in fields:
ftype, fname = f
@@ -163,7 +163,7 @@
data[f] = np.concatenate((arr, data[f]))
return data
-def _determine_field_size(pf, field, lspecies, ptmax):
+def _determine_field_size(ds, field, lspecies, ptmax):
pbool = np.zeros(len(lspecies), dtype="bool")
idxas = np.concatenate(([0, ], lspecies[:-1]))
idxbs = lspecies
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/art/tests/test_outputs.py
--- a/yt/frontends/art/tests/test_outputs.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/art/tests/test_outputs.py Sun Jun 15 19:50:51 2014 -0700
@@ -16,7 +16,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
small_patch_amr, \
big_patch_amr, \
data_dir_load
@@ -26,10 +26,10 @@
d9p = "D9p_500/10MpcBox_HartGal_csf_a0.500.d"
-@requires_pf(d9p, big_data=True)
+@requires_ds(d9p, big_data=True)
def test_d9p():
- pf = data_dir_load(d9p)
- yield assert_equal, str(pf), "10MpcBox_HartGal_csf_a0.500.d"
+ ds = data_dir_load(d9p)
+ yield assert_equal, str(ds), "10MpcBox_HartGal_csf_a0.500.d"
for test in big_patch_amr(d9p, _fields):
test_d9p.__name__ = test.description
yield test
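
The same renamed answer-testing pattern, sketched for a hypothetical dataset path "MyRun/output_0001" (the path and field names below are placeholders, not real test data):

from yt.testing import assert_equal
from yt.utilities.answer_testing.framework import \
    requires_ds, data_dir_load, small_patch_amr

my_run = "MyRun/output_0001"               # placeholder dataset path
_my_fields = ("density", "velocity_magnitude")

@requires_ds(my_run)
def test_my_run():
    ds = data_dir_load(my_run)
    yield assert_equal, str(ds), "output_0001"
    for test in small_patch_amr(my_run, _my_fields):
        test_my_run.__name__ = test.description
        yield test
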
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/artio/data_structures.py
--- a/yt/frontends/artio/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/artio/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -48,21 +48,21 @@
class ARTIOOctreeSubset(OctreeSubset):
_domain_offset = 0
domain_id = -1
- _con_args = ("base_region", "sfc_start", "sfc_end", "oct_handler", "pf")
+ _con_args = ("base_region", "sfc_start", "sfc_end", "oct_handler", "ds")
_type_name = 'octree_subset'
_num_zones = 2
- def __init__(self, base_region, sfc_start, sfc_end, oct_handler, pf):
+ def __init__(self, base_region, sfc_start, sfc_end, oct_handler, ds):
self.field_data = YTFieldData()
self.field_parameters = {}
self.sfc_start = sfc_start
self.sfc_end = sfc_end
self.oct_handler = oct_handler
- self.pf = pf
+ self.ds = ds
self._last_mask = None
self._last_selector_id = None
self._current_particle_type = 'all'
- self._current_fluid_type = self.pf.default_fluid_type
+ self._current_fluid_type = self.ds.default_fluid_type
self.base_region = base_region
self.base_selector = base_region.selector
@@ -93,7 +93,7 @@
def fill_particles(self, fields):
if len(fields) == 0: return {}
- ptype_indices = self.pf.particle_types
+ ptype_indices = self.ds.particle_types
art_fields = [(ptype_indices.index(ptype), fname) for
ptype, fname in fields]
species_data = self.oct_handler.fill_sfc_particles(art_fields)
@@ -126,7 +126,7 @@
def fill(self, fields, selector):
# We know how big these will be.
if len(fields) == 0: return []
- handle = self.pf._handle
+ handle = self.ds._handle
field_indices = [handle.parameters["grid_variable_labels"].index(f)
for (ft, f) in fields]
tr = self.oct_handler.fill_sfc(selector, field_indices)
@@ -155,20 +155,20 @@
class ARTIOIndex(Index):
- def __init__(self, pf, dataset_type='artio'):
+ def __init__(self, ds, dataset_type='artio'):
self.dataset_type = dataset_type
- self.parameter_file = weakref.proxy(pf)
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.parameter_filename
+ self.dataset = weakref.proxy(ds)
+ # for now, the index file is the dataset!
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.path.dirname(self.index_filename)
- self.max_level = pf.max_level
+ self.max_level = ds.max_level
self.float_type = np.float64
- super(ARTIOIndex, self).__init__(pf, dataset_type)
+ super(ARTIOIndex, self).__init__(ds, dataset_type)
@property
def max_range(self):
- return self.parameter_file.max_range
+ return self.dataset.max_range
def _setup_geometry(self):
mylog.debug("Initializing Geometry Handler empty for now.")
@@ -180,7 +180,7 @@
return 1.0/(2**self.max_level)
def convert(self, unit):
- return self.parameter_file.conversion_factors[unit]
+ return self.dataset.conversion_factors[unit]
def find_max(self, field, finest_levels=3):
"""
@@ -201,8 +201,8 @@
source.quantities["MaxLocation"](field)
mylog.info("Max Value is %0.5e at %0.16f %0.16f %0.16f",
max_val, mx, my, mz)
- self.pf.parameters["Max%sValue" % (field)] = max_val
- self.pf.parameters["Max%sPos" % (field)] = "%s" % ((mx, my, mz),)
+ self.ds.parameters["Max%sValue" % (field)] = max_val
+ self.ds.parameters["Max%sPos" % (field)] = "%s" % ((mx, my, mz),)
return max_val, np.array((mx, my, mz), dtype='float64')
def _detect_output_fields(self):
@@ -213,21 +213,21 @@
def _detect_fluid_fields(self):
return [("artio", f) for f in
- self.pf.artio_parameters["grid_variable_labels"]]
+ self.ds.artio_parameters["grid_variable_labels"]]
def _detect_particle_fields(self):
fields = set()
- for i, ptype in enumerate(self.pf.particle_types):
+ for i, ptype in enumerate(self.ds.particle_types):
if ptype == "all": break # This will always be after all intrinsic
- for fname in self.pf.particle_variables[i]:
+ for fname in self.ds.particle_variables[i]:
fields.add((ptype, fname))
return list(fields)
def _identify_base_chunk(self, dobj):
if getattr(dobj, "_chunk_info", None) is None:
try:
- all_data = all(dobj.left_edge == self.pf.domain_left_edge) and\
- all(dobj.right_edge == self.pf.domain_right_edge)
+ all_data = all(dobj.left_edge == self.ds.domain_left_edge) and\
+ all(dobj.right_edge == self.ds.domain_right_edge)
except:
all_data = False
base_region = getattr(dobj, "base_region", dobj)
@@ -236,30 +236,30 @@
nz = getattr(dobj, "_num_zones", 0)
if all_data:
mylog.debug("Selecting entire artio domain")
- list_sfc_ranges = self.pf._handle.root_sfc_ranges_all(
+ list_sfc_ranges = self.ds._handle.root_sfc_ranges_all(
max_range_size = self.max_range)
elif sfc_start is not None and sfc_end is not None:
mylog.debug("Restricting to %s .. %s", sfc_start, sfc_end)
list_sfc_ranges = [(sfc_start, sfc_end)]
else:
mylog.debug("Running selector on artio base grid")
- list_sfc_ranges = self.pf._handle.root_sfc_ranges(
+ list_sfc_ranges = self.ds._handle.root_sfc_ranges(
dobj.selector, max_range_size = self.max_range)
ci = []
#v = np.array(list_sfc_ranges)
#list_sfc_ranges = [ (v.min(), v.max()) ]
for (start, end) in list_sfc_ranges:
range_handler = ARTIOSFCRangeHandler(
- self.pf.domain_dimensions,
- self.pf.domain_left_edge, self.pf.domain_right_edge,
- self.pf._handle, start, end)
+ self.ds.domain_dimensions,
+ self.ds.domain_left_edge, self.ds.domain_right_edge,
+ self.ds._handle, start, end)
range_handler.construct_mesh()
if nz != 2:
ci.append(ARTIORootMeshSubset(base_region, start, end,
- range_handler.root_mesh_handler, self.pf))
+ range_handler.root_mesh_handler, self.ds))
if nz != 1 and range_handler.total_octs > 0:
ci.append(ARTIOOctreeSubset(base_region, start, end,
- range_handler.octree_handler, self.pf))
+ range_handler.octree_handler, self.ds))
dobj._chunk_info = ci
if len(list_sfc_ranges) > 1:
mylog.info("Created %d chunks for ARTIO" % len(list_sfc_ranges))
@@ -417,7 +417,7 @@
self.parameters['unit_t'] = self.artio_parameters["time_unit"][0]
self.parameters['unit_m'] = self.artio_parameters["mass_unit"][0]
- # hard coded assumption of 3D periodicity (add to parameter file)
+ # hard coded assumption of 3D periodicity (add to dataset)
self.periodicity = (True, True, True)
@classmethod
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/artio/tests/test_outputs.py
--- a/yt/frontends/artio/tests/test_outputs.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/artio/tests/test_outputs.py Sun Jun 15 19:50:51 2014 -0700
@@ -16,7 +16,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
data_dir_load, \
PixelizedProjectionValuesTest, \
FieldValuesTest, \
@@ -27,11 +27,11 @@
("deposit", "all_density"), ("deposit", "all_count"))
sizmbhloz = "sizmbhloz-clref04SNth-rs9_a0.9011/sizmbhloz-clref04SNth-rs9_a0.9011.art"
-@requires_pf(sizmbhloz)
+@requires_ds(sizmbhloz)
def test_sizmbhloz():
- pf = data_dir_load(sizmbhloz)
- pf.max_range = 1024*1024
- yield assert_equal, str(pf), "sizmbhloz-clref04SNth-rs9_a0.9011.art"
+ ds = data_dir_load(sizmbhloz)
+ ds.max_range = 1024*1024
+ yield assert_equal, str(ds), "sizmbhloz-clref04SNth-rs9_a0.9011.art"
dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
for ds in dso:
for field in _fields:
@@ -41,7 +41,7 @@
sizmbhloz, axis, field, weight_field,
ds)
yield FieldValuesTest(sizmbhloz, field, ds)
- dobj = create_obj(pf, ds)
+ dobj = create_obj(ds, ds)
s1 = dobj["ones"].sum()
s2 = sum(mask.sum() for block, mask in dobj.blocks)
yield assert_equal, s1, s2
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/athena/data_structures.py
--- a/yt/frontends/athena/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/athena/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -40,7 +40,7 @@
class AthenaGrid(AMRGridPatch):
_id_offset = 0
def __init__(self, id, index, level, start, dimensions):
- df = index.parameter_file.filename[4:-4]
+ df = index.dataset.filename[4:-4]
gname = index.grid_filenames[id]
AMRGridPatch.__init__(self, id, filename = gname,
index = index)
@@ -57,13 +57,13 @@
# that dx=dy=dz , at least here. We probably do elsewhere.
id = self.id - self._id_offset
if len(self.Parent) > 0:
- self.dds = self.Parent[0].dds / self.pf.refine_by
+ self.dds = self.Parent[0].dds / self.ds.refine_by
else:
LE, RE = self.index.grid_left_edge[id,:], \
self.index.grid_right_edge[id,:]
- self.dds = self.pf.arr((RE-LE)/self.ActiveDimensions, "code_length")
- if self.pf.dimensionality < 2: self.dds[1] = 1.0
- if self.pf.dimensionality < 3: self.dds[2] = 1.0
+ self.dds = self.ds.arr((RE-LE)/self.ActiveDimensions, "code_length")
+ if self.ds.dimensionality < 2: self.dds[1] = 1.0
+ if self.ds.dimensionality < 3: self.dds[2] = 1.0
self.field_data['dx'], self.field_data['dy'], self.field_data['dz'] = self.dds
def __repr__(self):
@@ -102,15 +102,15 @@
_dataset_type='athena'
_data_file = None
- def __init__(self, pf, dataset_type='athena'):
- self.parameter_file = weakref.proxy(pf)
- self.directory = os.path.dirname(self.parameter_file.filename)
+ def __init__(self, ds, dataset_type='athena'):
+ self.dataset = weakref.proxy(ds)
+ self.directory = os.path.dirname(self.dataset.filename)
self.dataset_type = dataset_type
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.filename
+ # for now, the index file is the dataset!
+ self.index_filename = self.dataset.filename
#self.directory = os.path.dirname(self.index_filename)
self._fhandle = file(self.index_filename,'rb')
- GridIndex.__init__(self, pf, dataset_type)
+ GridIndex.__init__(self, ds, dataset_type)
self._fhandle.close()
@@ -172,7 +172,7 @@
self._field_map = field_map
def _count_grids(self):
- self.num_grids = self.parameter_file.nvtk
+ self.num_grids = self.dataset.nvtk
def _parse_index(self):
f = open(self.index_filename,'rb')
@@ -271,48 +271,48 @@
# Now we convert the glis, which were left edges (floats), to indices
# from the domain left edge. Then we do a bunch of fixing now that we
# know the extent of all the grids.
- glis = np.round((glis - self.parameter_file.domain_left_edge.ndarray_view())/gdds).astype('int')
+ glis = np.round((glis - self.dataset.domain_left_edge.ndarray_view())/gdds).astype('int')
new_dre = np.max(gres,axis=0)
- self.parameter_file.domain_right_edge[:] = np.round(new_dre, decimals=12)[:]
- self.parameter_file.domain_width = \
- (self.parameter_file.domain_right_edge -
- self.parameter_file.domain_left_edge)
- self.parameter_file.domain_center = \
- 0.5*(self.parameter_file.domain_left_edge +
- self.parameter_file.domain_right_edge)
- self.parameter_file.domain_dimensions = \
- np.round(self.parameter_file.domain_width/gdds[0]).astype('int')
+ self.dataset.domain_right_edge[:] = np.round(new_dre, decimals=12)[:]
+ self.dataset.domain_width = \
+ (self.dataset.domain_right_edge -
+ self.dataset.domain_left_edge)
+ self.dataset.domain_center = \
+ 0.5*(self.dataset.domain_left_edge +
+ self.dataset.domain_right_edge)
+ self.dataset.domain_dimensions = \
+ np.round(self.dataset.domain_width/gdds[0]).astype('int')
- # Need to reset the units in the parameter file based on the correct
+ # Need to reset the units in the dataset based on the correct
# domain left/right/dimensions.
- self.parameter_file._set_code_unit_attributes()
+ self.dataset._set_code_unit_attributes()
- if self.parameter_file.dimensionality <= 2 :
- self.parameter_file.domain_dimensions[2] = np.int(1)
- if self.parameter_file.dimensionality == 1 :
- self.parameter_file.domain_dimensions[1] = np.int(1)
+ if self.dataset.dimensionality <= 2 :
+ self.dataset.domain_dimensions[2] = np.int(1)
+ if self.dataset.dimensionality == 1 :
+ self.dataset.domain_dimensions[1] = np.int(1)
for i in range(levels.shape[0]):
self.grids[i] = self.grid(i,self,levels[i],
glis[i],
gdims[i])
- dx = (self.parameter_file.domain_right_edge-
- self.parameter_file.domain_left_edge)/self.parameter_file.domain_dimensions
- dx = dx/self.parameter_file.refine_by**(levels[i])
+ dx = (self.dataset.domain_right_edge-
+ self.dataset.domain_left_edge)/self.dataset.domain_dimensions
+ dx = dx/self.dataset.refine_by**(levels[i])
dxs.append(dx)
- dx = self.pf.arr(dxs, "code_length")
- dle = self.parameter_file.domain_left_edge
- dre = self.parameter_file.domain_right_edge
- self.grid_left_edge = self.pf.arr(np.round(dle + dx*glis, decimals=12), "code_length")
+ dx = self.ds.arr(dxs, "code_length")
+ dle = self.dataset.domain_left_edge
+ dre = self.dataset.domain_right_edge
+ self.grid_left_edge = self.ds.arr(np.round(dle + dx*glis, decimals=12), "code_length")
self.grid_dimensions = gdims.astype("int32")
- self.grid_right_edge = self.pf.arr(np.round(self.grid_left_edge +
+ self.grid_right_edge = self.ds.arr(np.round(self.grid_left_edge +
dx*self.grid_dimensions,
decimals=12),
"code_length")
- if self.parameter_file.dimensionality <= 2:
+ if self.dataset.dimensionality <= 2:
self.grid_right_edge[:,2] = dre[2]
- if self.parameter_file.dimensionality == 1:
+ if self.dataset.dimensionality == 1:
self.grid_right_edge[:,1:] = dre[1:]
self.grid_particle_count = np.zeros([self.num_grids, 1], dtype='int64')
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/athena/fields.py
--- a/yt/frontends/athena/fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/athena/fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -74,7 +74,7 @@
eint -= emag(data)
return eint
def etot_from_pres(data):
- etot = data["athena","pressure"]/(data.pf.gamma-1.)
+ etot = data["athena","pressure"]/(data.ds.gamma-1.)
etot += ekin2(data)
if ("athena","cell_centered_B_x") in self.field_list:
etot += emag(data)
@@ -86,7 +86,7 @@
units="dyne/cm**2")
def _thermal_energy(field, data):
return data["athena","pressure"] / \
- (data.pf.gamma-1.)/data["athena","density"]
+ (data.ds.gamma-1.)/data["athena","density"]
self.add_field(("gas","thermal_energy"),
function=_thermal_energy,
units="erg/g")
@@ -97,7 +97,7 @@
units="erg/g")
elif ("athena","total_energy") in self.field_list:
def _pressure(field, data):
- return eint_from_etot(data)*(data.pf.gamma-1.0)
+ return eint_from_etot(data)*(data.ds.gamma-1.0)
self.add_field(("gas","pressure"), function=_pressure,
units="dyne/cm**2")
def _thermal_energy(field, data):
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/athena/io.py
--- a/yt/frontends/athena/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/athena/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -45,7 +45,7 @@
grid_dims = grid.ActiveDimensions
grid0_ncells = np.prod(grid.index.grid_dimensions[0,:])
read_table_offset = get_read_table_offset(f)
- for field in self.pf.field_list:
+ for field in self.ds.field_list:
dtype, offsetr = grid.index._field_map[field]
if grid_ncells != grid0_ncells:
offset = offsetr + ((grid_ncells-grid0_ncells) * (offsetr//grid0_ncells))
@@ -63,7 +63,7 @@
v = v[1::3].reshape(grid_dims,order='F')
elif '_z' in field[-1]:
v = v[2::3].reshape(grid_dims,order='F')
- if grid.pf.field_ordering == 1:
+ if grid.ds.field_ordering == 1:
data[grid.id][field] = v.T.astype("float64")
else:
data[grid.id][field] = v.astype("float64")
@@ -73,7 +73,7 @@
def _read_data_slice(self, grid, field, axis, coord):
sl = [slice(None), slice(None), slice(None)]
sl[axis] = slice(coord, coord + 1)
- if grid.pf.field_ordering == 1:
+ if grid.ds.field_ordering == 1:
sl.reverse()
return self._read_data_set(grid, field)[sl]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/boxlib/data_structures.py
--- a/yt/frontends/boxlib/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/boxlib/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -89,7 +89,7 @@
def _setup_dx(self):
# has already been read in and stored in index
- self.dds = self.index.pf.arr(self.index.level_dds[self.Level, :], 'code_length')
+ self.dds = self.index.ds.arr(self.index.level_dds[self.Level, :], 'code_length')
self.field_data['dx'], self.field_data['dy'], self.field_data['dz'] = self.dds
def __repr__(self):
@@ -122,12 +122,12 @@
mask = self._get_selector_mask(dobj.selector)
if mask is None: return np.empty(0, dtype='int64')
coords = np.empty(self._last_count, dtype='int64')
- coords[:] = self.Level + self.pf.level_offsets[self.Level]
+ coords[:] = self.Level + self.ds.level_offsets[self.Level]
return coords
# Override this as well, since refine_by can vary
def _fill_child_mask(self, child, mask, tofill, dlevel = 1):
- rf = self.pf.ref_factors[self.Level]
+ rf = self.ds.ref_factors[self.Level]
if dlevel != 1:
raise NotImplementedError
gi, cgi = self.get_global_startindex(), child.get_global_startindex()
@@ -141,12 +141,12 @@
class BoxlibHierarchy(GridIndex):
grid = BoxlibGrid
- def __init__(self, pf, dataset_type='boxlib_native'):
+ def __init__(self, ds, dataset_type='boxlib_native'):
self.dataset_type = dataset_type
- self.header_filename = os.path.join(pf.output_dir, 'Header')
- self.directory = pf.output_dir
+ self.header_filename = os.path.join(ds.output_dir, 'Header')
+ self.directory = ds.output_dir
- GridIndex.__init__(self, pf, dataset_type)
+ GridIndex.__init__(self, ds, dataset_type)
self._cache_endianness(self.grids[-1])
#self._read_particles()
@@ -155,16 +155,16 @@
"""
read the global header file for an Boxlib plotfile output.
"""
- self.max_level = self.parameter_file._max_level
+ self.max_level = self.dataset._max_level
header_file = open(self.header_filename,'r')
- self.dimensionality = self.parameter_file.dimensionality
+ self.dimensionality = self.dataset.dimensionality
_our_dim_finder = _dim_finder[self.dimensionality-1]
- DRE = self.parameter_file.domain_right_edge # shortcut
- DLE = self.parameter_file.domain_left_edge # shortcut
+ DRE = self.dataset.domain_right_edge # shortcut
+ DLE = self.dataset.domain_left_edge # shortcut
# We can now skip to the point in the file we want to start parsing.
- header_file.seek(self.parameter_file._header_mesh_start)
+ header_file.seek(self.dataset._header_mesh_start)
dx = []
for i in range(self.max_level + 1):
@@ -176,10 +176,10 @@
dx[i].append(DRE[2] - DLE[1])
self.level_dds = np.array(dx, dtype="float64")
header_file.next()
- if self.pf.geometry == "cartesian":
+ if self.ds.geometry == "cartesian":
default_ybounds = (0.0, 1.0)
default_zbounds = (0.0, 1.0)
- elif self.pf.geometry == "cylindrical":
+ elif self.ds.geometry == "cylindrical":
# Now we check for dimensionality issues
if self.dimensionality != 2:
raise RuntimeError("yt needs cylindrical to be 2D")
@@ -212,7 +212,7 @@
self.grid_left_edge[grid_counter + gi, :] = [xlo, ylo, zlo]
self.grid_right_edge[grid_counter + gi, :] = [xhi, yhi, zhi]
# Now we get to the level header filename, which we open and parse.
- fn = os.path.join(self.parameter_file.output_dir,
+ fn = os.path.join(self.dataset.output_dir,
header_file.next().strip())
level_header_file = open(fn + "_H")
level_dir = os.path.dirname(fn)
@@ -320,9 +320,9 @@
# duplicating some work done elsewhere. In a future where we don't
# pre-allocate grid arrays, this becomes unnecessary.
header_file = open(self.header_filename, 'r')
- header_file.seek(self.parameter_file._header_mesh_start)
+ header_file.seek(self.dataset._header_mesh_start)
# Skip over the level dxs, geometry and the zero:
- [header_file.next() for i in range(self.parameter_file._max_level + 3)]
+ [header_file.next() for i in range(self.dataset._max_level + 3)]
# Now we need to be very careful, as we've seeked, and now we iterate.
# Does this work? We are going to count the number of places that we
# have a three-item line. The three items would be level, number of
@@ -348,7 +348,7 @@
def _detect_output_fields(self):
# This is all done in _parse_header_file
self.field_list = [("boxlib", f) for f in
- self.parameter_file._field_list]
+ self.dataset._field_list]
self.field_indexes = dict((f[1], i)
for i, f in enumerate(self.field_list))
# There are times when field_list may change. We copy it here to
@@ -356,7 +356,7 @@
self.field_order = [f for f in self.field_list]
def _setup_data_io(self):
- self.io = io_registry[self.dataset_type](self.parameter_file)
+ self.io = io_registry[self.dataset_type](self.dataset)
class BoxlibDataset(Dataset):
"""
@@ -637,8 +637,8 @@
class OrionHierarchy(BoxlibHierarchy):
- def __init__(self, pf, dataset_type='orion_native'):
- BoxlibHierarchy.__init__(self, pf, dataset_type)
+ def __init__(self, ds, dataset_type='orion_native'):
+ BoxlibHierarchy.__init__(self, ds, dataset_type)
self._read_particles()
#self.io = IOHandlerOrion
@@ -654,7 +654,7 @@
self.grid_particle_count = np.zeros(len(self.grids))
for particle_filename in ["StarParticles", "SinkParticles"]:
- fn = os.path.join(self.pf.output_dir, particle_filename)
+ fn = os.path.join(self.ds.output_dir, particle_filename)
if os.path.exists(fn): self._read_particle_file(fn)
def _read_particle_file(self, fn):
@@ -805,19 +805,19 @@
class NyxHierarchy(BoxlibHierarchy):
- def __init__(self, pf, dataset_type='nyx_native'):
- super(NyxHierarchy, self).__init__(pf, dataset_type)
+ def __init__(self, ds, dataset_type='nyx_native'):
+ super(NyxHierarchy, self).__init__(ds, dataset_type)
self._read_particle_header()
def _read_particle_header(self):
- if not self.pf.parameters["particles.write_in_plotfile"]:
+ if not self.ds.parameters["particles.write_in_plotfile"]:
self.pgrid_info = np.zeros((self.num_grids, 3), dtype='int64')
return
for fn in ['particle_position_%s' % ax for ax in 'xyz'] + \
['particle_mass'] + \
['particle_velocity_%s' % ax for ax in 'xyz']:
self.field_list.append(("io", fn))
- header = open(os.path.join(self.pf.output_dir, "DM", "Header"))
+ header = open(os.path.join(self.ds.output_dir, "DM", "Header"))
version = header.readline()
ndim = header.readline()
nfields = header.readline()
@@ -835,7 +835,7 @@
# we need grid_info in `populate_grid_objects`, so save it to self
for g, pg in itertools.izip(self.grids, grid_info):
- g.particle_filename = os.path.join(self.pf.output_dir, "DM",
+ g.particle_filename = os.path.join(self.ds.output_dir, "DM",
"Level_%s" % (g.Level),
"DATA_%04i" % pg[0])
g.NumberOfParticles = pg[1]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/boxlib/fields.py
--- a/yt/frontends/boxlib/fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/boxlib/fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -40,8 +40,8 @@
return data["thermal_energy_density"] / data["density"]
def _temperature(field,data):
- mu = data.pf.parameters["mu"]
- gamma = data.pf.parameters["gamma"]
+ mu = data.ds.parameters["mu"]
+ gamma = data.ds.parameters["gamma"]
tr = data["thermal_energy_density"] / data["density"]
tr *= mu * amu_cgs / boltzmann_constant_cgs
tr *= (gamma - 1.0)
@@ -146,7 +146,7 @@
def setup_fluid_fields(self):
# add X's
- for _, field in self.pf.field_list:
+ for _, field in self.ds.field_list:
if field.startswith("X("):
# We have a fraction
nice_name = field[2:-1]
@@ -221,7 +221,7 @@
def setup_fluid_fields(self):
# pick the correct temperature field
- if self.pf.parameters["use_tfromp"]:
+ if self.ds.parameters["use_tfromp"]:
self.alias(("gas", "temperature"), ("boxlib", "tfromp"),
units = "K")
else:
@@ -229,7 +229,7 @@
units = "K")
# Add X's and omegadots, units of 1/s
- for _, field in self.pf.field_list:
+ for _, field in self.ds.field_list:
if field.startswith("X("):
# We have a fraction
nice_name = field[2:-1]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/boxlib/io.py
--- a/yt/frontends/boxlib/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/boxlib/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -26,8 +26,8 @@
_dataset_type = "boxlib_native"
- def __init__(self, pf, *args, **kwargs):
- self.pf = pf
+ def __init__(self, ds, *args, **kwargs):
+ self.ds = ds
def _read_fluid_selection(self, chunks, selector, fields, size):
chunks = list(chunks)
@@ -58,7 +58,7 @@
if g.filename is None:
continue
grids_by_file[g.filename].append(g)
- dtype = self.pf.index._dtype
+ dtype = self.ds.index._dtype
bpr = dtype.itemsize
for filename in grids_by_file:
grids = grids_by_file[filename]
@@ -69,7 +69,7 @@
grid._seek(f)
count = grid.ActiveDimensions.prod()
size = count * bpr
- for field in self.pf.index.field_order:
+ for field in self.ds.index.field_order:
if field in fields:
# We read it ...
v = np.fromfile(f, dtype=dtype, count=count)
@@ -88,9 +88,9 @@
"""
- fn = grid.pf.fullplotdir + "/StarParticles"
+ fn = grid.ds.fullplotdir + "/StarParticles"
if not os.path.exists(fn):
- fn = grid.pf.fullplotdir + "/SinkParticles"
+ fn = grid.ds.fullplotdir + "/SinkParticles"
# Figure out the format of the particle file
with open(fn, 'r') as f:
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/boxlib/tests/test_orion.py
--- a/yt/frontends/boxlib/tests/test_orion.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/boxlib/tests/test_orion.py Sun Jun 15 19:50:51 2014 -0700
@@ -15,7 +15,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
small_patch_amr, \
big_patch_amr, \
data_dir_load
@@ -26,19 +26,19 @@
_fields = ("temperature", "density", "velocity_magnitude")
radadvect = "RadAdvect/plt00000"
-@requires_pf(radadvect)
+@requires_ds(radadvect)
def test_radadvect():
- pf = data_dir_load(radadvect)
- yield assert_equal, str(pf), "plt00000"
+ ds = data_dir_load(radadvect)
+ yield assert_equal, str(ds), "plt00000"
for test in small_patch_amr(radadvect, _fields):
test_radadvect.__name__ = test.description
yield test
rt = "RadTube/plt00500"
-@requires_pf(rt)
+@requires_ds(rt)
def test_radtube():
- pf = data_dir_load(rt)
- yield assert_equal, str(pf), "plt00500"
+ ds = data_dir_load(rt)
+ yield assert_equal, str(ds), "plt00500"
for test in small_patch_amr(rt, _fields):
test_radtube.__name__ = test.description
yield test
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/chombo/data_structures.py
--- a/yt/frontends/chombo/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/chombo/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -62,18 +62,18 @@
if self.start_index is not None:
return self.start_index
if self.Parent is None:
- iLE = self.LeftEdge - self.pf.domain_left_edge
+ iLE = self.LeftEdge - self.ds.domain_left_edge
start_index = iLE / self.dds
return np.rint(start_index).astype('int64').ravel()
pdx = self.Parent[0].dds
start_index = (self.Parent[0].get_global_startindex()) + \
np.rint((self.LeftEdge - self.Parent[0].LeftEdge)/pdx)
- self.start_index = (start_index*self.pf.refine_by).astype('int64').ravel()
+ self.start_index = (start_index*self.ds.refine_by).astype('int64').ravel()
return self.start_index
def _setup_dx(self):
# has already been read in and stored in index
- self.dds = self.pf.arr(self.index.dds_list[self.Level], "code_length")
+ self.dds = self.ds.arr(self.index.dds_list[self.Level], "code_length")
@property
def Parent(self):
@@ -92,27 +92,27 @@
grid = ChomboGrid
_data_file = None
- def __init__(self,pf,dataset_type='chombo_hdf5'):
- self.domain_left_edge = pf.domain_left_edge
- self.domain_right_edge = pf.domain_right_edge
+ def __init__(self,ds,dataset_type='chombo_hdf5'):
+ self.domain_left_edge = ds.domain_left_edge
+ self.domain_right_edge = ds.domain_right_edge
self.dataset_type = dataset_type
- if pf.dimensionality == 1:
+ if ds.dimensionality == 1:
self.dataset_type = "chombo1d_hdf5"
- if pf.dimensionality == 2:
+ if ds.dimensionality == 2:
self.dataset_type = "chombo2d_hdf5"
self.field_indexes = {}
- self.parameter_file = weakref.proxy(pf)
- # for now, the index file is the parameter file!
+ self.dataset = weakref.proxy(ds)
+ # for now, the index file is the dataset!
self.index_filename = os.path.abspath(
- self.parameter_file.parameter_filename)
- self.directory = pf.fullpath
- self._handle = pf._handle
+ self.dataset.parameter_filename)
+ self.directory = ds.fullpath
+ self._handle = ds._handle
self.float_type = self._handle['Chombo_global'].attrs['testReal'].dtype.name
self._levels = [key for key in self._handle.keys() if key.startswith('level')]
- GridIndex.__init__(self,pf,dataset_type)
+ GridIndex.__init__(self,ds,dataset_type)
self._read_particles()
@@ -167,7 +167,7 @@
grids = []
self.dds_list = []
i = 0
- D = self.parameter_file.dimensionality
+ D = self.dataset.dimensionality
for lev_index, lev in enumerate(self._levels):
level_number = int(re.match('level_(\d+)',lev).groups()[0])
try:
@@ -367,8 +367,8 @@
class Orion2Hierarchy(ChomboHierarchy):
- def __init__(self, pf, dataset_type="orion_chombo_native"):
- ChomboHierarchy.__init__(self, pf, dataset_type)
+ def __init__(self, ds, dataset_type="orion_chombo_native"):
+ ChomboHierarchy.__init__(self, ds, dataset_type)
def _read_particles(self):
self.particle_filename = self.index_filename[:-4] + 'sink'
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/chombo/io.py
--- a/yt/frontends/chombo/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/chombo/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -26,10 +26,10 @@
_offset_string = 'data:offsets=0'
_data_string = 'data:datatype=0'
- def __init__(self, pf, *args, **kwargs):
- BaseIOHandler.__init__(self, pf, *args, **kwargs)
- self.pf = pf
- self._handle = pf._handle
+ def __init__(self, ds, *args, **kwargs):
+ BaseIOHandler.__init__(self, ds, *args, **kwargs)
+ self.ds = ds
+ self._handle = ds._handle
_field_dict = None
@property
@@ -160,8 +160,8 @@
offsets = np.array(offsets, dtype=np.int64)
# convert between the global grid id and the id on this level
- grid_levels = np.array([g.Level for g in self.pf.index.grids])
- grid_ids = np.array([g.id for g in self.pf.index.grids])
+ grid_levels = np.array([g.Level for g in self.ds.index.grids])
+ grid_ids = np.array([g.id for g in self.ds.index.grids])
grid_level_offset = grid_ids[np.where(grid_levels == grid.Level)[0][0]]
lo = grid.id - grid_level_offset
hi = lo + 1
@@ -178,20 +178,20 @@
_offset_string = 'data:offsets=0'
_data_string = 'data:datatype=0'
- def __init__(self, pf, *args, **kwargs):
- BaseIOHandler.__init__(self, pf, *args, **kwargs)
- self.pf = pf
- self._handle = pf._handle
+ def __init__(self, ds, *args, **kwargs):
+ BaseIOHandler.__init__(self, ds, *args, **kwargs)
+ self.ds = ds
+ self._handle = ds._handle
class IOHandlerChombo1DHDF5(IOHandlerChomboHDF5):
_dataset_type = "chombo1d_hdf5"
_offset_string = 'data:offsets=0'
_data_string = 'data:datatype=0'
- def __init__(self, pf, *args, **kwargs):
- BaseIOHandler.__init__(self, pf, *args, **kwargs)
- self.pf = pf
- self._handle = pf._handle
+ def __init__(self, ds, *args, **kwargs):
+ BaseIOHandler.__init__(self, ds, *args, **kwargs)
+ self.ds = ds
+ self._handle = ds._handle
class IOHandlerOrion2HDF5(IOHandlerChomboHDF5):
_dataset_type = "orion_chombo_native"
@@ -202,7 +202,7 @@
"""
- fn = grid.pf.fullplotdir[:-4] + "sink"
+ fn = grid.ds.fullplotdir[:-4] + "sink"
# Figure out the format of the particle file
with open(fn, 'r') as f:
@@ -253,7 +253,7 @@
def read(line, field):
return float(line.strip().split(' ')[index[field]])
- fn = grid.pf.fullplotdir[:-4] + "sink"
+ fn = grid.ds.fullplotdir[:-4] + "sink"
with open(fn, 'r') as f:
lines = f.readlines()
particles = []
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/chombo/tests/test_outputs.py
--- a/yt/frontends/chombo/tests/test_outputs.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/chombo/tests/test_outputs.py Sun Jun 15 19:50:51 2014 -0700
@@ -15,7 +15,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
small_patch_amr, \
big_patch_amr, \
data_dir_load
@@ -25,19 +25,19 @@
"magnetic_field_x")
gc = "GaussianCloud/data.0077.3d.hdf5"
-@requires_pf(gc)
+@requires_ds(gc)
def test_gc():
- pf = data_dir_load(gc)
- yield assert_equal, str(pf), "data.0077.3d.hdf5"
+ ds = data_dir_load(gc)
+ yield assert_equal, str(ds), "data.0077.3d.hdf5"
for test in small_patch_amr(gc, _fields):
test_gc.__name__ = test.description
yield test
tb = "TurbBoxLowRes/data.0005.3d.hdf5"
-@requires_pf(tb)
+@requires_ds(tb)
def test_tb():
- pf = data_dir_load(tb)
- yield assert_equal, str(pf), "data.0005.3d.hdf5"
+ ds = data_dir_load(tb)
+ yield assert_equal, str(ds), "data.0005.3d.hdf5"
for test in small_patch_amr(tb, _fields):
test_tb.__name__ = test.description
yield test
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/enzo/answer_testing_support.py
--- a/yt/frontends/enzo/answer_testing_support.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/enzo/answer_testing_support.py Sun Jun 15 19:50:51 2014 -0700
@@ -19,7 +19,7 @@
from yt.utilities.answer_testing.framework import \
AnswerTestingTest, \
- can_run_pf, \
+ can_run_ds, \
FieldValuesTest, \
GridHierarchyTest, \
GridValuesTest, \
@@ -47,23 +47,23 @@
return ftrue
return ffalse
-def standard_small_simulation(pf_fn, fields):
- if not can_run_pf(pf_fn): return
+def standard_small_simulation(ds_fn, fields):
+ if not can_run_ds(ds_fn): return
dso = [None]
tolerance = ytcfg.getint("yt", "answer_testing_tolerance")
bitwise = ytcfg.getboolean("yt", "answer_testing_bitwise")
for field in fields:
if bitwise:
- yield GridValuesTest(pf_fn, field)
+ yield GridValuesTest(ds_fn, field)
if 'particle' in field: continue
for ds in dso:
for axis in [0, 1, 2]:
for weight_field in [None, "Density"]:
yield ProjectionValuesTest(
- pf_fn, axis, field, weight_field,
+ ds_fn, axis, field, weight_field,
ds, decimals=tolerance)
yield FieldValuesTest(
- pf_fn, field, ds, decimals=tolerance)
+ ds_fn, field, ds, decimals=tolerance)
class ShockTubeTest(object):
def __init__(self, data_file, solution_file, fields,
@@ -77,11 +77,11 @@
self.atol = atol
def __call__(self):
- # Read in the pf
- pf = load(self.data_file)
+ # Read in the ds
+ ds = load(self.data_file)
exact = self.get_analytical_solution()
- ad = pf.h.all_data()
+ ad = ds.all_data()
position = ad['x']
for k in self.fields:
field = ad[k]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/enzo/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -76,7 +76,7 @@
space-filling tiling of grids, possibly due to the finite accuracy in a
standard Enzo index file.
"""
- rf = self.pf.refine_by
+ rf = self.ds.refine_by
my_ind = self.id - self._id_offset
le = self.LeftEdge
self.dds = self.Parent.dds/rf
@@ -139,7 +139,7 @@
def retrieve_ghost_zones(self, n_zones, fields, all_levels=False,
smoothed=False):
- NGZ = self.pf.parameters.get("NumberOfGhostZones", 3)
+ NGZ = self.ds.parameters.get("NumberOfGhostZones", 3)
if n_zones > NGZ:
return EnzoGrid.retrieve_ghost_zones(
self, n_zones, fields, all_levels, smoothed)
@@ -150,8 +150,8 @@
# than the grid by nZones*dx in each direction
nl = self.get_global_startindex() - n_zones
nr = nl + self.ActiveDimensions + 2*n_zones
- new_left_edge = nl * self.dds + self.pf.domain_left_edge
- new_right_edge = nr * self.dds + self.pf.domain_left_edge
+ new_left_edge = nl * self.dds + self.ds.domain_left_edge
+ new_right_edge = nr * self.dds + self.ds.domain_left_edge
# Something different needs to be done for the root grid, though
level = self.Level
args = (level, new_left_edge, new_right_edge)
@@ -181,9 +181,9 @@
for field in ensure_list(fields):
if field in self.field_list:
conv_factor = 1.0
- if self.pf.field_info.has_key(field):
- conv_factor = self.pf.field_info[field]._convert_function(self)
- if self.pf.field_info[field].particle_type: continue
+ if self.ds.field_info.has_key(field):
+ conv_factor = self.ds.field_info[field]._convert_function(self)
+ if self.ds.field_info[field].particle_type: continue
temp = self.index.io._read_raw_data_set(self, field)
temp = temp.swapaxes(0, 2)
cube.field_data[field] = np.multiply(temp, conv_factor, temp)[sl]
@@ -195,29 +195,29 @@
grid = EnzoGrid
_preload_implemented = True
- def __init__(self, pf, dataset_type):
+ def __init__(self, ds, dataset_type):
self.dataset_type = dataset_type
- if pf.file_style != None:
- self._bn = pf.file_style
+ if ds.file_style != None:
+ self._bn = ds.file_style
else:
self._bn = "%s.cpu%%04i"
self.index_filename = os.path.abspath(
- "%s.hierarchy" % (pf.parameter_filename))
+ "%s.hierarchy" % (ds.parameter_filename))
if os.path.getsize(self.index_filename) == 0:
raise IOError(-1,"File empty", self.index_filename)
self.directory = os.path.dirname(self.index_filename)
# For some reason, r8 seems to want Float64
- if pf.has_key("CompilerPrecision") \
- and pf["CompilerPrecision"] == "r4":
+ if ds.has_key("CompilerPrecision") \
+ and ds["CompilerPrecision"] == "r4":
self.float_type = 'float32'
else:
self.float_type = 'float64'
- GridIndex.__init__(self, pf, dataset_type)
+ GridIndex.__init__(self, ds, dataset_type)
# sync it back
- self.parameter_file.dataset_type = self.dataset_type
+ self.dataset.dataset_type = self.dataset_type
def _count_grids(self):
self.num_grids = None
@@ -236,7 +236,7 @@
test_grid_id = int(line.split("=")[-1])
if test_grid is not None:
break
- self._guess_dataset_type(self.pf.dimensionality, test_grid, test_grid_id)
+ self._guess_dataset_type(self.ds.dimensionality, test_grid, test_grid_id)
def _guess_dataset_type(self, rank, test_grid, test_grid_id):
if test_grid[0] != os.path.sep:
@@ -278,8 +278,8 @@
si, ei, LE, RE, fn, npart = [], [], [], [], [], []
all = [si, ei, LE, RE, fn]
pbar = get_pbar("Parsing Hierarchy ", self.num_grids)
- version = self.parameter_file.parameters.get("VersionNumber", None)
- params = self.parameter_file.parameters
+ version = self.dataset.parameters.get("VersionNumber", None)
+ params = self.dataset.parameters
if version is None and "Internal" in params:
version = float(params["Internal"]["Provenance"]["VersionNumber"])
if version >= 3.0:
@@ -400,10 +400,10 @@
self.max_level = self.grid_levels.max()
def _detect_active_particle_fields(self):
- ap_list = self.parameter_file["AppendActiveParticleType"]
+ ap_list = self.dataset["AppendActiveParticleType"]
_fields = dict((ap, []) for ap in ap_list)
fields = []
- for ptype in self.parameter_file["AppendActiveParticleType"]:
+ for ptype in self.dataset["AppendActiveParticleType"]:
select_grids = self.grid_active_particle_count[ptype].flat
if np.any(select_grids) == False:
continue
@@ -421,16 +421,16 @@
def _setup_derived_fields(self):
super(EnzoHierarchy, self)._setup_derived_fields()
- aps = self.parameter_file.parameters.get(
+ aps = self.dataset.parameters.get(
"AppendActiveParticleType", [])
- for fname, field in self.pf.field_info.items():
+ for fname, field in self.ds.field_info.items():
if not field.particle_type: continue
if isinstance(fname, tuple): continue
if field._function is NullFunc: continue
for apt in aps:
dd = field._copy_def()
dd.pop("name")
- self.pf.field_info.add_field((apt, fname), **dd)
+ self.ds.field_info.add_field((apt, fname), **dd)
def _detect_output_fields(self):
self.field_list = []
@@ -448,7 +448,7 @@
continue
mylog.debug("Grid %s has: %s", grid.id, gf)
field_list = field_list.union(gf)
- if "AppendActiveParticleType" in self.parameter_file.parameters:
+ if "AppendActiveParticleType" in self.dataset.parameters:
ap_fields = self._detect_active_particle_fields()
field_list = list(set(field_list).union(ap_fields))
else:
@@ -489,7 +489,7 @@
additional_fields = ['metallicity_fraction', 'creation_time',
'dynamical_time']
pfields = [f for f in self.field_list if f.startswith('particle_')]
- nattr = self.parameter_file['NumberOfParticleAttributes']
+ nattr = self.dataset['NumberOfParticleAttributes']
if nattr > 0:
pfields += additional_fields[:nattr]
# Find where the particles reside and count them
@@ -538,13 +538,13 @@
self._enzo = enzo
return self._enzo
- def __init__(self, pf, dataset_type = None):
+ def __init__(self, ds, dataset_type = None):
self.dataset_type = dataset_type
self.float_type = 'float64'
- self.parameter_file = weakref.proxy(pf) # for _obtain_enzo
+ self.dataset = weakref.proxy(ds) # for _obtain_enzo
self.float_type = self.enzo.hierarchy_information["GridLeftEdge"].dtype
self.directory = os.getcwd()
- GridIndex.__init__(self, pf, dataset_type)
+ GridIndex.__init__(self, ds, dataset_type)
def _initialize_data_storage(self):
pass
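
A rough usage sketch for the ghost-zone path touched above, run against a throwaway stream dataset rather than real Enzo output (the zone count, field, and sizes are arbitrary):

from yt.testing import fake_random_ds

ds = fake_random_ds(16, nprocs=4)
g = ds.index.grids[0]
# pad the grid by one zone on each side; the base-class implementation builds
# a covering grid, while the Enzo override above can reuse on-disk ghost zones
cube = g.retrieve_ghost_zones(1, ["density"])
print(cube["density"].shape)       # ActiveDimensions grown by 2 along each axis
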
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/enzo/fields.py
--- a/yt/frontends/enzo/fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/enzo/fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -95,10 +95,10 @@
("level", ("", [], None)),
)
- def __init__(self, pf, field_list):
- hydro_method = pf.parameters.get("HydroMethod", None)
+ def __init__(self, ds, field_list):
+ hydro_method = ds.parameters.get("HydroMethod", None)
if hydro_method is None:
- hydro_method = pf.parameters["Physics"]["Hydro"]["HydroMethod"]
+ hydro_method = ds.parameters["Physics"]["Hydro"]["HydroMethod"]
if hydro_method == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
@@ -108,7 +108,7 @@
sl_right = slice(2,None,None)
div_fac = 2.0
slice_info = (sl_left, sl_right, div_fac)
- super(EnzoFieldInfo, self).__init__(pf, field_list, slice_info)
+ super(EnzoFieldInfo, self).__init__(ds, field_list, slice_info)
def add_species_field(self, species):
# This is currently specific to Enzo. Hopefully in the future we will
@@ -145,7 +145,7 @@
def setup_fluid_fields(self):
# Now we conditionally load a few other things.
- params = self.pf.parameters
+ params = self.ds.parameters
multi_species = params.get("MultiSpecies", None)
if multi_species is None:
multi_species = params["Physics"]["AtomicPhysics"]["MultiSpecies"]
@@ -157,7 +157,7 @@
# We check which type of field we need, and then we add it.
ge_name = None
te_name = None
- params = self.pf.parameters
+ params = self.ds.parameters
multi_species = params.get("MultiSpecies", None)
if multi_species is None:
multi_species = params["Physics"]["AtomicPhysics"]["MultiSpecies"]
@@ -219,7 +219,7 @@
def setup_particle_fields(self, ptype):
def _age(field, data):
- return data.pf.current_time - data["creation_time"]
+ return data.ds.current_time - data["creation_time"]
self.add_field((ptype, "age"), function = _age,
particle_type = True,
units = "yr")
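
The field definitions in this hunk now reach the parent dataset through data.ds rather than data.pf. A minimal sketch of the same spelling for a user-defined particle field, assuming the IsolatedGalaxy dataset from the answer tests further down is available locally and using a hypothetical field name:

    from yt.mods import *

    # Dataset path borrowed from the Enzo answer tests below; any Enzo output
    # with star particles would do.
    ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ds.index   # touching the index ensures ds.field_info has been built

    def _age_fraction(field, data):
        # The dataset handle is data.ds under the new naming.
        return (data.ds.current_time - data["creation_time"]) / data.ds.current_time

    # Hypothetical field name, registered the same way the "age" field is above.
    ds.field_info.add_field(("io", "age_fraction"), function=_age_fraction,
                            particle_type=True, units="")
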
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/enzo/io.py
--- a/yt/frontends/enzo/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/enzo/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -38,7 +38,7 @@
f = h5py.File(grid.filename, "r")
group = f["/Grid%08i" % grid.id]
fields = []
- add_io = "io" in grid.pf.particle_types
+ add_io = "io" in grid.ds.particle_types
for name, v in group.iteritems():
# NOTE: This won't work with 1D datasets or references.
if not hasattr(v, "shape") or v.dtype == "O":
@@ -170,12 +170,12 @@
# Split into particles and non-particles
fluid_fields, particle_fields = [], []
for ftype, fname in fields:
- if ftype in self.pf.particle_types:
+ if ftype in self.ds.particle_types:
particle_fields.append((ftype, fname))
else:
fluid_fields.append((ftype, fname))
if len(particle_fields) > 0:
- selector = AlwaysSelector(self.pf)
+ selector = AlwaysSelector(self.ds)
rv.update(self._read_particle_selection(
[chunk], selector, particle_fields))
if len(fluid_fields) == 0: return rv
@@ -208,7 +208,7 @@
def __init__(self, *args, **kwargs):
super(IOHandlerPackgedHDF5GhostZones, self).__init__(*args, **kwargs)
- NGZ = self.pf.parameters.get("NumberOfGhostZones", 3)
+ NGZ = self.ds.parameters.get("NumberOfGhostZones", 3)
self._base = (slice(NGZ, -NGZ),
slice(NGZ, -NGZ),
slice(NGZ, -NGZ))
@@ -223,8 +223,8 @@
_dataset_type = "enzo_inline"
- def __init__(self, pf, ghost_zones=3):
- self.pf = pf
+ def __init__(self, ds, ghost_zones=3):
+ self.ds = ds
import enzo
self.enzo = enzo
self.grids_in_memory = enzo.grid_data
@@ -232,11 +232,11 @@
self.my_slice = (slice(ghost_zones,-ghost_zones),
slice(ghost_zones,-ghost_zones),
slice(ghost_zones,-ghost_zones))
- BaseIOHandler.__init__(self, pf)
+ BaseIOHandler.__init__(self, ds)
def _read_field_names(self, grid):
fields = []
- add_io = "io" in grid.pf.particle_types
+ add_io = "io" in grid.ds.particle_types
for name, v in self.grids_in_memory[grid.id].items():
# NOTE: This won't work with 1D datasets or references.
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/enzo/simulation_handling.py
--- a/yt/frontends/enzo/simulation_handling.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/enzo/simulation_handling.py Sun Jun 15 19:50:51 2014 -0700
@@ -66,14 +66,14 @@
>>> from yt.mods import *
>>> es = EnzoSimulation("my_simulation.par")
>>> es.get_time_series()
- >>> for pf in es:
- ... print pf.current_time
+ >>> for ds in es:
+ ... print ds.current_time
>>> from yt.mods import *
>>> es = simulation("my_simulation.par", "Enzo")
>>> es.get_time_series()
- >>> for pf in es:
- ... print pf.current_time
+ >>> for ds in es:
+ ... print ds.current_time
"""
@@ -181,8 +181,8 @@
integer is supplied, the work will be divided into that
number of jobs.
Default: True.
- setup_function : callable, accepts a pf
- This function will be called whenever a parameter file is loaded.
+ setup_function : callable, accepts a ds
+ This function will be called whenever a dataset is loaded.
Examples
--------
@@ -200,16 +200,16 @@
>>> es.get_time_series(find_outputs=True)
>>> # after calling get_time_series
- >>> for pf in es.piter():
- ... p = ProjectionPlot(pf, 'x', "density")
+ >>> for ds in es.piter():
+ ... p = ProjectionPlot(ds, 'x', "density")
... p.save()
>>> # An example using the setup_function keyword
- >>> def print_time(pf):
- ... print pf.current_time
+ >>> def print_time(ds):
+ ... print ds.current_time
>>> es.get_time_series(setup_function=print_time)
- >>> for pf in es:
- ... SlicePlot(pf, "x", "Density").save()
+ >>> for ds in es:
+ ... SlicePlot(ds, "x", "Density").save()
"""
@@ -530,7 +530,7 @@
def _find_outputs(self):
"""
Search for directories matching the data dump keywords.
- If found, get dataset times py opening the pf.
+        If found, get dataset times by opening the ds.
"""
# look for time outputs.
@@ -580,12 +580,12 @@
"%s%s" % (output_key, index))
if os.path.exists(filename):
try:
- pf = load(filename)
- if pf is not None:
+ ds = load(filename)
+ if ds is not None:
my_storage.result = {'filename': filename,
- 'time': pf.current_time.in_units("s")}
- if pf.cosmological_simulation:
- my_storage.result['redshift'] = pf.current_redshift
+ 'time': ds.current_time.in_units("s")}
+ if ds.cosmological_simulation:
+ my_storage.result['redshift'] = ds.current_redshift
except YTOutputNotIdentified:
mylog.error('Failed to load %s', filename)
my_outputs = [my_output for my_output in my_outputs.values() \
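
The docstring changes in this file spell out the new iteration idiom: time-series loops bind ds rather than pf, and setup_function now takes a ds. Collected into one runnable sketch, with "my_simulation.par" being the same placeholder name the docstrings use:

    from yt.mods import *

    es = simulation("my_simulation.par", "Enzo")

    # setup_function is called on every dataset as it is loaded.
    def print_time(ds):
        print ds.current_time

    es.get_time_series(setup_function=print_time)

    # Each iteration hands back a dataset, conventionally named ds.
    for ds in es.piter():
        p = ProjectionPlot(ds, 'x', "density")
        p.save()
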
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/enzo/tests/test_outputs.py
--- a/yt/frontends/enzo/tests/test_outputs.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/enzo/tests/test_outputs.py Sun Jun 15 19:50:51 2014 -0700
@@ -15,7 +15,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
small_patch_amr, \
big_patch_amr, \
data_dir_load
@@ -24,9 +24,9 @@
_fields = ("temperature", "density", "velocity_magnitude",
"velocity_divergence")
-def check_color_conservation(pf):
- species_names = pf.field_info.species_names
- dd = pf.all_data()
+def check_color_conservation(ds):
+ species_names = ds.field_info.species_names
+ dd = ds.all_data()
dens_yt = dd["density"].copy()
# Enumerate our species here
for s in sorted(species_names):
@@ -36,9 +36,9 @@
delta_yt = np.abs(dens_yt / dd["density"])
# Now we compare color conservation to Enzo's color conservation
- dd = pf.all_data()
+ dd = ds.all_data()
dens_enzo = dd["Density"].copy()
- for f in sorted(pf.field_list):
+ for f in sorted(ds.field_list):
if not f[1].endswith("_Density") or \
f[1].startswith("Dark_Matter_") or \
f[1].startswith("Electron_") or \
@@ -51,27 +51,27 @@
return assert_almost_equal, delta_yt, delta_enzo
m7 = "DD0010/moving7_0010"
-@requires_pf(m7)
+@requires_ds(m7)
def test_moving7():
- pf = data_dir_load(m7)
- yield assert_equal, str(pf), "moving7_0010"
+ ds = data_dir_load(m7)
+ yield assert_equal, str(ds), "moving7_0010"
for test in small_patch_amr(m7, _fields):
test_moving7.__name__ = test.description
yield test
g30 = "IsolatedGalaxy/galaxy0030/galaxy0030"
-@requires_pf(g30, big_data=True)
+@requires_ds(g30, big_data=True)
def test_galaxy0030():
- pf = data_dir_load(g30)
- yield check_color_conservation(pf)
- yield assert_equal, str(pf), "galaxy0030"
+ ds = data_dir_load(g30)
+ yield check_color_conservation(ds)
+ yield assert_equal, str(ds), "galaxy0030"
for test in big_patch_amr(g30, _fields):
test_galaxy0030.__name__ = test.description
yield test
ecp = "enzo_cosmology_plus/DD0046/DD0046"
-@requires_pf(ecp, big_data=True)
+@requires_ds(ecp, big_data=True)
def test_ecp():
- pf = data_dir_load(ecp)
+ ds = data_dir_load(ecp)
# Now we test our species fields
- yield check_color_conservation(pf)
+ yield check_color_conservation(ds)
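
These tests show the renamed answer-testing pieces in use: requires_pf becomes requires_ds, while data_dir_load still returns the dataset under test. A sketch of a new test written against the renamed API, with a hypothetical dataset path and the same yield-based pattern as above:

    from yt.testing import *
    from yt.utilities.answer_testing.framework import \
        requires_ds, \
        small_patch_amr, \
        data_dir_load

    _fields = ("density", "temperature")

    # Hypothetical output; requires_ds guards the test when the data is absent.
    my_out = "MySim/DD0042/DD0042"

    @requires_ds(my_out)
    def test_my_sim():
        ds = data_dir_load(my_out)
        yield assert_equal, str(ds), "DD0042"
        for test in small_patch_amr(my_out, _fields):
            test_my_sim.__name__ = test.description
            yield test
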
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/fits/data_structures.py
--- a/yt/frontends/fits/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/fits/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -72,16 +72,16 @@
grid = FITSGrid
- def __init__(self,pf,dataset_type='fits'):
+ def __init__(self,ds,dataset_type='fits'):
self.dataset_type = dataset_type
self.field_indexes = {}
- self.parameter_file = weakref.proxy(pf)
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.parameter_filename
+ self.dataset = weakref.proxy(ds)
+ # for now, the index file is the dataset!
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.path.dirname(self.index_filename)
- self._handle = pf._handle
+ self._handle = ds._handle
self.float_type = np.float64
- GridIndex.__init__(self,pf,dataset_type)
+ GridIndex.__init__(self,ds,dataset_type)
def _initialize_data_storage(self):
pass
@@ -112,7 +112,7 @@
return "dimensionless"
def _ensure_same_dims(self, hdu):
- ds = self.parameter_file
+ ds = self.dataset
conditions = [hdu.header["naxis"] != ds.primary_header["naxis"]]
for i in xrange(ds.naxis):
nax = "naxis%d" % (i+1)
@@ -123,7 +123,7 @@
return True
def _detect_output_fields(self):
- ds = self.parameter_file
+ ds = self.dataset
self.field_list = []
if ds.events_data:
for k,v in ds.events_info.items():
@@ -134,7 +134,7 @@
unit = "code_length"
else:
unit = v
- self.parameter_file.field_units[("io",fname)] = unit
+ self.dataset.field_units[("io",fname)] = unit
return
self._axis_map = {}
self._file_map = {}
@@ -143,17 +143,17 @@
dup_field_index = {}
# Since FITS header keywords are case-insensitive, we only pick a subset of
# prefixes, ones that we expect to end up in headers.
- known_units = dict([(unit.lower(),unit) for unit in self.pf.unit_registry.lut])
+ known_units = dict([(unit.lower(),unit) for unit in self.ds.unit_registry.lut])
for unit in known_units.values():
if unit in prefixable_units:
for p in ["n","u","m","c","k"]:
known_units[(p+unit).lower()] = p+unit
# We create a field from each slice on the 4th axis
- if self.parameter_file.naxis == 4:
- naxis4 = self.parameter_file.primary_header["naxis4"]
+ if self.dataset.naxis == 4:
+ naxis4 = self.dataset.primary_header["naxis4"]
else:
naxis4 = 1
- for i, fits_file in enumerate(self.parameter_file._fits_files):
+ for i, fits_file in enumerate(self.dataset._fits_files):
for j, hdu in enumerate(fits_file):
if self._ensure_same_dims(hdu):
units = self._determine_image_units(hdu.header, known_units)
@@ -165,7 +165,7 @@
fname = self._guess_name_from_units(units)
# When all else fails
if fname is None: fname = "image_%d" % (j)
- if self.pf.num_files > 1 and fname.startswith("image"):
+ if self.ds.num_files > 1 and fname.startswith("image"):
fname += "_file_%d" % (i)
if ("fits", fname) in self.field_list:
if fname in dup_field_index:
@@ -189,7 +189,7 @@
if "bscale" in hdu.header:
self._scale_map[fname][1] = hdu.header["bscale"]
self.field_list.append(("fits", fname))
- self.parameter_file.field_units[fname] = units
+ self.dataset.field_units[fname] = units
mylog.info("Adding field %s to the list of fields." % (fname))
if units == "dimensionless":
mylog.warning("Could not determine dimensions for field %s, " % (fname) +
@@ -201,7 +201,7 @@
# For line fields, we still read the primary field. Not sure how to extend this
# For now, we pick off the first field from the field list.
- line_db = self.parameter_file.line_database
+ line_db = self.dataset.line_database
primary_fname = self.field_list[0][1]
for k, v in line_db.iteritems():
mylog.info("Adding line field: %s at frequency %g GHz" % (k, v))
@@ -209,54 +209,54 @@
self._ext_map[k] = self._ext_map[primary_fname]
self._axis_map[k] = self._axis_map[primary_fname]
self._file_map[k] = self._file_map[primary_fname]
- self.parameter_file.field_units[k] = self.parameter_file.field_units[primary_fname]
+ self.dataset.field_units[k] = self.dataset.field_units[primary_fname]
def _count_grids(self):
- self.num_grids = self.pf.parameters["nprocs"]
+ self.num_grids = self.ds.parameters["nprocs"]
def _parse_index(self):
f = self._handle # shortcut
- pf = self.parameter_file # shortcut
+ ds = self.dataset # shortcut
# If nprocs > 1, decompose the domain into virtual grids
if self.num_grids > 1:
- if self.pf.z_axis_decomp:
- dz = pf.quan(1.0, "code_length")*pf.spectral_factor
- self.grid_dimensions[:,2] = np.around(float(pf.domain_dimensions[2])/
+ if self.ds.z_axis_decomp:
+ dz = ds.quan(1.0, "code_length")*ds.spectral_factor
+ self.grid_dimensions[:,2] = np.around(float(ds.domain_dimensions[2])/
self.num_grids).astype("int")
- self.grid_dimensions[-1,2] += (pf.domain_dimensions[2] % self.num_grids)
- self.grid_left_edge[0,2] = pf.domain_left_edge[2]
- self.grid_left_edge[1:,2] = pf.domain_left_edge[2] + \
+ self.grid_dimensions[-1,2] += (ds.domain_dimensions[2] % self.num_grids)
+ self.grid_left_edge[0,2] = ds.domain_left_edge[2]
+ self.grid_left_edge[1:,2] = ds.domain_left_edge[2] + \
np.cumsum(self.grid_dimensions[:-1,2])*dz
self.grid_right_edge[:,2] = self.grid_left_edge[:,2]+self.grid_dimensions[:,2]*dz
- self.grid_left_edge[:,:2] = pf.domain_left_edge[:2]
- self.grid_right_edge[:,:2] = pf.domain_right_edge[:2]
- self.grid_dimensions[:,:2] = pf.domain_dimensions[:2]
+ self.grid_left_edge[:,:2] = ds.domain_left_edge[:2]
+ self.grid_right_edge[:,:2] = ds.domain_right_edge[:2]
+ self.grid_dimensions[:,:2] = ds.domain_dimensions[:2]
else:
- bbox = np.array([[le,re] for le, re in zip(pf.domain_left_edge,
- pf.domain_right_edge)])
- dims = np.array(pf.domain_dimensions)
+ bbox = np.array([[le,re] for le, re in zip(ds.domain_left_edge,
+ ds.domain_right_edge)])
+ dims = np.array(ds.domain_dimensions)
# If we are creating a dataset of lines, only decompose along the position axes
- if len(pf.line_database) > 0:
- dims[pf.spec_axis] = 1
+ if len(ds.line_database) > 0:
+ dims[ds.spec_axis] = 1
psize = get_psize(dims, self.num_grids)
gle, gre, shapes, slices = decompose_array(dims, psize, bbox)
- self.grid_left_edge = self.pf.arr(gle, "code_length")
- self.grid_right_edge = self.pf.arr(gre, "code_length")
+ self.grid_left_edge = self.ds.arr(gle, "code_length")
+ self.grid_right_edge = self.ds.arr(gre, "code_length")
self.grid_dimensions = np.array([shape for shape in shapes], dtype="int32")
# If we are creating a dataset of lines, only decompose along the position axes
- if len(pf.line_database) > 0:
- self.grid_left_edge[:,pf.spec_axis] = pf.domain_left_edge[pf.spec_axis]
- self.grid_right_edge[:,pf.spec_axis] = pf.domain_right_edge[pf.spec_axis]
- self.grid_dimensions[:,pf.spec_axis] = pf.domain_dimensions[pf.spec_axis]
+ if len(ds.line_database) > 0:
+ self.grid_left_edge[:,ds.spec_axis] = ds.domain_left_edge[ds.spec_axis]
+ self.grid_right_edge[:,ds.spec_axis] = ds.domain_right_edge[ds.spec_axis]
+ self.grid_dimensions[:,ds.spec_axis] = ds.domain_dimensions[ds.spec_axis]
else:
- self.grid_left_edge[0,:] = pf.domain_left_edge
- self.grid_right_edge[0,:] = pf.domain_right_edge
- self.grid_dimensions[0] = pf.domain_dimensions
+ self.grid_left_edge[0,:] = ds.domain_left_edge
+ self.grid_right_edge[0,:] = ds.domain_right_edge
+ self.grid_dimensions[0] = ds.domain_dimensions
- if pf.events_data:
+ if ds.events_data:
try:
- self.grid_particle_count[:] = pf.primary_header["naxis2"]
+ self.grid_particle_count[:] = ds.primary_header["naxis2"]
except KeyError:
self.grid_particle_count[:] = 0.0
self._particle_indices = np.zeros(self.num_grids + 1, dtype='int64')
@@ -275,20 +275,20 @@
def _setup_derived_fields(self):
super(FITSHierarchy, self)._setup_derived_fields()
- [self.parameter_file.conversion_factors[field]
+ [self.dataset.conversion_factors[field]
for field in self.field_list]
for field in self.field_list:
if field not in self.derived_field_list:
self.derived_field_list.append(field)
for field in self.derived_field_list:
- f = self.parameter_file.field_info[field]
+ f = self.dataset.field_info[field]
if f._function.func_name == "_TranslationFunc":
# Translating an already-converted field
- self.parameter_file.conversion_factors[field] = 1.0
+ self.dataset.conversion_factors[field] = 1.0
def _setup_data_io(self):
- self.io = io_registry[self.dataset_type](self.parameter_file)
+ self.io = io_registry[self.dataset_type](self.dataset)
def _chunk_io(self, dobj, cache = True, local_only = False):
# local_only is only useful for inline datasets and requires
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/fits/fields.py
--- a/yt/frontends/fits/fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/fits/fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -18,42 +18,42 @@
class FITSFieldInfo(FieldInfoContainer):
known_other_fields = ()
- def __init__(self, pf, field_list, slice_info=None):
- super(FITSFieldInfo, self).__init__(pf, field_list, slice_info=slice_info)
- for field in pf.field_list:
+ def __init__(self, ds, field_list, slice_info=None):
+ super(FITSFieldInfo, self).__init__(ds, field_list, slice_info=slice_info)
+ for field in ds.field_list:
if field[0] == "fits": self[field].take_log = False
def _setup_spec_cube_fields(self):
def _get_2d_wcs(data, axis):
- w_coords = data.pf.wcs_2d.wcs_pix2world(data["x"], data["y"], 1)
+ w_coords = data.ds.wcs_2d.wcs_pix2world(data["x"], data["y"], 1)
return w_coords[axis]
def world_f(axis, unit):
def _world_f(field, data):
- return data.pf.arr(_get_2d_wcs(data, axis), unit)
+ return data.ds.arr(_get_2d_wcs(data, axis), unit)
return _world_f
- for (i, axis), name in zip(enumerate([self.pf.lon_axis, self.pf.lat_axis]),
- [self.pf.lon_name, self.pf.lat_name]):
- unit = str(self.pf.wcs_2d.wcs.cunit[i])
+ for (i, axis), name in zip(enumerate([self.ds.lon_axis, self.ds.lat_axis]),
+ [self.ds.lon_name, self.ds.lat_name]):
+ unit = str(self.ds.wcs_2d.wcs.cunit[i])
if unit.lower() == "deg": unit = "degree"
if unit.lower() == "rad": unit = "radian"
self.add_field(("fits",name), function=world_f(axis, unit), units=unit)
- if self.pf.dimensionality == 3:
+ if self.ds.dimensionality == 3:
def _spec(field, data):
- axis = "xyz"[data.pf.spec_axis]
- sp = (data[axis].ndarray_view()-self.pf._p0)*self.pf._dz + self.pf._z0
- return data.pf.arr(sp, data.pf.spec_unit)
+ axis = "xyz"[data.ds.spec_axis]
+ sp = (data[axis].ndarray_view()-self.ds._p0)*self.ds._dz + self.ds._z0
+ return data.ds.arr(sp, data.ds.spec_unit)
self.add_field(("fits","spectral"), function=_spec,
- units=self.pf.spec_unit, display_name=self.pf.spec_name)
+ units=self.ds.spec_unit, display_name=self.ds.spec_name)
def setup_fluid_fields(self):
- if self.pf.spec_cube:
+ if self.ds.spec_cube:
def _pixel(field, data):
- return data.pf.arr(data["ones"], "pixel")
+ return data.ds.arr(data["ones"], "pixel")
self.add_field(("fits","pixel"), function=_pixel, units="pixel")
self._setup_spec_cube_fields()
return
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/fits/io.py
--- a/yt/frontends/fits/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/fits/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -20,13 +20,13 @@
_particle_reader = False
_dataset_type = "fits"
- def __init__(self, pf):
- super(IOHandlerFITS, self).__init__(pf)
- self.pf = pf
- self._handle = pf._handle
- if self.pf.line_width is not None:
- self.line_db = self.pf.line_database
- self.dz = self.pf.line_width/self.domain_dimensions[self.pf.spec_axis]
+ def __init__(self, ds):
+ super(IOHandlerFITS, self).__init__(ds)
+ self.ds = ds
+ self._handle = ds._handle
+ if self.ds.line_width is not None:
+ self.line_db = self.ds.line_database
+ self.dz = self.ds.line_width/self.domain_dimensions[self.ds.spec_axis]
else:
self.line_db = None
self.dz = 1.
@@ -36,33 +36,33 @@
pass
def _read_particle_coords(self, chunks, ptf):
- pdata = self.pf._handle[self.pf.first_image].data
+ pdata = self.ds._handle[self.ds.first_image].data
assert(len(ptf) == 1)
ptype = ptf.keys()[0]
x = np.asarray(pdata.field("X"), dtype="=f8")
y = np.asarray(pdata.field("Y"), dtype="=f8")
z = np.ones(x.shape)
- x = (x-0.5)/self.pf.reblock+0.5
- y = (y-0.5)/self.pf.reblock+0.5
+ x = (x-0.5)/self.ds.reblock+0.5
+ y = (y-0.5)/self.ds.reblock+0.5
yield ptype, (x,y,z)
def _read_particle_fields(self, chunks, ptf, selector):
- pdata = self.pf._handle[self.pf.first_image].data
+ pdata = self.ds._handle[self.ds.first_image].data
assert(len(ptf) == 1)
ptype = ptf.keys()[0]
field_list = ptf[ptype]
x = np.asarray(pdata.field("X"), dtype="=f8")
y = np.asarray(pdata.field("Y"), dtype="=f8")
z = np.ones(x.shape)
- x = (x-0.5)/self.pf.reblock+0.5
- y = (y-0.5)/self.pf.reblock+0.5
+ x = (x-0.5)/self.ds.reblock+0.5
+ y = (y-0.5)/self.ds.reblock+0.5
mask = selector.select_points(x, y, z, 0.0)
if mask is None: return
for field in field_list:
fd = field.split("_")[-1]
data = pdata.field(fd.upper())
if fd in ["x","y"]:
- data = (data.copy()-0.5)/self.pf.reblock+0.5
+ data = (data.copy()-0.5)/self.ds.reblock+0.5
yield (ptype, field), data[mask]
def _read_fluid_selection(self, chunks, selector, fields, size):
@@ -76,56 +76,56 @@
ng = sum(len(c.objs) for c in chunks)
mylog.debug("Reading %s cells of %s fields in %s grids",
size, [f2 for f1, f2 in fields], ng)
- dx = self.pf.domain_width/self.pf.domain_dimensions
+ dx = self.ds.domain_width/self.ds.domain_dimensions
for field in fields:
ftype, fname = field
tmp_fname = fname
- if fname in self.pf.line_database:
- fname = self.pf.field_list[0][1]
- f = self.pf.index._file_map[fname]
- ds = f[self.pf.index._ext_map[fname]]
- bzero, bscale = self.pf.index._scale_map[fname]
+ if fname in self.ds.line_database:
+ fname = self.ds.field_list[0][1]
+ f = self.ds.index._file_map[fname]
+ ds = f[self.ds.index._ext_map[fname]]
+ bzero, bscale = self.ds.index._scale_map[fname]
fname = tmp_fname
ind = 0
for chunk in chunks:
for g in chunk.objs:
- start = ((g.LeftEdge-self.pf.domain_left_edge)/dx).to_ndarray().astype("int")
+ start = ((g.LeftEdge-self.ds.domain_left_edge)/dx).to_ndarray().astype("int")
end = start + g.ActiveDimensions
if self.line_db is not None and fname in self.line_db:
- my_off = self.line_db.get(fname).in_units(self.pf.spec_unit).value
- my_off = my_off - 0.5*self.pf.line_width
- my_off = int((my_off-self.pf.freq_begin)/self.dz)
+ my_off = self.line_db.get(fname).in_units(self.ds.spec_unit).value
+ my_off = my_off - 0.5*self.ds.line_width
+ my_off = int((my_off-self.ds.freq_begin)/self.dz)
my_off = max(my_off, 0)
- my_off = min(my_off, self.pf.dims[self.pf.spec_axis]-1)
- start[self.pf.spec_axis] += my_off
- end[self.pf.spec_axis] += my_off
+ my_off = min(my_off, self.ds.dims[self.ds.spec_axis]-1)
+ start[self.ds.spec_axis] += my_off
+ end[self.ds.spec_axis] += my_off
mylog.debug("Reading from " + str(start) + str(end))
slices = [slice(start[i],end[i]) for i in xrange(3)]
- if self.pf.reversed:
- new_start = self.pf.dims[self.pf.spec_axis]-1-start[self.pf.spec_axis]
- new_end = max(self.pf.dims[self.pf.spec_axis]-1-end[self.pf.spec_axis],0)
- slices[self.pf.spec_axis] = slice(new_start,new_end,-1)
- if self.pf.dimensionality == 2:
+ if self.ds.reversed:
+ new_start = self.ds.dims[self.ds.spec_axis]-1-start[self.ds.spec_axis]
+ new_end = max(self.ds.dims[self.ds.spec_axis]-1-end[self.ds.spec_axis],0)
+ slices[self.ds.spec_axis] = slice(new_start,new_end,-1)
+ if self.ds.dimensionality == 2:
nx, ny = g.ActiveDimensions[:2]
nz = 1
data = np.zeros((nx,ny,nz))
data[:,:,0] = ds.data[slices[1],slices[0]].transpose()
- elif self.pf.naxis == 4:
- idx = self.pf.index._axis_map[fname]
+ elif self.ds.naxis == 4:
+ idx = self.ds.index._axis_map[fname]
data = ds.data[idx,slices[2],slices[1],slices[0]].transpose()
else:
data = ds.data[slices[2],slices[1],slices[0]].transpose()
if self.line_db is not None:
- nz1 = data.shape[self.pf.spec_axis]
- nz2 = g.ActiveDimensions[self.pf.spec_axis]
+ nz1 = data.shape[self.ds.spec_axis]
+ nz2 = g.ActiveDimensions[self.ds.spec_axis]
if nz1 != nz2:
old_data = data.copy()
data = np.zeros(g.ActiveDimensions)
data[:,:,nz2-nz1:] = old_data
- if fname in self.pf.nan_mask:
- data[np.isnan(data)] = self.pf.nan_mask[fname]
- elif "all" in self.pf.nan_mask:
- data[np.isnan(data)] = self.pf.nan_mask["all"]
+ if fname in self.ds.nan_mask:
+ data[np.isnan(data)] = self.ds.nan_mask[fname]
+ elif "all" in self.ds.nan_mask:
+ data[np.isnan(data)] = self.ds.nan_mask["all"]
data = bzero + bscale*data
ind += g.select(selector, data.astype("float64"), rv[field], ind)
return rv
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/fits/misc.py
--- a/yt/frontends/fits/misc.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/fits/misc.py Sun Jun 15 19:50:51 2014 -0700
@@ -34,7 +34,7 @@
if sigma is not None and sigma > 0.0:
kern = _astropy.conv.Gaussian2DKernel(stddev=sigma)
img[:,:,0] = _astropy.conv.convolve(img[:,:,0], kern)
- return data.pf.arr(img, "counts/pixel")
+ return data.ds.arr(img, "counts/pixel")
return _counts
def setup_counts_fields(ds, ebounds, ftype="gas"):
@@ -129,9 +129,9 @@
from wcsaxes import WCSAxes
if pw.oblique:
raise NotImplementedError("WCS axes are not implemented for oblique plots.")
- if not hasattr(pw.pf, "wcs_2d"):
+ if not hasattr(pw.ds, "wcs_2d"):
raise NotImplementedError("WCS axes are not implemented for this dataset.")
- if pw.data_source.axis != pw.pf.spec_axis:
+ if pw.data_source.axis != pw.ds.spec_axis:
raise NotImplementedError("WCS axes are not implemented for this axis.")
self.plots = {}
self.pw = pw
@@ -139,11 +139,11 @@
rect = pw.plots[f]._get_best_layout()[1]
fig = pw.plots[f].figure
ax = fig.axes[0]
- wcs_ax = WCSAxes(fig, rect, wcs=pw.pf.wcs_2d, frameon=False)
+ wcs_ax = WCSAxes(fig, rect, wcs=pw.ds.wcs_2d, frameon=False)
fig.add_axes(wcs_ax)
- wcs = pw.pf.wcs_2d.wcs
- xax = pw.pf.coordinates.x_axis[pw.data_source.axis]
- yax = pw.pf.coordinates.y_axis[pw.data_source.axis]
+ wcs = pw.ds.wcs_2d.wcs
+ xax = pw.ds.coordinates.x_axis[pw.data_source.axis]
+ yax = pw.ds.coordinates.y_axis[pw.data_source.axis]
xlabel = "%s (%s)" % (wcs.ctype[xax].split("-")[0],
wcs.cunit[xax])
ylabel = "%s (%s)" % (wcs.ctype[yax].split("-")[0],
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/fits/tests/test_outputs.py
--- a/yt/frontends/fits/tests/test_outputs.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/fits/tests/test_outputs.py Sun Jun 15 19:50:51 2014 -0700
@@ -15,7 +15,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
small_patch_amr, \
big_patch_amr, \
data_dir_load
@@ -23,10 +23,10 @@
_fields = ("intensity")
m33 = "radio_fits/m33_hi.fits"
-@requires_pf(m33, big_data=True)
+@requires_ds(m33, big_data=True)
def test_m33():
- pf = data_dir_load(m33, nan_mask=0.0)
- yield assert_equal, str(pf), "m33_hi.fits"
+ ds = data_dir_load(m33, nan_mask=0.0)
+ yield assert_equal, str(ds), "m33_hi.fits"
for test in small_patch_amr(m33, _fields):
test_m33.__name__ = test.description
yield test
@@ -34,10 +34,10 @@
_fields = ("temperature")
grs = "radio_fits/grs-50-cube.fits"
-@requires_pf(grs)
+@requires_ds(grs)
def test_grs():
- pf = data_dir_load(grs, nan_mask=0.0)
- yield assert_equal, str(pf), "grs-50-cube.fits"
+ ds = data_dir_load(grs, nan_mask=0.0)
+ yield assert_equal, str(ds), "grs-50-cube.fits"
for test in small_patch_amr(grs, _fields):
test_grs.__name__ = test.description
yield test
@@ -45,10 +45,10 @@
_fields = ("x-velocity","y-velocity","z-velocity")
vf = "UniformGrid/velocity_field_20.fits"
-@requires_pf(vf)
+@requires_ds(vf)
def test_velocity_field():
- pf = data_dir_load(bf)
- yield assert_equal, str(pf), "velocity_field_20.fits"
+    ds = data_dir_load(vf)
+ yield assert_equal, str(ds), "velocity_field_20.fits"
for test in small_patch_amr(vf, _fields):
test_velocity_field.__name__ = test.description
yield test
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/flash/data_structures.py
--- a/yt/frontends/flash/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/flash/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -53,17 +53,17 @@
grid = FLASHGrid
_preload_implemented = True
- def __init__(self,pf,dataset_type='flash_hdf5'):
+ def __init__(self,ds,dataset_type='flash_hdf5'):
self.dataset_type = dataset_type
self.field_indexes = {}
- self.parameter_file = weakref.proxy(pf)
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.parameter_filename
+ self.dataset = weakref.proxy(ds)
+ # for now, the index file is the dataset!
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.path.dirname(self.index_filename)
- self._handle = pf._handle
- self._particle_handle = pf._particle_handle
+ self._handle = ds._handle
+ self._particle_handle = ds._particle_handle
self.float_type = np.float64
- GridIndex.__init__(self,pf,dataset_type)
+ GridIndex.__init__(self,ds,dataset_type)
def _initialize_data_storage(self):
pass
@@ -77,20 +77,20 @@
def _count_grids(self):
try:
- self.num_grids = self.parameter_file._find_parameter(
+ self.num_grids = self.dataset._find_parameter(
"integer", "globalnumblocks", True)
except KeyError:
self.num_grids = self._handle["/simulation parameters"][0][0]
def _parse_index(self):
f = self._handle # shortcut
- pf = self.parameter_file # shortcut
+ ds = self.dataset # shortcut
f_part = self._particle_handle # shortcut
# Initialize to the domain left / domain right
- ND = self.parameter_file.dimensionality
- DLE = self.parameter_file.domain_left_edge
- DRE = self.parameter_file.domain_right_edge
+ ND = self.dataset.dimensionality
+ DLE = self.dataset.domain_left_edge
+ DRE = self.dataset.domain_right_edge
for i in range(3):
self.grid_left_edge[:,i] = DLE[i]
self.grid_right_edge[:,i] = DRE[i]
@@ -99,9 +99,9 @@
self.grid_right_edge[:,:ND] = f["/bounding box"][:,:ND,1]
# Move this to the parameter file
try:
- nxb = pf.parameters['nxb']
- nyb = pf.parameters['nyb']
- nzb = pf.parameters['nzb']
+ nxb = ds.parameters['nxb']
+ nyb = ds.parameters['nyb']
+ nzb = ds.parameters['nzb']
except KeyError:
nxb, nyb, nzb = [int(f["/simulation parameters"]['n%sb' % ax])
for ax in 'xyz']
@@ -126,12 +126,12 @@
# This is a possibly slow and verbose fix, and should be re-examined!
- rdx = (self.parameter_file.domain_width /
- self.parameter_file.domain_dimensions)
+ rdx = (self.dataset.domain_width /
+ self.dataset.domain_dimensions)
nlevels = self.grid_levels.max()
dxs = np.ones((nlevels+1,3),dtype='float64')
for i in range(nlevels+1):
- dxs[i,:ND] = rdx[:ND]/self.parameter_file.refine_by**i
+ dxs[i,:ND] = rdx[:ND]/self.dataset.refine_by**i
if ND < 3:
dxs[:,ND:] = rdx[ND:]
@@ -150,7 +150,7 @@
offset = 7
ii = np.argsort(self.grid_levels.flat)
gid = self._handle["/gid"][:]
- first_ind = -(self.parameter_file.refine_by**self.parameter_file.dimensionality)
+ first_ind = -(self.dataset.refine_by**self.dataset.dimensionality)
for g in self.grids[ii].flat:
gi = g.id - g._id_offset
# FLASH uses 1-indexed group info
@@ -159,14 +159,14 @@
g1.Parent = g
g._prepare_grid()
g._setup_dx()
- if self.parameter_file.dimensionality < 3:
- DD = (self.parameter_file.domain_right_edge[2] -
- self.parameter_file.domain_left_edge[2])
+ if self.dataset.dimensionality < 3:
+ DD = (self.dataset.domain_right_edge[2] -
+ self.dataset.domain_left_edge[2])
for g in self.grids:
g.dds[2] = DD
- if self.parameter_file.dimensionality < 2:
- DD = (self.parameter_file.domain_right_edge[1] -
- self.parameter_file.domain_left_edge[1])
+ if self.dataset.dimensionality < 2:
+ DD = (self.dataset.domain_right_edge[1] -
+ self.dataset.domain_left_edge[1])
for g in self.grids:
g.dds[1] = DD
self.max_level = self.grid_levels.max()
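
Every frontend index in this changeset converges on the same constructor shape: accept ds, hold it as a weak proxy under self.dataset, and pass ds through to the base class. A distilled sketch of that pattern, with the frontend name and dataset_type purely illustrative and the GridIndex import path assumed to be the one the frontends themselves use:

    import os
    import weakref

    import numpy as np

    from yt.geometry.grid_geometry_handler import GridIndex

    class MyFrontendHierarchy(GridIndex):
        grid = None   # a real frontend points this at its grid class

        def __init__(self, ds, dataset_type='my_frontend'):
            self.dataset_type = dataset_type
            # The dataset is kept as a weak proxy, now under the name "dataset".
            self.dataset = weakref.proxy(ds)
            self.index_filename = self.dataset.parameter_filename
            self.directory = os.path.dirname(self.index_filename)
            self.float_type = np.float64
            GridIndex.__init__(self, ds, dataset_type)
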
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/flash/fields.py
--- a/yt/frontends/flash/fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/flash/fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -106,9 +106,9 @@
# Add energy fields
def ekin(data):
ek = data["flash","velx"]**2
- if data.pf.dimensionality >= 2:
+ if data.ds.dimensionality >= 2:
ek += data["flash","vely"]**2
- if data.pf.dimensionality == 3:
+ if data.ds.dimensionality == 3:
ek += data["flash","velz"]**2
return 0.5*ek
if ("flash","ener") in self.field_list:
@@ -143,12 +143,12 @@
units="erg/g")
## Derived FLASH Fields
def _nele(field, data):
- Na_code = data.pf.quan(Na, '1/code_mass')
+ Na_code = data.ds.quan(Na, '1/code_mass')
return data["flash","dens"]*data["flash","ye"]*Na_code
self.add_field(('flash','nele'), function=_nele, units="code_length**-3")
self.add_field(('flash','edens'), function=_nele, units="code_length**-3")
def _nion(field, data):
- Na_code = data.pf.quan(Na, '1/code_mass')
+ Na_code = data.ds.quan(Na, '1/code_mass')
return data["flash","dens"]*data["flash","sumy"]*Na_code
self.add_field(('flash','nion'), function=_nion, units="code_length**-3")
def _abar(field, data):
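
The FLASH fields above wrap Avogadro's number with data.ds.quan so that dens * ye * Na comes out with the right units; data.pf.quan is gone. The same handles live directly on the dataset, so the unit bookkeeping can be checked interactively. A sketch using the WindTunnel dataset named in the FLASH tests further down (the 1/g conversion assumes FLASH's CGS code units):

    from yt.mods import *

    ds = load("WindTunnel/windtunnel_4lev_hdf5_plt_cnt_0030")

    # ds.quan attaches units through the dataset's own registry; this is the
    # same machinery the field definitions reach via data.ds.quan.
    Na = ds.quan(6.0221413e+23, "1/code_mass")
    print Na.in_units("1/g")

    dd = ds.all_data()
    # The product of a mass density and 1/mass is a number density.
    print (dd["flash", "dens"] * Na).units
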
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/flash/io.py
--- a/yt/frontends/flash/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/flash/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -39,11 +39,11 @@
_particle_reader = False
_dataset_type = "flash_hdf5"
- def __init__(self, pf):
- super(IOHandlerFLASH, self).__init__(pf)
+ def __init__(self, ds):
+ super(IOHandlerFLASH, self).__init__(ds)
# Now we cache the particle fields
- self._handle = pf._handle
- self._particle_handle = pf._particle_handle
+ self._handle = ds._handle
+ self._particle_handle = ds._particle_handle
try :
particle_fields = [s[0].strip() for s in
@@ -60,7 +60,7 @@
def _read_particle_coords(self, chunks, ptf):
chunks = list(chunks)
f_part = self._particle_handle
- p_ind = self.pf.index._particle_indices
+ p_ind = self.ds.index._particle_indices
px, py, pz = (self._particle_fields["particle_pos%s" % ax]
for ax in 'xyz')
p_fields = f_part["/tracer particles"]
@@ -79,7 +79,7 @@
def _read_particle_fields(self, chunks, ptf, selector):
chunks = list(chunks)
f_part = self._particle_handle
- p_ind = self.pf.index._particle_indices
+ p_ind = self.ds.index._particle_indices
px, py, pz = (self._particle_fields["particle_pos%s" % ax]
for ax in 'xyz')
p_fields = f_part["/tracer particles"]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/flash/tests/test_outputs.py
--- a/yt/frontends/flash/tests/test_outputs.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/flash/tests/test_outputs.py Sun Jun 15 19:50:51 2014 -0700
@@ -15,7 +15,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
small_patch_amr, \
big_patch_amr, \
data_dir_load
@@ -24,10 +24,10 @@
_fields = ("temperature", "density", "velocity_magnitude", "velocity_divergence")
sloshing = "GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0300"
-@requires_pf(sloshing, big_data=True)
+@requires_ds(sloshing, big_data=True)
def test_sloshing():
- pf = data_dir_load(sloshing)
- yield assert_equal, str(pf), "sloshing_low_res_hdf5_plt_cnt_0300"
+ ds = data_dir_load(sloshing)
+ yield assert_equal, str(ds), "sloshing_low_res_hdf5_plt_cnt_0300"
for test in small_patch_amr(sloshing, _fields):
test_sloshing.__name__ = test.description
yield test
@@ -35,10 +35,10 @@
_fields_2d = ("temperature", "density")
wt = "WindTunnel/windtunnel_4lev_hdf5_plt_cnt_0030"
-@requires_pf(wt)
+@requires_ds(wt)
def test_wind_tunnel():
- pf = data_dir_load(wt)
- yield assert_equal, str(pf), "windtunnel_4lev_hdf5_plt_cnt_0030"
+ ds = data_dir_load(wt)
+ yield assert_equal, str(ds), "windtunnel_4lev_hdf5_plt_cnt_0030"
for test in small_patch_amr(wt, _fields_2d):
test_wind_tunnel.__name__ = test.description
yield test
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/gdf/data_structures.py
--- a/yt/frontends/gdf/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/gdf/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -51,15 +51,15 @@
# that dx=dy=dz , at least here. We probably do elsewhere.
id = self.id - self._id_offset
if len(self.Parent) > 0:
- self.dds = self.Parent[0].dds / self.pf.refine_by
+ self.dds = self.Parent[0].dds / self.ds.refine_by
else:
LE, RE = self.index.grid_left_edge[id, :], \
self.index.grid_right_edge[id, :]
self.dds = np.array((RE - LE) / self.ActiveDimensions)
- if self.pf.data_software != "piernik":
- if self.pf.dimensionality < 2:
+ if self.ds.data_software != "piernik":
+ if self.ds.dimensionality < 2:
self.dds[1] = 1.0
- if self.pf.dimensionality < 3:
+ if self.ds.dimensionality < 3:
self.dds[2] = 1.0
self.field_data['dx'], self.field_data['dy'], self.field_data['dz'] = \
self.dds
@@ -69,14 +69,13 @@
grid = GDFGrid
- def __init__(self, pf, dataset_type='grid_data_format'):
- self.parameter_file = weakref.proxy(pf)
- self.index_filename = self.parameter_file.parameter_filename
+ def __init__(self, ds, dataset_type='grid_data_format'):
+ self.dataset = weakref.proxy(ds)
+ self.index_filename = self.dataset.parameter_filename
h5f = h5py.File(self.index_filename, 'r')
self.dataset_type = dataset_type
- GridIndex.__init__(self, pf, dataset_type)
+ GridIndex.__init__(self, ds, dataset_type)
self.max_level = 10 # FIXME
- # for now, the index file is the parameter file!
self.directory = os.path.dirname(self.index_filename)
h5f.close()
@@ -98,7 +97,7 @@
glis = (h5f['grid_left_index'][:]).copy()
gdims = (h5f['grid_dimensions'][:]).copy()
active_dims = ~((np.max(gdims, axis=0) == 1) &
- (self.parameter_file.domain_dimensions == 1))
+ (self.dataset.domain_dimensions == 1))
for i in range(levels.shape[0]):
self.grids[i] = self.grid(i, self, levels[i],
@@ -106,13 +105,13 @@
gdims[i])
self.grids[i]._level_id = levels[i]
- dx = (self.parameter_file.domain_right_edge -
- self.parameter_file.domain_left_edge) / \
- self.parameter_file.domain_dimensions
- dx[active_dims] /= self.parameter_file.refine_by ** levels[i]
+ dx = (self.dataset.domain_right_edge -
+ self.dataset.domain_left_edge) / \
+ self.dataset.domain_dimensions
+ dx[active_dims] /= self.dataset.refine_by ** levels[i]
dxs.append(dx.in_units("code_length"))
- dx = self.parameter_file.arr(dxs, input_units="code_length")
- self.grid_left_edge = self.parameter_file.domain_left_edge + dx * glis
+ dx = self.dataset.arr(dxs, input_units="code_length")
+ self.grid_left_edge = self.dataset.domain_left_edge + dx * glis
self.grid_dimensions = gdims.astype("int32")
self.grid_right_edge = self.grid_left_edge + dx * self.grid_dimensions
self.grid_particle_count = h5f['grid_particle_count'][:]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/gdf/io.py
--- a/yt/frontends/gdf/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/gdf/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -46,7 +46,7 @@
h5f = h5py.File(grid.filename, 'r')
gds = h5f.get(_grid_dname(grid.id))
for ftype, fname in fields:
- if self.pf.field_ordering == 1:
+ if self.ds.field_ordering == 1:
rv[(ftype, fname)] = gds.get(fname).value.swapaxes(0, 2)
else:
rv[(ftype, fname)] = gds.get(fname).value
@@ -75,7 +75,7 @@
continue
if fid is None:
fid = h5py.h5f.open(grid.filename, h5py.h5f.ACC_RDONLY)
- if self.pf.field_ordering == 1:
+ if self.ds.field_ordering == 1:
# check the dtype instead
data = np.empty(grid.ActiveDimensions[::-1],
dtype="float64")
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/halo_catalogs/halo_catalog/data_structures.py
--- a/yt/frontends/halo_catalogs/halo_catalog/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/halo_catalogs/halo_catalog/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -38,12 +38,12 @@
YTQuantity
class HaloCatalogHDF5File(ParticleFile):
- def __init__(self, pf, io, filename, file_id):
+ def __init__(self, ds, io, filename, file_id):
with h5py.File(filename, "r") as f:
self.header = dict((field, f.attrs[field]) \
for field in f.attrs.keys())
- super(HaloCatalogHDF5File, self).__init__(pf, io, filename, file_id)
+ super(HaloCatalogHDF5File, self).__init__(ds, io, filename, file_id)
class HaloCatalogDataset(Dataset):
_index_class = ParticleIndex
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/halo_catalogs/halo_catalog/io.py
--- a/yt/frontends/halo_catalogs/halo_catalog/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/halo_catalogs/halo_catalog/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -84,28 +84,28 @@
with h5py.File(data_file.filename, "r") as f:
if not f.keys(): return None
pos = np.empty((pcount, 3), dtype="float64")
- pos = data_file.pf.arr(pos, "code_length")
+ pos = data_file.ds.arr(pos, "code_length")
dx = np.finfo(f['particle_position_x'].dtype).eps
- dx = 2.0*self.pf.quan(dx, "code_length")
+ dx = 2.0*self.ds.quan(dx, "code_length")
pos[:,0] = f["particle_position_x"].value
pos[:,1] = f["particle_position_y"].value
pos[:,2] = f["particle_position_z"].value
# These are 32 bit numbers, so we give a little lee-way.
# Otherwise, for big sets of particles, we often will bump into the
# domain edges. This helps alleviate that.
- np.clip(pos, self.pf.domain_left_edge + dx,
- self.pf.domain_right_edge - dx, pos)
- if np.any(pos.min(axis=0) < self.pf.domain_left_edge) or \
- np.any(pos.max(axis=0) > self.pf.domain_right_edge):
+ np.clip(pos, self.ds.domain_left_edge + dx,
+ self.ds.domain_right_edge - dx, pos)
+ if np.any(pos.min(axis=0) < self.ds.domain_left_edge) or \
+ np.any(pos.max(axis=0) > self.ds.domain_right_edge):
raise YTDomainOverflow(pos.min(axis=0),
pos.max(axis=0),
- self.pf.domain_left_edge,
- self.pf.domain_right_edge)
+ self.ds.domain_left_edge,
+ self.ds.domain_right_edge)
regions.add_data_file(pos, data_file.file_id)
morton[ind:ind+pos.shape[0]] = compute_morton(
pos[:,0], pos[:,1], pos[:,2],
- data_file.pf.domain_left_edge,
- data_file.pf.domain_right_edge)
+ data_file.ds.domain_left_edge,
+ data_file.ds.domain_right_edge)
return morton
def _count_particles(self, data_file):
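
The halo readers above nudge particle positions away from the domain edges by twice the 32-bit epsilon before computing Morton indices, since positions stored in float32 can land exactly on (or marginally outside) the boundary. The clipping step in isolation, as a self-contained numpy sketch with made-up positions and a unit domain:

    import numpy as np

    # Fake positions that started life in 32-bit precision.
    pos = np.random.random((1000, 3)).astype("float32").astype("float64")
    domain_left_edge = np.zeros(3)
    domain_right_edge = np.ones(3)

    # Give a little lee-way, exactly as the readers above do.
    dx = 2.0 * np.finfo(np.float32).eps
    np.clip(pos, domain_left_edge + dx, domain_right_edge - dx, pos)

    # Everything now sits strictly inside the (shrunken) domain.
    assert pos.min() >= dx
    assert pos.max() <= 1.0 - dx
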
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/halo_catalogs/owls_subfind/data_structures.py
--- a/yt/frontends/halo_catalogs/owls_subfind/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/halo_catalogs/owls_subfind/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -45,8 +45,8 @@
YTQuantity
class OWLSSubfindParticleIndex(ParticleIndex):
- def __init__(self, pf, dataset_type):
- super(OWLSSubfindParticleIndex, self).__init__(pf, dataset_type)
+ def __init__(self, ds, dataset_type):
+ super(OWLSSubfindParticleIndex, self).__init__(ds, dataset_type)
def _calculate_particle_index_starts(self):
# Halo indices are not saved in the file, so we must count by hand.
@@ -78,21 +78,21 @@
def _detect_output_fields(self):
# TODO: Add additional fields
- pfl = []
+ dsl = []
units = {}
for dom in self.data_files[:1]:
fl, _units = self.io._identify_fields(dom)
units.update(_units)
dom._calculate_offsets(fl)
for f in fl:
- if f not in pfl: pfl.append(f)
- self.field_list = pfl
- pf = self.parameter_file
- pf.particle_types = tuple(set(pt for pt, pf in pfl))
+ if f not in dsl: dsl.append(f)
+ self.field_list = dsl
+ ds = self.dataset
+ ds.particle_types = tuple(set(pt for pt, ds in dsl))
# This is an attribute that means these particle types *actually*
# exist. As in, they are real, in the dataset.
- pf.field_units.update(units)
- pf.particle_types_raw = pf.particle_types
+ ds.field_units.update(units)
+ ds.particle_types_raw = ds.particle_types
def _setup_geometry(self):
super(OWLSSubfindParticleIndex, self)._setup_geometry()
@@ -100,8 +100,8 @@
self._calculate_file_offset_map()
class OWLSSubfindHDF5File(ParticleFile):
- def __init__(self, pf, io, filename, file_id):
- super(OWLSSubfindHDF5File, self).__init__(pf, io, filename, file_id)
+ def __init__(self, ds, io, filename, file_id):
+ super(OWLSSubfindHDF5File, self).__init__(ds, io, filename, file_id)
with h5py.File(filename, "r") as f:
self.header = dict((field, f.attrs[field]) \
for field in f.attrs.keys())
@@ -151,7 +151,7 @@
self.filename_template = "%s.%%(num)i.%s" % (prefix, suffix)
self.file_count = len(glob.glob(prefix + "*" + self._suffix))
if self.file_count == 0:
- raise YTException(message="No data files found.", pf=self)
+ raise YTException(message="No data files found.", ds=self)
self.particle_types = ("FOF", "SUBFIND")
self.particle_types_raw = ("FOF", "SUBFIND")
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/halo_catalogs/owls_subfind/io.py
--- a/yt/frontends/halo_catalogs/owls_subfind/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/halo_catalogs/owls_subfind/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -30,8 +30,8 @@
class IOHandlerOWLSSubfindHDF5(BaseIOHandler):
_dataset_type = "subfind_hdf5"
- def __init__(self, pf):
- super(IOHandlerOWLSSubfindHDF5, self).__init__(pf)
+ def __init__(self, ds):
+ super(IOHandlerOWLSSubfindHDF5, self).__init__(ds)
self.offset_fields = set([])
def _read_fluid_selection(self, chunks, selector, fields, size):
@@ -121,31 +121,31 @@
with h5py.File(data_file.filename, "r") as f:
if not f.keys(): return None
dx = np.finfo(f["FOF"]['CenterOfMass'].dtype).eps
- dx = 2.0*self.pf.quan(dx, "code_length")
+ dx = 2.0*self.ds.quan(dx, "code_length")
for ptype, pattr in zip(["FOF", "SUBFIND"],
["Number_of_groups", "Number_of_subgroups"]):
my_pcount = f[ptype].attrs[pattr]
pos = f[ptype]["CenterOfMass"].value.astype("float64")
pos = np.resize(pos, (my_pcount, 3))
- pos = data_file.pf.arr(pos, "code_length")
+ pos = data_file.ds.arr(pos, "code_length")
# These are 32 bit numbers, so we give a little lee-way.
# Otherwise, for big sets of particles, we often will bump into the
# domain edges. This helps alleviate that.
- np.clip(pos, self.pf.domain_left_edge + dx,
- self.pf.domain_right_edge - dx, pos)
- if np.any(pos.min(axis=0) < self.pf.domain_left_edge) or \
- np.any(pos.max(axis=0) > self.pf.domain_right_edge):
+ np.clip(pos, self.ds.domain_left_edge + dx,
+ self.ds.domain_right_edge - dx, pos)
+ if np.any(pos.min(axis=0) < self.ds.domain_left_edge) or \
+ np.any(pos.max(axis=0) > self.ds.domain_right_edge):
raise YTDomainOverflow(pos.min(axis=0),
pos.max(axis=0),
- self.pf.domain_left_edge,
- self.pf.domain_right_edge)
+ self.ds.domain_left_edge,
+ self.ds.domain_right_edge)
regions.add_data_file(pos, data_file.file_id)
morton[ind:ind+pos.shape[0]] = compute_morton(
pos[:,0], pos[:,1], pos[:,2],
- data_file.pf.domain_left_edge,
- data_file.pf.domain_right_edge)
+ data_file.ds.domain_left_edge,
+ data_file.ds.domain_right_edge)
ind += pos.shape[0]
return morton
@@ -158,10 +158,10 @@
def _identify_fields(self, data_file):
fields = [(ptype, "particle_identifier")
- for ptype in self.pf.particle_types_raw]
+ for ptype in self.ds.particle_types_raw]
pcount = data_file.total_particles
with h5py.File(data_file.filename, "r") as f:
- for ptype in self.pf.particle_types_raw:
+ for ptype in self.ds.particle_types_raw:
my_fields, my_offset_fields = \
subfind_field_list(f[ptype], ptype, data_file.total_particles)
fields.extend(my_fields)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/halo_catalogs/rockstar/data_structures.py
--- a/yt/frontends/halo_catalogs/rockstar/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/halo_catalogs/rockstar/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -40,14 +40,14 @@
header_dt
class RockstarBinaryFile(ParticleFile):
- def __init__(self, pf, io, filename, file_id):
+ def __init__(self, ds, io, filename, file_id):
with open(filename, "rb") as f:
self.header = fpu.read_cattrs(f, header_dt, "=")
self._position_offset = f.tell()
f.seek(0, os.SEEK_END)
self._file_size = f.tell()
- super(RockstarBinaryFile, self).__init__(pf, io, filename, file_id)
+ super(RockstarBinaryFile, self).__init__(ds, io, filename, file_id)
class RockstarDataset(Dataset):
_index_class = ParticleIndex
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/halo_catalogs/rockstar/io.py
--- a/yt/frontends/halo_catalogs/rockstar/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/halo_catalogs/rockstar/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -92,29 +92,29 @@
halos = np.fromfile(f, dtype=halo_dt, count = pcount)
pos = np.empty((halos.size, 3), dtype="float64")
# These positions are in Mpc, *not* "code" units
- pos = data_file.pf.arr(pos, "code_length")
+ pos = data_file.ds.arr(pos, "code_length")
dx = np.finfo(halos['particle_position_x'].dtype).eps
- dx = 2.0*self.pf.quan(dx, "code_length")
+ dx = 2.0*self.ds.quan(dx, "code_length")
pos[:,0] = halos["particle_position_x"]
pos[:,1] = halos["particle_position_y"]
pos[:,2] = halos["particle_position_z"]
# These are 32 bit numbers, so we give a little lee-way.
# Otherwise, for big sets of particles, we often will bump into the
# domain edges. This helps alleviate that.
- np.clip(pos, self.pf.domain_left_edge + dx,
- self.pf.domain_right_edge - dx, pos)
+ np.clip(pos, self.ds.domain_left_edge + dx,
+ self.ds.domain_right_edge - dx, pos)
#del halos
- if np.any(pos.min(axis=0) < self.pf.domain_left_edge) or \
- np.any(pos.max(axis=0) > self.pf.domain_right_edge):
+ if np.any(pos.min(axis=0) < self.ds.domain_left_edge) or \
+ np.any(pos.max(axis=0) > self.ds.domain_right_edge):
raise YTDomainOverflow(pos.min(axis=0),
pos.max(axis=0),
- self.pf.domain_left_edge,
- self.pf.domain_right_edge)
+ self.ds.domain_left_edge,
+ self.ds.domain_right_edge)
regions.add_data_file(pos, data_file.file_id)
morton[ind:ind+pos.shape[0]] = compute_morton(
pos[:,0], pos[:,1], pos[:,2],
- data_file.pf.domain_left_edge,
- data_file.pf.domain_right_edge)
+ data_file.ds.domain_left_edge,
+ data_file.ds.domain_right_edge)
return morton
def _count_particles(self, data_file):
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/moab/data_structures.py
--- a/yt/frontends/moab/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/moab/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -37,15 +37,14 @@
class MoabHex8Hierarchy(UnstructuredIndex):
- def __init__(self, pf, dataset_type='h5m'):
- self.parameter_file = weakref.proxy(pf)
+ def __init__(self, ds, dataset_type='h5m'):
+ self.dataset = weakref.proxy(ds)
self.dataset_type = dataset_type
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.parameter_filename
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.path.dirname(self.index_filename)
self._fhandle = h5py.File(self.index_filename,'r')
- UnstructuredIndex.__init__(self, pf, dataset_type)
+ UnstructuredIndex.__init__(self, ds, dataset_type)
self._fhandle.close()
@@ -114,15 +113,14 @@
class PyneMeshHex8Hierarchy(UnstructuredIndex):
- def __init__(self, pf, dataset_type='moab_hex8_pyne'):
- self.parameter_file = weakref.proxy(pf)
+ def __init__(self, ds, dataset_type='moab_hex8_pyne'):
+ self.dataset = weakref.proxy(ds)
self.dataset_type = dataset_type
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.parameter_filename
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.getcwd()
- self.pyne_mesh = pf.pyne_mesh
+ self.pyne_mesh = ds.pyne_mesh
- super(PyneMeshHex8Hierarchy, self).__init__(pf, dataset_type)
+ super(PyneMeshHex8Hierarchy, self).__init__(ds, dataset_type)
def _initialize_mesh(self):
from itaps import iBase, iMesh
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/moab/io.py
--- a/yt/frontends/moab/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/moab/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -23,9 +23,9 @@
class IOHandlerMoabH5MHex8(BaseIOHandler):
_dataset_type = "moab_hex8"
- def __init__(self, pf):
- super(IOHandlerMoabH5MHex8, self).__init__(pf)
- self._handle = pf._handle
+ def __init__(self, ds):
+ super(IOHandlerMoabH5MHex8, self).__init__(ds)
+ self._handle = ds._handle
def _read_fluid_selection(self, chunks, selector, fields, size):
chunks = list(chunks)
@@ -55,7 +55,7 @@
assert(len(chunks) == 1)
tags = {}
rv = {}
- pyne_mesh = self.pf.pyne_mesh
+ pyne_mesh = self.ds.pyne_mesh
mesh = pyne_mesh.mesh
for field in fields:
rv[field] = np.empty(size, dtype="float64")
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/moab/tests/test_c5.py
--- a/yt/frontends/moab/tests/test_c5.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/moab/tests/test_c5.py Sun Jun 15 19:50:51 2014 -0700
@@ -16,7 +16,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
small_patch_amr, \
big_patch_amr, \
data_dir_load, \
@@ -28,29 +28,29 @@
)
c5 = "c5/c5.h5m"
-@requires_pf(c5)
+@requires_ds(c5)
def test_cantor_5():
np.random.seed(0x4d3d3d3)
- pf = data_dir_load(c5)
- yield assert_equal, str(pf), "c5"
+ ds = data_dir_load(c5)
+ yield assert_equal, str(ds), "c5"
dso = [ None, ("sphere", ("c", (0.1, 'unitary'))),
("sphere", ("c", (0.2, 'unitary')))]
- dd = pf.h.all_data()
- yield assert_almost_equal, pf.index.get_smallest_dx(), 0.00411522633744843, 10
+ dd = ds.all_data()
+ yield assert_almost_equal, ds.index.get_smallest_dx(), 0.00411522633744843, 10
yield assert_equal, dd["x"].shape[0], 63*63*63
yield assert_almost_equal, \
dd["cell_volume"].in_units("code_length**3").sum(dtype="float64").d, \
1.0, 10
for offset_1 in [1e-9, 1e-4, 0.1]:
for offset_2 in [1e-9, 1e-4, 0.1]:
- DLE = pf.domain_left_edge
- DRE = pf.domain_right_edge
- ray = pf.ray(DLE + offset_1 * DLE.uq,
+ DLE = ds.domain_left_edge
+ DRE = ds.domain_right_edge
+ ray = ds.ray(DLE + offset_1 * DLE.uq,
DRE - offset_2 * DRE.uq)
yield assert_almost_equal, ray["dts"].sum(dtype="float64"), 1.0, 8
for i, p1 in enumerate(np.random.random((5, 3))):
for j, p2 in enumerate(np.random.random((5, 3))):
- ray = pf.ray(p1, p2)
+ ray = ds.ray(p1, p2)
yield assert_almost_equal, ray["dts"].sum(dtype="float64"), 1.0, 8
for field in _fields:
for ds in dso:
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/pluto/data_structures.py
--- a/yt/frontends/pluto/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/pluto/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -67,13 +67,13 @@
if self.start_index != None:
return self.start_index
if self.Parent == []:
- iLE = self.LeftEdge - self.pf.domain_left_edge
+ iLE = self.LeftEdge - self.ds.domain_left_edge
start_index = iLE / self.dds
return np.rint(start_index).astype('int64').ravel()
pdx = self.Parent[0].dds
start_index = (self.Parent[0].get_global_startindex()) + \
np.rint((self.LeftEdge - self.Parent[0].LeftEdge)/pdx)
- self.start_index = (start_index*self.pf.refine_by).astype('int64').ravel()
+ self.start_index = (start_index*self.ds.refine_by).astype('int64').ravel()
return self.start_index
def _setup_dx(self):
@@ -85,21 +85,20 @@
grid = PlutoGrid
- def __init__(self,pf,dataset_type='pluto_hdf5'):
- self.domain_left_edge = pf.domain_left_edge
- self.domain_right_edge = pf.domain_right_edge
+ def __init__(self,ds,dataset_type='pluto_hdf5'):
+ self.domain_left_edge = ds.domain_left_edge
+ self.domain_right_edge = ds.domain_right_edge
self.dataset_type = dataset_type
self.field_indexes = {}
- self.parameter_file = weakref.proxy(pf)
- # for now, the index file is the parameter file!
+ self.dataset = weakref.proxy(ds)
self.index_filename = os.path.abspath(
- self.parameter_file.parameter_filename)
- self.directory = pf.fullpath
- self._handle = pf._handle
+ self.dataset.parameter_filename)
+ self.directory = ds.fullpath
+ self._handle = ds._handle
self.float_type = self._handle['/level_0']['data:datatype=0'].dtype.name
self._levels = self._handle.keys()[2:]
- GridIndex.__init__(self,pf,dataset_type)
+ GridIndex.__init__(self,ds,dataset_type)
def _detect_output_fields(self):
ncomp = int(self._handle['/'].attrs['num_components'])
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/pluto/io.py
--- a/yt/frontends/pluto/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/pluto/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -25,10 +25,10 @@
_offset_string = 'data:offsets=0'
_data_string = 'data:datatype=0'
- def __init__(self, pf, *args, **kwargs):
+ def __init__(self, ds, *args, **kwargs):
BaseIOHandler.__init__(self, *args, **kwargs)
- self.pf = pf
- self._handle = pf._handle
+ self.ds = ds
+ self._handle = ds._handle
_field_dict = None
@property
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/ramses/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -46,15 +46,15 @@
_last_mask = None
_last_selector_id = None
- def __init__(self, pf, domain_id):
- self.pf = pf
+ def __init__(self, ds, domain_id):
+ self.ds = ds
self.domain_id = domain_id
self.nvar = 0 # Set this later!
- num = os.path.basename(pf.parameter_filename).split("."
+ num = os.path.basename(ds.parameter_filename).split("."
)[0].split("_")[1]
basename = "%s/%%s_%s.out%05i" % (
os.path.abspath(
- os.path.dirname(pf.parameter_filename)),
+ os.path.dirname(ds.parameter_filename)),
num, domain_id)
for t in ['grav', 'hydro', 'part', 'amr']:
setattr(self, "%s_fn" % t, basename % t)
@@ -88,7 +88,7 @@
f = open(self.hydro_fn, "rb")
fpu.skip(f, 6)
# It goes: level, CPU, 8-variable
- min_level = self.pf.min_level
+ min_level = self.ds.min_level
n_levels = self.amr_header['nlevelmax'] - min_level
hydro_offset = np.zeros(n_levels, dtype='int64')
hydro_offset -= 1
@@ -195,8 +195,8 @@
# Now we iterate over each level and each CPU.
self.amr_header = hvals
self.amr_offset = f.tell()
- self.local_oct_count = hvals['numbl'][self.pf.min_level:, self.domain_id - 1].sum()
- self.total_oct_count = hvals['numbl'][self.pf.min_level:,:].sum(axis=0)
+ self.local_oct_count = hvals['numbl'][self.ds.min_level:, self.domain_id - 1].sum()
+ self.total_oct_count = hvals['numbl'][self.ds.min_level:,:].sum(axis=0)
def _read_amr(self):
"""Open the oct file, read in octs level-by-level.
@@ -205,9 +205,9 @@
The most important is finding all the information to feed
oct_handler.add
"""
- self.oct_handler = RAMSESOctreeContainer(self.pf.domain_dimensions/2,
- self.pf.domain_left_edge, self.pf.domain_right_edge)
- root_nodes = self.amr_header['numbl'][self.pf.min_level,:].sum()
+ self.oct_handler = RAMSESOctreeContainer(self.ds.domain_dimensions/2,
+ self.ds.domain_left_edge, self.ds.domain_right_edge)
+ root_nodes = self.amr_header['numbl'][self.ds.min_level,:].sum()
self.oct_handler.allocate_domains(self.total_oct_count, root_nodes)
fb = open(self.amr_fn, "rb")
fb.seek(self.amr_offset)
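
The docstring in the hunk above describes _read_amr's job: open the oct file, walk it level by level, and hand everything to oct_handler.add. A minimal, self-contained sketch of that pattern, with a toy container and in-memory arrays standing in for the Fortran record reads; this is illustrative only, not yt's actual RAMSES reader:

import numpy as np

class ToyOctContainer:
    # Stand-in for RAMSESOctreeContainer: it just records what it is handed.
    def __init__(self):
        self.octs = []
    def add(self, domain_id, level, positions):
        self.octs.append((domain_id, level, np.asarray(positions)))

def read_amr_levels(per_level_positions, domain_id, container):
    # per_level_positions: one (n_octs, 3) array per level, standing in for
    # the vectors fpu.read_vector() would pull out of the amr file.
    max_level = 0
    for level, pos in enumerate(per_level_positions):
        if len(pos) == 0:
            continue                       # no octs on this level for this domain
        container.add(domain_id, level, pos)
        max_level = max(max_level, level)
    return max_level

container = ToyOctContainer()
levels = [np.random.random((8, 3)), np.random.random((16, 3)), np.empty((0, 3))]
print(read_amr_levels(levels, domain_id=1, container=container))  # -> 1
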
@@ -223,7 +223,7 @@
ng = self.ngridbound[c - self.amr_header['ncpu'] +
self.amr_header['nboundary']*l]
return ng
- min_level = self.pf.min_level
+ min_level = self.ds.min_level
max_level = min_level
nx, ny, nz = (((i-1.0)/2.0) for i in self.amr_header['nx'])
for level in range(self.amr_header['nlevelmax']):
@@ -238,8 +238,8 @@
pos[:,0] = fpu.read_vector(f, "d") - nx
pos[:,1] = fpu.read_vector(f, "d") - ny
pos[:,2] = fpu.read_vector(f, "d") - nz
- #pos *= self.pf.domain_width
- #pos += self.parameter_file.domain_left_edge
+ #pos *= self.ds.domain_width
+ #pos += self.dataset.domain_left_edge
fpu.skip(f, 31)
#parents = fpu.read_vector(f, "I")
#fpu.skip(f, 6)
@@ -280,9 +280,9 @@
print " extent [%s] : %s %s" % \
(ax, pos[:,i].min(), pos[:,i].max())
print " domain left : %s" % \
- (self.pf.domain_left_edge,)
+ (self.ds.domain_left_edge,)
print " domain right : %s" % \
- (self.pf.domain_right_edge,)
+ (self.ds.domain_right_edge,)
print " offset applied : %s %s %s" % \
(nn[0], nn[1], nn[2])
print "AMR Header:"
@@ -304,7 +304,7 @@
# Here we get a copy of the file, which we skip through and read the
# bits we want.
oct_handler = self.oct_handler
- all_fields = self.domain.pf.index.fluid_field_list
+ all_fields = self.domain.ds.index.fluid_field_list
fields = [f for ft, f in fields]
tr = {}
cell_count = selector.count_oct_cells(self.oct_handler, self.domain_id)
@@ -330,22 +330,21 @@
class RAMSESIndex(OctreeIndex):
- def __init__(self, pf, dataset_type='ramses'):
- self._pf = pf # TODO: Figure out the class composition better!
- self.fluid_field_list = pf._fields_in_file
+ def __init__(self, ds, dataset_type='ramses'):
+ self._ds = ds # TODO: Figure out the class composition better!
+ self.fluid_field_list = ds._fields_in_file
self.dataset_type = dataset_type
- self.parameter_file = weakref.proxy(pf)
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.parameter_filename
+ self.dataset = weakref.proxy(ds)
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.path.dirname(self.index_filename)
self.max_level = None
self.float_type = np.float64
- super(RAMSESIndex, self).__init__(pf, dataset_type)
+ super(RAMSESIndex, self).__init__(ds, dataset_type)
def _initialize_oct_handler(self):
- self.domains = [RAMSESDomainFile(self.parameter_file, i + 1)
- for i in range(self.parameter_file['ncpu'])]
+ self.domains = [RAMSESDomainFile(self.dataset, i + 1)
+ for i in range(self.dataset['ncpu'])]
total_octs = sum(dom.local_oct_count #+ dom.ngridbound.sum()
for dom in self.domains)
self.max_level = max(dom.max_level for dom in self.domains)
@@ -353,12 +352,12 @@
def _detect_output_fields(self):
# Do we want to attempt to figure out what the fields are in the file?
- pfl = set([])
+ dsl = set([])
if self.fluid_field_list is None or len(self.fluid_field_list) <= 0:
self._setup_auto_fields()
for domain in self.domains:
- pfl.update(set(domain.particle_field_offsets.keys()))
- self.particle_field_list = list(pfl)
+ dsl.update(set(domain.particle_field_offsets.keys()))
+ self.particle_field_list = list(dsl)
self.field_list = [("ramses", f) for f in self.fluid_field_list] \
+ self.particle_field_list
@@ -371,12 +370,12 @@
# TODO: copy/pasted from DomainFile; needs refactoring!
- num = os.path.basename(self._pf.parameter_filename).split("."
+        num = os.path.basename(self._ds.parameter_filename).split("."
)[0].split("_")[1]
testdomain = 1 # Just pick the first domain file to read
basename = "%s/%%s_%s.out%05i" % (
os.path.abspath(
- os.path.dirname(self._pf.parameter_filename)),
+                os.path.dirname(self._ds.parameter_filename)),
num, testdomain)
hydro_fn = basename % "hydro"
# Do we have a hydro file?
@@ -393,7 +392,7 @@
('gamma', 1, 'd')
)
hvals = fpu.read_attrs(f, hydro_header)
- self.pf.gamma = hvals['gamma']
+ self.ds.gamma = hvals['gamma']
nvar = hvals['nvar']
# OK, we got NVAR, now set up the arrays depending on what NVAR is
# Allow some wiggle room for users to add too many variables
@@ -434,7 +433,7 @@
base_region = getattr(dobj, "base_region", dobj)
if len(domains) > 1:
mylog.debug("Identified %s intersecting domains", len(domains))
- subsets = [RAMSESDomainSubset(base_region, domain, self.parameter_file)
+ subsets = [RAMSESDomainSubset(base_region, domain, self.dataset)
for domain in domains]
dobj._chunk_info = subsets
dobj._current_chunk = list(self._chunk_all(dobj))[0]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/ramses/fields.py
--- a/yt/frontends/ramses/fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/ramses/fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -96,10 +96,10 @@
units="K")
def create_cooling_fields(self, filename):
- num = os.path.basename(self.pf.parameter_filename).split("."
+ num = os.path.basename(self.ds.parameter_filename).split("."
)[0].split("_")[1]
filename = "%s/cooling_%05i.out" % (
- os.path.dirname(self.pf.parameter_filename), int(num))
+ os.path.dirname(self.ds.parameter_filename), int(num))
if not os.path.exists(filename): return
def _create_field(name, interp_object):
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/ramses/io.py
--- a/yt/frontends/ramses/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/ramses/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -94,5 +94,5 @@
dt = subset.domain.particle_field_types[field]
tr[field] = fpu.read_vector(f, dt)
if field[1].startswith("particle_position"):
- np.divide(tr[field], subset.domain.pf["boxlen"], tr[field])
+ np.divide(tr[field], subset.domain.ds["boxlen"], tr[field])
return tr
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/ramses/tests/test_outputs.py
--- a/yt/frontends/ramses/tests/test_outputs.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/ramses/tests/test_outputs.py Sun Jun 15 19:50:51 2014 -0700
@@ -16,7 +16,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
data_dir_load, \
PixelizedProjectionValuesTest, \
FieldValuesTest, \
@@ -27,10 +27,10 @@
("deposit", "all_density"), ("deposit", "all_count"))
output_00080 = "output_00080/info_00080.txt"
-@requires_pf(output_00080)
+@requires_ds(output_00080)
def test_output_00080():
- pf = data_dir_load(output_00080)
- yield assert_equal, str(pf), "info_00080"
+ ds = data_dir_load(output_00080)
+ yield assert_equal, str(ds), "info_00080"
dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
for ds in dso:
for field in _fields:
@@ -40,7 +40,7 @@
output_00080, axis, field, weight_field,
ds)
yield FieldValuesTest(output_00080, field, ds)
- dobj = create_obj(pf, ds)
+ dobj = create_obj(ds, ds)
s1 = dobj["ones"].sum()
s2 = sum(mask.sum() for block, mask in dobj.blocks)
yield assert_equal, s1, s2
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/sdf/io.py
--- a/yt/frontends/sdf/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/sdf/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -34,7 +34,7 @@
@property
def _handle(self):
- return self.pf.sdf_container
+ return self.ds.sdf_container
def _read_fluid_selection(self, chunks, selector, fields, size):
raise NotImplementedError
@@ -74,7 +74,7 @@
for field in field_list:
if field == "mass":
data = np.ones(mask.sum(), dtype="float64")
- data *= self.pf.parameters["particle_mass"]
+ data *= self.ds.parameters["particle_mass"]
else:
data = self._handle[field][mask]
yield (ptype, field), data
@@ -90,17 +90,17 @@
pos[:,0] = x[ind:ind+npart]
pos[:,1] = y[ind:ind+npart]
pos[:,2] = z[ind:ind+npart]
- if np.any(pos.min(axis=0) < self.pf.domain_left_edge) or \
- np.any(pos.max(axis=0) > self.pf.domain_right_edge):
+ if np.any(pos.min(axis=0) < self.ds.domain_left_edge) or \
+ np.any(pos.max(axis=0) > self.ds.domain_right_edge):
raise YTDomainOverflow(pos.min(axis=0),
pos.max(axis=0),
- self.pf.domain_left_edge,
- self.pf.domain_right_edge)
+ self.ds.domain_left_edge,
+ self.ds.domain_right_edge)
regions.add_data_file(pos, data_file.file_id)
morton[ind:ind+npart] = compute_morton(
pos[:,0], pos[:,1], pos[:,2],
- data_file.pf.domain_left_edge,
- data_file.pf.domain_right_edge)
+ data_file.ds.domain_left_edge,
+ data_file.ds.domain_right_edge)
ind += CHUNKSIZE
return morton
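
The SDF reader above (and the SPH readers later in this changeset) finish _initialize_index by calling compute_morton on the particle positions together with the domain edges. A rough sketch of the underlying idea, quantizing each coordinate onto a 2**order grid across the domain and interleaving the bits into one Z-order key; this illustrates the concept only and is not yt's compiled implementation:

import numpy as np

ORDER = 10  # bits per dimension for this sketch (yt's _ORDER_MAX is larger)

def interleave_bits(ix, iy, iz, order=ORDER):
    # Build the Morton (Z-order) key by interleaving the x, y, z bits.
    key = np.zeros(ix.shape, dtype=np.int64)
    for bit in range(order):
        key |= ((ix >> bit) & 1) << (3 * bit)
        key |= ((iy >> bit) & 1) << (3 * bit + 1)
        key |= ((iz >> bit) & 1) << (3 * bit + 2)
    return key

def morton_keys(pos, left_edge, right_edge, order=ORDER):
    # Scale positions into [0, 1) across the domain, quantize, interleave.
    scaled = (pos - left_edge) / (right_edge - left_edge)
    idx = np.clip((scaled * (1 << order)).astype(np.int64), 0, (1 << order) - 1)
    return interleave_bits(idx[:, 0], idx[:, 1], idx[:, 2], order)

pos = np.random.random((1000, 3))
keys = morton_keys(pos, np.zeros(3), np.ones(3))
print(keys.min(), keys.max())
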
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/sph/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -58,14 +58,14 @@
return unit
class GadgetBinaryFile(ParticleFile):
- def __init__(self, pf, io, filename, file_id):
+ def __init__(self, ds, io, filename, file_id):
with open(filename, "rb") as f:
- self.header = read_record(f, pf._header_spec)
+ self.header = read_record(f, ds._header_spec)
self._position_offset = f.tell()
f.seek(0, os.SEEK_END)
self._file_size = f.tell()
- super(GadgetBinaryFile, self).__init__(pf, io, filename, file_id)
+ super(GadgetBinaryFile, self).__init__(ds, io, filename, file_id)
def _calculate_offsets(self, field_list):
self.field_offsets = self.io._calculate_field_offsets(
@@ -357,11 +357,11 @@
def _calculate_offsets(self, field_list):
self.field_offsets = self.io._calculate_particle_offsets(self)
- def __init__(self, pf, io, filename, file_id):
+ def __init__(self, ds, io, filename, file_id):
# To go above 1 domain, we need to include an indexing step in the
# IOHandler, rather than simply reading from a single file.
assert file_id == 0
- super(TipsyFile, self).__init__(pf, io, filename, file_id)
+ super(TipsyFile, self).__init__(ds, io, filename, file_id)
io._create_dtypes(self)
io._update_domain(self)#Check automatically what the domain size is
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/sph/fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -92,7 +92,7 @@
class TipsyFieldInfo(SPHFieldInfo):
- def __init__(self, pf, field_list, slice_info = None):
+ def __init__(self, ds, field_list, slice_info = None):
aux_particle_fields = {
'uDotFB':("uDotFB", ("code_mass * code_velocity**2", ["uDotFB"], None)),
'uDotAV':("uDotAV", ("code_mass * code_velocity**2", ["uDotAV"], None)),
@@ -116,7 +116,7 @@
if field[1] in aux_particle_fields.keys() and \
aux_particle_fields[field[1]] not in self.known_particle_fields:
self.known_particle_fields += (aux_particle_fields[field[1]],)
- super(TipsyFieldInfo,self).__init__(pf, field_list, slice_info)
+ super(TipsyFieldInfo,self).__init__(ds, field_list, slice_info)
@@ -347,7 +347,7 @@
# create ionization table for this redshift
#--------------------------------------------------------
itab = oit.IonTableOWLS( fname )
- itab.set_iz( data.pf.current_redshift )
+ itab.set_iz( data.ds.current_redshift )
# find ion balance using log nH and log T
#--------------------------------------------------------
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/sph/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -61,7 +61,7 @@
def var_mass(self):
if self._var_mass is None:
vm = []
- for i, v in enumerate(self.pf["Massarr"]):
+ for i, v in enumerate(self.ds["Massarr"]):
if v == 0:
vm.append(self._known_ptypes[i])
self._var_mass = tuple(vm)
@@ -108,7 +108,7 @@
ptype not in self.var_mass:
data = np.empty(mask.sum(), dtype="float64")
ind = self._known_ptypes.index(ptype)
- data[:] = self.pf["Massarr"][ind]
+ data[:] = self.ds["Massarr"][ind]
elif field in self._element_names:
rfield = 'ElementAbundance/' + field
@@ -134,17 +134,17 @@
dt = ds.dtype.newbyteorder("N") # Native
pos = np.empty(ds.shape, dtype=dt)
pos[:] = ds
- if np.any(pos.min(axis=0) < self.pf.domain_left_edge) or \
- np.any(pos.max(axis=0) > self.pf.domain_right_edge):
+ if np.any(pos.min(axis=0) < self.ds.domain_left_edge) or \
+ np.any(pos.max(axis=0) > self.ds.domain_right_edge):
raise YTDomainOverflow(pos.min(axis=0),
pos.max(axis=0),
- self.pf.domain_left_edge,
- self.pf.domain_right_edge)
+ self.ds.domain_left_edge,
+ self.ds.domain_right_edge)
regions.add_data_file(pos, data_file.file_id)
morton[ind:ind+pos.shape[0]] = compute_morton(
pos[:,0], pos[:,1], pos[:,2],
- data_file.pf.domain_left_edge,
- data_file.pf.domain_right_edge)
+ data_file.ds.domain_left_edge,
+ data_file.ds.domain_right_edge)
ind += pos.shape[0]
f.close()
return morton
@@ -160,8 +160,8 @@
def _identify_fields(self, data_file):
f = _get_h5_handle(data_file.filename)
fields = []
- cname = self.pf._particle_coordinates_name # Coordinates
- mname = self.pf._particle_mass_name # Mass
+ cname = self.ds._particle_coordinates_name # Coordinates
+ mname = self.ds._particle_mass_name # Mass
# loop over all keys in OWLS hdf5 file
#--------------------------------------------------
@@ -232,16 +232,16 @@
_var_mass = None
- def __init__(self, pf, *args, **kwargs):
- self._fields = pf._field_spec
- self._ptypes = pf._ptype_spec
- super(IOHandlerGadgetBinary, self).__init__(pf, *args, **kwargs)
+ def __init__(self, ds, *args, **kwargs):
+ self._fields = ds._field_spec
+ self._ptypes = ds._ptype_spec
+ super(IOHandlerGadgetBinary, self).__init__(ds, *args, **kwargs)
@property
def var_mass(self):
if self._var_mass is None:
vm = []
- for i, v in enumerate(self.pf["Massarr"]):
+ for i, v in enumerate(self.ds["Massarr"]):
if v == 0:
vm.append(self._ptypes[i])
self._var_mass = tuple(vm)
@@ -287,7 +287,7 @@
for field in field_list:
if field == "Mass" and ptype not in self.var_mass:
data = np.empty(mask.sum(), dtype="float64")
- m = self.pf.parameters["Massarr"][
+ m = self.ds.parameters["Massarr"][
self._ptypes.index(ptype)]
data[:] = m
yield (ptype, field), data
@@ -313,8 +313,8 @@
def _initialize_index(self, data_file, regions):
count = sum(data_file.total_particles.values())
- DLE = data_file.pf.domain_left_edge
- DRE = data_file.pf.domain_right_edge
+ DLE = data_file.ds.domain_left_edge
+ DRE = data_file.ds.domain_right_edge
dx = (DRE - DLE) / 2**_ORDER_MAX
with open(data_file.filename, "rb") as f:
# We add on an additionally 4 for the first record.
@@ -322,12 +322,12 @@
# The first total_particles * 3 values are positions
pp = np.fromfile(f, dtype = 'float32', count = count*3)
pp.shape = (count, 3)
- if np.any(pp.min(axis=0) < self.pf.domain_left_edge) or \
- np.any(pp.max(axis=0) > self.pf.domain_right_edge):
+ if np.any(pp.min(axis=0) < self.ds.domain_left_edge) or \
+ np.any(pp.max(axis=0) > self.ds.domain_right_edge):
raise YTDomainOverflow(pp.min(axis=0),
pp.max(axis=0),
- self.pf.domain_left_edge,
- self.pf.domain_right_edge)
+ self.ds.domain_left_edge,
+ self.ds.domain_right_edge)
regions.add_data_file(pp, data_file.file_id)
morton = compute_morton(pp[:,0], pp[:,1], pp[:,2], DLE, DRE)
return morton
@@ -446,7 +446,7 @@
raise RuntimeError
except ValueError:#binary/xdr
f = open(filename, 'rb')
- l = struct.unpack(data_file.pf.endian+"i", f.read(4))[0]
+ l = struct.unpack(data_file.ds.endian+"i", f.read(4))[0]
if l != np.sum(data_file.total_particles.values()):
print "Error reading auxiliary tipsy file"
raise RuntimeError
@@ -454,12 +454,12 @@
if field in ('iord', 'igasorder', 'grp'):#These fields are integers
dtype = 'i'
try:# If we try loading doubles by default, we can catch an exception and try floats next
- auxdata = np.array(struct.unpack(data_file.pf.endian+(l*dtype), f.read()))
+ auxdata = np.array(struct.unpack(data_file.ds.endian+(l*dtype), f.read()))
except struct.error:
f.seek(4)
dtype = 'f'
try:
- auxdata = np.array(struct.unpack(data_file.pf.endian+(l*dtype), f.read()))
+ auxdata = np.array(struct.unpack(data_file.ds.endian+(l*dtype), f.read()))
except struct.error: # None of the binary attempts to read succeeded
print "Error reading auxiliary tipsy file"
raise RuntimeError
@@ -547,16 +547,16 @@
bound the particles. It simply finds the largest position of the
whole set of particles, and sets the domain to +/- that value.
'''
- pf = data_file.pf
+ ds = data_file.ds
ind = 0
# Check to make sure that the domain hasn't already been set
# by the parameter file
- if np.all(np.isfinite(pf.domain_left_edge)) and np.all(np.isfinite(pf.domain_right_edge)):
+ if np.all(np.isfinite(ds.domain_left_edge)) and np.all(np.isfinite(ds.domain_right_edge)):
return
with open(data_file.filename, "rb") as f:
- pf.domain_left_edge = 0
- pf.domain_right_edge = 0
- f.seek(pf._header_offset)
+ ds.domain_left_edge = 0
+ ds.domain_right_edge = 0
+ f.seek(ds._header_offset)
mi = np.array([1e30, 1e30, 1e30], dtype="float64")
ma = -np.array([1e30, 1e30, 1e30], dtype="float64")
for iptype, ptype in enumerate(self._ptypes):
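
The _update_domain docstring above describes bounding the particles when the parameter file gives no finite domain; the hunk that follows pads the min/max box by 1% of its extent on each side. A standalone version of that bounding-box step in plain numpy, without the Tipsy file handling:

import numpy as np

def padded_particle_domain(positions, pad=0.01):
    # Tight bounding box of all particle positions, widened by `pad` of its
    # extent on each side, mirroring the mi -= 0.01*DW / ma += 0.01*DW step below.
    mi = positions.min(axis=0)
    ma = positions.max(axis=0)
    dw = ma - mi
    return mi - pad * dw, ma + pad * dw

left, right = padded_particle_domain(np.random.uniform(-5.0, 5.0, size=(10000, 3)))
print(left, right)
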
@@ -580,23 +580,23 @@
DW = ma - mi
mi -= 0.01 * DW
ma += 0.01 * DW
- pf.domain_left_edge = pf.arr(mi, 'code_length')
- pf.domain_right_edge = pf.arr(ma, 'code_length')
- pf.domain_width = DW = pf.domain_right_edge - pf.domain_left_edge
- pf.unit_registry.add("unitary", float(DW.max() * DW.units.cgs_value),
+ ds.domain_left_edge = ds.arr(mi, 'code_length')
+ ds.domain_right_edge = ds.arr(ma, 'code_length')
+ ds.domain_width = DW = ds.domain_right_edge - ds.domain_left_edge
+ ds.unit_registry.add("unitary", float(DW.max() * DW.units.cgs_value),
DW.units.dimensions)
def _initialize_index(self, data_file, regions):
- pf = data_file.pf
+ ds = data_file.ds
morton = np.empty(sum(data_file.total_particles.values()),
dtype="uint64")
ind = 0
- DLE, DRE = pf.domain_left_edge, pf.domain_right_edge
+ DLE, DRE = ds.domain_left_edge, ds.domain_right_edge
dx = (DRE - DLE) / (2**_ORDER_MAX)
self.domain_left_edge = DLE.in_units("code_length").ndarray_view()
self.domain_right_edge = DRE.in_units("code_length").ndarray_view()
with open(data_file.filename, "rb") as f:
- f.seek(pf._header_offset)
+ f.seek(ds._header_offset)
for iptype, ptype in enumerate(self._ptypes):
# We'll just add the individual types separately
count = data_file.total_particles[ptype]
@@ -614,11 +614,11 @@
mylog.debug("Spanning: %0.3e .. %0.3e in %s", mi, ma, ax)
mis[axi] = mi
mas[axi] = ma
- if np.any(mis < pf.domain_left_edge) or \
- np.any(mas > pf.domain_right_edge):
+ if np.any(mis < ds.domain_left_edge) or \
+ np.any(mas > ds.domain_right_edge):
raise YTDomainOverflow(mis, mas,
- pf.domain_left_edge,
- pf.domain_right_edge)
+ ds.domain_left_edge,
+ ds.domain_right_edge)
pos = np.empty((pp.size, 3), dtype="float64")
for i, ax in enumerate("xyz"):
eps = np.finfo(pp["Coordinates"][ax].dtype).eps
@@ -635,9 +635,9 @@
def _count_particles(self, data_file):
npart = {
- "Gas": data_file.pf.parameters['nsph'],
- "Stars": data_file.pf.parameters['nstar'],
- "DarkMatter": data_file.pf.parameters['ndark']
+ "Gas": data_file.ds.parameters['nsph'],
+ "Stars": data_file.ds.parameters['nstar'],
+ "DarkMatter": data_file.ds.parameters['ndark']
}
return npart
@@ -659,15 +659,15 @@
def _create_dtypes(self, data_file):
# We can just look at the particle counts.
- self._header_offset = data_file.pf._header_offset
+ self._header_offset = data_file.ds._header_offset
self._pdtypes = {}
pds = {}
field_list = []
tp = data_file.total_particles
aux_filenames = glob.glob(data_file.filename+'.*') # Find out which auxiliaries we have
self._aux_fields = [f[1+len(data_file.filename):] for f in aux_filenames]
- self._pdtypes = self._compute_dtypes(data_file.pf._field_dtypes,
- data_file.pf.endian)
+ self._pdtypes = self._compute_dtypes(data_file.ds._field_dtypes,
+ data_file.ds.endian)
for ptype, field in self._fields:
if tp[ptype] == 0:
# We do not want out _pdtypes to have empty particles.
@@ -688,7 +688,7 @@
def _calculate_particle_offsets(self, data_file):
field_offsets = {}
- pos = data_file.pf._header_offset
+ pos = data_file.ds._header_offset
for ptype in self._ptypes:
field_offsets[ptype] = pos
if data_file.total_particles[ptype] == 0: continue
@@ -700,13 +700,13 @@
_dataset_type = "http_particle_stream"
_vector_fields = ("Coordinates", "Velocity", "Velocities")
- def __init__(self, pf):
+ def __init__(self, ds):
if requests is None:
raise RuntimeError
- self._url = pf.base_url
+ self._url = ds.base_url
# This should eventually manage the IO and cache it
self.total_bytes = 0
- super(IOHandlerHTTPStream, self).__init__(pf)
+ super(IOHandlerHTTPStream, self).__init__(ds)
def _open_stream(self, data_file, field):
# This does not actually stream yet!
@@ -722,7 +722,7 @@
def _identify_fields(self, data_file):
f = []
- for ftype, fname in self.pf.parameters["field_list"]:
+ for ftype, fname in self.ds.parameters["field_list"]:
f.append((str(ftype), str(fname)))
return f, {}
@@ -763,7 +763,7 @@
yield (ptype, field), data
def _initialize_index(self, data_file, regions):
- header = self.pf.parameters
+ header = self.ds.parameters
ptypes = header["particle_count"][data_file.file_id].keys()
pcount = sum(header["particle_count"][data_file.file_id].values())
morton = np.empty(pcount, dtype='uint64')
@@ -775,10 +775,10 @@
regions.add_data_file(c, data_file.file_id)
morton[ind:ind+c.shape[0]] = compute_morton(
c[:,0], c[:,1], c[:,2],
- data_file.pf.domain_left_edge,
- data_file.pf.domain_right_edge)
+ data_file.ds.domain_left_edge,
+ data_file.ds.domain_right_edge)
ind += c.shape[0]
return morton
def _count_particles(self, data_file):
- return self.pf.parameters["particle_count"][data_file.file_id]
+ return self.ds.parameters["particle_count"][data_file.file_id]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/sph/tests/test_owls.py
--- a/yt/frontends/sph/tests/test_owls.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/sph/tests/test_owls.py Sun Jun 15 19:50:51 2014 -0700
@@ -16,7 +16,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
small_patch_amr, \
big_patch_amr, \
data_dir_load, \
@@ -30,16 +30,16 @@
("deposit", "PartType4_density"))
os33 = "snapshot_033/snap_033.0.hdf5"
-@requires_pf(os33)
+@requires_ds(os33)
def test_snapshot_033():
- pf = data_dir_load(os33)
- yield assert_equal, str(pf), "snap_033"
+ ds = data_dir_load(os33)
+ yield assert_equal, str(ds), "snap_033"
dso = [ None, ("sphere", ("c", (0.1, 'unitary')))]
- dd = pf.h.all_data()
+ dd = ds.all_data()
yield assert_equal, dd["particle_position"].shape[0], 2*(128*128*128)
yield assert_equal, dd["particle_position"].shape[1], 3
tot = sum(dd[ptype,"particle_position"].shape[0]
- for ptype in pf.particle_types if ptype != "all")
+ for ptype in ds.particle_types if ptype != "all")
yield assert_equal, tot, (2*128*128*128)
for ds in dso:
for field in _fields:
@@ -49,7 +49,7 @@
os33, axis, field, weight_field,
ds)
yield FieldValuesTest(os33, field, ds)
- dobj = create_obj(pf, ds)
+ dobj = create_obj(ds, ds)
s1 = dobj["ones"].sum()
s2 = sum(mask.sum() for block, mask in dobj.blocks)
yield assert_equal, s1, s2
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/sph/tests/test_tipsy.py
--- a/yt/frontends/sph/tests/test_tipsy.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/sph/tests/test_tipsy.py Sun Jun 15 19:50:51 2014 -0700
@@ -16,7 +16,7 @@
from yt.testing import *
from yt.utilities.answer_testing.framework import \
- requires_pf, \
+ requires_ds, \
small_patch_amr, \
big_patch_amr, \
data_dir_load, \
@@ -31,7 +31,7 @@
)
pkdgrav = "halo1e11_run1.00400/halo1e11_run1.00400"
-@requires_pf(pkdgrav, file_check = True)
+@requires_ds(pkdgrav, file_check = True)
def test_pkdgrav():
cosmology_parameters = dict(current_redshift = 0.0,
omega_lambda = 0.728,
@@ -41,29 +41,29 @@
cosmology_parameters = cosmology_parameters,
unit_base = {'length': (1.0/60.0, "Mpccm/h")},
n_ref = 64)
- pf = data_dir_load(pkdgrav, TipsyDataset, (), kwargs)
- yield assert_equal, str(pf), "halo1e11_run1.00400"
+ ds = data_dir_load(pkdgrav, TipsyDataset, (), kwargs)
+ yield assert_equal, str(ds), "halo1e11_run1.00400"
dso = [ None, ("sphere", ("c", (0.3, 'unitary')))]
- dd = pf.h.all_data()
+ dd = ds.all_data()
yield assert_equal, dd["Coordinates"].shape, (26847360, 3)
tot = sum(dd[ptype,"Coordinates"].shape[0]
- for ptype in pf.particle_types if ptype != "all")
+ for ptype in ds.particle_types if ptype != "all")
yield assert_equal, tot, 26847360
for ds in dso:
for field in _fields:
for axis in [0, 1, 2]:
for weight_field in [None, "density"]:
yield PixelizedProjectionValuesTest(
- pf, axis, field, weight_field,
+ ds, axis, field, weight_field,
ds)
- yield FieldValuesTest(pf, field, ds)
- dobj = create_obj(pf, ds)
+ yield FieldValuesTest(ds, field, ds)
+ dobj = create_obj(ds, ds)
s1 = dobj["ones"].sum()
s2 = sum(mask.sum() for block, mask in dobj.blocks)
yield assert_equal, s1, s2
gasoline = "agora_1e11.00400/agora_1e11.00400"
-@requires_pf(gasoline, file_check = True)
+@requires_ds(gasoline, file_check = True)
def test_gasoline():
cosmology_parameters = dict(current_redshift = 0.0,
omega_lambda = 0.728,
@@ -72,23 +72,23 @@
kwargs = dict(cosmology_parameters = cosmology_parameters,
unit_base = {'length': (1.0/60.0, "Mpccm/h")},
n_ref = 64)
- pf = data_dir_load(gasoline, TipsyDataset, (), kwargs)
- yield assert_equal, str(pf), "agora_1e11.00400"
+ ds = data_dir_load(gasoline, TipsyDataset, (), kwargs)
+ yield assert_equal, str(ds), "agora_1e11.00400"
dso = [ None, ("sphere", ("c", (0.3, 'unitary')))]
- dd = pf.h.all_data()
+ dd = ds.all_data()
yield assert_equal, dd["Coordinates"].shape, (10550576, 3)
tot = sum(dd[ptype,"Coordinates"].shape[0]
- for ptype in pf.particle_types if ptype != "all")
+ for ptype in ds.particle_types if ptype != "all")
yield assert_equal, tot, 10550576
for ds in dso:
for field in _fields:
for axis in [0, 1, 2]:
for weight_field in [None, "density"]:
yield PixelizedProjectionValuesTest(
- pf, axis, field, weight_field,
+ ds, axis, field, weight_field,
ds)
- yield FieldValuesTest(pf, field, ds)
- dobj = create_obj(pf, ds)
+ yield FieldValuesTest(ds, field, ds)
+ dobj = create_obj(ds, ds)
s1 = dobj["ones"].sum()
s2 = sum(mask.sum() for block, mask in dobj.blocks)
yield assert_equal, s1, s2
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/stream/data_structures.py
--- a/yt/frontends/stream/data_structures.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/stream/data_structures.py Sun Jun 15 19:50:51 2014 -0700
@@ -93,7 +93,7 @@
self.Level = -1
def _guess_properties_from_parent(self):
- rf = self.pf.refine_by
+ rf = self.ds.refine_by
my_ind = self.id - self._id_offset
le = self.LeftEdge
self.dds = self.Parent.dds/rf
@@ -159,14 +159,14 @@
grid = StreamGrid
- def __init__(self, pf, dataset_type = None):
+ def __init__(self, ds, dataset_type = None):
self.dataset_type = dataset_type
self.float_type = 'float64'
- self.parameter_file = weakref.proxy(pf) # for _obtain_enzo
- self.stream_handler = pf.stream_handler
+ self.dataset = weakref.proxy(ds) # for _obtain_enzo
+ self.stream_handler = ds.stream_handler
self.float_type = "float64"
self.directory = os.getcwd()
- GridIndex.__init__(self, pf, dataset_type)
+ GridIndex.__init__(self, ds, dataset_type)
def _count_grids(self):
self.num_grids = self.stream_handler.num_grids
@@ -248,7 +248,7 @@
if self.stream_handler.io is not None:
self.io = self.stream_handler.io
else:
- self.io = io_registry[self.dataset_type](self.pf)
+ self.io = io_registry[self.dataset_type](self.ds)
def update_data(self, data, units = None):
@@ -280,11 +280,11 @@
# We only want to create a superset of fields here.
self._detect_output_fields()
- self.pf.create_field_info()
+ self.ds.create_field_info()
mylog.debug("Creating Particle Union 'all'")
- pu = ParticleUnion("all", list(self.pf.particle_types_raw))
- self.pf.add_particle_union(pu)
- self.pf.particle_types = tuple(set(self.pf.particle_types))
+ pu = ParticleUnion("all", list(self.ds.particle_types_raw))
+ self.ds.add_particle_union(pu)
+ self.ds.particle_types = tuple(set(self.ds.particle_types))
class StreamDataset(Dataset):
@@ -302,8 +302,8 @@
self.geometry = geometry
self.stream_handler = stream_handler
name = "InMemoryParameterFile_%s" % (uuid.uuid4().hex)
- from yt.data_objects.static_output import _cached_pfs
- _cached_pfs[name] = self
+ from yt.data_objects.static_output import _cached_datasets
+ _cached_datasets[name] = self
Dataset.__init__(self, name, self._dataset_type)
def _parse_parameter_file(self):
@@ -399,7 +399,7 @@
return particle_types
-def assign_particle_data(pf, pdata) :
+def assign_particle_data(ds, pdata) :
"""
Assign particle data to the grids using find_points. This
@@ -412,25 +412,25 @@
# most of the GridTree utilizing information we already have from the
# stream handler.
- if len(pf.stream_handler.fields) > 1:
+ if len(ds.stream_handler.fields) > 1:
try:
x, y, z = (pdata["io","particle_position_%s" % ax] for ax in 'xyz')
except KeyError:
raise KeyError("Cannot decompose particle data without position fields!")
- num_grids = len(pf.stream_handler.fields)
- parent_ids = pf.stream_handler.parent_ids
+ num_grids = len(ds.stream_handler.fields)
+ parent_ids = ds.stream_handler.parent_ids
num_children = np.zeros(num_grids, dtype='int64')
# We're going to do this the slow way
mask = np.empty(num_grids, dtype="bool")
for i in xrange(num_grids):
np.equal(parent_ids, i, mask)
num_children[i] = mask.sum()
- levels = pf.stream_handler.levels.astype("int64").ravel()
+ levels = ds.stream_handler.levels.astype("int64").ravel()
grid_tree = GridTree(num_grids,
- pf.stream_handler.left_edges,
- pf.stream_handler.right_edges,
- pf.stream_handler.parent_ids,
+ ds.stream_handler.left_edges,
+ ds.stream_handler.right_edges,
+ ds.stream_handler.parent_ids,
levels, num_children)
pts = MatchPointsToGrids(grid_tree, len(x), x, y, z)
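
assign_particle_data above matches particle positions to grids through a GridTree and MatchPointsToGrids. A brute-force illustration of the same idea, picking for each particle the finest grid whose bounding box contains it; fine for a handful of grids, far too slow for a real AMR hierarchy, and not the code path yt actually uses:

import numpy as np

def assign_particles_to_grids(pos, left_edges, right_edges, levels):
    # pos: (N, 3) particle positions; left/right_edges: (M, 3) grid bounds;
    # levels: (M,) refinement level of each grid. Returns the owning grid index.
    owner = np.full(pos.shape[0], -1, dtype=np.int64)
    best_level = np.full(pos.shape[0], -1, dtype=np.int64)
    for gi, (le, re, lev) in enumerate(zip(left_edges, right_edges, levels)):
        inside = np.all((pos >= le) & (pos < re), axis=1)
        take = inside & (lev > best_level)
        owner[take] = gi
        best_level[take] = lev
    return owner

pos = np.random.random((100, 3))
le = np.array([[0.0, 0.0, 0.0], [0.25, 0.25, 0.25]])
re = np.array([[1.0, 1.0, 1.0], [0.75, 0.75, 0.75]])
print(assign_particles_to_grids(pos, le, re, np.array([0, 1])))
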
@@ -459,10 +459,10 @@
else :
grid_pdata = [pdata]
- for pd, gi in zip(grid_pdata, sorted(pf.stream_handler.fields)):
- pf.stream_handler.fields[gi].update(pd)
- npart = pf.stream_handler.fields[gi].pop("number_of_particles", 0)
- pf.stream_handler.particle_count[gi] = npart
+ for pd, gi in zip(grid_pdata, sorted(ds.stream_handler.fields)):
+ ds.stream_handler.fields[gi].update(pd)
+ npart = ds.stream_handler.fields[gi].pop("number_of_particles", 0)
+ ds.stream_handler.particle_count[gi] = npart
def unitify_data(data):
if all([isinstance(val, np.ndarray) for val in data.values()]):
@@ -563,17 +563,17 @@
>>> arr = np.random.random((128, 128, 128))
>>> data = dict(density = arr)
- >>> pf = load_uniform_grid(data, arr.shape, length_unit='cm',
+ >>> ds = load_uniform_grid(data, arr.shape, length_unit='cm',
bbox=bbox, nprocs=12)
- >>> dd = pf.h.all_data()
+ >>> dd = ds.all_data()
>>> dd['Density']
#FIXME
YTArray[123.2856, 123.854, ..., 123.456, 12.42] (code_mass/code_length^3)
>>> data = dict(density = (arr, 'g/cm**3'))
- >>> pf = load_uniform_grid(data, arr.shape, 3.03e24, bbox=bbox, nprocs=12)
- >>> dd = pf.h.all_data()
+ >>> ds = load_uniform_grid(data, arr.shape, 3.03e24, bbox=bbox, nprocs=12)
+ >>> dd = ds.all_data()
>>> dd['Density']
#FIXME
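
The docstring hunk above shows load_uniform_grid wrapping in-memory arrays as a stream dataset. A compact, runnable version of the same workflow under the post-rename API; this assumes the yt 3.0 top-level namespace exports load_uniform_grid and that the field alias machinery resolves "density" as in the stream frontend shown here:

import numpy as np
import yt

arr = np.random.random((64, 64, 64))
bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
ds = yt.load_uniform_grid({"density": (arr, "g/cm**3")}, arr.shape,
                          length_unit="cm", bbox=bbox, nprocs=4)
dd = ds.all_data()
print(dd["density"].min(), dd["density"].max())
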
@@ -664,7 +664,7 @@
handler.simulation_time = sim_time
handler.cosmology_simulation = 0
- spf = StreamDataset(handler, geometry = geometry)
+ sds = StreamDataset(handler, geometry = geometry)
# Now figure out where the particles go
if number_of_particles > 0 :
@@ -677,9 +677,9 @@
pdata_ftype.update(pdata)
pdata = pdata_ftype
# This will update the stream handler too
- assign_particle_data(spf, pdata)
+ assign_particle_data(sds, pdata)
- return spf
+ return sds
def load_amr_grids(grid_data, domain_dimensions,
field_units=None, bbox=None, sim_time=0.0, length_unit=None,
@@ -750,7 +750,7 @@
... g["Density"] = np.random.random(g["dimensions"]) * 2**g["level"]
...
>>> units = dict(Density='g/cm**3')
- >>> pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)
+ >>> ds = load_amr_grids(grid_data, [32, 32, 32], 1.0)
"""
domain_dimensions = np.array(domain_dimensions)
@@ -825,17 +825,17 @@
handler.simulation_time = sim_time
handler.cosmology_simulation = 0
- spf = StreamDataset(handler, geometry = geometry)
- return spf
+ sds = StreamDataset(handler, geometry = geometry)
+ return sds
-def refine_amr(base_pf, refinement_criteria, fluid_operators, max_level,
+def refine_amr(base_ds, refinement_criteria, fluid_operators, max_level,
callback = None):
- r"""Given a base parameter file, repeatedly apply refinement criteria and
+ r"""Given a base dataset, repeatedly apply refinement criteria and
fluid operators until a maximum level is reached.
Parameters
----------
- base_pf : Dataset
+ base_ds : Dataset
This is any static output. It can also be a stream static output, for
instance as returned by load_uniform_grid.
refinement_criteria : list of :class:`~yt.utilities.flagging_methods.FlaggingMethod`
@@ -848,7 +848,7 @@
The maximum level to which the data will be refined
callback : function, optional
A function that will be called at the beginning of each refinement
- cycle, with the current parameter file.
+ cycle, with the current dataset.
Examples
--------
@@ -857,67 +857,67 @@
>>> fo = [ic.CoredSphere(0.05, 0.3, [0.7,0.4,0.75], {"Density": (0.25, 100.0)})]
>>> rc = [fm.flagging_method_registry["overdensity"](8.0)]
>>> ug = load_uniform_grid({'Density': data}, domain_dims, 1.0)
- >>> pf = refine_amr(ug, rc, fo, 5)
+ >>> ds = refine_amr(ug, rc, fo, 5)
"""
# If we have particle data, set it aside for now
number_of_particles = np.sum([grid.NumberOfParticles
- for grid in base_pf.index.grids])
+ for grid in base_ds.index.grids])
if number_of_particles > 0 :
pdata = {}
- for field in base_pf.field_list :
+ for field in base_ds.field_list :
if not isinstance(field, tuple):
field = ("unknown", field)
- fi = base_pf._get_field_info(*field)
+ fi = base_ds._get_field_info(*field)
if fi.particle_type :
pdata[field] = uconcatenate([grid[field]
- for grid in base_pf.index.grids])
+ for grid in base_ds.index.grids])
pdata["number_of_particles"] = number_of_particles
- last_gc = base_pf.index.num_grids
+ last_gc = base_ds.index.num_grids
cur_gc = -1
- pf = base_pf
- bbox = np.array( [ (pf.domain_left_edge[i], pf.domain_right_edge[i])
+ ds = base_ds
+ bbox = np.array( [ (ds.domain_left_edge[i], ds.domain_right_edge[i])
for i in range(3) ])
- while pf.h.max_level < max_level and last_gc != cur_gc:
+ while ds.index.max_level < max_level and last_gc != cur_gc:
mylog.info("Refining another level. Current max level: %s",
- pf.h.max_level)
- last_gc = pf.index.grids.size
- for m in fluid_operators: m.apply(pf)
- if callback is not None: callback(pf)
+ ds.index.max_level)
+ last_gc = ds.index.grids.size
+ for m in fluid_operators: m.apply(ds)
+ if callback is not None: callback(ds)
grid_data = []
- for g in pf.index.grids:
+ for g in ds.index.grids:
gd = dict( left_edge = g.LeftEdge,
right_edge = g.RightEdge,
level = g.Level,
dimensions = g.ActiveDimensions )
- for field in pf.field_list:
+ for field in ds.field_list:
if not isinstance(field, tuple):
field = ("unknown", field)
- fi = pf._get_field_info(*field)
+ fi = ds._get_field_info(*field)
if not fi.particle_type :
gd[field] = g[field]
grid_data.append(gd)
- if g.Level < pf.h.max_level: continue
+ if g.Level < ds.index.max_level: continue
fg = FlaggingGrid(g, refinement_criteria)
nsg = fg.find_subgrids()
for sg in nsg:
- LE = sg.left_index * g.dds + pf.domain_left_edge
- dims = sg.dimensions * pf.refine_by
- grid = pf.smoothed_covering_grid(g.Level + 1, LE, dims)
+ LE = sg.left_index * g.dds + ds.domain_left_edge
+ dims = sg.dimensions * ds.refine_by
+ grid = ds.smoothed_covering_grid(g.Level + 1, LE, dims)
gd = dict(left_edge = LE, right_edge = grid.right_edge,
level = g.Level + 1, dimensions = dims)
- for field in pf.field_list:
+ for field in ds.field_list:
if not isinstance(field, tuple):
field = ("unknown", field)
- fi = pf._get_field_info(*field)
+ fi = ds._get_field_info(*field)
if not fi.particle_type :
gd[field] = grid[field]
grid_data.append(gd)
- pf = load_amr_grids(grid_data, pf.domain_dimensions, 1.0,
+ ds = load_amr_grids(grid_data, ds.domain_dimensions, 1.0,
bbox = bbox)
if number_of_particles > 0:
if ("io", "particle_position_x") not in pdata:
@@ -928,26 +928,26 @@
pdata_ftype["io",f] = pdata.pop(f)
pdata_ftype.update(pdata)
pdata = pdata_ftype
- assign_particle_data(pf, pdata)
+ assign_particle_data(ds, pdata)
# We need to reassign the field list here.
- cur_gc = pf.index.num_grids
+ cur_gc = ds.index.num_grids
# Now reassign particle data to grids
- return pf
+ return ds
class StreamParticleIndex(ParticleIndex):
- def __init__(self, pf, dataset_type = None):
- self.stream_handler = pf.stream_handler
- super(StreamParticleIndex, self).__init__(pf, dataset_type)
+ def __init__(self, ds, dataset_type = None):
+ self.stream_handler = ds.stream_handler
+ super(StreamParticleIndex, self).__init__(ds, dataset_type)
def _setup_data_io(self):
if self.stream_handler.io is not None:
self.io = self.stream_handler.io
else:
- self.io = io_registry[self.dataset_type](self.pf)
+ self.io = io_registry[self.dataset_type](self.ds)
class StreamParticleFile(ParticleFile):
pass
@@ -1010,7 +1010,7 @@
... particle_position_y = pos[1],
... particle_position_z = pos[2])
>>> bbox = np.array([[0., 1.0], [0.0, 1.0], [0.0, 1.0]])
- >>> pf = load_particles(data, 3.08e24, bbox=bbox)
+ >>> ds = load_particles(data, 3.08e24, bbox=bbox)
"""
@@ -1077,11 +1077,11 @@
handler.simulation_time = sim_time
handler.cosmology_simulation = 0
- spf = StreamParticlesDataset(handler)
- spf.n_ref = n_ref
- spf.over_refine_factor = over_refine_factor
+ sds = StreamParticlesDataset(handler)
+ sds.n_ref = n_ref
+ sds.over_refine_factor = over_refine_factor
- return spf
+ return sds
_cis = np.fromiter(chain.from_iterable(product([0,1], [0,1], [0,1])),
dtype=np.int64, count = 8*3)
@@ -1108,9 +1108,9 @@
class StreamHexahedralHierarchy(UnstructuredIndex):
- def __init__(self, pf, dataset_type = None):
- self.stream_handler = pf.stream_handler
- super(StreamHexahedralHierarchy, self).__init__(pf, dataset_type)
+ def __init__(self, ds, dataset_type = None):
+ self.stream_handler = ds.stream_handler
+ super(StreamHexahedralHierarchy, self).__init__(ds, dataset_type)
def _initialize_mesh(self):
coords = self.stream_handler.fields.pop('coordinates')
@@ -1122,7 +1122,7 @@
if self.stream_handler.io is not None:
self.io = self.stream_handler.io
else:
- self.io = io_registry[self.dataset_type](self.pf)
+ self.io = io_registry[self.dataset_type](self.ds)
def _detect_output_fields(self):
self.field_list = list(set(self.stream_handler.get_fields()))
@@ -1229,25 +1229,25 @@
handler.simulation_time = sim_time
handler.cosmology_simulation = 0
- spf = StreamHexahedralDataset(handler, geometry = geometry)
+ sds = StreamHexahedralDataset(handler, geometry = geometry)
- return spf
+ return sds
class StreamOctreeSubset(OctreeSubset):
domain_id = 1
_domain_offset = 1
- def __init__(self, base_region, pf, oct_handler, over_refine_factor = 1):
+ def __init__(self, base_region, ds, oct_handler, over_refine_factor = 1):
self._num_zones = 1 << (over_refine_factor)
self.field_data = YTFieldData()
self.field_parameters = {}
- self.pf = pf
- self.index = self.pf.index
+ self.ds = ds
+ self.index = self.ds.index
self.oct_handler = oct_handler
self._last_mask = None
self._last_selector_id = None
self._current_particle_type = 'io'
- self._current_fluid_type = self.pf.default_fluid_type
+ self._current_fluid_type = self.ds.default_fluid_type
self.base_region = base_region
self.base_selector = base_region.selector
@@ -1268,32 +1268,32 @@
class StreamOctreeHandler(OctreeIndex):
- def __init__(self, pf, dataset_type = None):
- self.stream_handler = pf.stream_handler
+ def __init__(self, ds, dataset_type = None):
+ self.stream_handler = ds.stream_handler
self.dataset_type = dataset_type
- super(StreamOctreeHandler, self).__init__(pf, dataset_type)
+ super(StreamOctreeHandler, self).__init__(ds, dataset_type)
def _setup_data_io(self):
if self.stream_handler.io is not None:
self.io = self.stream_handler.io
else:
- self.io = io_registry[self.dataset_type](self.pf)
+ self.io = io_registry[self.dataset_type](self.ds)
def _initialize_oct_handler(self):
header = dict(dims = [1, 1, 1],
- left_edge = self.pf.domain_left_edge,
- right_edge = self.pf.domain_right_edge,
- octree = self.pf.octree_mask,
- over_refine = self.pf.over_refine_factor,
- partial_coverage = self.pf.partial_coverage)
+ left_edge = self.ds.domain_left_edge,
+ right_edge = self.ds.domain_right_edge,
+ octree = self.ds.octree_mask,
+ over_refine = self.ds.over_refine_factor,
+ partial_coverage = self.ds.partial_coverage)
self.oct_handler = OctreeContainer.load_octree(header)
def _identify_base_chunk(self, dobj):
if getattr(dobj, "_chunk_info", None) is None:
base_region = getattr(dobj, "base_region", dobj)
- subset = [StreamOctreeSubset(base_region, self.parameter_file,
+ subset = [StreamOctreeSubset(base_region, self.dataset,
self.oct_handler,
- self.pf.over_refine_factor)]
+ self.ds.over_refine_factor)]
dobj._chunk_info = subset
dobj._current_chunk = list(self._chunk_all(dobj))[0]
@@ -1415,15 +1415,15 @@
handler.simulation_time = sim_time
handler.cosmology_simulation = 0
- spf = StreamOctreeDataset(handler)
- spf.octree_mask = octree_mask
- spf.partial_coverage = partial_coverage
- spf.units["cm"] = sim_unit_to_cm
- spf.units['1'] = 1.0
- spf.units["unitary"] = 1.0
+ sds = StreamOctreeDataset(handler)
+ sds.octree_mask = octree_mask
+ sds.partial_coverage = partial_coverage
+ sds.units["cm"] = sim_unit_to_cm
+ sds.units['1'] = 1.0
+ sds.units["unitary"] = 1.0
box_in_mpc = sim_unit_to_cm / mpc_conversion['cm']
- spf.over_refine_factor = over_refine_factor
+ sds.over_refine_factor = over_refine_factor
for unit in mpc_conversion.keys():
- spf.units[unit] = mpc_conversion[unit] * box_in_mpc
+ sds.units[unit] = mpc_conversion[unit] * box_in_mpc
- return spf
+ return sds
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/stream/fields.py
--- a/yt/frontends/stream/fields.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/stream/fields.py Sun Jun 15 19:50:51 2014 -0700
@@ -67,8 +67,8 @@
)
def setup_fluid_fields(self):
- for field in self.pf.stream_handler.field_units:
- units = self.pf.stream_handler.field_units[field]
+ for field in self.ds.stream_handler.field_units:
+ units = self.ds.stream_handler.field_units[field]
if units != '': self.add_output_field(field, units=units)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/stream/io.py
--- a/yt/frontends/stream/io.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/stream/io.py Sun Jun 15 19:50:51 2014 -0700
@@ -29,10 +29,10 @@
_dataset_type = "stream"
- def __init__(self, pf):
- self.fields = pf.stream_handler.fields
- self.field_units = pf.stream_handler.field_units
- super(IOHandlerStream, self).__init__(pf)
+ def __init__(self, ds):
+ self.fields = ds.stream_handler.fields
+ self.field_units = ds.stream_handler.field_units
+ super(IOHandlerStream, self).__init__(ds)
def _read_data_set(self, grid, field):
# This is where we implement processor-locking
@@ -51,7 +51,7 @@
raise NotImplementedError
rv = {}
for field in fields:
- rv[field] = self.pf.arr(np.empty(size, dtype="float64"))
+ rv[field] = self.ds.arr(np.empty(size, dtype="float64"))
ng = sum(len(c.objs) for c in chunks)
mylog.debug("Reading %s cells of %s fields in %s blocks",
size, [f2 for f1, f2 in fields], ng)
@@ -98,9 +98,9 @@
_dataset_type = "stream_particles"
- def __init__(self, pf):
- self.fields = pf.stream_handler.fields
- super(StreamParticleIOHandler, self).__init__(pf)
+ def __init__(self, ds):
+ self.fields = ds.stream_handler.fields
+ super(StreamParticleIOHandler, self).__init__(ds)
def _read_particle_coords(self, chunks, ptf):
chunks = list(chunks)
@@ -135,25 +135,25 @@
def _initialize_index(self, data_file, regions):
# self.fields[g.id][fname] is the pattern here
morton = []
- for ptype in self.pf.particle_types_raw:
+ for ptype in self.ds.particle_types_raw:
pos = np.column_stack(self.fields[data_file.filename][
(ptype, "particle_position_%s" % ax)]
for ax in 'xyz')
- if np.any(pos.min(axis=0) < data_file.pf.domain_left_edge) or \
- np.any(pos.max(axis=0) > data_file.pf.domain_right_edge):
+ if np.any(pos.min(axis=0) < data_file.ds.domain_left_edge) or \
+ np.any(pos.max(axis=0) > data_file.ds.domain_right_edge):
raise YTDomainOverflow(pos.min(axis=0), pos.max(axis=0),
- data_file.pf.domain_left_edge,
- data_file.pf.domain_right_edge)
+ data_file.ds.domain_left_edge,
+ data_file.ds.domain_right_edge)
regions.add_data_file(pos, data_file.file_id)
morton.append(compute_morton(
pos[:,0], pos[:,1], pos[:,2],
- data_file.pf.domain_left_edge,
- data_file.pf.domain_right_edge))
+ data_file.ds.domain_left_edge,
+ data_file.ds.domain_right_edge))
return np.concatenate(morton)
def _count_particles(self, data_file):
pcount = {}
- for ptype in self.pf.particle_types_raw:
+ for ptype in self.ds.particle_types_raw:
d = self.fields[data_file.filename]
pcount[ptype] = d[ptype, "particle_position_x"].size
return pcount
@@ -164,9 +164,9 @@
class IOHandlerStreamHexahedral(BaseIOHandler):
_dataset_type = "stream_hexahedral"
- def __init__(self, pf):
- self.fields = pf.stream_handler.fields
- super(IOHandlerStreamHexahedral, self).__init__(pf)
+ def __init__(self, ds):
+ self.fields = ds.stream_handler.fields
+ super(IOHandlerStreamHexahedral, self).__init__(ds)
def _read_fluid_selection(self, chunks, selector, fields, size):
chunks = list(chunks)
@@ -193,9 +193,9 @@
class IOHandlerStreamOctree(BaseIOHandler):
_dataset_type = "stream_octree"
- def __init__(self, pf):
- self.fields = pf.stream_handler.fields
- super(IOHandlerStreamOctree, self).__init__(pf)
+ def __init__(self, ds):
+ self.fields = ds.stream_handler.fields
+ super(IOHandlerStreamOctree, self).__init__(ds)
def _read_fluid_selection(self, chunks, selector, fields, size):
rv = {}
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/stream/tests/test_stream_hexahedral.py
--- a/yt/frontends/stream/tests/test_stream_hexahedral.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/stream/tests/test_stream_hexahedral.py Sun Jun 15 19:50:51 2014 -0700
@@ -36,7 +36,7 @@
data = {'random_field': np.random.random(Nx*Ny*Nz)}
bbox = np.array([ [0.0, 1.0], [0.0, 1.0], [0.0, 1.0] ])
ds = load_hexahedral_mesh(data, conn, coords, bbox=bbox)
- dd = ds.h.all_data()
+ dd = ds.all_data()
#raise RuntimeError
yield assert_almost_equal, float(dd["cell_volume"].sum(dtype="float64")), 1.0
yield assert_equal, dd["ones"].size, Nx * Ny * Nz
@@ -48,7 +48,7 @@
data = {'random_field': np.random.random(Nx*Ny*Nz)}
bbox = np.array([ [0.0, 1.0], [0.0, 1.0], [0.0, 1.0] ])
ds = load_hexahedral_mesh(data, conn, coords, bbox=bbox)
- dd = ds.h.all_data()
+ dd = ds.all_data()
yield assert_almost_equal, float(dd["cell_volume"].sum(dtype="float64")), 1.0
yield assert_equal, dd["ones"].size, Nx * Ny * Nz
yield assert_almost_equal, dd["dx"].to_ndarray(), 1.0/Nx
diff -r f20d58ca2848 -r 67507b4f8da9 yt/frontends/stream/tests/test_update_data.py
--- a/yt/frontends/stream/tests/test_update_data.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/frontends/stream/tests/test_update_data.py Sun Jun 15 19:50:51 2014 -0700
@@ -3,15 +3,15 @@
from numpy.random import uniform
def test_update_data() :
- pf = fake_random_pf(64, nprocs=8)
- pf.h
+ ds = fake_random_ds(64, nprocs=8)
+ ds.index
dims = (32,32,32)
grid_data = [{"temperature":uniform(size=dims)}
- for i in xrange(pf.index.num_grids)]
- pf.index.update_data(grid_data, {'temperature':'K'})
- prj = pf.proj("temperature", 2)
+ for i in xrange(ds.index.num_grids)]
+ ds.index.update_data(grid_data, {'temperature':'K'})
+ prj = ds.proj("temperature", 2)
prj["temperature"]
- dd = pf.all_data()
+ dd = ds.all_data()
profile = BinnedProfile1D(dd, 10, "density",
dd["density"].min(),
dd["density"].max())
diff -r f20d58ca2848 -r 67507b4f8da9 yt/funcs.py
--- a/yt/funcs.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/funcs.py Sun Jun 15 19:50:51 2014 -0700
@@ -601,10 +601,10 @@
# Now we think we have our supplemental repository.
return supp_path
-def fix_length(length, pf=None):
- assert pf is not None
- if pf is not None:
- registry = pf.unit_registry
+def fix_length(length, ds=None):
+ assert ds is not None
+ if ds is not None:
+ registry = ds.unit_registry
else:
registry = None
if isinstance(length, YTArray):
@@ -656,8 +656,8 @@
return os.environ.get("OMP_NUM_THREADS", 0)
return nt
-def fix_axis(axis, pf):
- return pf.coordinates.axis_id.get(axis, axis)
+def fix_axis(axis, ds):
+ return ds.coordinates.axis_id.get(axis, axis)
def get_image_suffix(name):
suffix = os.path.splitext(name)[1]
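
fix_axis above now resolves an axis specifier through ds.coordinates.axis_id. A quick check of the semantics, leaning on the fake_random_ds test helper this changeset also uses in the stream tests; a sketch, assuming a Cartesian dataset where axis_id maps "x"/"y"/"z" to 0/1/2:

from yt.testing import fake_random_ds
from yt.funcs import fix_axis

ds = fake_random_ds(16)
print(fix_axis("z", ds))  # 2 on a Cartesian dataset
print(fix_axis(1, ds))    # integer axes come back unchanged
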
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/cartesian_coordinates.py
--- a/yt/geometry/cartesian_coordinates.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/cartesian_coordinates.py Sun Jun 15 19:50:51 2014 -0700
@@ -23,8 +23,8 @@
class CartesianCoordinateHandler(CoordinateHandler):
- def __init__(self, pf):
- super(CartesianCoordinateHandler, self).__init__(pf)
+ def __init__(self, ds):
+ super(CartesianCoordinateHandler, self).__init__(ds)
def setup_fields(self, registry):
for axi, ax in enumerate('xyz'):
@@ -88,11 +88,11 @@
return coord
def convert_to_cylindrical(self, coord):
- center = self.pf.domain_center
+ center = self.ds.domain_center
return cartesian_to_cylindrical(coord, center)
def convert_from_cylindrical(self, coord):
- center = self.pf.domain_center
+ center = self.ds.domain_center
return cylindrical_to_cartesian(coord, center)
def convert_to_spherical(self, coord):
@@ -118,5 +118,5 @@
@property
def period(self):
- return self.pf.domain_width
+ return self.ds.domain_width
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/coordinate_handler.py
--- a/yt/geometry/coordinate_handler.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/coordinate_handler.py Sun Jun 15 19:50:51 2014 -0700
@@ -34,17 +34,17 @@
def _get_coord_fields(axi, units = "code_length"):
def _dds(field, data):
- rv = data.pf.arr(data.fwidth[...,axi].copy(), units)
+ rv = data.ds.arr(data.fwidth[...,axi].copy(), units)
return data._reshape_vals(rv)
def _coords(field, data):
- rv = data.pf.arr(data.fcoords[...,axi].copy(), units)
+ rv = data.ds.arr(data.fcoords[...,axi].copy(), units)
return data._reshape_vals(rv)
return _dds, _coords
class CoordinateHandler(object):
- def __init__(self, pf):
- self.pf = weakref.proxy(pf)
+ def __init__(self, ds):
+ self.ds = weakref.proxy(ds)
def setup_fields(self):
# This should return field definitions for x, y, z, r, theta, phi
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/cylindrical_coordinates.py
--- a/yt/geometry/cylindrical_coordinates.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/cylindrical_coordinates.py Sun Jun 15 19:50:51 2014 -0700
@@ -29,9 +29,9 @@
class CylindricalCoordinateHandler(CoordinateHandler):
- def __init__(self, pf, ordering = 'rzt'):
+ def __init__(self, ds, ordering = 'rzt'):
if ordering != 'rzt': raise NotImplementedError
- super(CylindricalCoordinateHandler, self).__init__(pf)
+ super(CylindricalCoordinateHandler, self).__init__(ds)
def setup_fields(self, registry):
# return the fields for r, z, theta
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/geographic_coordinates.py
--- a/yt/geometry/geographic_coordinates.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/geographic_coordinates.py Sun Jun 15 19:50:51 2014 -0700
@@ -25,9 +25,9 @@
class GeographicCoordinateHandler(CoordinateHandler):
- def __init__(self, pf, ordering = 'latlonalt'):
+ def __init__(self, ds, ordering = 'latlonalt'):
if ordering != 'latlonalt': raise NotImplementedError
- super(GeographicCoordinateHandler, self).__init__(pf)
+ super(GeographicCoordinateHandler, self).__init__(ds)
def setup_fields(self, registry):
# return the fields for r, z, theta
@@ -79,7 +79,7 @@
def _altitude_to_radius(field, data):
surface_height = data.get_field_parameter("surface_height")
if surface_height is None:
- surface_height = getattr(data.pf, "surface_height", 0.0)
+ surface_height = getattr(data.ds, "surface_height", 0.0)
return data["altitude"] + surface_height
registry.add_field(("index", "r"),
function=_altitude_to_radius,
@@ -190,5 +190,5 @@
@property
def period(self):
- return self.pf.domain_width
+ return self.ds.domain_width
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/geometry_handler.py
--- a/yt/geometry/geometry_handler.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/geometry_handler.py Sun Jun 15 19:50:51 2014 -0700
@@ -42,10 +42,10 @@
_unsupported_objects = ()
_index_properties = ()
- def __init__(self, pf, dataset_type):
+ def __init__(self, ds, dataset_type):
ParallelAnalysisInterface.__init__(self)
- self.parameter_file = weakref.proxy(pf)
- self.pf = self.parameter_file
+ self.dataset = weakref.proxy(ds)
+ self.ds = self.dataset
self._initialize_state_variables()
@@ -76,14 +76,14 @@
def _initialize_data_storage(self):
if not ytcfg.getboolean('yt','serialize'): return
- fn = self.pf.storage_filename
+ fn = self.ds.storage_filename
if fn is None:
if os.path.isfile(os.path.join(self.directory,
- "%s.yt" % self.pf.unique_identifier)):
- fn = os.path.join(self.directory,"%s.yt" % self.pf.unique_identifier)
+ "%s.yt" % self.ds.unique_identifier)):
+ fn = os.path.join(self.directory,"%s.yt" % self.ds.unique_identifier)
else:
fn = os.path.join(self.directory,
- "%s.yt" % self.parameter_file.basename)
+ "%s.yt" % self.dataset.basename)
dir_to_check = os.path.dirname(fn)
if dir_to_check == '':
dir_to_check = '.'
@@ -122,7 +122,7 @@
def _setup_data_io(self):
if getattr(self, "io", None) is not None: return
- self.io = io_registry[self.dataset_type](self.parameter_file)
+ self.io = io_registry[self.dataset_type](self.dataset)
@parallel_root_only
def save_data(self, array, node, name, set_attr=None, force=False, passthrough = False):
@@ -174,7 +174,7 @@
return
obj = cPickle.loads(obj.value)
if iterable(obj) and len(obj) == 2:
- obj = obj[1] # Just the object, not the pf
+ obj = obj[1] # Just the object, not the ds
if hasattr(obj, '_fix_pickle'): obj._fix_pickle()
return obj
@@ -233,7 +233,7 @@
fields_to_read)
for field in fields_to_read:
ftype, fname = field
- finfo = self.pf._get_field_info(*field)
+ finfo = self.ds._get_field_info(*field)
return fields_to_return, fields_to_generate
def _read_fluid_fields(self, fields, dobj, chunk = None):
@@ -317,7 +317,7 @@
def fcoords(self):
ci = np.empty((self.data_size, 3), dtype='float64')
ci = YTArray(ci, input_units = "code_length",
- registry = self.dobj.pf.unit_registry)
+ registry = self.dobj.ds.unit_registry)
if self.data_size == 0: return ci
ind = 0
for obj in self.objs:
@@ -343,7 +343,7 @@
def fwidth(self):
ci = np.empty((self.data_size, 3), dtype='float64')
ci = YTArray(ci, input_units = "code_length",
- registry = self.dobj.pf.unit_registry)
+ registry = self.dobj.ds.unit_registry)
if self.data_size == 0: return ci
ind = 0
for obj in self.objs:
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/grid_geometry_handler.py
--- a/yt/geometry/grid_geometry_handler.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/grid_geometry_handler.py Sun Jun 15 19:50:51 2014 -0700
@@ -69,13 +69,13 @@
@property
def parameters(self):
- return self.parameter_file.parameters
+ return self.dataset.parameters
def _detect_output_fields_backup(self):
# grab fields from backup file as well, if present
return
try:
- backup_filename = self.parameter_file.backup_filename
+ backup_filename = self.dataset.backup_filename
f = h5py.File(backup_filename, 'r')
g = f["data"]
grid = self.grids[0] # simply check one of the grids
@@ -101,9 +101,9 @@
def _initialize_grid_arrays(self):
mylog.debug("Allocating arrays for %s grids", self.num_grids)
self.grid_dimensions = np.ones((self.num_grids,3), 'int32')
- self.grid_left_edge = self.pf.arr(np.zeros((self.num_grids,3),
+ self.grid_left_edge = self.ds.arr(np.zeros((self.num_grids,3),
self.float_type), 'code_length')
- self.grid_right_edge = self.pf.arr(np.ones((self.num_grids,3),
+ self.grid_right_edge = self.ds.arr(np.ones((self.num_grids,3),
self.float_type), 'code_length')
self.grid_levels = np.zeros((self.num_grids,1), 'int32')
self.grid_particle_count = np.zeros((self.num_grids,1), 'int32')
@@ -163,7 +163,7 @@
mylog.info("Locking grids to parents.")
for i, g in enumerate(self.grids):
si = g.get_global_startindex()
- g.LeftEdge = self.pf.domain_left_edge + g.dds * si
+ g.LeftEdge = self.ds.domain_left_edge + g.dds * si
g.RightEdge = g.LeftEdge + g.ActiveDimensions * g.dds
self.grid_left_edge[i,:] = g.LeftEdge
self.grid_right_edge[i,:] = g.RightEdge
@@ -192,9 +192,9 @@
except:
pass
print "t = %0.8e = %0.8e s = %0.8e years" % \
- (self.pf.current_time.in_units("code_time"),
- self.pf.current_time.in_units("s"),
- self.pf.current_time.in_units("yr"))
+ (self.ds.current_time.in_units("code_time"),
+ self.ds.current_time.in_units("s"),
+ self.ds.current_time.in_units("yr"))
print "\nSmallest Cell:"
u=[]
for item in ("Mpc", "pc", "AU", "cm"):
@@ -242,9 +242,9 @@
def get_grid_tree(self) :
- left_edge = self.pf.arr(np.zeros((self.num_grids, 3)),
+ left_edge = self.ds.arr(np.zeros((self.num_grids, 3)),
'code_length')
- right_edge = self.pf.arr(np.zeros((self.num_grids, 3)),
+ right_edge = self.ds.arr(np.zeros((self.num_grids, 3)),
'code_length')
level = np.zeros((self.num_grids), dtype='int64')
parent_ind = np.zeros((self.num_grids), dtype='int64')
@@ -265,7 +265,7 @@
level, num_children)
def convert(self, unit):
- return self.parameter_file.conversion_factors[unit]
+ return self.dataset.conversion_factors[unit]
def _identify_base_chunk(self, dobj):
if dobj._type_name == "grid":
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/object_finding_mixin.py
--- a/yt/geometry/object_finding_mixin.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/object_finding_mixin.py Sun Jun 15 19:50:51 2014 -0700
@@ -38,8 +38,8 @@
# So if gRE > coord, we get a mask, if not, we get a zero
# if gLE > coord, we get a zero, if not, mask
# Thus, if the coordinate is between the two edges, we win!
- xax = self.pf.coordinates.x_axis[axis]
- yax = self.pf.coordinates.y_axis[axis]
+ xax = self.ds.coordinates.x_axis[axis]
+ yax = self.ds.coordinates.y_axis[axis]
np.choose(np.greater(self.grid_right_edge[:,xax],coord[0]),(0,mask),mask)
np.choose(np.greater(self.grid_left_edge[:,xax],coord[0]),(mask,0),mask)
np.choose(np.greater(self.grid_right_edge[:,yax],coord[1]),(0,mask),mask)
@@ -150,7 +150,7 @@
Examples
--------
- >>> pf.h.find_field_value_at_point(['Density', 'Temperature'],
+ >>> ds.h.find_field_value_at_point(['Density', 'Temperature'],
[0.4, 0.3, 0.8])
[2.1489e-24, 1.23843e4]
"""
@@ -192,8 +192,8 @@
centers = (self.grid_right_edge + self.grid_left_edge)/2.0
long_axis = np.maximum.reduce(self.grid_right_edge - self.grid_left_edge, 1)
t = np.abs(centers - center)
- DW = self.parameter_file.domain_right_edge \
- - self.parameter_file.domain_left_edge
+ DW = self.dataset.domain_right_edge \
+ - self.dataset.domain_left_edge
np.minimum(t, np.abs(DW-t), t)
dist = np.sqrt(np.sum((t**2.0), axis=1))
gridI = np.where(dist < (radius + long_axis))
@@ -211,8 +211,8 @@
def get_periodic_box_grids(self, left_edge, right_edge):
mask = np.zeros(self.grids.shape, dtype='bool')
- dl = self.parameter_file.domain_left_edge
- dr = self.parameter_file.domain_right_edge
+ dl = self.dataset.domain_left_edge
+ dr = self.dataset.domain_right_edge
left_edge = np.array(left_edge)
right_edge = np.array(right_edge)
dw = dr - dl
@@ -244,8 +244,8 @@
def get_periodic_box_grids_below_level(self, left_edge, right_edge, level,
min_level = 0):
mask = np.zeros(self.grids.shape, dtype='bool')
- dl = self.parameter_file.domain_left_edge
- dr = self.parameter_file.domain_right_edge
+ dl = self.dataset.domain_left_edge
+ dr = self.dataset.domain_right_edge
left_edge = np.array(left_edge)
right_edge = np.array(right_edge)
dw = dr - dl
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/oct_geometry_handler.py
--- a/yt/geometry/oct_geometry_handler.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/oct_geometry_handler.py Sun Jun 15 19:50:51 2014 -0700
@@ -44,11 +44,11 @@
"""
Returns (in code units) the smallest cell size in the simulation.
"""
- return (self.parameter_file.domain_width /
+ return (self.dataset.domain_width /
(2**(self.max_level+1))).min()
def convert(self, unit):
- return self.parameter_file.conversion_factors[unit]
+ return self.dataset.conversion_factors[unit]
def find_max(self, field, finest_levels = 3):
"""
@@ -69,7 +69,7 @@
source.quantities["MaxLocation"](field)
mylog.info("Max Value is %0.5e at %0.16f %0.16f %0.16f",
max_val, mx, my, mz)
- self.pf.parameters["Max%sValue" % (field,)] = max_val
- self.pf.parameters["Max%sPos" % (field,)] = "%s" % ((mx,my,mz),)
+ self.ds.parameters["Max%sValue" % (field,)] = max_val
+ self.ds.parameters["Max%sPos" % (field,)] = "%s" % ((mx,my,mz),)
return max_val, np.array((mx,my,mz), dtype='float64')
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/particle_geometry_handler.py
--- a/yt/geometry/particle_geometry_handler.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/particle_geometry_handler.py Sun Jun 15 19:50:51 2014 -0700
@@ -40,14 +40,13 @@
class ParticleIndex(Index):
_global_mesh = False
- def __init__(self, pf, dataset_type):
+ def __init__(self, ds, dataset_type):
self.dataset_type = dataset_type
- self.parameter_file = weakref.proxy(pf)
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.parameter_filename
+ self.dataset = weakref.proxy(ds)
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.path.dirname(self.index_filename)
self.float_type = np.float64
- super(ParticleIndex, self).__init__(pf, dataset_type)
+ super(ParticleIndex, self).__init__(ds, dataset_type)
def _setup_geometry(self):
mylog.debug("Initializing Particle Geometry Handler.")
@@ -59,32 +58,32 @@
Returns (in code units) the smallest cell size in the simulation.
"""
dx = 1.0/(2**self.oct_handler.max_level)
- dx *= (self.parameter_file.domain_right_edge -
- self.parameter_file.domain_left_edge)
+ dx *= (self.dataset.domain_right_edge -
+ self.dataset.domain_left_edge)
return dx.min()
def convert(self, unit):
- return self.parameter_file.conversion_factors[unit]
+ return self.dataset.conversion_factors[unit]
def _initialize_particle_handler(self):
self._setup_data_io()
- template = self.parameter_file.filename_template
- ndoms = self.parameter_file.file_count
- cls = self.parameter_file._file_class
- self.data_files = [cls(self.parameter_file, self.io, template % {'num':i}, i)
+ template = self.dataset.filename_template
+ ndoms = self.dataset.file_count
+ cls = self.dataset._file_class
+ self.data_files = [cls(self.dataset, self.io, template % {'num':i}, i)
for i in range(ndoms)]
self.total_particles = sum(
sum(d.total_particles.values()) for d in self.data_files)
- pf = self.parameter_file
+ ds = self.dataset
self.oct_handler = ParticleOctreeContainer(
- [1, 1, 1], pf.domain_left_edge, pf.domain_right_edge,
- over_refine = pf.over_refine_factor)
- self.oct_handler.n_ref = pf.n_ref
+ [1, 1, 1], ds.domain_left_edge, ds.domain_right_edge,
+ over_refine = ds.over_refine_factor)
+ self.oct_handler.n_ref = ds.n_ref
mylog.info("Allocating for %0.3e particles", self.total_particles)
# No more than 256^3 in the region finder.
N = min(len(self.data_files), 256)
self.regions = ParticleRegions(
- pf.domain_left_edge, pf.domain_right_edge,
+ ds.domain_left_edge, ds.domain_right_edge,
[N, N, N], len(self.data_files))
self._initialize_indices()
self.oct_handler.finalize()
@@ -116,21 +115,21 @@
def _detect_output_fields(self):
# TODO: Add additional fields
- pfl = []
+ dsl = []
units = {}
for dom in self.data_files:
fl, _units = self.io._identify_fields(dom)
units.update(_units)
dom._calculate_offsets(fl)
for f in fl:
- if f not in pfl: pfl.append(f)
- self.field_list = pfl
- pf = self.parameter_file
- pf.particle_types = tuple(set(pt for pt, pf in pfl))
+ if f not in dsl: dsl.append(f)
+ self.field_list = dsl
+ ds = self.dataset
+ ds.particle_types = tuple(set(pt for pt, ds in dsl))
# This is an attribute that means these particle types *actually*
# exist. As in, they are real, in the dataset.
- pf.field_units.update(units)
- pf.particle_types_raw = pf.particle_types
+ ds.field_units.update(units)
+ ds.particle_types_raw = ds.particle_types
def _identify_base_chunk(self, dobj):
if getattr(dobj, "_chunk_info", None) is None:
@@ -139,9 +138,9 @@
data_files = [self.data_files[i] for i in
self.regions.identify_data_files(dobj.selector)]
base_region = getattr(dobj, "base_region", dobj)
- oref = self.parameter_file.over_refine_factor
+ oref = self.dataset.over_refine_factor
subset = [ParticleOctreeSubset(base_region, data_files,
- self.parameter_file, over_refine_factor = oref)]
+ self.dataset, over_refine_factor = oref)]
dobj._chunk_info = subset
dobj._current_chunk = list(self._chunk_all(dobj))[0]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/polar_coordinates.py
--- a/yt/geometry/polar_coordinates.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/polar_coordinates.py Sun Jun 15 19:50:51 2014 -0700
@@ -24,9 +24,9 @@
class PolarCoordinateHandler(CoordinateHandler):
- def __init__(self, pf, ordering = 'rtz'):
+ def __init__(self, ds, ordering = 'rtz'):
if ordering != 'rtz': raise NotImplementedError
- super(PolarCoordinateHandler, self).__init__(pf)
+ super(PolarCoordinateHandler, self).__init__(ds)
def setup_fields(self, registry):
# return the fields for r, z, theta
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/selection_routines.pyx Sun Jun 15 19:50:51 2014 -0700
@@ -117,18 +117,18 @@
self.overlap_cells = 0
for i in range(3) :
- pf = getattr(dobj, 'pf', None)
- if pf is None:
+ ds = getattr(dobj, 'ds', None)
+ if ds is None:
for i in range(3):
# NOTE that this is not universal.
self.domain_width[i] = 1.0
self.periodicity[i] = False
else:
- DLE = _ensure_code(pf.domain_left_edge)
- DRE = _ensure_code(pf.domain_right_edge)
+ DLE = _ensure_code(ds.domain_left_edge)
+ DRE = _ensure_code(ds.domain_right_edge)
for i in range(3):
self.domain_width[i] = DRE[i] - DLE[i]
- self.periodicity[i] = pf.periodicity[i]
+ self.periodicity[i] = ds.periodicity[i]
@cython.boundscheck(False)
@cython.wraparound(False)
@@ -620,7 +620,7 @@
# do an in-place conversion of those arrays.
_ensure_code(dobj.right_edge)
_ensure_code(dobj.left_edge)
- DW = _ensure_code(dobj.pf.domain_width.copy())
+ DW = _ensure_code(dobj.ds.domain_width.copy())
for i in range(3):
region_width = dobj.right_edge[i] - dobj.left_edge[i]
@@ -631,23 +631,23 @@
"Region right edge < left edge: width = %s" % region_width
)
- if dobj.pf.periodicity[i]:
+ if dobj.ds.periodicity[i]:
# shift so left_edge guaranteed in domain
- if dobj.left_edge[i] < dobj.pf.domain_left_edge[i]:
+ if dobj.left_edge[i] < dobj.ds.domain_left_edge[i]:
dobj.left_edge[i] += domain_width
dobj.right_edge[i] += domain_width
- elif dobj.left_edge[i] > dobj.pf.domain_right_edge[i]:
+ elif dobj.left_edge[i] > dobj.ds.domain_right_edge[i]:
dobj.left_edge[i] += domain_width
dobj.right_edge[i] += domain_width
else:
- if dobj.left_edge[i] < dobj.pf.domain_left_edge[i] or \
- dobj.right_edge[i] > dobj.pf.domain_right_edge[i]:
+ if dobj.left_edge[i] < dobj.ds.domain_left_edge[i] or \
+ dobj.right_edge[i] > dobj.ds.domain_right_edge[i]:
raise RuntimeError(
"Error: bad Region in non-periodic domain along dimension %s. "
"Region left edge = %s, Region right edge = %s"
"Dataset left edge = %s, Dataset right edge = %s" % \
(i, dobj.left_edge[i], dobj.right_edge[i],
- dobj.pf.domain_left_edge[i], dobj.pf.domain_right_edge[i])
+ dobj.ds.domain_left_edge[i], dobj.ds.domain_right_edge[i])
)
# Already ensured in code
self.left_edge[i] = dobj.left_edge[i]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/spec_cube_coordinates.py
--- a/yt/geometry/spec_cube_coordinates.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/spec_cube_coordinates.py Sun Jun 15 19:50:51 2014 -0700
@@ -20,14 +20,14 @@
class SpectralCubeCoordinateHandler(CartesianCoordinateHandler):
- def __init__(self, pf):
- super(SpectralCubeCoordinateHandler, self).__init__(pf)
+ def __init__(self, ds):
+ super(SpectralCubeCoordinateHandler, self).__init__(ds)
self.axis_name = {}
self.axis_id = {}
- for axis, axis_name in zip([pf.lon_axis, pf.lat_axis, pf.spec_axis],
- ["Image\ x", "Image\ y", pf.spec_name]):
+ for axis, axis_name in zip([ds.lon_axis, ds.lat_axis, ds.spec_axis],
+ ["Image\ x", "Image\ y", ds.spec_name]):
lower_ax = "xyz"[axis]
upper_ax = lower_ax.upper()
@@ -41,16 +41,16 @@
self.axis_id[axis_name] = axis
self.default_unit_label = {}
- self.default_unit_label[pf.lon_axis] = "pixel"
- self.default_unit_label[pf.lat_axis] = "pixel"
- self.default_unit_label[pf.spec_axis] = pf.spec_unit
+ self.default_unit_label[ds.lon_axis] = "pixel"
+ self.default_unit_label[ds.lat_axis] = "pixel"
+ self.default_unit_label[ds.spec_axis] = ds.spec_unit
def _spec_axis(ax, x, y):
p = (x,y)[ax]
- return [self.pf.pixel2spec(pp).v for pp in p]
+ return [self.ds.pixel2spec(pp).v for pp in p]
self.axis_field = {}
- self.axis_field[self.pf.spec_axis] = _spec_axis
+ self.axis_field[self.ds.spec_axis] = _spec_axis
def convert_to_cylindrical(self, coord):
raise NotImplementedError
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/spherical_coordinates.py
--- a/yt/geometry/spherical_coordinates.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/spherical_coordinates.py Sun Jun 15 19:50:51 2014 -0700
@@ -25,9 +25,9 @@
class SphericalCoordinateHandler(CoordinateHandler):
- def __init__(self, pf, ordering = 'rtp'):
+ def __init__(self, ds, ordering = 'rtp'):
if ordering != 'rtp': raise NotImplementedError
- super(SphericalCoordinateHandler, self).__init__(pf)
+ super(SphericalCoordinateHandler, self).__init__(ds)
def setup_fields(self, registry):
# return the fields for r, z, theta
@@ -153,5 +153,5 @@
@property
def period(self):
- return self.pf.domain_width
+ return self.ds.domain_width
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/tests/test_particle_octree.py
--- a/yt/geometry/tests/test_particle_octree.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/tests/test_particle_octree.py Sun Jun 15 19:50:51 2014 -0700
@@ -88,8 +88,8 @@
bbox.append( [DLE[i], DRE[i]] )
bbox = np.array(bbox)
for n_ref in [16, 32, 64, 512, 1024]:
- pf = load_particles(data, 1.0, bbox = bbox, n_ref = n_ref)
- dd = pf.h.all_data()
+ ds = load_particles(data, 1.0, bbox = bbox, n_ref = n_ref)
+ dd = ds.all_data()
bi = dd["io","mesh_id"]
v = np.bincount(bi.astype("int64"))
yield assert_equal, v.max() <= n_ref, True
@@ -110,22 +110,22 @@
bbox = np.array(bbox)
_attrs = ('icoords', 'fcoords', 'fwidth', 'ires')
for n_ref in [16, 32, 64, 512, 1024]:
- pf1 = load_particles(data, 1.0, bbox = bbox, n_ref = n_ref)
- dd1 = pf1.h.all_data()
+ ds1 = load_particles(data, 1.0, bbox = bbox, n_ref = n_ref)
+ dd1 = ds1.h.all_data()
v1 = dict((a, getattr(dd1, a)) for a in _attrs)
cv1 = dd1["cell_volume"].sum(dtype="float64")
for over_refine in [1, 2, 3]:
f = 1 << (3*(over_refine-1))
- pf2 = load_particles(data, 1.0, bbox = bbox, n_ref = n_ref,
+ ds2 = load_particles(data, 1.0, bbox = bbox, n_ref = n_ref,
over_refine_factor = over_refine)
- dd2 = pf2.h.all_data()
+ dd2 = ds2.h.all_data()
v2 = dict((a, getattr(dd2, a)) for a in _attrs)
for a in sorted(v1):
yield assert_equal, v1[a].size * f, v2[a].size
cv2 = dd2["cell_volume"].sum(dtype="float64")
yield assert_equal, cv1, cv2
-class FakePF:
+class FakeDS:
domain_left_edge = None
domain_right_edge = None
domain_width = None
@@ -135,20 +135,20 @@
class FakeRegion:
def __init__(self, nfiles):
- self.pf = FakePF()
- self.pf.domain_left_edge = YTArray([0.0, 0.0, 0.0], "code_length",
- registry=self.pf.unit_registry)
- self.pf.domain_right_edge = YTArray([nfiles, nfiles, nfiles], "code_length",
- registry=self.pf.unit_registry)
- self.pf.domain_width = self.pf.domain_right_edge - \
- self.pf.domain_left_edge
+ self.ds = FakeDS()
+ self.ds.domain_left_edge = YTArray([0.0, 0.0, 0.0], "code_length",
+ registry=self.ds.unit_registry)
+ self.ds.domain_right_edge = YTArray([nfiles, nfiles, nfiles], "code_length",
+ registry=self.ds.unit_registry)
+ self.ds.domain_width = self.ds.domain_right_edge - \
+ self.ds.domain_left_edge
self.nfiles = nfiles
def set_edges(self, file_id):
self.left_edge = YTArray([file_id + 0.1, 0.0, 0.0],
- 'code_length', registry=self.pf.unit_registry)
+ 'code_length', registry=self.ds.unit_registry)
self.right_edge = YTArray([file_id+1 - 0.1, self.nfiles, self.nfiles],
- 'code_length', registry=self.pf.unit_registry)
+ 'code_length', registry=self.ds.unit_registry)
def test_particle_regions():
np.random.seed(int(0x4d3d3d3))
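The octree test above exercises the renamed load_particles / all_data path. A minimal standalone sketch of that usage follows; the particle field names, the import path, and the fabricated positions are assumptions chosen for illustration and are not part of this changeset.

import numpy as np
from yt.frontends.stream.api import load_particles

npart = 1000
# illustrative particle data in a unit box (assumed field names)
data = {"particle_position_x": np.random.random(npart),
        "particle_position_y": np.random.random(npart),
        "particle_position_z": np.random.random(npart),
        "particle_mass": np.ones(npart)}
bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])

ds = load_particles(data, 1.0, bbox=bbox, n_ref=64)   # was: pf = load_particles(...)
dd = ds.all_data()                                     # was: pf.h.all_data()
print dd["io", "mesh_id"].max()                        # per-particle oct id, as queried in the test above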
diff -r f20d58ca2848 -r 67507b4f8da9 yt/geometry/unstructured_mesh_handler.py
--- a/yt/geometry/unstructured_mesh_handler.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/geometry/unstructured_mesh_handler.py Sun Jun 15 19:50:51 2014 -0700
@@ -25,14 +25,13 @@
_global_mesh = False
_unsupported_objects = ('proj', 'covering_grid', 'smoothed_covering_grid')
- def __init__(self, pf, dataset_type):
+ def __init__(self, ds, dataset_type):
self.dataset_type = dataset_type
- self.parameter_file = weakref.proxy(pf)
- # for now, the index file is the parameter file!
- self.index_filename = self.parameter_file.parameter_filename
+ self.dataset = weakref.proxy(ds)
+ self.index_filename = self.dataset.parameter_filename
self.directory = os.path.dirname(self.index_filename)
self.float_type = np.float64
- super(UnstructuredIndex, self).__init__(pf, dataset_type)
+ super(UnstructuredIndex, self).__init__(ds, dataset_type)
def _setup_geometry(self):
mylog.debug("Initializing Unstructured Mesh Geometry Handler.")
@@ -49,7 +48,7 @@
return dx
def convert(self, unit):
- return self.parameter_file.conversion_factors[unit]
+ return self.dataset.conversion_factors[unit]
def _initialize_mesh(self):
raise NotImplementedError
diff -r f20d58ca2848 -r 67507b4f8da9 yt/gui/reason/extdirect_repl.py
--- a/yt/gui/reason/extdirect_repl.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/gui/reason/extdirect_repl.py Sun Jun 15 19:50:51 2014 -0700
@@ -189,7 +189,7 @@
from yt.mods import *
from yt.gui.reason.utils import load_script, deliver_image
from yt.gui.reason.widget_store import WidgetStore
-from yt.data_objects.static_output import _cached_pfs
+from yt.data_objects.static_output import _cached_datasets
pylab.ion()
data_objects = []
@@ -420,7 +420,7 @@
@lockit
def load(self, base_dir, filename):
pp = os.path.join(base_dir, filename)
- funccall = "pfs.append(load('%s'))" % pp
+ funccall = "datasets.append(load('%s'))" % pp
self.execute(funccall)
return []
diff -r f20d58ca2848 -r 67507b4f8da9 yt/gui/reason/pannable_map.py
--- a/yt/gui/reason/pannable_map.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/gui/reason/pannable_map.py Sun Jun 15 19:50:51 2014 -0700
@@ -44,7 +44,7 @@
reasonjs_file = None
def __init__(self, data, field, route_prefix = ""):
self.data = data
- self.pf = data.pf
+ self.ds = data.ds
self.field = field
bottle.route("%s/map/:L/:x/:y.png" % route_prefix)(self.map)
@@ -67,22 +67,22 @@
dd = 1.0 / (2.0**(int(L)))
relx = int(x) * dd
rely = int(y) * dd
- DW = (self.pf.domain_right_edge - self.pf.domain_left_edge)
- xl = self.pf.domain_left_edge[0] + relx * DW[0]
- yl = self.pf.domain_left_edge[1] + rely * DW[1]
+ DW = (self.ds.domain_right_edge - self.ds.domain_left_edge)
+ xl = self.ds.domain_left_edge[0] + relx * DW[0]
+ yl = self.ds.domain_left_edge[1] + rely * DW[1]
xr = xl + dd*DW[0]
yr = yl + dd*DW[1]
frb = FixedResolutionBuffer(self.data, (xl, xr, yl, yr), (256, 256))
cmi, cma = get_color_bounds(self.data['px'], self.data['py'],
self.data['pdx'], self.data['pdy'],
self.data[self.field],
- self.pf.domain_left_edge[0],
- self.pf.domain_right_edge[0],
- self.pf.domain_left_edge[1],
- self.pf.domain_right_edge[1],
+ self.ds.domain_left_edge[0],
+ self.ds.domain_right_edge[0],
+ self.ds.domain_left_edge[1],
+ self.ds.domain_right_edge[1],
dd*DW[0] / (64*256),
dd*DW[0])
- if self.pf.field_info[self.field].take_log:
+ if self.ds.field_info[self.field].take_log:
cmi = np.log10(cmi)
cma = np.log10(cma)
to_plot = apply_colormap(np.log10(frb[self.field]), color_bounds = (cmi, cma))
diff -r f20d58ca2848 -r 67507b4f8da9 yt/gui/reason/utils.py
--- a/yt/gui/reason/utils.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/gui/reason/utils.py Sun Jun 15 19:50:51 2014 -0700
@@ -49,27 +49,27 @@
def get_list_of_datasets():
# Note that this instantiates the index. This can be a costly
# event. However, we're going to assume that it's okay, if you have
- # decided to load up the parameter file.
- from yt.data_objects.static_output import _cached_pfs
+ # decided to load up the dataset.
+ from yt.data_objects.static_output import _cached_datasets
rv = []
- for fn, pf in sorted(_cached_pfs.items()):
+ for fn, ds in sorted(_cached_datasets.items()):
objs = []
- pf_varname = "_cached_pfs['%s']" % (fn)
+ ds_varname = "_cached_datasets['%s']" % (fn)
field_list = []
- if pf._instantiated_index is not None:
- field_list = list(set(pf.field_list + pf.derived_field_list))
+ if ds._instantiated_index is not None:
+ field_list = list(set(ds.field_list + ds.derived_field_list))
field_list = [dict(text = f) for f in sorted(field_list)]
- for i,obj in enumerate(pf.h.objects):
+ for i,obj in enumerate(ds.h.objects):
try:
name = str(obj)
except ReferenceError:
continue
objs.append(dict(name=name, type=obj._type_name,
filename = '', field_list = [],
- varname = "%s.h.objects[%s]" % (pf_varname, i)))
- rv.append( dict(name = str(pf), children = objs, filename=fn,
- type = "parameter_file",
- varname = pf_varname, field_list = field_list) )
+ varname = "%s.h.objects[%s]" % (ds_varname, i)))
+ rv.append( dict(name = str(ds), children = objs, filename=fn,
+ type = "dataset",
+ varname = ds_varname, field_list = field_list) )
return rv
def get_reasonjs_path():
diff -r f20d58ca2848 -r 67507b4f8da9 yt/gui/reason/widget_builders.py
--- a/yt/gui/reason/widget_builders.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/gui/reason/widget_builders.py Sun Jun 15 19:50:51 2014 -0700
@@ -19,12 +19,12 @@
_camera = None
_tf = None
- def __init__(self, pf, camera=None, tf=None):
- self.pf = weakref.proxy(pf)
+ def __init__(self, ds, camera=None, tf=None):
+ self.ds = weakref.proxy(ds)
self._camera = camera
self._tf = tf
- self.center = self.pf.domain_center
+ self.center = self.ds.domain_center
self.normal_vector = np.array([0.7,1.0,0.3])
self.north_vector = [0.,0.,1.]
self.steady_north = True
@@ -34,14 +34,14 @@
self.mi = None
self.ma = None
if self._tf is None:
- self._new_tf(self.pf)
+ self._new_tf(self.ds)
if self._camera is None:
- self._new_camera(self.pf)
+ self._new_camera(self.ds)
- def _new_tf(self, pf, mi=None, ma=None, nbins=1024):
+ def _new_tf(self, ds, mi=None, ma=None, nbins=1024):
if mi is None or ma is None:
- roi = self.pf.region(self.center, self.center-self.width, self.center+self.width)
+ roi = self.ds.region(self.center, self.center-self.width, self.center+self.width)
self.mi, self.ma = roi.quantities['Extrema'](self.fields[0])[0]
if self.log_fields[0]:
self.mi, self.ma = np.log10(self.mi), np.log10(self.ma)
@@ -53,9 +53,9 @@
col_bounds = (self.mi,self.ma),
colormap=colormap)
- def _new_camera(self, pf):
+ def _new_camera(self, ds):
del self._camera
- self._camera = self.pf.camera(self.center, self.normal_vector,
+ self._camera = self.ds.camera(self.center, self.normal_vector,
self.width, self.resolution, self._tf,
north_vector=self.north_vector,
steady_north=self.steady_north,
@@ -65,16 +65,16 @@
return self._camera.snapshot()
-def get_corners(pf, max_level=None):
- DL = pf.domain_left_edge[None,:,None]
- DW = pf.domain_width[None,:,None]/100.0
- corners = ((pf.grid_corners-DL)/DW)
- levels = pf.grid_levels
+def get_corners(ds, max_level=None):
+ DL = ds.domain_left_edge[None,:,None]
+ DW = ds.domain_width[None,:,None]/100.0
+ corners = ((ds.grid_corners-DL)/DW)
+ levels = ds.grid_levels
return corners, levels
-def get_isocontour(pf, field, value=None, rel_val = False):
+def get_isocontour(ds, field, value=None, rel_val = False):
- dd = pf.h.all_data()
+ dd = ds.h.all_data()
if value is None or rel_val:
if value is None: value = 0.5
mi, ma = np.log10(dd.quantities["Extrema"]("Density")[0])
@@ -83,9 +83,9 @@
np.multiply(vert, 100, vert)
return vert
-def get_streamlines(pf):
+def get_streamlines(ds):
from yt.visualization.api import Streamlines
- streamlines = Streamlines(pf, pf.domain_center)
+ streamlines = Streamlines(ds, ds.domain_center)
streamlines.integrate_through_volume()
stream = streamlines.path(0)
matplotlib.pylab.semilogy(stream['t'], stream['Density'], '-x')
diff -r f20d58ca2848 -r 67507b4f8da9 yt/gui/reason/widget_store.py
--- a/yt/gui/reason/widget_store.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/gui/reason/widget_store.py Sun Jun 15 19:50:51 2014 -0700
@@ -56,58 +56,58 @@
multicast_session, multicast_token)
mylog.info("Multicasting %s to %s", widget_id, multicast_session)
- def create_slice(self, pf, center, axis, field, onmax):
+ def create_slice(self, ds, center, axis, field, onmax):
if onmax:
- center = pf.h.find_max('Density')[1]
+ center = ds.h.find_max('Density')[1]
else:
center = np.array(center)
- axis = pf.coordinates.axis_id[axis.lower()]
+ axis = ds.coordinates.axis_id[axis.lower()]
coord = center[axis]
- sl = pf.slice(axis, coord, center = center)
- xax = pf.coordinates.x_axis[axis]
- yax = pf.coordinates.y_axis[axis]
- DLE, DRE = pf.domain_left_edge, pf.domain_right_edge
+ sl = ds.slice(axis, coord, center = center)
+ xax = ds.coordinates.x_axis[axis]
+ yax = ds.coordinates.y_axis[axis]
+ DLE, DRE = ds.domain_left_edge, ds.domain_right_edge
pw = PWViewerExtJS(sl, (DLE[xax], DRE[xax], DLE[yax], DRE[yax]),
setup = False, plot_type='SlicePlot')
pw.set_current_field(field)
- field_list = list(set(pf.field_list + pf.derived_field_list))
+ field_list = list(set(ds.field_list + ds.derived_field_list))
field_list = [dict(text = f) for f in sorted(field_list)]
cb = pw._get_cbar_image()
trans = pw._field_transform[pw._current_field].name
widget_data = {'fields': field_list,
'initial_field': field,
- 'title': "%s Slice" % (pf),
+ 'title': "%s Slice" % (ds),
'colorbar': cb,
'initial_transform' : trans}
self._add_widget(pw, widget_data)
- def create_proj(self, pf, axis, field, weight):
+ def create_proj(self, ds, axis, field, weight):
if weight == "None": weight = None
- axis = pf.coordinates.axis_id[axis.lower()]
- proj = pf.proj(field, axis, weight_field=weight)
- xax = pf.coordinates.x_axis[axis]
- yax = pf.coordinates.y_axis[axis]
- DLE, DRE = pf.domain_left_edge, pf.domain_right_edge
+ axis = ds.coordinates.axis_id[axis.lower()]
+ proj = ds.proj(field, axis, weight_field=weight)
+ xax = ds.coordinates.x_axis[axis]
+ yax = ds.coordinates.y_axis[axis]
+ DLE, DRE = ds.domain_left_edge, ds.domain_right_edge
pw = PWViewerExtJS(proj, (DLE[xax], DRE[xax], DLE[yax], DRE[yax]),
setup = False, plot_type='ProjectionPlot')
pw.set_current_field(field)
- field_list = list(set(pf.field_list + pf.derived_field_list))
+ field_list = list(set(ds.field_list + ds.derived_field_list))
field_list = [dict(text = f) for f in sorted(field_list)]
cb = pw._get_cbar_image()
widget_data = {'fields': field_list,
'initial_field': field,
- 'title': "%s Projection" % (pf),
+ 'title': "%s Projection" % (ds),
'colorbar': cb}
self._add_widget(pw, widget_data)
- def create_grid_dataview(self, pf):
- levels = pf.grid_levels
- left_edge = pf.grid_left_edge
- right_edge = pf.grid_right_edge
- dimensions = pf.grid_dimensions
- cell_counts = pf.grid_dimensions.prod(axis=1)
+ def create_grid_dataview(self, ds):
+ levels = ds.grid_levels
+ left_edge = ds.grid_left_edge
+ right_edge = ds.grid_right_edge
+ dimensions = ds.grid_dimensions
+ cell_counts = ds.grid_dimensions.prod(axis=1)
# This is annoying, and not ... that happy for memory.
- i = pf.index.grids[0]._id_offset
+ i = ds.index.grids[0]._id_offset
vals = []
for i, (L, LE, RE, dim, cell) in enumerate(zip(
levels, left_edge, right_edge, dimensions, cell_counts)):
@@ -125,11 +125,11 @@
}
self.payload_handler.add_payload(payload)
- def create_pf_display(self, pf):
- widget = ParameterFileWidget(pf)
+ def create_ds_display(self, ds):
+ widget = ParameterFileWidget(ds)
widget_data = {'fields': widget._field_list(),
'level_stats': widget._level_stats(),
- 'pf_info': widget._pf_info(),
+ 'ds_info': widget._ds_info(),
}
self._add_widget(widget, widget_data)
@@ -154,13 +154,13 @@
'metadata_string': mds}
self._add_widget(pp, widget_data)
- def create_scene(self, pf):
+ def create_scene(self, ds):
'''Creates 3D XTK-based scene'''
- widget = SceneWidget(pf)
- field_list = list(set(pf.field_list
- + pf.derived_field_list))
+ widget = SceneWidget(ds)
+ field_list = list(set(ds.field_list
+ + ds.derived_field_list))
field_list.sort()
- widget_data = {'title':'Scene for %s' % pf,
+ widget_data = {'title':'Scene for %s' % ds,
'fields': field_list}
self._add_widget(widget, widget_data)
@@ -169,21 +169,21 @@
_ext_widget_id = None
_widget_name = "parameterfile"
- def __init__(self, pf):
- self.pf = weakref.proxy(pf)
+ def __init__(self, ds):
+ self.ds = weakref.proxy(ds)
def _field_list(self):
- field_list = list(set(self.pf.field_list
- + self.pf.derived_field_list))
+ field_list = list(set(self.ds.field_list
+ + self.ds.derived_field_list))
field_list.sort()
return [dict(text = field) for field in field_list]
def _level_stats(self):
level_data = []
- level_stats = self.pf.h.level_stats
+ level_stats = self.ds.h.level_stats
ngrids = float(level_stats['numgrids'].sum())
ncells = float(level_stats['numcells'].sum())
- for level in range(self.pf.h.max_level + 1):
+ for level in range(self.ds.h.max_level + 1):
cell_count = level_stats['numcells'][level]
grid_count = level_stats['numgrids'][level]
level_data.append({'level' : level,
@@ -193,9 +193,9 @@
'grid_rel': int(100*grid_count/ngrids)})
return level_data
- def _pf_info(self):
+ def _ds_info(self):
tr = {}
- for k, v in self.pf._mrep._attrs.items():
+ for k, v in self.ds._mrep._attrs.items():
if isinstance(v, np.ndarray):
tr[k] = v.tolist()
else:
@@ -206,7 +206,7 @@
ph = PayloadHandler()
ph.widget_payload(self,
{'ptype':'field_info',
- 'field_source': self.pf.field_info[field].get_source() })
+ 'field_source': self.ds.field_info[field].get_source() })
return
class SceneWidget(object):
@@ -214,12 +214,12 @@
_widget_name = "scene"
_rendering_scene = None
- def __init__(self, pf):
- self.pf = weakref.proxy(pf)
+ def __init__(self, ds):
+ self.ds = weakref.proxy(ds)
def add_volume_rendering(self):
return None
- self._rendering_scene = RenderingScene(self.pf, None, None)
+ self._rendering_scene = RenderingScene(self.ds, None, None)
def deliver_rendering(self, scene_config):
ph = PayloadHandler()
@@ -229,7 +229,7 @@
def deliver_isocontour(self, field, value, rel_val = False):
ph = PayloadHandler()
- vert = get_isocontour(self.pf, field, value, rel_val)
+ vert = get_isocontour(self.ds, field, value, rel_val)
normals = np.empty(vert.shape)
for i in xrange(vert.shape[0]/3):
n = np.cross(vert[i*3,:], vert[i*3+1,:])
@@ -241,12 +241,12 @@
def deliver_gridlines(self):
ph = PayloadHandler()
- corners, levels = get_corners(self.pf)
+ corners, levels = get_corners(self.ds)
ph.widget_payload(self, {'ptype':'grid_lines',
'binary': ['corners', 'levels'],
'corners': corners,
'levels': levels,
- 'max_level': int(self.pf.h.max_level)})
+ 'max_level': int(self.ds.h.max_level)})
return
def render_path(self, views, times, N):
@@ -294,6 +294,6 @@
def deliver_streamlines(self):
- pf = PayloadHandler()
+ ds = PayloadHandler()
pass
diff -r f20d58ca2848 -r 67507b4f8da9 yt/testing.py
--- a/yt/testing.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/testing.py Sun Jun 15 19:50:51 2014 -0700
@@ -141,7 +141,7 @@
return left, right, level
-def fake_random_pf(
+def fake_random_ds(
ndims, peak_value = 1.0,
fields = ("density", "velocity_x", "velocity_y", "velocity_z"),
units = ('g/cm**3', 'cm/s', 'cm/s', 'cm/s'),
@@ -178,7 +178,7 @@
ug = load_uniform_grid(data, ndims, length_unit=length_unit, nprocs=nprocs)
return ug
-def fake_amr_pf(fields = ("Density",)):
+def fake_amr_ds(fields = ("Density",)):
from yt.frontends.stream.api import load_amr_grids
data = []
for gspec in _amr_grid_index:
@@ -556,16 +556,16 @@
--------
@check_results
- def my_func(pf):
- return pf.domain_width
+ def my_func(ds):
+ return ds.domain_width
- my_func(pf)
+ my_func(ds)
@check_results
def field_checker(dd, field_name):
return dd[field_name]
- field_cheker(pf.h.all_data(), 'density', result_basename='density')
+ field_cheker(ds.all_data(), 'density', result_basename='density')
"""
def compute_results(func):
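The renamed fake_random_ds helper defined above is what the unit tests below import. A short sketch of its intended use, assuming the default field list shown in the hunk:

from yt.testing import fake_random_ds

ds = fake_random_ds(64, nprocs=1)    # renamed helper; previously fake_random_pf
dd = ds.all_data()                   # data containers now hang off the dataset, not pf.h
print dd["density"].size, ds.domain_width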
diff -r f20d58ca2848 -r 67507b4f8da9 yt/units/tests/test_units.py
--- a/yt/units/tests/test_units.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/units/tests/test_units.py Sun Jun 15 19:50:51 2014 -0700
@@ -21,7 +21,7 @@
assert_allclose, assert_raises
from nose.tools import assert_true
from sympy import Symbol
-from yt.testing import fake_random_pf
+from yt.testing import fake_random_ds
# dimensions
from yt.units.dimensions import \
@@ -421,12 +421,12 @@
Msun_cgs / Mpc_cgs**3, 1e-12
def test_is_code_unit():
- pf = fake_random_pf(64, nprocs=1)
- u1 = Unit('code_mass', registry=pf.unit_registry)
- u2 = Unit('code_mass/code_length', registry=pf.unit_registry)
- u3 = Unit('code_velocity*code_mass**2', registry=pf.unit_registry)
- u4 = Unit('code_time*code_mass**0.5', registry=pf.unit_registry)
- u5 = Unit('code_mass*g', registry=pf.unit_registry)
+ ds = fake_random_ds(64, nprocs=1)
+ u1 = Unit('code_mass', registry=ds.unit_registry)
+ u2 = Unit('code_mass/code_length', registry=ds.unit_registry)
+ u3 = Unit('code_velocity*code_mass**2', registry=ds.unit_registry)
+ u4 = Unit('code_time*code_mass**0.5', registry=ds.unit_registry)
+ u5 = Unit('code_mass*g', registry=ds.unit_registry)
u6 = Unit('g/cm**3')
yield assert_true, u1.is_code_unit
diff -r f20d58ca2848 -r 67507b4f8da9 yt/units/tests/test_ytarray.py
--- a/yt/units/tests/test_ytarray.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/units/tests/test_ytarray.py Sun Jun 15 19:50:51 2014 -0700
@@ -34,7 +34,7 @@
unary_operators, binary_operators
from yt.utilities.exceptions import \
YTUnitOperationError, YTUfuncUnitError
-from yt.testing import fake_random_pf, requires_module
+from yt.testing import fake_random_ds, requires_module
from yt.funcs import fix_length
from yt.units.unit_symbols import \
cm, m, g
@@ -505,16 +505,16 @@
"""
Test fixing the length of an array. Used in spheres and other data objects
"""
- pf = fake_random_pf(64, nprocs=1, length_unit=10)
- length = pf.quan(1.0, 'code_length')
- new_length = fix_length(length, pf=pf)
+ ds = fake_random_ds(64, nprocs=1, length_unit=10)
+ length = ds.quan(1.0, 'code_length')
+ new_length = fix_length(length, ds=ds)
yield assert_equal, YTQuantity(10, 'cm'), new_length
def test_ytarray_pickle():
- pf = fake_random_pf(64, nprocs=1)
- test_data = [pf.quan(12.0, 'code_length'),
- pf.arr([1, 2, 3], 'code_length')]
+ ds = fake_random_ds(64, nprocs=1)
+ test_data = [ds.quan(12.0, 'code_length'),
+ ds.arr([1, 2, 3], 'code_length')]
for data in test_data:
tempf = tempfile.NamedTemporaryFile(delete=False)
@@ -700,7 +700,7 @@
def test_registry_association():
- ds = fake_random_pf(64, nprocs=1, length_unit=10)
+ ds = fake_random_ds(64, nprocs=1, length_unit=10)
a = ds.quan(3, 'cm')
b = YTQuantity(4, 'm')
c = ds.quan(6, '')
@@ -797,7 +797,7 @@
curdir = os.getcwd()
os.chdir(tmpdir)
- ds = fake_random_pf(64, nprocs=1, length_unit=10)
+ ds = fake_random_ds(64, nprocs=1, length_unit=10)
warr = ds.arr(np.random.random((256, 256)), 'code_length')
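The ds.quan / ds.arr calls above attach quantities and arrays to the dataset's unit registry. A brief sketch of that behavior, assuming length_unit=10 as in the tests above (so one code_length corresponds to 10 cm):

from yt.testing import fake_random_ds

ds = fake_random_ds(64, nprocs=1, length_unit=10)
a = ds.quan(1.0, 'code_length')           # quantity tied to this dataset's unit registry
arr = ds.arr([1, 2, 3], 'code_length')    # array equivalent
print a.in_units('cm')                    # 10.0 cm, since code_length is 10 cm here
print arr.in_cgs()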
diff -r f20d58ca2848 -r 67507b4f8da9 yt/units/yt_array.py
--- a/yt/units/yt_array.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/units/yt_array.py Sun Jun 15 19:50:51 2014 -0700
@@ -1032,17 +1032,17 @@
def array_like_field(data, x, field):
field = data._determine_fields(field)[0]
if isinstance(field, tuple):
- units = data.pf._get_field_info(field[0],field[1]).units
+ units = data.ds._get_field_info(field[0],field[1]).units
else:
- units = data.pf._get_field_info(field).units
+ units = data.ds._get_field_info(field).units
if isinstance(x, YTArray):
arr = copy.deepcopy(x)
arr.convert_to_units(units)
return arr
if isinstance(x, np.ndarray):
- return data.pf.arr(x, units)
+ return data.ds.arr(x, units)
else:
- return data.pf.quan(x, units)
+ return data.ds.quan(x, units)
def get_binary_op_return_class(cls1, cls2):
if cls1 is cls2:
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/amr_kdtree/amr_kdtree.py
--- a/yt/utilities/amr_kdtree/amr_kdtree.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/amr_kdtree/amr_kdtree.py Sun Jun 15 19:50:51 2014 -0700
@@ -42,17 +42,17 @@
[ 1, 1, -1], [ 1, 1, 0], [ 1, 1, 1] ])
class Tree(object):
- def __init__(self, pf, comm_rank=0, comm_size=1, left=None, right=None,
+ def __init__(self, ds, comm_rank=0, comm_size=1, left=None, right=None,
min_level=None, max_level=None, data_source=None):
- self.pf = pf
+ self.ds = ds
try:
- self._id_offset = pf.index.grids[0]._id_offset
+ self._id_offset = ds.index.grids[0]._id_offset
except AttributeError:
self._id_offset = 0
if data_source is None:
- data_source = pf.h.all_data()
+ data_source = ds.all_data()
self.data_source = data_source
if left is None:
left = np.array([-np.inf]*3)
@@ -60,7 +60,7 @@
right = np.array([np.inf]*3)
if min_level is None: min_level = 0
- if max_level is None: max_level = pf.h.max_level
+ if max_level is None: max_level = ds.index.max_level
self.min_level = min_level
self.max_level = max_level
self.comm_rank = comm_rank
@@ -90,7 +90,7 @@
for node in depth_traverse(self.trunk):
if node.grid == -1:
continue
- grid = self.pf.index.grids[node.grid - self._id_offset]
+ grid = self.ds.index.grids[node.grid - self._id_offset]
dds = grid.dds
gle = grid.LeftEdge
gre = grid.RightEdge
@@ -116,7 +116,7 @@
continue
if not all_cells and not kd_is_leaf(node):
continue
- grid = self.pf.index.grids[node.grid - self._id_offset]
+ grid = self.ds.index.grids[node.grid - self._id_offset]
dds = grid.dds
gle = grid.LeftEdge
nle = get_left_edge(node)
@@ -134,30 +134,30 @@
log_fields = None
no_ghost = True
- def __init__(self, pf, min_level=None, max_level=None,
+ def __init__(self, ds, min_level=None, max_level=None,
data_source=None):
ParallelAnalysisInterface.__init__(self)
- self.pf = pf
+ self.ds = ds
self.current_vcds = []
self.current_saved_grids = []
self.bricks = []
self.brick_dimensions = []
- self.sdx = pf.index.get_smallest_dx()
+ self.sdx = ds.index.get_smallest_dx()
self._initialized = False
try:
- self._id_offset = pf.index.grids[0]._id_offset
+ self._id_offset = ds.index.grids[0]._id_offset
except AttributeError:
self._id_offset = 0
if data_source is None:
- data_source = self.pf.h.all_data()
+ data_source = self.ds.all_data()
self.data_source = data_source
mylog.debug('Building AMRKDTree')
- self.tree = Tree(pf, self.comm.rank, self.comm.size,
+ self.tree = Tree(ds, self.comm.rank, self.comm.size,
min_level=min_level, max_level=max_level,
data_source=data_source)
@@ -185,10 +185,10 @@
yield self.get_brick_data(node)
def slice_traverse(self, viewpoint = None):
- if not hasattr(self.pf.h, "grid"):
+ if not hasattr(self.ds.index, "grid"):
raise NotImplementedError
for node in kd_traverse(self.tree.trunk, viewpoint=viewpoint):
- grid = self.pf.index.grids[node.grid - self._id_offset]
+ grid = self.ds.index.grids[node.grid - self._id_offset]
dds = grid.dds
gle = grid.LeftEdge.in_units("code_length").ndarray_view()
nle = get_left_edge(node)
@@ -253,7 +253,7 @@
def get_brick_data(self, node):
if node.data is not None: return node.data
- grid = self.pf.index.grids[node.grid - self._id_offset]
+ grid = self.ds.index.grids[node.grid - self._id_offset]
dds = grid.dds.ndarray_view()
gle = grid.LeftEdge.ndarray_view()
nle = get_left_edge(node)
@@ -340,7 +340,7 @@
in_grid = np.all((new_cis >=0)*
(new_cis < grid.ActiveDimensions),axis=1)
new_positions = position + steps*offs
- new_positions = [periodic_position(p, self.pf) for p in new_positions]
+ new_positions = [periodic_position(p, self.ds) for p in new_positions]
grids[in_grid] = grid
get_them = np.argwhere(in_grid != True).ravel()
@@ -348,7 +348,7 @@
if (in_grid != True).sum()>0:
grids[in_grid != True] = \
- [self.pf.index.grids[self.locate_brick(new_positions[i]).grid -
+ [self.ds.index.grids[self.locate_brick(new_positions[i]).grid -
self._id_offset]
for i in get_them]
cis[in_grid != True] = \
@@ -386,7 +386,7 @@
"""
position = np.array(position)
- grid = self.pf.index.grids[self.locate_brick(position).grid -
+ grid = self.ds.index.grids[self.locate_brick(position).grid -
self._id_offset]
ci = ((position-grid.LeftEdge)/grid.dds).astype('int64')
return self.locate_neighbors(grid,ci)
@@ -395,7 +395,7 @@
if not self._initialized:
self.initialize_source()
if fn is None:
- fn = '%s_kd_bricks.h5'%self.pf
+ fn = '%s_kd_bricks.h5'%self.ds
if self.comm.rank != 0:
self.comm.recv_array(self.comm.rank-1, tag=self.comm.rank-1)
f = h5py.File(fn,'w')
@@ -415,7 +415,7 @@
def load_kd_bricks(self,fn=None):
if fn is None:
- fn = '%s_kd_bricks.h5' % self.pf
+ fn = '%s_kd_bricks.h5' % self.ds
if self.comm.rank != 0:
self.comm.recv_array(self.comm.rank-1, tag=self.comm.rank-1)
try:
@@ -539,11 +539,11 @@
if __name__ == "__main__":
from yt.mods import *
from time import time
- pf = load('/Users/skillman/simulations/DD1717/DD1717')
- pf.h
+ ds = load('/Users/skillman/simulations/DD1717/DD1717')
+ ds.index
t1 = time()
- hv = AMRKDTree(pf)
+ hv = AMRKDTree(ds)
t2 = time()
print kd_sum_volume(hv.tree.trunk)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/answer_testing/README
--- a/yt/utilities/answer_testing/README Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/answer_testing/README Sun Jun 15 19:50:51 2014 -0700
@@ -82,7 +82,7 @@
This is a test case designed to handle a single test.
Additional Attributes:
- * filename => The parameter file to test
+ * filename => The dataset to test
Additional Methods:
* None
@@ -136,8 +136,8 @@
name = "maximum_density"
def run(self):
- # self.pf already exists
- value, center = self.pf.h.find_max("Density")
+ # self.ds already exists
+ value, center = self.ds.find_max("density")
self.result = (value, center)
def compare(self, old_result):
@@ -222,8 +222,8 @@
field = None
def run(self):
- # self.pf already exists
- value, center = self.pf.h.find_max(self.field)
+ # self.ds already exists
+ value, center = self.ds.find_max(self.field)
self.result = (value, center)
def compare(self, old_result):
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/answer_testing/boolean_region_tests.py
--- a/yt/utilities/answer_testing/boolean_region_tests.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/answer_testing/boolean_region_tests.py Sun Jun 15 19:50:51 2014 -0700
@@ -9,13 +9,13 @@
# be identical for the AND operator.
class TestBooleanANDGridQuantity(YTDatasetTest):
def run(self):
- domain = self.pf.domain_right_edge - self.pf.domain_left_edge
- four = 0.4 * domain + self.pf.domain_left_edge
- five = 0.5 * domain + self.pf.domain_left_edge
- six = 0.6 * domain + self.pf.domain_left_edge
- re1 = self.pf.region(five, four, six)
- re2 = self.pf.region(five, five, six)
- re = self.pf.boolean([re1, "AND", re2])
+ domain = self.ds.domain_right_edge - self.ds.domain_left_edge
+ four = 0.4 * domain + self.ds.domain_left_edge
+ five = 0.5 * domain + self.ds.domain_left_edge
+ six = 0.6 * domain + self.ds.domain_left_edge
+ re1 = self.ds.region(five, four, six)
+ re2 = self.ds.region(five, five, six)
+ re = self.ds.boolean([re1, "AND", re2])
# re should look like re2.
x2 = re2['x']
x = re['x']
@@ -32,13 +32,13 @@
# OR
class TestBooleanORGridQuantity(YTDatasetTest):
def run(self):
- domain = self.pf.domain_right_edge - self.pf.domain_left_edge
- four = 0.4 * domain + self.pf.domain_left_edge
- five = 0.5 * domain + self.pf.domain_left_edge
- six = 0.6 * domain + self.pf.domain_left_edge
- re1 = self.pf.region(five, four, six)
- re2 = self.pf.region(five, five, six)
- re = self.pf.boolean([re1, "OR", re2])
+ domain = self.ds.domain_right_edge - self.ds.domain_left_edge
+ four = 0.4 * domain + self.ds.domain_left_edge
+ five = 0.5 * domain + self.ds.domain_left_edge
+ six = 0.6 * domain + self.ds.domain_left_edge
+ re1 = self.ds.region(five, four, six)
+ re2 = self.ds.region(five, five, six)
+ re = self.ds.boolean([re1, "OR", re2])
# re should look like re1
x1 = re1['x']
x = re['x']
@@ -55,23 +55,23 @@
# NOT
class TestBooleanNOTGridQuantity(YTDatasetTest):
def run(self):
- domain = self.pf.domain_right_edge - self.pf.domain_left_edge
- four = 0.4 * domain + self.pf.domain_left_edge
- five = 0.5 * domain + self.pf.domain_left_edge
- six = 0.6 * domain + self.pf.domain_left_edge
- re1 = self.pf.region(five, four, six)
- re2 = self.pf.region(five, five, six)
+ domain = self.ds.domain_right_edge - self.ds.domain_left_edge
+ four = 0.4 * domain + self.ds.domain_left_edge
+ five = 0.5 * domain + self.ds.domain_left_edge
+ six = 0.6 * domain + self.ds.domain_left_edge
+ re1 = self.ds.region(five, four, six)
+ re2 = self.ds.region(five, five, six)
# Bottom base
- re3 = self.pf.region(five, four, [six[0], six[1], five[2]])
+ re3 = self.ds.region(five, four, [six[0], six[1], five[2]])
# Side
- re4 = self.pf.region(five, [four[0], four[1], five[2]],
+ re4 = self.ds.region(five, [four[0], four[1], five[2]],
[five[0], six[1], six[2]])
# Last small cube
- re5 = self.pf.region(five, [five[0], four[0], four[2]],
+ re5 = self.ds.region(five, [five[0], four[0], four[2]],
[six[0], five[1], six[2]])
# re1 NOT re2 should look like re3 OR re4 OR re5
- re = self.pf.boolean([re1, "NOT", re2])
- reo = self.pf.boolean([re3, "OR", re4, "OR", re5])
+ re = self.ds.boolean([re1, "NOT", re2])
+ reo = self.ds.boolean([re3, "OR", re4, "OR", re5])
x = re['x']
xo = reo['x']
x = x[x.argsort()]
@@ -88,13 +88,13 @@
# be identical for the AND operator.
class TestBooleanANDParticleQuantity(YTDatasetTest):
def run(self):
- domain = self.pf.domain_right_edge - self.pf.domain_left_edge
- four = 0.4 * domain + self.pf.domain_left_edge
- five = 0.5 * domain + self.pf.domain_left_edge
- six = 0.6 * domain + self.pf.domain_left_edge
- re1 = self.pf.region(five, four, six)
- re2 = self.pf.region(five, five, six)
- re = self.pf.boolean([re1, "AND", re2])
+ domain = self.ds.domain_right_edge - self.ds.domain_left_edge
+ four = 0.4 * domain + self.ds.domain_left_edge
+ five = 0.5 * domain + self.ds.domain_left_edge
+ six = 0.6 * domain + self.ds.domain_left_edge
+ re1 = self.ds.region(five, four, six)
+ re2 = self.ds.region(five, five, six)
+ re = self.ds.boolean([re1, "AND", re2])
# re should look like re2.
x2 = re2['particle_position_x']
x = re['particle_position_x']
@@ -111,13 +111,13 @@
# OR
class TestBooleanORParticleQuantity(YTDatasetTest):
def run(self):
- domain = self.pf.domain_right_edge - self.pf.domain_left_edge
- four = 0.4 * domain + self.pf.domain_left_edge
- five = 0.5 * domain + self.pf.domain_left_edge
- six = 0.6 * domain + self.pf.domain_left_edge
- re1 = self.pf.region(five, four, six)
- re2 = self.pf.region(five, five, six)
- re = self.pf.boolean([re1, "OR", re2])
+ domain = self.ds.domain_right_edge - self.ds.domain_left_edge
+ four = 0.4 * domain + self.ds.domain_left_edge
+ five = 0.5 * domain + self.ds.domain_left_edge
+ six = 0.6 * domain + self.ds.domain_left_edge
+ re1 = self.ds.region(five, four, six)
+ re2 = self.ds.region(five, five, six)
+ re = self.ds.boolean([re1, "OR", re2])
# re should look like re1
x1 = re1['particle_position_x']
x = re['particle_position_x']
@@ -134,23 +134,23 @@
# NOT
class TestBooleanNOTParticleQuantity(YTDatasetTest):
def run(self):
- domain = self.pf.domain_right_edge - self.pf.domain_left_edge
- four = 0.4 * domain + self.pf.domain_left_edge
- five = 0.5 * domain + self.pf.domain_left_edge
- six = 0.6 * domain + self.pf.domain_left_edge
- re1 = self.pf.region(five, four, six)
- re2 = self.pf.region(five, five, six)
+ domain = self.ds.domain_right_edge - self.ds.domain_left_edge
+ four = 0.4 * domain + self.ds.domain_left_edge
+ five = 0.5 * domain + self.ds.domain_left_edge
+ six = 0.6 * domain + self.ds.domain_left_edge
+ re1 = self.ds.region(five, four, six)
+ re2 = self.ds.region(five, five, six)
# Bottom base
- re3 = self.pf.region(five, four, [six[0], six[1], five[2]])
+ re3 = self.ds.region(five, four, [six[0], six[1], five[2]])
# Side
- re4 = self.pf.region(five, [four[0], four[1], five[2]],
+ re4 = self.ds.region(five, [four[0], four[1], five[2]],
[five[0], six[1], six[2]])
# Last small cube
- re5 = self.pf.region(five, [five[0], four[0], four[2]],
+ re5 = self.ds.region(five, [five[0], four[0], four[2]],
[six[0], five[1], six[2]])
# re1 NOT re2 should look like re3 OR re4 OR re5
- re = self.pf.boolean([re1, "NOT", re2])
- reo = self.pf.boolean([re3, "OR", re4, "OR", re5])
+ re = self.ds.boolean([re1, "NOT", re2])
+ reo = self.ds.boolean([re3, "OR", re4, "OR", re5])
x = re['particle_position_x']
xo = reo['particle_position_x']
x = x[x.argsort()]
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/answer_testing/default_tests.py
--- a/yt/utilities/answer_testing/default_tests.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/answer_testing/default_tests.py Sun Jun 15 19:50:51 2014 -0700
@@ -23,9 +23,9 @@
def run(self):
# We're going to calculate the field statistics for every single field.
results = {}
- for field in self.pf.field_list:
+ for field in self.ds.field_list:
# Do it here so that it gets wiped each iteration
- dd = self.pf.h.all_data()
+ dd = self.ds.all_data()
results[field] = (dd[field].std(),
dd[field].mean(),
dd[field].min(),
@@ -45,11 +45,11 @@
def run(self):
results = {}
- for field in self.pf.field_list:
- if self.pf.field_info[field].particle_type: continue
+ for field in self.ds.field_list:
+ if self.ds.field_info[field].particle_type: continue
results[field] = []
for ax in range(3):
- t = self.pf.proj(field, ax)
+ t = self.ds.proj(field, ax)
results[field].append(t.field_data)
self.result = results
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/answer_testing/framework.py
--- a/yt/utilities/answer_testing/framework.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/answer_testing/framework.py Sun Jun 15 19:50:51 2014 -0700
@@ -161,14 +161,14 @@
self.cache = {}
def dump(self, result_storage, result):
raise NotImplementedError
- def get(self, pf_name, default=None):
+ def get(self, ds_name, default=None):
raise NotImplementedError
class AnswerTestCloudStorage(AnswerTestStorage):
- def get(self, pf_name, default = None):
+ def get(self, ds_name, default = None):
if self.reference_name is None: return default
- if pf_name in self.cache: return self.cache[pf_name]
- url = _url_path % (self.reference_name, pf_name)
+ if ds_name in self.cache: return self.cache[ds_name]
+ url = _url_path % (self.reference_name, ds_name)
try:
resp = urllib2.urlopen(url)
except urllib2.HTTPError as ex:
@@ -187,7 +187,7 @@
raise YTCloudError(url)
# This is dangerous, but we have a controlled S3 environment
rv = cPickle.loads(data)
- self.cache[pf_name] = rv
+ self.cache[ds_name] = rv
return rv
def progress_callback(self, current, total):
@@ -202,10 +202,10 @@
cf = pyrax.cloudfiles
c = cf.get_container("yt-answer-tests")
pb = get_pbar("Storing results ", len(result_storage))
- for i, pf_name in enumerate(result_storage):
+ for i, ds_name in enumerate(result_storage):
pb.update(i)
- rs = cPickle.dumps(result_storage[pf_name])
- object_name = "%s_%s" % (self.answer_name, pf_name)
+ rs = cPickle.dumps(result_storage[ds_name])
+ object_name = "%s_%s" % (self.answer_name, ds_name)
if object_name in c.get_object_names():
obj = c.get_object(object_name)
c.delete_object(obj)
@@ -217,17 +217,17 @@
if self.answer_name is None: return
# Store data using shelve
ds = shelve.open(self.answer_name, protocol=-1)
- for pf_name in result_storage:
- answer_name = "%s" % pf_name
+ for ds_name in result_storage:
+ answer_name = "%s" % ds_name
if answer_name in ds:
mylog.info("Overwriting %s", answer_name)
- ds[answer_name] = result_storage[pf_name]
+ ds[answer_name] = result_storage[ds_name]
ds.close()
- def get(self, pf_name, default=None):
+ def get(self, ds_name, default=None):
if self.reference_name is None: return default
# Read data using shelve
- answer_name = "%s" % pf_name
+ answer_name = "%s" % ds_name
ds = shelve.open(self.reference_name, protocol=-1)
try:
result = ds[answer_name]
@@ -243,36 +243,36 @@
yield
os.chdir(oldcwd)
-def can_run_pf(pf_fn, file_check = False):
- if isinstance(pf_fn, Dataset):
+def can_run_ds(ds_fn, file_check = False):
+ if isinstance(ds_fn, Dataset):
return AnswerTestingTest.result_storage is not None
path = ytcfg.get("yt", "test_data_dir")
if not os.path.isdir(path):
return False
with temp_cwd(path):
if file_check:
- return os.path.isfile(pf_fn) and \
+ return os.path.isfile(ds_fn) and \
AnswerTestingTest.result_storage is not None
try:
- load(pf_fn)
+ load(ds_fn)
except YTOutputNotIdentified:
return False
return AnswerTestingTest.result_storage is not None
-def data_dir_load(pf_fn, cls = None, args = None, kwargs = None):
+def data_dir_load(ds_fn, cls = None, args = None, kwargs = None):
path = ytcfg.get("yt", "test_data_dir")
- if isinstance(pf_fn, Dataset): return pf_fn
+ if isinstance(ds_fn, Dataset): return ds_fn
if not os.path.isdir(path):
return False
with temp_cwd(path):
if cls is None:
- pf = load(pf_fn)
+ ds = load(ds_fn)
else:
args = args or ()
kwargs = kwargs or {}
- pf = cls(pf_fn, *args, **kwargs)
- pf.h
- return pf
+ ds = cls(ds_fn, *args, **kwargs)
+ ds.index
+ return ds
def sim_dir_load(sim_fn, path = None, sim_type = "Enzo",
find_outputs=False):
@@ -288,8 +288,8 @@
reference_storage = None
result_storage = None
prefix = ""
- def __init__(self, pf_fn):
- self.pf = data_dir_load(pf_fn)
+ def __init__(self, ds_fn):
+ self.ds = data_dir_load(ds_fn)
def __call__(self):
nv = self.run()
@@ -306,20 +306,20 @@
@property
def storage_name(self):
if self.prefix != "":
- return "%s_%s" % (self.prefix, self.pf)
- return str(self.pf)
+ return "%s_%s" % (self.prefix, self.ds)
+ return str(self.ds)
def compare(self, new_result, old_result):
raise RuntimeError
- def create_plot(self, pf, plot_type, plot_field, plot_axis, plot_kwargs = None):
+ def create_plot(self, ds, plot_type, plot_field, plot_axis, plot_kwargs = None):
# plot_type should be a string
# plot_args should be a tuple
# plot_kwargs should be a dict
if plot_type is None:
raise RuntimeError('Must explicitly request a plot type')
cls = getattr(pw, plot_type)
- plot = cls(*(pf, plot_axis, plot_field), **plot_kwargs)
+ plot = cls(*(ds, plot_axis, plot_field), **plot_kwargs)
return plot
@property
@@ -327,7 +327,7 @@
"""
This returns the center of the domain.
"""
- return 0.5*(self.pf.domain_right_edge + self.pf.domain_left_edge)
+ return 0.5*(self.ds.domain_right_edge + self.ds.domain_left_edge)
@property
def max_dens_location(self):
@@ -335,14 +335,14 @@
This is a helper function to return the location of the most dense
point.
"""
- return self.pf.h.find_max("density")[1]
+ return self.ds.find_max("density")[1]
@property
def entire_simulation(self):
"""
Return an unsorted array of values that cover the entire domain.
"""
- return self.pf.h.all_data()
+ return self.ds.all_data()
@property
def description(self):
@@ -351,7 +351,7 @@
oname = "all"
else:
oname = "_".join((str(s) for s in obj_type))
- args = [self._type_name, str(self.pf), oname]
+ args = [self._type_name, str(self.ds), oname]
args += [str(getattr(self, an)) for an in self._attrs]
return "_".join(args)
@@ -359,15 +359,15 @@
_type_name = "FieldValues"
_attrs = ("field", )
- def __init__(self, pf_fn, field, obj_type = None,
+ def __init__(self, ds_fn, field, obj_type = None,
decimals = 10):
- super(FieldValuesTest, self).__init__(pf_fn)
+ super(FieldValuesTest, self).__init__(ds_fn)
self.obj_type = obj_type
self.field = field
self.decimals = decimals
def run(self):
- obj = create_obj(self.pf, self.obj_type)
+ obj = create_obj(self.ds, self.obj_type)
avg = obj.quantities.weighted_average_quantity(
self.field, weight="ones")
mi, ma = obj.quantities.extrema(self.field)
@@ -386,15 +386,15 @@
_type_name = "AllFieldValues"
_attrs = ("field", )
- def __init__(self, pf_fn, field, obj_type = None,
+ def __init__(self, ds_fn, field, obj_type = None,
decimals = None):
- super(AllFieldValuesTest, self).__init__(pf_fn)
+ super(AllFieldValuesTest, self).__init__(ds_fn)
self.obj_type = obj_type
self.field = field
self.decimals = decimals
def run(self):
- obj = create_obj(self.pf, self.obj_type)
+ obj = create_obj(self.ds, self.obj_type)
return obj[self.field]
def compare(self, new_result, old_result):
@@ -410,9 +410,9 @@
_type_name = "ProjectionValues"
_attrs = ("field", "axis", "weight_field")
- def __init__(self, pf_fn, axis, field, weight_field = None,
+ def __init__(self, ds_fn, axis, field, weight_field = None,
obj_type = None, decimals = None):
- super(ProjectionValuesTest, self).__init__(pf_fn)
+ super(ProjectionValuesTest, self).__init__(ds_fn)
self.axis = axis
self.field = field
self.weight_field = weight_field
@@ -421,11 +421,11 @@
def run(self):
if self.obj_type is not None:
- obj = create_obj(self.pf, self.obj_type)
+ obj = create_obj(self.ds, self.obj_type)
else:
obj = None
- if self.pf.domain_dimensions[self.axis] == 1: return None
- proj = self.pf.proj(self.field, self.axis,
+ if self.ds.domain_dimensions[self.axis] == 1: return None
+ proj = self.ds.proj(self.field, self.axis,
weight_field=self.weight_field,
data_source = obj)
return proj.field_data
@@ -461,9 +461,9 @@
_type_name = "PixelizedProjectionValues"
_attrs = ("field", "axis", "weight_field")
- def __init__(self, pf_fn, axis, field, weight_field = None,
+ def __init__(self, ds_fn, axis, field, weight_field = None,
obj_type = None):
- super(PixelizedProjectionValuesTest, self).__init__(pf_fn)
+ super(PixelizedProjectionValuesTest, self).__init__(ds_fn)
self.axis = axis
self.field = field
self.weight_field = field
@@ -471,10 +471,10 @@
def run(self):
if self.obj_type is not None:
- obj = create_obj(self.pf, self.obj_type)
+ obj = create_obj(self.ds, self.obj_type)
else:
obj = None
- proj = self.pf.proj(self.field, self.axis,
+ proj = self.ds.proj(self.field, self.axis,
weight_field=self.weight_field,
data_source = obj)
frb = proj.to_frb((1.0, 'unitary'), 256)
@@ -497,13 +497,13 @@
_type_name = "GridValues"
_attrs = ("field",)
- def __init__(self, pf_fn, field):
- super(GridValuesTest, self).__init__(pf_fn)
+ def __init__(self, ds_fn, field):
+ super(GridValuesTest, self).__init__(ds_fn)
self.field = field
def run(self):
hashes = {}
- for g in self.pf.index.grids:
+ for g in self.ds.index.grids:
hashes[g.id] = hashlib.md5(g[self.field].tostring()).hexdigest()
g.clear_data()
return hashes
@@ -520,10 +520,10 @@
_attrs = ()
def __init__(self, simulation_obj):
- self.pf = simulation_obj
+ self.ds = simulation_obj
def run(self):
- result = [ds.current_time for ds in self.pf]
+ result = [ds.current_time for ds in self.ds]
return result
def compare(self, new_result, old_result):
@@ -541,11 +541,11 @@
def run(self):
result = {}
- result["grid_dimensions"] = self.pf.index.grid_dimensions
- result["grid_left_edges"] = self.pf.index.grid_left_edge
- result["grid_right_edges"] = self.pf.index.grid_right_edge
- result["grid_levels"] = self.pf.index.grid_levels
- result["grid_particle_count"] = self.pf.index.grid_particle_count
+ result["grid_dimensions"] = self.ds.index.grid_dimensions
+ result["grid_left_edges"] = self.ds.index.grid_left_edge
+ result["grid_right_edges"] = self.ds.index.grid_right_edge
+ result["grid_levels"] = self.ds.index.grid_levels
+ result["grid_particle_count"] = self.ds.index.grid_particle_count
return result
def compare(self, new_result, old_result):
@@ -559,7 +559,7 @@
result = {}
result["parents"] = []
result["children"] = []
- for g in self.pf.index.grids:
+ for g in self.ds.index.grids:
p = g.Parent
if p is None:
result["parents"].append(None)
@@ -589,9 +589,9 @@
class PlotWindowAttributeTest(AnswerTestingTest):
_type_name = "PlotWindowAttribute"
_attrs = ('plot_type', 'plot_field', 'plot_axis', 'attr_name', 'attr_args')
- def __init__(self, pf_fn, plot_field, plot_axis, attr_name, attr_args,
+ def __init__(self, ds_fn, plot_field, plot_axis, attr_name, attr_args,
decimals, plot_type = 'SlicePlot'):
- super(PlotWindowAttributeTest, self).__init__(pf_fn)
+ super(PlotWindowAttributeTest, self).__init__(ds_fn)
self.plot_type = plot_type
self.plot_field = plot_field
self.plot_axis = plot_axis
@@ -601,7 +601,7 @@
self.decimals = decimals
def run(self):
- plot = self.create_plot(self.pf, self.plot_type, self.plot_field,
+ plot = self.create_plot(self.ds, self.plot_type, self.plot_field,
self.plot_axis, self.plot_kwargs)
attr = getattr(plot, self.attr_name)
attr(*self.attr_args[0], **self.attr_args[1])
@@ -618,8 +618,8 @@
class GenericArrayTest(AnswerTestingTest):
_type_name = "GenericArray"
_attrs = ('array_func_name','args','kwargs')
- def __init__(self, pf_fn, array_func, args=None, kwargs=None, decimals=None):
- super(GenericArrayTest, self).__init__(pf_fn)
+ def __init__(self, ds_fn, array_func, args=None, kwargs=None, decimals=None):
+ super(GenericArrayTest, self).__init__(ds_fn)
self.array_func = array_func
self.array_func_name = array_func.func_name
self.args = args
@@ -648,8 +648,8 @@
class GenericImageTest(AnswerTestingTest):
_type_name = "GenericImage"
_attrs = ('image_func_name','args','kwargs')
- def __init__(self, pf_fn, image_func, decimals, args=None, kwargs=None):
- super(GenericImageTest, self).__init__(pf_fn)
+ def __init__(self, ds_fn, image_func, decimals, args=None, kwargs=None):
+ super(GenericImageTest, self).__init__(ds_fn)
self.image_func = image_func
self.image_func_name = image_func.func_name
self.args = args
@@ -679,54 +679,54 @@
compare_image_lists(new_result, old_result, self.decimals)
-def requires_pf(pf_fn, big_data = False, file_check = False):
+def requires_ds(ds_fn, big_data = False, file_check = False):
def ffalse(func):
return lambda: None
def ftrue(func):
return func
if run_big_data == False and big_data == True:
return ffalse
- elif not can_run_pf(pf_fn, file_check):
+ elif not can_run_ds(ds_fn, file_check):
return ffalse
else:
return ftrue
-def small_patch_amr(pf_fn, fields):
- if not can_run_pf(pf_fn): return
+def small_patch_amr(ds_fn, fields):
+ if not can_run_ds(ds_fn): return
dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
- yield GridHierarchyTest(pf_fn)
- yield ParentageRelationshipsTest(pf_fn)
+ yield GridHierarchyTest(ds_fn)
+ yield ParentageRelationshipsTest(ds_fn)
for field in fields:
- yield GridValuesTest(pf_fn, field)
+ yield GridValuesTest(ds_fn, field)
for axis in [0, 1, 2]:
for ds in dso:
for weight_field in [None, "density"]:
yield ProjectionValuesTest(
- pf_fn, axis, field, weight_field,
+ ds_fn, axis, field, weight_field,
ds)
yield FieldValuesTest(
- pf_fn, field, ds)
+ ds_fn, field, ds)
-def big_patch_amr(pf_fn, fields):
- if not can_run_pf(pf_fn): return
+def big_patch_amr(ds_fn, fields):
+ if not can_run_ds(ds_fn): return
dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
- yield GridHierarchyTest(pf_fn)
- yield ParentageRelationshipsTest(pf_fn)
+ yield GridHierarchyTest(ds_fn)
+ yield ParentageRelationshipsTest(ds_fn)
for field in fields:
- yield GridValuesTest(pf_fn, field)
+ yield GridValuesTest(ds_fn, field)
for axis in [0, 1, 2]:
for ds in dso:
for weight_field in [None, "density"]:
yield PixelizedProjectionValuesTest(
- pf_fn, axis, field, weight_field,
+ ds_fn, axis, field, weight_field,
ds)
-def create_obj(pf, obj_type):
+def create_obj(ds, obj_type):
# obj_type should be tuple of
# ( obj_name, ( args ) )
if obj_type is None:
- return pf.h.all_data()
- cls = getattr(pf.h, obj_type[0])
+ return ds.all_data()
+ cls = getattr(ds, obj_type[0])
obj = cls(*obj_type[1])
return obj
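For reference, a minimal sketch of how a frontend's answer tests call these renamed helpers; the dataset path and field names below are illustrative placeholders, not part of this changeset:
    from yt.utilities.answer_testing.framework import \
        requires_ds, small_patch_amr
    _fields = ("density", "temperature", "velocity_magnitude")
    g30 = "IsolatedGalaxy/galaxy0030/galaxy0030"  # placeholder test dataset
    @requires_ds(g30)   # formerly requires_pf
    def test_galaxy0030():
        # small_patch_amr now takes a ds_fn and yields the hierarchy,
        # grid-value, projection and field-value tests defined above
        for test in small_patch_amr(g30, _fields):
            yield test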
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/answer_testing/halo_tests.py
--- a/yt/utilities/answer_testing/halo_tests.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/answer_testing/halo_tests.py Sun Jun 15 19:50:51 2014 -0700
@@ -12,7 +12,7 @@
def run(self):
# Find the halos using vanilla HOP.
- halos = HaloFinder(self.pf, threshold=self.threshold, dm_only=False)
+ halos = HaloFinder(self.ds, threshold=self.threshold, dm_only=False)
# We only care about the number of halos.
self.result = len(halos)
@@ -30,7 +30,7 @@
def run(self):
# Find the halos using FOF.
- halos = FOFHaloFinder(self.pf, link=self.link, dm_only=False,
+ halos = FOFHaloFinder(self.ds, link=self.link, dm_only=False,
padding=self.padding)
# We only care about the number of halos.
self.result = len(halos)
@@ -49,7 +49,7 @@
def run(self):
# Find the halos using parallel HOP.
- halos = parallelHF(self.pf, threshold=self.threshold, dm_only=False)
+ halos = parallelHF(self.ds, threshold=self.threshold, dm_only=False)
# We only care about the number of halos.
self.result = len(halos)
@@ -65,7 +65,7 @@
def run(self):
# Find the halos using vanilla HOP.
- halos = HaloFinder(self.pf, threshold=self.threshold, dm_only=False)
+ halos = HaloFinder(self.ds, threshold=self.threshold, dm_only=False)
# The result is a list of the particle IDs, stored
# as sets for easy comparison.
IDs = []
@@ -89,7 +89,7 @@
def run(self):
# Find the halos using vanilla HOP.
- halos = HaloFinder(self.pf, threshold=self.threshold, dm_only=False)
+ halos = HaloFinder(self.ds, threshold=self.threshold, dm_only=False)
# The result is a flattened array of the arrays of the particle IDs for
# each halo
IDs = []
@@ -117,7 +117,7 @@
def run(self):
# Find the halos using vanilla FOF.
- halos = FOFHaloFinder(self.pf, link=self.link, dm_only=False,
+ halos = FOFHaloFinder(self.ds, link=self.link, dm_only=False,
padding=self.padding)
# The result is a flattened array of the arrays of the particle IDs for
# each halo
@@ -145,7 +145,7 @@
def run(self):
# Find the halos using parallel HOP.
- halos = parallelHF(self.pf, threshold=self.threshold, dm_only=False)
+ halos = parallelHF(self.ds, threshold=self.threshold, dm_only=False)
# The result is a flattened array of the arrays of the particle IDs for
# each halo
IDs = []
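The same rename carries over to interactive halo finding; a minimal sketch, with a placeholder dataset path:
    import yt
    from yt.analysis_modules.halo_finding.api import HaloFinder
    ds = yt.load("Enzo_64/DD0043/data0043")   # placeholder path
    halos = HaloFinder(ds, threshold=160.0, dm_only=False)  # HOP, as in the tests above
    print len(halos)                          # number of halos found
    halos.write_out("%s.hop" % ds)            # same convention the `yt hop` command uses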
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/answer_testing/hydro_tests.py
--- a/yt/utilities/answer_testing/hydro_tests.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/answer_testing/hydro_tests.py Sun Jun 15 19:50:51 2014 -0700
@@ -27,12 +27,12 @@
def run(self):
# First we get our flattened projection -- this is the
# Density, px, py, pdx, and pdy
- proj = self.pf.proj(self.field, self.axis,
+ proj = self.ds.proj(self.field, self.axis,
weight_field=self.weight_field)
# Now let's stick it in a buffer
pixelized_proj = self.pixelize(proj, self.field)
# We just want the values, so this can be stored
- # independently of the parameter file.
+ # independently of the dataset.
# The .field_data attributes strip out everything other than the actual array
# values.
self.result = (proj.field_data, pixelized_proj.data)
@@ -51,7 +51,7 @@
pylab.clf()
pylab.imshow(self.result[1][self.field],
interpolation='nearest', origin='lower')
- fn = "%s_%s_%s_projection.png" % (self.pf, self.field,
+ fn = "%s_%s_%s_projection.png" % (self.ds, self.field,
self.weight_field)
pylab.savefig(fn)
return [fn]
@@ -63,9 +63,9 @@
def run(self):
# Here proj will just be the data array.
- proj = off_axis_projection(self.pf,
- (0.5 * (self.pf.domain_left_edge +
- self.pf.domain_right_edge)),
+ proj = off_axis_projection(self.ds,
+ (0.5 * (self.ds.domain_left_edge +
+ self.ds.domain_right_edge)),
[1., 1., 1.], 1., 400,
self.field, weight=self.weight_field)
@@ -80,7 +80,7 @@
def plot(self):
fn = "%s_%s_%s_off-axis_projection.png" % \
- (self.pf, self.field, self.weight_field)
+ (self.ds, self.field, self.weight_field)
write_image(self.result, fn)
return [fn]
@@ -90,15 +90,15 @@
def run(self):
np.random.seed(4333)
- start_point = np.random.random(self.pf.dimensionality) * \
- (self.pf.domain_right_edge - self.pf.domain_left_edge) + \
- self.pf.domain_left_edge
- end_point = np.random.random(self.pf.dimensionality) * \
- (self.pf.domain_right_edge - self.pf.domain_left_edge) + \
- self.pf.domain_left_edge
+ start_point = np.random.random(self.ds.dimensionality) * \
+ (self.ds.domain_right_edge - self.ds.domain_left_edge) + \
+ self.ds.domain_left_edge
+ end_point = np.random.random(self.ds.dimensionality) * \
+ (self.ds.domain_right_edge - self.ds.domain_left_edge) + \
+ self.ds.domain_left_edge
# Here proj will just be the data array.
- ray = self.pf.ray(start_point, end_point, field=self.field)
+ ray = self.ds.ray(start_point, end_point, field=self.field)
# values.
self.result = ray[self.field]
@@ -119,9 +119,9 @@
def run(self):
# Here proj will just be the data array.
- slice = self.pf.slice(self.axis,
- (0.5 * (self.pf.domain_left_edge +
- self.pf.domain_right_edge))[self.axis],
+ slice = self.ds.slice(self.axis,
+ (0.5 * (self.ds.domain_left_edge +
+ self.ds.domain_right_edge))[self.axis],
fields=self.field)
# values.
self.result = slice.field_data
@@ -133,7 +133,7 @@
self.compare_data_arrays(slice, oslice)
def plot(self):
- fn = "%s_%s_slice.png" % (self.pf, self.field)
+ fn = "%s_%s_slice.png" % (self.ds, self.field)
write_image(self.result[self.field], fn)
return [fn]
@@ -155,7 +155,7 @@
# We're NOT going to use the low-level profiling API here,
# because we are avoiding the calculations of min/max,
# as those should be tested in another test.
- pc = PlotCollection(self.pf, center=self.sim_center)
+ pc = PlotCollection(self.ds, center=self.sim_center)
p = pc.add_profile_object(self.entire_simulation,
[self.field_x, self.field_y], x_bins = self.n_bins,
weight=self.weight)
@@ -185,7 +185,7 @@
# We're NOT going to use the low-level profiling API here,
# because we are avoiding the calculations of min/max,
# as those should be tested in another test.
- pc = PlotCollection(self.pf, center=self.sim_center)
+ pc = PlotCollection(self.ds, center=self.sim_center)
p = pc.add_phase_object(self.entire_simulation,
[self.field_x, self.field_y, self.field_z], x_bins = self.x_bins, y_bins = self.y_bins,
weight=self.weight)
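Outside the test harness, the projection pattern exercised above looks like this on a throwaway dataset from fake_random_ds (the renamed fake_random_pf):
    from yt.testing import fake_random_ds
    ds = fake_random_ds(32)
    proj = ds.proj("density", 0, weight_field="density")  # weighted projection along x
    frb = proj.to_frb((1.0, 'unitary'), 256)              # fixed-resolution buffer
    print frb["density"].shape                            # (256, 256)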
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/answer_testing/output_tests.py
--- a/yt/utilities/answer_testing/output_tests.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/answer_testing/output_tests.py Sun Jun 15 19:50:51 2014 -0700
@@ -170,21 +170,21 @@
class YTDatasetTest(SingleOutputTest):
def setup(self):
- self.pf = load(self.filename)
+ self.ds = load(self.filename)
def pixelize(self, data, field, edges = None, dims = (512, 512)):
"""
This is a helper function that returns a 2D array of the specified
source, in the specified field, at the specified spatial extent.
"""
- xax = self.pf.coordinates.x_axis[self.axis]
- yax = self.pf.coordinates.y_axis[self.axis]
+ xax = self.ds.coordinates.x_axis[self.axis]
+ yax = self.ds.coordinates.y_axis[self.axis]
if edges is None:
- edges = (self.pf.domain_left_edge[xax],
- self.pf.domain_right_edge[xax],
- self.pf.domain_left_edge[yax],
- self.pf.domain_right_edge[yax])
+ edges = (self.ds.domain_left_edge[xax],
+ self.ds.domain_right_edge[xax],
+ self.ds.domain_left_edge[yax],
+ self.ds.domain_right_edge[yax])
frb = FixedResolutionBuffer( data, edges, dims)
frb[field] # To make the pixelization
return frb
@@ -204,7 +204,7 @@
"""
This returns the center of the domain.
"""
- return 0.5*(self.pf.domain_right_edge + self.pf.domain_left_edge)
+ return 0.5*(self.ds.domain_right_edge + self.ds.domain_left_edge)
@property
def max_dens_location(self):
@@ -212,13 +212,13 @@
This is a helper function to return the location of the most dense
point.
"""
- return self.pf.h.find_max("density")[1]
+ return self.ds.find_max("density")[1]
@property
def entire_simulation(self):
"""
Return an unsorted array of values that cover the entire domain.
"""
- return self.pf.h.all_data()
+ return self.ds.all_data()
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/answer_testing/particle_tests.py
--- a/yt/utilities/answer_testing/particle_tests.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/answer_testing/particle_tests.py Sun Jun 15 19:50:51 2014 -0700
@@ -7,7 +7,7 @@
def run(self):
# Test to make sure that all the particles have unique IDs.
- all = self.pf.h.all_data()
+ all = self.ds.all_data()
IDs = all["particle_index"]
# Make sure the order is the same every time.
IDs = IDs[IDs.argsort()]
@@ -31,7 +31,7 @@
def run(self):
# Tests to make sure the particle positions aren't changing
# drastically. This is very unlikely to be a problem.
- all = self.pf.h.all_data()
+ all = self.ds.all_data()
min = np.empty(3,dtype='float64')
max = min.copy()
dims = ["particle_position_x","particle_position_y",
@@ -48,8 +48,8 @@
self.compare_array_delta(min, old_min, 1e-7)
self.compare_array_delta(max, old_max, 1e-7)
# Also, the min/max shouldn't be outside the boundaries.
- if (min < self.pf.domain_left_edge).any(): return False
- if (max > self.pf.domain_right_edge).any(): return False
+ if (min < self.ds.domain_left_edge).any(): return False
+ if (max > self.ds.domain_right_edge).any(): return False
return True
def plot(self):
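A standalone version of the same boundary check, assuming a yt build in which fake_random_ds accepts a particles keyword:
    import numpy as np
    from yt.testing import fake_random_ds
    ds = fake_random_ds(16, particles=64)
    ad = ds.all_data()
    pos = np.array([ad["particle_position_%s" % ax] for ax in "xyz"]).T
    assert (pos.min(axis=0) >= np.array(ds.domain_left_edge)).all()
    assert (pos.max(axis=0) <= np.array(ds.domain_right_edge)).all()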
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/command_line.py
--- a/yt/utilities/command_line.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/command_line.py Sun Jun 15 19:50:51 2014 -0700
@@ -23,18 +23,18 @@
import argparse, os, os.path, math, sys, time, subprocess, getpass, tempfile
import urllib, urllib2, base64, os
-def _fix_pf(arg):
+def _fix_ds(arg):
if os.path.isdir("%s" % arg) and \
os.path.exists("%s/%s" % (arg,arg)):
- pf = load("%s/%s" % (arg,arg))
+ ds = load("%s/%s" % (arg,arg))
elif os.path.isdir("%s.dir" % arg) and \
os.path.exists("%s.dir/%s" % (arg,arg)):
- pf = load("%s.dir/%s" % (arg,arg))
+ ds = load("%s.dir/%s" % (arg,arg))
elif arg.endswith(".index"):
- pf = load(arg[:-10])
+ ds = load(arg[:-10])
else:
- pf = load(arg)
- return pf
+ ds = load(arg)
+ return ds
def _add_arg(sc, arg):
if isinstance(arg, types.StringTypes):
@@ -64,49 +64,49 @@
name = None
description = ""
aliases = ()
- npfs = 1
+ ndatasets = 1
@classmethod
def run(cls, args):
self = cls()
- # Some commands need to be run repeatedly on parameter files
+ # Some commands need to be run repeatedly on datasets
# In fact, this is the rule and the opposite is the exception
# BUT, we only want to parse the arguments once.
- if cls.npfs > 1:
+ if cls.ndatasets > 1:
self(args)
else:
- pf_args = getattr(args, "pf", [])
- if len(pf_args) > 1:
- pfs = args.pf
- for pf in pfs:
- args.pf = pf
+ ds_args = getattr(args, "ds", [])
+ if len(ds_args) > 1:
+ datasets = args.ds
+ for ds in datasets:
+ args.ds = ds
self(args)
- elif len(pf_args) == 0:
- pfs = []
+ elif len(ds_args) == 0:
+ datasets = []
self(args)
else:
- args.pf = getattr(args, 'pf', [None])[0]
+ args.ds = getattr(args, 'ds', [None])[0]
self(args)
class GetParameterFiles(argparse.Action):
def __call__(self, parser, namespace, values, option_string = None):
if len(values) == 1:
- pfs = values
+ datasets = values
elif len(values) == 2 and namespace.basename is not None:
- pfs = ["%s%04i" % (namespace.basename, r)
+ datasets = ["%s%04i" % (namespace.basename, r)
for r in range(int(values[0]), int(values[1]), namespace.skip) ]
else:
- pfs = values
- namespace.pf = [_fix_pf(pf) for pf in pfs]
+ datasets = values
+ namespace.ds = [_fix_ds(ds) for ds in datasets]
_common_options = dict(
all = dict(longname="--all", dest="reinstall",
default=False, action="store_true",
help="Reinstall the full yt stack in the current location."),
- pf = dict(short="pf", action=GetParameterFiles,
- nargs="+", help="Parameter files to run on"),
- opf = dict(action=GetParameterFiles, dest="pf",
- nargs="*", help="(Optional) Parameter files to run on"),
+ ds = dict(short="ds", action=GetParameterFiles,
+ nargs="+", help="Datasets to run on"),
+ ods = dict(action=GetParameterFiles, dest="ds",
+ nargs="*", help="(Optional) Datasets to run on"),
axis = dict(short="-a", longname="--axis",
action="store", type=int,
dest="axis", default=4,
@@ -165,7 +165,7 @@
bn = dict(short="-b", longname="--basename",
action="store", type=str,
dest="basename", default=None,
- help="Basename of parameter files"),
+ help="Basename of datasets"),
output = dict(short="-o", longname="--output",
action="store", type=str,
dest="output", default="frames/",
@@ -214,7 +214,7 @@
uboxes = dict(longname="--unit-boxes",
action="store_true",
dest="unit_boxes",
- help="Display helpful unit boxes"),
+ help="Display helpful unit boxes"),
thresh = dict(longname="--threshold",
action="store", type=float,
dest="threshold", default=None,
@@ -278,10 +278,10 @@
action="store", type=str,
dest="halo_hop_style",default="new",
help="Style of hop output file. 'new' for yt_hop files and 'old' for enzo_hop files."),
- halo_parameter_file = dict(longname="--halo_parameter_file",
+ halo_dataset = dict(longname="--halo_dataset",
action="store", type=str,
- dest="halo_parameter_file",default=None,
- help="HaloProfiler parameter file."),
+ dest="halo_dataset",default=None,
+ help="HaloProfiler dataset."),
make_profiles = dict(longname="--make_profiles",
action="store_true", default=False,
help="Make profiles with halo profiler."),
@@ -709,7 +709,7 @@
print
print "Press enter to spawn your editor, %s" % os.environ["EDITOR"]
loki = raw_input()
- tf = tempfile.NamedTemporaryFile(delete=False)
+ tf = tempfile.NamedTemporaryFile(delete=False)
fn = tf.name
tf.close()
popen = subprocess.call("$EDITOR %s" % fn, shell = True)
@@ -769,7 +769,7 @@
print
class YTHopCmd(YTCommand):
- args = ('outputfn','bn','thresh','dm_only','skip', 'pf')
+ args = ('outputfn','bn','thresh','dm_only','skip', 'ds')
name = "hop"
description = \
"""
@@ -778,11 +778,11 @@
"""
def __call__(self, args):
- pf = args.pf
+ ds = args.ds
kwargs = {'dm_only' : args.dm_only}
if args.threshold is not None: kwargs['threshold'] = args.threshold
- hop_list = HaloFinder(pf, **kwargs)
- if args.output is None: fn = "%s.hop" % pf
+ hop_list = HaloFinder(ds, **kwargs)
+ if args.output is None: fn = "%s.hop" % ds
else: fn = args.output
hop_list.write_out(fn)
@@ -1105,10 +1105,10 @@
"""
- args = ("pf", )
+ args = ("ds", )
def __call__(self, args):
- if args.pf is None:
+ if args.ds is None:
print "Could not load file."
sys.exit()
import yt.mods
@@ -1121,13 +1121,13 @@
api_version = '0.11'
local_ns = yt.mods.__dict__.copy()
- local_ns['ds'] = args.pf
+ local_ns['ds'] = args.ds
if api_version == '0.10':
shell = IPython.Shell.IPShellEmbed()
shell(local_ns = local_ns,
header =
- "\nHi there! Welcome to yt.\n\nWe've loaded your parameter file as 'ds'. Enjoy!"
+ "\nHi there! Welcome to yt.\n\nWe've loaded your dataset as 'ds'. Enjoy!"
)
else:
from IPython.config.loader import Config
@@ -1144,7 +1144,7 @@
dest="axis", default=0, help="Axis (4 for all three)"),
dict(short ="-o", longname="--host", action="store", type=str,
dest="host", default=None, help="IP Address to bind on"),
- "pf",
+ "ds",
)
name = "mapserver"
@@ -1155,14 +1155,14 @@
"""
def __call__(self, args):
- pf = args.pf
+ ds = args.ds
if args.axis == 4:
print "Doesn't work with multiple axes!"
return
if args.projection:
- p = ProjectionPlot(pf, args.axis, args.field, weight_field=args.weight)
+ p = ProjectionPlot(ds, args.axis, args.field, weight_field=args.weight)
else:
- p = SlicePlot(pf, args.axis, args.field)
+ p = SlicePlot(ds, args.axis, args.field)
from yt.gui.reason.pannable_map import PannableMapServer
mapper = PannableMapServer(p.data_source, args.field)
import yt.extern.bottle as bottle
@@ -1264,7 +1264,7 @@
class YTPlotCmd(YTCommand):
args = ("width", "unit", "bn", "proj", "center", "zlim", "axis", "field",
- "weight", "skip", "cmap", "output", "grids", "time", "pf", "max",
+ "weight", "skip", "cmap", "output", "grids", "time", "ds", "max",
"log", "linear")
name = "plot"
@@ -1276,18 +1276,18 @@
"""
def __call__(self, args):
- pf = args.pf
+ ds = args.ds
center = args.center
if args.center == (-1,-1,-1):
mylog.info("No center fed in; seeking.")
- v, center = pf.h.find_max("density")
+ v, center = ds.find_max("density")
if args.max:
- v, center = pf.h.find_max("density")
+ v, center = ds.find_max("density")
elif args.center is None:
- center = 0.5*(pf.domain_left_edge + pf.domain_right_edge)
+ center = 0.5*(ds.domain_left_edge + ds.domain_right_edge)
center = np.array(center)
- if pf.dimensionality < 3:
- dummy_dimensions = np.nonzero(pf.index.grids[0].ActiveDimensions <= 1)
+ if ds.dimensionality < 3:
+ dummy_dimensions = np.nonzero(ds.index.grids[0].ActiveDimensions <= 1)
axes = ensure_list(dummy_dimensions[0][0])
elif args.axis == 4:
axes = range(3)
@@ -1305,16 +1305,16 @@
for ax in axes:
mylog.info("Adding plot for axis %i", ax)
if args.projection:
- plt = ProjectionPlot(pf, ax, args.field, center=center,
+ plt = ProjectionPlot(ds, ax, args.field, center=center,
width=width,
weight_field=args.weight)
else:
- plt = SlicePlot(pf, ax, args.field, center=center,
+ plt = SlicePlot(ds, ax, args.field, center=center,
width=width)
if args.grids:
plt.annotate_grids()
if args.time:
- time = pf.current_time*pf['years']
+ time = ds.current_time*ds['years']
plt.annotate_text((0.2,0.8), 't = %5.2e yr'%time)
plt.set_cmap(args.field, args.cmap)
@@ -1322,13 +1322,13 @@
if args.zlim:
plt.set_zlim(args.field,*args.zlim)
ensure_dir_exists(args.output)
- plt.save(os.path.join(args.output,"%s" % (pf)))
+ plt.save(os.path.join(args.output,"%s" % (ds)))
class YTRenderCmd(YTCommand):
args = ("width", "unit", "center","enhance",'outputfn',
"field", "cmap", "contours", "viewpoint", "linear",
- "pixels", "up", "valrange", "log","contour_width", "pf")
+ "pixels", "up", "valrange", "log","contour_width", "ds")
name = "render"
description = \
"""
@@ -1336,13 +1336,13 @@
"""
def __call__(self, args):
- pf = args.pf
+ ds = args.ds
center = args.center
if args.center == (-1,-1,-1):
mylog.info("No center fed in; seeking.")
- v, center = pf.h.find_max("density")
+ v, center = ds.find_max("density")
elif args.center is None:
- center = 0.5*(pf.domain_left_edge + pf.domain_right_edge)
+ center = 0.5*(ds.domain_left_edge + ds.domain_right_edge)
center = np.array(center)
L = args.viewpoint
@@ -1355,8 +1355,8 @@
unit = '1'
width = args.width
if width is None:
- width = 0.5*(pf.domain_right_edge - pf.domain_left_edge)
- width /= pf[unit]
+ width = 0.5*(ds.domain_right_edge - ds.domain_left_edge)
+ width /= ds[unit]
N = args.pixels
if N is None:
@@ -1376,7 +1376,7 @@
myrange = args.valrange
if myrange is None:
- roi = pf.region(center, center-width, center+width)
+ roi = ds.region(center, center-width, center+width)
mi, ma = roi.quantities['Extrema'](field)[0]
if log:
mi, ma = np.log10(mi), np.log10(ma)
@@ -1395,7 +1395,7 @@
tf = ColorTransferFunction((mi-2, ma+2))
tf.add_layers(n_contours,w=contour_width,col_bounds = (mi,ma), colormap=cmap)
- cam = pf.h.camera(center, L, width, (N,N), transfer_function=tf, fields=[field])
+ cam = ds.camera(center, L, width, (N,N), transfer_function=tf, fields=[field])
image = cam.snapshot()
if args.enhance:
@@ -1405,7 +1405,7 @@
save_name = args.output
if save_name is None:
- save_name = "%s"%pf+"_"+field+"_rendering.png"
+ save_name = "%s"%ds+"_"+field+"_rendering.png"
if not '.png' in save_name:
save_name += '.png'
if cam.comm.rank != -1:
@@ -1512,7 +1512,7 @@
dict(short = "-r", longname = "--remote", action = "store_true",
default = False, dest="use_pyro",
help = "Use with a remote Pyro4 server."),
- "opf"
+ "ods"
)
description = \
"""
@@ -1548,18 +1548,18 @@
from yt.gui.reason.bottle_mods import uuid_serve_functions, PayloadHandler
hr = ExtDirectREPL(reasonjs_path, use_pyro=args.use_pyro)
hr.debug = PayloadHandler.debug = args.debug
- command_line = ["pfs = []"]
+ command_line = ["datasets = []"]
if args.find:
# We just have to find them and store references to them.
for fn in sorted(glob.glob("*/*.index")):
- command_line.append("pfs.append(load('%s'))" % fn[:-10])
+ command_line.append("datasets.append(load('%s'))" % fn[:-10])
hr.execute("\n".join(command_line))
bottle.debug()
uuid_serve_functions(open_browser=args.open_browser,
port=int(args.port), repl=hr)
class YTStatsCmd(YTCommand):
- args = ('outputfn','bn','skip','pf','field',
+ args = ('outputfn','bn','skip','ds','field',
dict(longname="--max", action='store_true', default=False,
dest='max', help="Display maximum of field requested through -f option."),
dict(longname="--min", action='store_true', default=False,
@@ -1575,22 +1575,22 @@
"""
def __call__(self, args):
- pf = args.pf
- pf.h.print_stats()
+ ds = args.ds
+ ds.print_stats()
vals = {}
- if args.field in pf.derived_field_list:
+ if args.field in ds.derived_field_list:
if args.max == True:
- vals['min'] = pf.h.find_max(args.field)
+ vals['min'] = ds.find_max(args.field)
print "Maximum %s: %0.5e at %s" % (args.field,
vals['min'][0], vals['min'][1])
if args.min == True:
- vals['max'] = pf.h.find_min(args.field)
+ vals['max'] = ds.find_min(args.field)
print "Minimum %s: %0.5e at %s" % (args.field,
vals['max'][0], vals['max'][1])
if args.output is not None:
- t = pf.current_time * pf['years']
+ t = ds.current_time * ds['years']
with open(args.output, "a") as f:
- f.write("%s (%0.5e years)\n" % (pf, t))
+ f.write("%s (%0.5e years)\n" % (ds, t))
if 'min' in vals:
f.write('Minimum %s is %0.5e at %s\n' % (
args.field, vals['min'][0], vals['min'][1]))
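In script form, the reworked stats path boils down to the following; the dataset path is a placeholder:
    import yt
    ds = yt.load("DD0010/DD0010")        # placeholder path
    ds.print_stats()                     # was pf.h.print_stats()
    v, loc = ds.find_max("density")      # was pf.h.find_max("density")
    print "Maximum density: %0.5e at %s" % (v, loc)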
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/exceptions.py
--- a/yt/utilities/exceptions.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/exceptions.py Sun Jun 15 19:50:51 2014 -0700
@@ -18,9 +18,9 @@
import os.path
class YTException(Exception):
- def __init__(self, message = None, pf = None):
+ def __init__(self, message = None, ds = None):
Exception.__init__(self, message)
- self.pf = pf
+ self.ds = ds
# Data access exceptions:
@@ -34,8 +34,8 @@
self.args, self.kwargs)
class YTSphereTooSmall(YTException):
- def __init__(self, pf, radius, smallest_cell):
- YTException.__init__(self, pf=pf)
+ def __init__(self, ds, radius, smallest_cell):
+ YTException.__init__(self, ds=ds)
self.radius = radius
self.smallest_cell = smallest_cell
@@ -60,16 +60,16 @@
return s
class YTFieldNotFound(YTException):
- def __init__(self, fname, pf):
+ def __init__(self, fname, ds):
self.fname = fname
- self.pf = pf
+ self.ds = ds
def __str__(self):
- return "Could not find field '%s' in %s." % (self.fname, self.pf)
+ return "Could not find field '%s' in %s." % (self.fname, self.ds)
class YTCouldNotGenerateField(YTFieldNotFound):
def __str__(self):
+ return "Field '%s' in %s could not be generated." % (self.fname, self.ds)
+ return "Could field '%s' in %s could not be generated." % (self.fname, self.ds)
class YTFieldTypeNotFound(YTException):
def __init__(self, fname):
@@ -118,21 +118,21 @@
return self.message
class MissingParameter(YTException):
- def __init__(self, pf, parameter):
- YTException.__init__(self, pf=pf)
+ def __init__(self, ds, parameter):
+ YTException.__init__(self, ds=ds)
self.parameter = parameter
def __str__(self):
- return "Parameter file %s is missing %s parameter." % \
- (self.pf, self.parameter)
+ return "Dataset %s is missing %s parameter." % \
+ (self.ds, self.parameter)
class NoStoppingCondition(YTException):
- def __init__(self, pf):
- YTException.__init__(self, pf=pf)
+ def __init__(self, ds):
+ YTException.__init__(self, ds=ds)
def __str__(self):
return "Simulation %s has no stopping condition. StopTime or StopCycle should be set." % \
- self.pf
+ self.ds
class YTNotInsideNotebook(YTException):
def __str__(self):
@@ -154,7 +154,7 @@
self.unit = unit
def __str__(self):
- return "This parameter file doesn't recognize %s" % self.unit
+ return "This dataset doesn't recognize %s" % self.unit
class YTUnitOperationError(YTException, ValueError):
def __init__(self, operation, unit1, unit2=None):
@@ -237,8 +237,8 @@
str(self.path)
class YTEllipsoidOrdering(YTException):
- def __init__(self, pf, A, B, C):
- YTException.__init__(self, pf=pf)
+ def __init__(self, ds, A, B, C):
+ YTException.__init__(self, ds=ds)
self._A = A
self._B = B
self._C = C
@@ -335,14 +335,14 @@
return v
class YTObjectNotImplemented(YTException):
- def __init__(self, pf, obj_name):
- self.pf = pf
+ def __init__(self, ds, obj_name):
+ self.ds = ds
self.obj_name = obj_name
def __str__(self):
- v = r"The object type '%s' is not implemented for the parameter file "
+ v = r"The object type '%s' is not implemented for the dataset "
v += r"'%s'."
- return v % (self.obj_name, self.pf)
+ return v % (self.obj_name, self.ds)
class YTRockstarMultiMassNotSupported(YTException):
def __init__(self, mi, ma, ptype):
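Code that inspects these exceptions should now read the ds attribute rather than pf; a minimal sketch:
    from yt.testing import fake_random_ds
    from yt.utilities.exceptions import YTFieldNotFound
    ds = fake_random_ds(16)
    ad = ds.all_data()
    try:
        ad["not_a_real_field"]
    except YTFieldNotFound as err:
        print err       # reports the missing field and the dataset
        print err.ds    # the offending dataset, formerly err.pf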
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/fits_image.py
--- a/yt/utilities/fits_image.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/fits_image.py Sun Jun 15 19:50:51 2014 -0700
@@ -252,7 +252,7 @@
axis_wcs = [[1,2],[0,2],[0,1]]
def construct_image(data_source):
- ds = data_source.pf
+ ds = data_source.ds
axis = data_source.axis
if hasattr(ds, "wcs"):
# This is a FITS dataset
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/flagging_methods.py
--- a/yt/utilities/flagging_methods.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/flagging_methods.py Sun Jun 15 19:50:51 2014 -0700
@@ -35,7 +35,7 @@
self.over_density = over_density
def __call__(self, grid):
- rho = grid["density"] / (grid.pf.refine_by**grid.Level)
+ rho = grid["density"] / (grid.ds.refine_by**grid.Level)
return (rho > self.over_density)
class FlaggingGrid(object):
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/grid_data_format/tests/test_writer.py
--- a/yt/utilities/grid_data_format/tests/test_writer.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/grid_data_format/tests/test_writer.py Sun Jun 15 19:50:51 2014 -0700
@@ -17,7 +17,7 @@
import os
import h5py as h5
from yt.testing import \
- fake_random_pf, assert_equal
+ fake_random_ds, assert_equal
from yt.utilities.grid_data_format.writer import \
write_to_gdf
from yt.frontends.gdf.data_structures import \
@@ -41,10 +41,10 @@
tmpfile = os.path.join(tmpdir, 'test_gdf.h5')
try:
- test_pf = fake_random_pf(64)
- write_to_gdf(test_pf, tmpfile, data_author=TEST_AUTHOR,
+ test_ds = fake_random_ds(64)
+ write_to_gdf(test_ds, tmpfile, data_author=TEST_AUTHOR,
data_comment=TEST_COMMENT)
- del test_pf
+ del test_ds
assert isinstance(load(tmpfile), GDFDataset)
h5f = h5.File(tmpfile, 'r')
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/grid_data_format/writer.py
--- a/yt/utilities/grid_data_format/writer.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/grid_data_format/writer.py Sun Jun 15 19:50:51 2014 -0700
@@ -20,40 +20,40 @@
from yt import __version__ as yt_version
-def write_to_gdf(pf, gdf_path, data_author=None, data_comment=None,
+def write_to_gdf(ds, gdf_path, data_author=None, data_comment=None,
particle_type_name="dark_matter"):
"""
- Write a parameter file to the given path in the Grid Data Format.
+ Write a dataset to the given path in the Grid Data Format.
Parameters
----------
- pf : Dataset object
+ ds : Dataset object
The yt data to write out.
gdf_path : string
The path of the file to output.
"""
- f = _create_new_gdf(pf, gdf_path, data_author, data_comment,
+ f = _create_new_gdf(ds, gdf_path, data_author, data_comment,
particle_type_name)
# now add the fields one-by-one
- for field_name in pf.field_list:
- _write_field_to_gdf(pf, f, field_name, particle_type_name)
+ for field_name in ds.field_list:
+ _write_field_to_gdf(ds, f, field_name, particle_type_name)
# don't forget to close the file.
f.close()
-def save_field(pf, field_name, field_parameters=None):
+def save_field(ds, field_name, field_parameters=None):
"""
- Write a single field associated with the parameter file pf to the
+ Write a single field associated with the dataset ds to the
backup file.
Parameters
----------
- pf : Dataset object
- The yt parameter file that the field is associated with.
+ ds : Dataset object
+ The yt dataset that the field is associated with.
field_name : string
The name of the field to save.
field_parameters : dictionary
@@ -62,30 +62,30 @@
if isinstance(field_name, tuple):
field_name = field_name[1]
- field_obj = pf._get_field_info(field_name)
+ field_obj = ds._get_field_info(field_name)
if field_obj.particle_type:
print("Saving particle fields currently not supported.")
return
- backup_filename = pf.backup_filename
+ backup_filename = ds.backup_filename
if os.path.exists(backup_filename):
# backup file already exists, open it
f = h5py.File(backup_filename, "r+")
else:
# backup file does not exist, create it
- f = _create_new_gdf(pf, backup_filename, data_author=None,
+ f = _create_new_gdf(ds, backup_filename, data_author=None,
data_comment=None,
particle_type_name="dark_matter")
# now save the field
- _write_field_to_gdf(pf, f, field_name, particle_type_name="dark_matter",
+ _write_field_to_gdf(ds, f, field_name, particle_type_name="dark_matter",
field_parameters=field_parameters)
# don't forget to close the file.
f.close()
-def _write_field_to_gdf(pf, fhandle, field_name, particle_type_name,
+def _write_field_to_gdf(ds, fhandle, field_name, particle_type_name,
field_parameters=None):
# add field info to field_types group
@@ -93,7 +93,7 @@
# create the subgroup with the field's name
if isinstance(field_name, tuple):
field_name = field_name[1]
- fi = pf._get_field_info(field_name)
+ fi = ds._get_field_info(field_name)
try:
sg = g.create_group(field_name)
except ValueError:
@@ -120,7 +120,7 @@
# now add actual data, grid by grid
g = fhandle["data"]
- for grid in pf.index.grids:
+ for grid in ds.index.grids:
# set field parameters, if specified
if field_parameters is not None:
@@ -139,7 +139,7 @@
grid_group[field_name] = grid[field_name]
-def _create_new_gdf(pf, gdf_path, data_author=None, data_comment=None,
+def _create_new_gdf(ds, gdf_path, data_author=None, data_comment=None,
particle_type_name="dark_matter"):
# Make sure we have the absolute path to the file first
gdf_path = os.path.abspath(gdf_path)
@@ -170,14 +170,14 @@
# "simulation_parameters" group
###
g = f.create_group("simulation_parameters")
- g.attrs["refine_by"] = pf.refine_by
- g.attrs["dimensionality"] = pf.dimensionality
- g.attrs["domain_dimensions"] = pf.domain_dimensions
- g.attrs["current_time"] = pf.current_time
- g.attrs["domain_left_edge"] = pf.domain_left_edge
- g.attrs["domain_right_edge"] = pf.domain_right_edge
- g.attrs["unique_identifier"] = pf.unique_identifier
- g.attrs["cosmological_simulation"] = pf.cosmological_simulation
+ g.attrs["refine_by"] = ds.refine_by
+ g.attrs["dimensionality"] = ds.dimensionality
+ g.attrs["domain_dimensions"] = ds.domain_dimensions
+ g.attrs["current_time"] = ds.current_time
+ g.attrs["domain_left_edge"] = ds.domain_left_edge
+ g.attrs["domain_right_edge"] = ds.domain_right_edge
+ g.attrs["unique_identifier"] = ds.unique_identifier
+ g.attrs["cosmological_simulation"] = ds.cosmological_simulation
# @todo: Where is this in the yt API?
g.attrs["num_ghost_zones"] = 0
# @todo: Where is this in the yt API?
@@ -185,11 +185,11 @@
# @todo: not yet supported by yt.
g.attrs["boundary_conditions"] = np.array([0, 0, 0, 0, 0, 0], 'int32')
- if pf.cosmological_simulation:
- g.attrs["current_redshift"] = pf.current_redshift
- g.attrs["omega_matter"] = pf.omega_matter
- g.attrs["omega_lambda"] = pf.omega_lambda
- g.attrs["hubble_constant"] = pf.hubble_constant
+ if ds.cosmological_simulation:
+ g.attrs["current_redshift"] = ds.current_redshift
+ g.attrs["omega_matter"] = ds.omega_matter
+ g.attrs["omega_lambda"] = ds.omega_lambda
+ g.attrs["hubble_constant"] = ds.hubble_constant
###
# "field_types" group
@@ -208,21 +208,21 @@
###
# root datasets -- info about the grids
###
- f["grid_dimensions"] = pf.index.grid_dimensions
+ f["grid_dimensions"] = ds.index.grid_dimensions
f["grid_left_index"] = np.array(
- [grid.get_global_startindex() for grid in pf.index.grids]
- ).reshape(pf.index.grid_dimensions.shape[0], 3)
- f["grid_level"] = pf.index.grid_levels
+ [grid.get_global_startindex() for grid in ds.index.grids]
+ ).reshape(ds.index.grid_dimensions.shape[0], 3)
+ f["grid_level"] = ds.index.grid_levels
# @todo: Fill with proper values
- f["grid_parent_id"] = -np.ones(pf.index.grid_dimensions.shape[0])
- f["grid_particle_count"] = pf.index.grid_particle_count
+ f["grid_parent_id"] = -np.ones(ds.index.grid_dimensions.shape[0])
+ f["grid_particle_count"] = ds.index.grid_particle_count
###
# "data" group -- where we should spend the most time
###
g = f.create_group("data")
- for grid in pf.index.grids:
+ for grid in ds.index.grids:
# add group for this grid
grid_group = g.create_group("grid_%010i" % (grid.id - grid._id_offset))
# add group for the particles on this grid
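Mirroring the unit test above, the renamed writer entry point is used like this; the output filename and metadata strings are arbitrary:
    import yt
    from yt.testing import fake_random_ds
    from yt.utilities.grid_data_format.writer import write_to_gdf
    ds = fake_random_ds(64)
    write_to_gdf(ds, "test_gdf.h5", data_author="A. Person",
                 data_comment="round-trip check")
    ds_gdf = yt.load("test_gdf.h5")      # reads back through the GDF frontend
    print ds_gdf.domain_dimensions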
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/initial_conditions.py
--- a/yt/utilities/initial_conditions.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/initial_conditions.py Sun Jun 15 19:50:51 2014 -0700
@@ -17,8 +17,8 @@
from yt.units.yt_array import YTQuantity
class FluidOperator(object):
- def apply(self, pf):
- for g in pf.index.grids: self(g)
+ def apply(self, ds):
+ for g in ds.index.grids: self(g)
class TopHatSphere(FluidOperator):
def __init__(self, radius, center, fields):
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/io_handler.py
--- a/yt/utilities/io_handler.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/io_handler.py Sun Jun 15 19:50:51 2014 -0700
@@ -38,9 +38,9 @@
_dataset_type = None
_particle_reader = False
- def __init__(self, pf):
+ def __init__(self, ds):
self.queue = defaultdict(dict)
- self.pf = pf
+ self.ds = ds
self._last_selector_id = None
self._last_selector_counts = None
@@ -82,8 +82,8 @@
def _read_data_set(self, grid, field):
# check backup file first. if field not found,
# call frontend-specific io method
- backup_filename = grid.pf.backup_filename
- if not grid.pf.read_from_backup:
+ backup_filename = grid.ds.backup_filename
+ if not grid.ds.read_from_backup:
return self._read_data(grid, field)
elif self._field_in_backup(grid, backup_filename, field):
fhandle = h5py.File(backup_filename, 'r')
@@ -125,7 +125,7 @@
fsize = defaultdict(lambda: 0) # COUNT RV
field_maps = defaultdict(list) # ptypes -> fields
chunks = list(chunks)
- unions = self.pf.particle_unions
+ unions = self.ds.particle_unions
# What we need is a mapping from particle types to return types
for field in fields:
ftype, fname = field
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/lib/tests/test_geometry_utils.py
--- a/yt/utilities/lib/tests/test_geometry_utils.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/lib/tests/test_geometry_utils.py Sun Jun 15 19:50:51 2014 -0700
@@ -4,10 +4,10 @@
_fields = ("density", "velocity_x", "velocity_y", "velocity_z")
def test_obtain_rvec():
- pf = fake_random_pf(64, nprocs=8, fields=_fields,
+ ds = fake_random_ds(64, nprocs=8, fields=_fields,
negative = [False, True, True, True])
- dd = pf.sphere((0.5,0.5,0.5), 0.2)
+ dd = ds.sphere((0.5,0.5,0.5), 0.2)
coords = obtain_rvec(dd)
@@ -18,10 +18,10 @@
assert_array_less(0.0, r.min())
def test_obtain_rv_vec():
- pf = fake_random_pf(64, nprocs=8, fields=_fields,
+ ds = fake_random_ds(64, nprocs=8, fields=_fields,
negative = [False, True, True, True])
- dd = pf.h.all_data()
+ dd = ds.all_data()
vels = obtain_rv_vec(dd)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/lib/tests/test_grid_tree.py
--- a/yt/utilities/lib/tests/test_grid_tree.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/lib/tests/test_grid_tree.py Sun Jun 15 19:50:51 2014 -0700
@@ -23,7 +23,7 @@
def setup():
"""Prepare setup specific environment"""
- global test_pf
+ global test_ds
grid_data = [
dict(left_edge=[0.0, 0.0, 0.0], right_edge=[1.0, 1.0, 1.],
@@ -45,24 +45,24 @@
for grid in grid_data:
grid["density"] = \
np.random.random(grid["dimensions"]) * 2 ** grid["level"]
- test_pf = load_amr_grids(grid_data, [16, 16, 16], 1.0)
+ test_ds = load_amr_grids(grid_data, [16, 16, 16], 1.0)
def test_grid_tree():
"""Main test suite for GridTree"""
- grid_tree = test_pf.index.get_grid_tree()
+ grid_tree = test_ds.index.get_grid_tree()
indices, levels, nchild, children = grid_tree.return_tree_info()
- grid_levels = [grid.Level for grid in test_pf.index.grids]
+ grid_levels = [grid.Level for grid in test_ds.index.grids]
- grid_indices = [grid.id - grid._id_offset for grid in test_pf.index.grids]
- grid_nchild = [len(grid.Children) for grid in test_pf.index.grids]
+ grid_indices = [grid.id - grid._id_offset for grid in test_ds.index.grids]
+ grid_nchild = [len(grid.Children) for grid in test_ds.index.grids]
yield assert_equal, levels, grid_levels
yield assert_equal, indices, grid_indices
yield assert_equal, nchild, grid_nchild
- for i, grid in enumerate(test_pf.index.grids):
+ for i, grid in enumerate(test_ds.index.grids):
if grid_nchild[i] > 0:
grid_children = np.array([child.id - child._id_offset
for child in grid.Children])
@@ -72,17 +72,17 @@
def test_find_points():
"""Main test suite for MatchPoints"""
num_points = 100
- randx = np.random.uniform(low=test_pf.domain_left_edge[0],
- high=test_pf.domain_right_edge[0],
+ randx = np.random.uniform(low=test_ds.domain_left_edge[0],
+ high=test_ds.domain_right_edge[0],
size=num_points)
- randy = np.random.uniform(low=test_pf.domain_left_edge[1],
- high=test_pf.domain_right_edge[1],
+ randy = np.random.uniform(low=test_ds.domain_left_edge[1],
+ high=test_ds.domain_right_edge[1],
size=num_points)
- randz = np.random.uniform(low=test_pf.domain_left_edge[2],
- high=test_pf.domain_right_edge[2],
+ randz = np.random.uniform(low=test_ds.domain_left_edge[2],
+ high=test_ds.domain_right_edge[2],
size=num_points)
- point_grids, point_grid_inds = test_pf.index.find_points(randx, randy, randz)
+ point_grids, point_grid_inds = test_ds.index.find_points(randx, randy, randz)
grid_inds = np.zeros((num_points), dtype='int64')
@@ -91,7 +91,7 @@
pos = np.array([ixx, iyy, izz])
pt_level = -1
- for grid in test_pf.index.grids:
+ for grid in test_ds.index.grids:
if np.all(pos >= grid.LeftEdge) and \
np.all(pos <= grid.RightEdge) and \
@@ -102,18 +102,18 @@
yield assert_equal, point_grid_inds, grid_inds
# Test whether find_points works for lists
- point_grids, point_grid_inds = test_pf.index.find_points(randx.tolist(),
+ point_grids, point_grid_inds = test_ds.index.find_points(randx.tolist(),
randy.tolist(),
randz.tolist())
yield assert_equal, point_grid_inds, grid_inds
# Test if find_points works for scalar
ind = random.randint(0, num_points - 1)
- point_grids, point_grid_inds = test_pf.index.find_points(randx[ind],
+ point_grids, point_grid_inds = test_ds.index.find_points(randx[ind],
randy[ind],
randz[ind])
yield assert_equal, point_grid_inds, grid_inds[ind]
# Test if find_points fails properly for non equal indices' array sizes
- yield assert_raises, AssertionError, test_pf.index.find_points, \
+ yield assert_raises, AssertionError, test_ds.index.find_points, \
[0], 1.0, [2, 3]
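A standalone sketch of find_points on the same kind of in-memory AMR layout the fixture above builds (import path as in yt-3.0; the grid layout here is illustrative):
    import numpy as np
    from yt.frontends.stream.api import load_amr_grids
    grid_data = [
        dict(left_edge=[0.0, 0.0, 0.0], right_edge=[1.0, 1.0, 1.0],
             level=0, dimensions=[16, 16, 16]),
        dict(left_edge=[0.25, 0.25, 0.25], right_edge=[0.75, 0.75, 0.75],
             level=1, dimensions=[16, 16, 16]),
    ]
    for grid in grid_data:
        grid["density"] = np.random.random(grid["dimensions"]) * 2 ** grid["level"]
    ds = load_amr_grids(grid_data, [16, 16, 16], 1.0)
    grids, grid_inds = ds.index.find_points(0.5, 0.5, 0.5)
    print grid_inds      # index of the finest grid containing the point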
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/linear_interpolators.py
--- a/yt/utilities/linear_interpolators.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/linear_interpolators.py Sun Jun 15 19:50:51 2014 -0700
@@ -40,7 +40,7 @@
Examples
--------
- ad = pf.h.all_data()
+ ad = ds.all_data()
table_data = np.random.random(64)
interp = UnilinearFieldInterpolator(table_data, (0.0, 1.0), "x",
truncate=True)
@@ -98,7 +98,7 @@
Examples
--------
- ad = pf.h.all_data()
+ ad = ds.all_data()
table_data = np.random.random((64, 64))
interp = BilinearFieldInterpolator(table_data, (0.0, 1.0, 0.0, 1.0),
["x", "y"],
@@ -171,7 +171,7 @@
Examples
--------
- ad = pf.h.all_data()
+ ad = ds.all_data()
table_data = np.random.random((64, 64, 64))
interp = BilinearFieldInterpolator(table_data,
(0.0, 1.0, 0.0, 1.0, 0.0, 1.0),
@@ -235,7 +235,7 @@
my_vals.shape = orig_shape
return my_vals
-def get_centers(pf, filename, center_cols, radius_col, unit='1'):
+def get_centers(ds, filename, center_cols, radius_col, unit='1'):
"""
Return an iterator over EnzoSphere objects generated from the appropriate
columns in *filename*. Optionally specify the *unit* radius is in.
@@ -246,4 +246,4 @@
vals = line.split()
x,y,z = [float(vals[i]) for i in center_cols]
r = float(vals[radius_col])
- yield pf.sphere([x,y,z], r/pf[unit])
+ yield ds.sphere([x,y,z], r/ds[unit])
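The docstring snippets above assume a data object named ad; a complete, throwaway version of the 1D case:
    import numpy as np
    from yt.testing import fake_random_ds
    from yt.utilities.linear_interpolators import UnilinearFieldInterpolator
    ds = fake_random_ds(16)
    ad = ds.all_data()
    table_data = np.random.random(64)
    interp = UnilinearFieldInterpolator(table_data, (0.0, 1.0), "x", truncate=True)
    vals = interp(ad)    # samples the table at the cell x coordinates in ad
    print vals.shape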
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/math_utils.py
--- a/yt/utilities/math_utils.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/math_utils.py Sun Jun 15 19:50:51 2014 -0700
@@ -53,7 +53,7 @@
np.dtype('complex128'): np.complex128,
}
-def periodic_position(pos, pf):
+def periodic_position(pos, ds):
r"""Assuming periodicity, find the periodic position within the domain.
Parameters
@@ -61,21 +61,21 @@
pos : array
An array of floats.
- pf : Dataset
+ ds : Dataset
A simulation static output.
Examples
--------
>>> a = np.array([1.1, 0.5, 0.5])
>>> data = {'Density':np.ones([32,32,32])}
- >>> pf = load_uniform_grid(data, [32,32,32], 1.0)
- >>> ppos = periodic_position(a, pf)
+ >>> ds = load_uniform_grid(data, [32,32,32], 1.0)
+ >>> ppos = periodic_position(a, ds)
>>> ppos
array([ 0.1, 0.5, 0.5])
"""
- off = (pos - pf.domain_left_edge) % pf.domain_width
- return pf.domain_left_edge + off
+ off = (pos - ds.domain_left_edge) % ds.domain_width
+ return ds.domain_left_edge + off
def periodic_dist(a, b, period, periodicity=(True, True, True)):
r"""Find the Euclidean periodic distance between two sets of points.
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/minimal_representation.py
--- a/yt/utilities/minimal_representation.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/minimal_representation.py Sun Jun 15 19:50:51 2014 -0700
@@ -48,9 +48,9 @@
def _update_attrs(self, obj, attr_list):
for attr in attr_list:
setattr(self, attr, getattr(obj, attr, None))
- if hasattr(obj, "pf"):
- self.output_hash = obj.pf._hash()
- self._pf_mrep = obj.pf._mrep
+ if hasattr(obj, "ds"):
+ self.output_hash = obj.ds._hash()
+ self._ds_mrep = obj.ds._mrep
def __init__(self, obj):
self._update_attrs(obj, self._attr_list)
@@ -87,8 +87,8 @@
url = ytcfg.get("yt","hub_url")
if api_key == '': raise YTHubRegisterError
metadata, (final_name, chunks) = self._generate_post()
- if hasattr(self, "_pf_mrep"):
- self._pf_mrep.upload()
+ if hasattr(self, "_ds_mrep"):
+ self._ds_mrep.upload()
for i in metadata:
if isinstance(metadata[i], np.ndarray):
metadata[i] = metadata[i].tolist()
@@ -230,8 +230,8 @@
return (metadata, ("chunks", chunks))
class ImageCollection(object):
- def __init__(self, pf, name):
- self.pf = pf
+ def __init__(self, ds, name):
+ self.ds = ds
self.name = name
self.images = []
self.image_metadata = []
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/parallel_tools/io_runner.py
--- a/yt/utilities/parallel_tools/io_runner.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/parallel_tools/io_runner.py Sun Jun 15 19:50:51 2014 -0700
@@ -30,22 +30,22 @@
YT_TAG_MESSAGE = 317 # Cell 317 knows where to go
class IOCommunicator(BaseIOHandler):
- def __init__(self, pf, wg, pool):
+ def __init__(self, ds, wg, pool):
mylog.info("Initializing IOCommunicator")
- self.pf = pf
+ self.ds = ds
self.wg = wg # We don't need to use this!
self.pool = pool
self.comm = pool.comm
# We read our grids here
self.grids = []
storage = {}
- grids = pf.index.grids.tolist()
+ grids = ds.index.grids.tolist()
grids.sort(key=lambda a:a.filename)
for sto, g in parallel_objects(grids, storage = storage):
sto.result = self.comm.rank
sto.result_id = g.id
self.grids.append(g)
- self._id_offset = pf.index.grids[0]._id_offset
+ self._id_offset = ds.index.grids[0]._id_offset
mylog.info("Reading from disk ...")
self.initialize_data()
mylog.info("Broadcasting ...")
@@ -54,15 +54,15 @@
self.hooks = []
def initialize_data(self):
- pf = self.pf
- fields = [f for f in pf.field_list
- if not pf.field_info[f].particle_type]
- pfields = [f for f in pf.field_list
- if pf.field_info[f].particle_type]
+ ds = self.ds
+ fields = [f for f in ds.field_list
+ if not ds.field_info[f].particle_type]
+ dsields = [f for f in ds.field_list
+ if ds.field_info[f].particle_type]
# Preload is only defined for Enzo ...
- if pf.h.io._dataset_type == "enzo_packed_3d":
- self.queue = pf.h.io.queue
- pf.h.io.preload(self.grids, fields)
+ if ds.index.io._dataset_type == "enzo_packed_3d":
+ self.queue = ds.index.io.queue
+ ds.index.io.preload(self.grids, fields)
for g in self.grids:
for f in fields:
if f not in self.queue[g.id]:
@@ -74,16 +74,16 @@
self.queue = {}
for g in self.grids:
for f in fields + pfields:
- self.queue[g.id][f] = pf.h.io._read(g, f)
+ self.queue[g.id][f] = ds.index.io._read(g, f)
def _read(self, g, f):
- fi = self.pf.field_info[f]
+ fi = self.ds.field_info[f]
if fi.particle_type and g.NumberOfParticles == 0:
# because this gets upcast to float
return np.array([],dtype='float64')
try:
- temp = self.pf.h.io._read_data_set(g, f)
- except:# self.pf.index.io._read_exception as exc:
+ temp = self.ds.index.io._read_data_set(g, f)
+ except:# self.ds.index.io._read_exception as exc:
if fi.not_in_all:
temp = np.zeros(g.ActiveDimensions, dtype='float64')
else:
@@ -116,8 +116,8 @@
class IOHandlerRemote(BaseIOHandler):
_dataset_type = "remote"
- def __init__(self, pf, wg, pool):
- self.pf = pf
+ def __init__(self, ds, wg, pool):
+ self.ds = ds
self.wg = wg # probably won't need
self.pool = pool
self.comm = pool.comm
@@ -129,7 +129,7 @@
dest = self.proc_map[grid.id]
msg = dict(grid_id = grid.id, field = field, op="read")
mylog.debug("Requesting %s for %s from %s", field, grid, dest)
- if self.pf.field_info[field].particle_type:
+ if self.ds.field_info[field].particle_type:
data = np.empty(grid.NumberOfParticles, 'float64')
else:
data = np.empty(grid.ActiveDimensions, 'float64')
@@ -153,24 +153,24 @@
self.comm.comm.send(msg, dest=rank, tag=YT_TAG_MESSAGE)
@contextmanager
-def remote_io(pf, wg, pool):
- original_io = pf.h.io
- pf.h.io = IOHandlerRemote(pf, wg, pool)
+def remote_io(ds, wg, pool):
+ original_io = ds.index.io
+ ds.index.io = IOHandlerRemote(ds, wg, pool)
yield
- pf.h.io.terminate()
- pf.h.io = original_io
+ ds.index.io.terminate()
+ ds.index.io = original_io
def io_nodes(fn, n_io, n_work, func, *args, **kwargs):
from yt.mods import load
pool, wg = ProcessorPool.from_sizes([(n_io, "io"), (n_work, "work")])
rv = None
if wg.name == "work":
- pf = load(fn)
- with remote_io(pf, wg, pool):
- rv = func(pf, *args, **kwargs)
+ ds = load(fn)
+ with remote_io(ds, wg, pool):
+ rv = func(ds, *args, **kwargs)
elif wg.name == "io":
- pf = load(fn)
- io = IOCommunicator(pf, wg, pool)
+ ds = load(fn)
+ io = IOCommunicator(ds, wg, pool)
io.wait()
# We should broadcast the result
rv = pool.comm.mpi_bcast(rv, root=pool['work'].ranks[0])
@@ -180,8 +180,8 @@
# Here is an example of how to use this functionality.
if __name__ == "__main__":
- def gq(pf):
- dd = pf.h.all_data()
+ def gq(ds):
+ dd = ds.all_data()
return dd.quantities["TotalQuantity"]("CellMassMsun")
q = io_nodes("DD0087/DD0087", 8, 24, gq)
mylog.info(q)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/parallel_tools/parallel_analysis_interface.py
--- a/yt/utilities/parallel_tools/parallel_analysis_interface.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/parallel_tools/parallel_analysis_interface.py Sun Jun 15 19:50:51 2014 -0700
@@ -370,7 +370,7 @@
Calls to this function can be nested.
- This should not be used to iterate over parameter files --
+ This should not be used to iterate over datasets --
:class:`~yt.data_objects.time_series.DatasetSeries` provides a much nicer
interface for that.
@@ -383,7 +383,7 @@
each available processor.
storage : dict
This is a dictionary, which will be filled with results during the
- course of the iteration. The keys will be the parameter file
+ course of the iteration. The keys will be the dataset
indices and the values will be whatever is assigned to the *result*
attribute on the storage during iteration.
barrier : bool
@@ -401,7 +401,7 @@
slice plots centered at each.
>>> for c in parallel_objects(centers):
- ... SlicePlot(pf, "x", "Density", center = c).save()
+ ... SlicePlot(ds, "x", "Density", center = c).save()
...
Here's an example of calculating the angular momentum vector of a set of
@@ -410,7 +410,7 @@
>>> storage = {}
>>> for sto, c in parallel_objects(centers, njobs=4, storage=storage):
- ... sp = pf.sphere(c, (100, "kpc"))
+ ... sp = ds.sphere(c, (100, "kpc"))
... sto.result = sp.quantities["AngularMomentumVector"]()
...
>>> for sphere_id, L in sorted(storage.items()):
@@ -1045,11 +1045,11 @@
def get_dependencies(self, fields):
deps = []
- fi = self.pf.field_info
+ fi = self.ds.field_info
for field in fields:
if any(getattr(v,"ghost_zones", 0) > 0 for v in
fi[field].validators): continue
- deps += ensure_list(fi[field].get_dependencies(pf=self.pf).requested)
+ deps += ensure_list(fi[field].get_dependencies(ds=self.ds).requested)
return list(set(deps))
def _initialize_parallel(self):
@@ -1064,15 +1064,15 @@
return False, self.index.grid_collection(self.center,
self.index.grids)
- xax = self.pf.coordinates.x_axis[axis]
- yax = self.pf.coordinates.y_axis[axis]
+ xax = self.ds.coordinates.x_axis[axis]
+ yax = self.ds.coordinates.y_axis[axis]
cc = MPI.Compute_dims(self.comm.size, 2)
mi = self.comm.rank
cx, cy = np.unravel_index(mi, cc)
x = np.mgrid[0:1:(cc[0]+1)*1j][cx:cx+2]
y = np.mgrid[0:1:(cc[1]+1)*1j][cy:cy+2]
- DLE, DRE = self.pf.domain_left_edge.copy(), self.pf.domain_right_edge.copy()
+ DLE, DRE = self.ds.domain_left_edge.copy(), self.ds.domain_right_edge.copy()
LE = np.ones(3, dtype='float64') * DLE
RE = np.ones(3, dtype='float64') * DRE
LE[xax] = x[0] * (DRE[xax]-DLE[xax]) + DLE[xax]
@@ -1088,8 +1088,8 @@
LE, RE = np.array(ds.left_edge), np.array(ds.right_edge)
# We need to establish if we're looking at a subvolume, in which case
# we *do* want to pad things.
- if (LE == self.pf.domain_left_edge).all() and \
- (RE == self.pf.domain_right_edge).all():
+ if (LE == self.ds.domain_left_edge).all() and \
+ (RE == self.ds.domain_right_edge).all():
subvol = False
else:
subvol = True
@@ -1103,7 +1103,7 @@
# this processor is assigned.
# The only way I really know how to do this is to get the level-0
# grid that belongs to this processor.
- grids = self.pf.h.select_grids(0)
+ grids = self.ds.index.select_grids(0)
root_grids = [g for g in grids
if g.proc_num == self.comm.rank]
if len(root_grids) != 1: raise RuntimeError
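Putting the renamed conventions together in a parallel_objects loop, following the docstring example above; run under mpirun after enable_parallelism to actually distribute the work, and note the dataset path is a placeholder:
    import yt
    from yt.utilities.parallel_tools.parallel_analysis_interface import \
        parallel_objects
    yt.enable_parallelism()
    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")   # placeholder path
    centers = [[0.25, 0.25, 0.25], [0.5, 0.5, 0.5], [0.75, 0.75, 0.75]]
    storage = {}
    for sto, c in parallel_objects(centers, storage=storage):
        sp = ds.sphere(c, (100, "kpc"))
        sto.result = sp.quantities["AngularMomentumVector"]()
    for center_id, L in sorted(storage.items()):
        print center_id, L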
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/parameter_file_storage.py
--- a/yt/utilities/parameter_file_storage.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/parameter_file_storage.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,5 +1,5 @@
"""
-A simple CSV database for grabbing and storing parameter files
+A simple CSV database for grabbing and storing datasets
@@ -42,9 +42,9 @@
class ParameterFileStore(object):
"""
This class is designed to be a semi-persistent storage for parameter
- files. By identifying each parameter file with a unique hash, objects
- can be stored independently of parameter files -- when an object is
- loaded, the parameter file is as well, based on the hash. For
+ files. By identifying each dataset with a unique hash, objects
+ can be stored independently of datasets -- when an object is
+ loaded, the dataset is as well, based on the hash. For
storage concerns, only a few hundred will be retained in cache.
"""
@@ -62,7 +62,7 @@
def __init__(self, in_memory=False):
"""
- Create the parameter file database if yt is configured to store them.
+ Create the dataset database if yt is configured to store them.
Otherwise, use read-only settings.
"""
@@ -97,68 +97,68 @@
return os.path.abspath(base_file_name)
return os.path.expanduser("~/.yt/%s" % base_file_name)
- def get_pf_hash(self, hash):
- """ This returns a parameter file based on a hash. """
- return self._convert_pf(self._records[hash])
+ def get_ds_hash(self, hash):
+ """ This returns a dataset based on a hash. """
+ return self._convert_ds(self._records[hash])
- def get_pf_ctid(self, ctid):
- """ This returns a parameter file based on a CurrentTimeIdentifier. """
+ def get_ds_ctid(self, ctid):
+ """ This returns a dataset based on a CurrentTimeIdentifier. """
for h in self._records:
if self._records[h]['ctid'] == ctid:
- return self._convert_pf(self._records[h])
+ return self._convert_ds(self._records[h])
- def _adapt_pf(self, pf):
- """ This turns a parameter file into a CSV entry. """
- return dict(bn=pf.basename,
- fp=pf.fullpath,
- tt=pf.current_time,
- ctid=pf.unique_identifier,
- class_name=pf.__class__.__name__,
- last_seen=pf._instantiated)
+ def _adapt_ds(self, ds):
+ """ This turns a dataset into a CSV entry. """
+ return dict(bn=ds.basename,
+ fp=ds.fullpath,
+ tt=ds.current_time,
+ ctid=ds.unique_identifier,
+ class_name=ds.__class__.__name__,
+ last_seen=ds._instantiated)
- def _convert_pf(self, pf_dict):
- """ This turns a CSV entry into a parameter file. """
- bn = pf_dict['bn']
- fp = pf_dict['fp']
+ def _convert_ds(self, ds_dict):
+ """ This turns a CSV entry into a dataset. """
+ bn = ds_dict['bn']
+ fp = ds_dict['fp']
fn = os.path.join(fp, bn)
- class_name = pf_dict['class_name']
+ class_name = ds_dict['class_name']
if class_name not in output_type_registry:
raise UnknownDatasetType(class_name)
mylog.info("Checking %s", fn)
if os.path.exists(fn):
- pf = output_type_registry[class_name](os.path.join(fp, bn))
+ ds = output_type_registry[class_name](os.path.join(fp, bn))
else:
raise IOError
# This next one is to ensure that we manually update the last_seen
# record *now*, for during write_out.
- self._records[pf._hash()]['last_seen'] = pf._instantiated
- return pf
+ self._records[ds._hash()]['last_seen'] = ds._instantiated
+ return ds
- def check_pf(self, pf):
+ def check_ds(self, ds):
"""
- This will ensure that the parameter file (*pf*) handed to it is
+ This will ensure that the dataset (*ds*) handed to it is
recorded in the storage unit. In doing so, it will update path
and "last_seen" information.
"""
- hash = pf._hash()
+ hash = ds._hash()
if hash not in self._records:
- self.insert_pf(pf)
+ self.insert_ds(ds)
return
- pf_dict = self._records[hash]
- self._records[hash]['last_seen'] = pf._instantiated
- if pf_dict['bn'] != pf.basename \
- or pf_dict['fp'] != pf.fullpath:
+ ds_dict = self._records[hash]
+ self._records[hash]['last_seen'] = ds._instantiated
+ if ds_dict['bn'] != ds.basename \
+ or ds_dict['fp'] != ds.fullpath:
self.wipe_hash(hash)
- self.insert_pf(pf)
+ self.insert_ds(ds)
- def insert_pf(self, pf):
- """ This will insert a new *pf* and flush the database to disk. """
- self._records[pf._hash()] = self._adapt_pf(pf)
+ def insert_ds(self, ds):
+ """ This will insert a new *ds* and flush the database to disk. """
+ self._records[ds._hash()] = self._adapt_ds(ds)
self.flush_db()
def wipe_hash(self, hash):
"""
- This removes a *hash* corresponding to a parameter file from the
+ This removes a *hash* corresponding to a dataset from the
storage.
"""
if hash not in self._records: return
@@ -181,7 +181,7 @@
fn = self._get_db_name()
f = open("%s.tmp" % fn, 'wb')
w = csv.DictWriter(f, _field_names)
- maxn = ytcfg.getint("yt","MaximumStoredPFs") # number written
+ maxn = ytcfg.getint("yt","maximumstoreddatasets") # number written
for h,v in islice(sorted(self._records.items(),
key=lambda a: -a[1]['last_seen']), 0, maxn):
v['hash'] = h
@@ -218,7 +218,7 @@
def find_uuid(self, u):
cursor = self.conn.execute(
- "select pf_path from enzo_outputs where dset_uuid = '%s'" % (
+ "select ds_path from enzo_outputs where dset_uuid = '%s'" % (
u))
# It's a 'unique key'
result = cursor.fetchone()
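Given the renamed ParameterFileStore methods above, a minimal usage sketch might look like the following; the dataset path is a placeholder, and it assumes dataset storage is enabled in the yt configuration so that records are actually written out.

    import yt
    from yt.utilities.parameter_file_storage import ParameterFileStore

    ds = yt.load("DD0019/output_0019")     # placeholder dataset path
    store = ParameterFileStore()           # CSV-backed record of known datasets

    # Ensure this dataset is recorded, updating its path and "last_seen" stamp.
    store.check_ds(ds)

    # A dataset can later be re-instantiated from its hash alone.
    same_ds = store.get_ds_hash(ds._hash())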
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/particle_generator.py
--- a/yt/utilities/particle_generator.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/particle_generator.py Sun Jun 15 19:50:51 2014 -0700
@@ -10,15 +10,15 @@
("io", "particle_position_y"),
("io", "particle_position_z")]
- def __init__(self, pf, num_particles, field_list) :
+ def __init__(self, ds, num_particles, field_list) :
"""
Base class for generating particle fields which may be applied to
streams. Normally this would not be called directly, since it doesn't
- really do anything except allocate memory. Takes a *pf* to serve as the
+ really do anything except allocate memory. Takes a *ds* to serve as the
basis for determining grids, the number of particles *num_particles*,
and a list of fields, *field_list*.
"""
- self.pf = pf
+ self.ds = ds
self.num_particles = num_particles
self.field_list = field_list
self.field_list.append(("io", "particle_index"))
@@ -36,7 +36,7 @@
"\n".join(self.default_fields))
self.index_index = self.field_list.index(("io", "particle_index"))
- self.num_grids = self.pf.index.num_grids
+ self.num_grids = self.ds.index.num_grids
self.NumberOfParticles = np.zeros((self.num_grids), dtype='int64')
self.ParticleGridIndices = np.zeros(self.num_grids + 1, dtype='int64')
@@ -88,7 +88,7 @@
for field in self.field_list:
fi = self.field_list.index(field)
if field in self.field_units:
- tr[field] = self.pf.arr(self.particles[start:end, fi],
+ tr[field] = self.ds.arr(self.particles[start:end, fi],
self.field_units[field])
else:
tr[field] = self.particles[start:end, fi]
@@ -99,7 +99,7 @@
Assigns grids to particles and sets up particle positions. *setup_fields* is
a dict of fields other than the particle positions to set up.
"""
- particle_grids, particle_grid_inds = self.pf.index.find_points(x,y,z)
+ particle_grids, particle_grid_inds = self.ds.index.find_points(x,y,z)
idxs = np.argsort(particle_grid_inds)
self.particles[:,self.posx_index] = x[idxs]
self.particles[:,self.posy_index] = y[idxs]
@@ -138,7 +138,7 @@
>>> particles.map_grid_fields_to_particles(field_map)
"""
pbar = get_pbar("Mapping fields to particles", self.num_grids)
- for i, grid in enumerate(self.pf.index.grids) :
+ for i, grid in enumerate(self.ds.index.grids) :
pbar.update(i)
if self.NumberOfParticles[i] > 0:
start = self.ParticleGridIndices[i]
@@ -161,11 +161,11 @@
def apply_to_stream(self, clobber=False) :
"""
- Apply the particles to a stream parameter file. If particles already exist,
+ Apply the particles to a stream dataset. If particles already exist,
and clobber=False, do not overwrite them, but add the new ones to them.
"""
grid_data = []
- for i,g in enumerate(self.pf.index.grids) :
+ for i,g in enumerate(self.ds.index.grids) :
data = {}
if clobber :
data["number_of_particles"] = self.NumberOfParticles[i]
@@ -178,14 +178,14 @@
# We have particles in this grid
if g.NumberOfParticles > 0 and not clobber:
# Particles already exist
- if field in self.pf.field_list :
+ if field in self.ds.field_list :
# This field already exists
prev_particles = g[field]
else :
# This one doesn't, set the previous particles' field
# values to zero
prev_particles = np.zeros((g.NumberOfParticles))
- prev_particles = self.pf.arr(prev_particles,
+ prev_particles = self.ds.arr(prev_particles,
input_units = self.field_units[field])
data[field] = uconcatenate((prev_particles,
grid_particles[field]))
@@ -196,18 +196,18 @@
# We don't have particles in this grid
data[field] = np.array([], dtype='float64')
grid_data.append(data)
- self.pf.index.update_data(grid_data)
+ self.ds.index.update_data(grid_data)
class FromListParticleGenerator(ParticleGenerator) :
- def __init__(self, pf, num_particles, data) :
+ def __init__(self, ds, num_particles, data) :
r"""
Generate particle fields from array-like lists contained in a dict.
Parameters
----------
- pf : `Dataset`
- The parameter file which will serve as the base for these particles.
+ ds : `Dataset`
+ The dataset which will serve as the base for these particles.
num_particles : int
The number of particles in the dict.
data : dict of NumPy arrays
@@ -222,7 +222,7 @@
>>> mass = np.ones((num_p))
>>> data = {'particle_position_x': posx, 'particle_position_y': posy,
>>> 'particle_position_z': posz, 'particle_mass': mass}
- >>> particles = FromListParticleGenerator(pf, num_p, data)
+ >>> particles = FromListParticleGenerator(ds, num_p, data)
"""
field_list = data.keys()
@@ -230,32 +230,32 @@
y = data.pop(("io", "particle_position_y"))
z = data.pop(("io", "particle_position_z"))
- xcond = np.logical_or(x < pf.domain_left_edge[0],
- x >= pf.domain_right_edge[0])
- ycond = np.logical_or(y < pf.domain_left_edge[1],
- y >= pf.domain_right_edge[1])
- zcond = np.logical_or(z < pf.domain_left_edge[2],
- z >= pf.domain_right_edge[2])
+ xcond = np.logical_or(x < ds.domain_left_edge[0],
+ x >= ds.domain_right_edge[0])
+ ycond = np.logical_or(y < ds.domain_left_edge[1],
+ y >= ds.domain_right_edge[1])
+ zcond = np.logical_or(z < ds.domain_left_edge[2],
+ z >= ds.domain_right_edge[2])
cond = np.logical_or(xcond, ycond)
cond = np.logical_or(zcond, cond)
if np.any(cond) :
raise ValueError("Some particles are outside of the domain!!!")
- ParticleGenerator.__init__(self, pf, num_particles, field_list)
+ ParticleGenerator.__init__(self, ds, num_particles, field_list)
self._setup_particles(x,y,z,setup_fields=data)
class LatticeParticleGenerator(ParticleGenerator) :
- def __init__(self, pf, particles_dims, particles_left_edge,
+ def __init__(self, ds, particles_dims, particles_left_edge,
particles_right_edge, field_list) :
r"""
Generate particles in a lattice arrangement.
Parameters
----------
- pf : `Dataset`
- The parameter file which will serve as the base for these particles.
+ ds : `Dataset`
+ The dataset which will serve as the base for these particles.
particles_dims : int, array-like
The number of particles along each dimension
particles_left_edge : float, array-like
@@ -273,7 +273,7 @@
>>> fields = ["particle_position_x","particle_position_y",
>>> "particle_position_z",
>>> "particle_density","particle_temperature"]
- >>> particles = LatticeParticleGenerator(pf, dims, le, re, fields)
+ >>> particles = LatticeParticleGenerator(ds, dims, le, re, fields)
"""
num_x = particles_dims[0]
@@ -285,8 +285,8 @@
xmax = particles_right_edge[0]
ymax = particles_right_edge[1]
zmax = particles_right_edge[2]
- DLE = pf.domain_left_edge.in_units("code_length").ndarray_view()
- DRE = pf.domain_right_edge.in_units("code_length").ndarray_view()
+ DLE = ds.domain_left_edge.in_units("code_length").ndarray_view()
+ DRE = ds.domain_right_edge.in_units("code_length").ndarray_view()
xcond = (xmin < DLE[0]) or (xmax >= DRE[0])
ycond = (ymin < DLE[1]) or (ymax >= DRE[1])
@@ -296,7 +296,7 @@
if cond :
raise ValueError("Proposed bounds for particles are outside domain!!!")
- ParticleGenerator.__init__(self, pf, num_x*num_y*num_z, field_list)
+ ParticleGenerator.__init__(self, ds, num_x*num_y*num_z, field_list)
dx = (xmax-xmin)/(num_x-1)
dy = (ymax-ymin)/(num_y-1)
@@ -310,15 +310,15 @@
class WithDensityParticleGenerator(ParticleGenerator) :
- def __init__(self, pf, data_source, num_particles, field_list,
+ def __init__(self, ds, data_source, num_particles, field_list,
density_field="density") :
r"""
Generate particles based on a density field.
Parameters
----------
- pf : `Dataset`
- The parameter file which will serve as the base for these particles.
+ ds : `Dataset`
+ The dataset which will serve as the base for these particles.
data_source : `yt.data_objects.api.AMRData`
The data source containing the density field.
num_particles : int
@@ -331,16 +331,16 @@
Examples
--------
- >>> sphere = pf.sphere(pf.domain_center, 0.5)
+ >>> sphere = ds.sphere(ds.domain_center, 0.5)
>>> num_p = 100000
>>> fields = ["particle_position_x","particle_position_y",
>>> "particle_position_z",
>>> "particle_density","particle_temperature"]
- >>> particles = WithDensityParticleGenerator(pf, sphere, num_particles,
+ >>> particles = WithDensityParticleGenerator(ds, sphere, num_particles,
>>> fields, density_field='Dark_Matter_Density')
"""
- ParticleGenerator.__init__(self, pf, num_particles, field_list)
+ ParticleGenerator.__init__(self, ds, num_particles, field_list)
num_cells = len(data_source["x"].flat)
max_mass = (data_source[density_field]*
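Pulling the renamed pieces of this module together, a rough end-to-end sketch might look like the following; it assumes an in-memory (stream) dataset built with load_uniform_grid, since apply_to_stream only modifies stream grids, and the particle count is arbitrary.

    import numpy as np
    import yt
    from yt.utilities.particle_generator import WithDensityParticleGenerator

    # Build a small in-memory dataset to act as the stream basis (arbitrary data).
    arr = np.random.random((64, 64, 64))
    ds = yt.load_uniform_grid({"density": arr}, arr.shape, 1.0)

    field_list = [("io", "particle_position_x"),
                  ("io", "particle_position_y"),
                  ("io", "particle_position_z"),
                  ("io", "particle_gas_density")]
    field_map = {("gas", "density"): ("io", "particle_gas_density")}

    sphere = ds.sphere(ds.domain_center, 0.45)
    particles = WithDensityParticleGenerator(ds, sphere, 10000, field_list)
    particles.assign_indices()                         # unique particle_index values
    particles.map_grid_fields_to_particles(field_map)  # sample gas density onto particles
    particles.apply_to_stream()                        # deposit the particles into ds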
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/tests/test_amr_kdtree.py
--- a/yt/utilities/tests/test_amr_kdtree.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/tests/test_amr_kdtree.py Sun Jun 15 19:50:51 2014 -0700
@@ -31,16 +31,16 @@
{"density": (0.25, 100.0)})]
rc = [fm.flagging_method_registry["overdensity"](8.0)]
ug = load_uniform_grid({"density": data}, domain_dims, 1.0)
- pf = refine_amr(ug, rc, fo, 5)
+ ds = refine_amr(ug, rc, fo, 5)
- kd = AMRKDTree(pf)
+ kd = AMRKDTree(ds)
volume = kd.count_volume()
yield assert_equal, volume, \
- np.prod(pf.domain_right_edge - pf.domain_left_edge)
+ np.prod(ds.domain_right_edge - ds.domain_left_edge)
cells = kd.count_cells()
- true_cells = pf.h.all_data().quantities['TotalQuantity']('Ones')[0]
+ true_cells = ds.all_data().quantities['TotalQuantity']('Ones')[0]
yield assert_equal, cells, true_cells
# This largely reproduces the AMRKDTree.tree.check_tree() functionality
@@ -48,7 +48,7 @@
for node in depth_traverse(kd.tree.trunk):
if node.grid is None:
continue
- grid = pf.index.grids[node.grid - kd._id_offset]
+ grid = ds.index.grids[node.grid - kd._id_offset]
dds = grid.dds
gle = grid.LeftEdge
nle = get_left_edge(node)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/tests/test_flagging_methods.py
--- a/yt/utilities/tests/test_flagging_methods.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/tests/test_flagging_methods.py Sun Jun 15 19:50:51 2014 -0700
@@ -2,11 +2,11 @@
from yt.utilities.flagging_methods import flagging_method_registry
def setup():
- global pf
- pf = fake_random_pf(64)
- pf.h
+ global ds
+ ds = fake_random_ds(64)
+ ds.index
def test_over_density():
od_flag = flagging_method_registry["overdensity"](0.75)
- criterion = (pf.index.grids[0]["density"] > 0.75)
- assert( np.all( od_flag(pf.index.grids[0]) == criterion) )
+ criterion = (ds.index.grids[0]["density"] > 0.75)
+ assert( np.all( od_flag(ds.index.grids[0]) == criterion) )
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/tests/test_particle_generator.py
--- a/yt/utilities/tests/test_particle_generator.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/tests/test_particle_generator.py Sun Jun 15 19:50:51 2014 -0700
@@ -21,7 +21,7 @@
ug = load_uniform_grid(fields, domain_dims, 1.0)
fo = [ic.BetaModelSphere(1.0,0.1,0.5,[0.5,0.5,0.5],{"density":(10.0)})]
rc = [fm.flagging_method_registry["overdensity"](4.0)]
- pf = refine_amr(ug, rc, fo, 3)
+ ds = refine_amr(ug, rc, fo, 3)
# Now generate particles from density
@@ -32,20 +32,20 @@
("io", "particle_gas_density")]
num_particles = 1000000
field_dict = {("gas", "density"): ("io", "particle_gas_density")}
- sphere = pf.sphere(pf.domain_center, 0.45)
+ sphere = ds.sphere(ds.domain_center, 0.45)
- particles1 = WithDensityParticleGenerator(pf, sphere, num_particles, field_list)
+ particles1 = WithDensityParticleGenerator(ds, sphere, num_particles, field_list)
particles1.assign_indices()
particles1.map_grid_fields_to_particles(field_dict)
# Test to make sure we ended up with the right number of particles per grid
particles1.apply_to_stream()
- particles_per_grid1 = [grid.NumberOfParticles for grid in pf.index.grids]
+ particles_per_grid1 = [grid.NumberOfParticles for grid in ds.index.grids]
yield assert_equal, particles_per_grid1, particles1.NumberOfParticles
- particles_per_grid1 = [len(grid["particle_position_x"]) for grid in pf.index.grids]
+ particles_per_grid1 = [len(grid["particle_position_x"]) for grid in ds.index.grids]
yield assert_equal, particles_per_grid1, particles1.NumberOfParticles
- tags = uconcatenate([grid["particle_index"] for grid in pf.index.grids])
+ tags = uconcatenate([grid["particle_index"] for grid in ds.index.grids])
assert(np.unique(tags).size == num_particles)
# Set up a lattice of particles
pdims = np.array([64,64,64])
@@ -58,7 +58,7 @@
new_field_dict = {("gas", "density"): ("io", "particle_gas_density"),
("gas", "temperature"): ("io", "particle_gas_temperature")}
- particles2 = LatticeParticleGenerator(pf, pdims, le, re, new_field_list)
+ particles2 = LatticeParticleGenerator(ds, pdims, le, re, new_field_list)
particles2.assign_indices(function=new_indices)
particles2.map_grid_fields_to_particles(new_field_dict)
@@ -77,39 +77,39 @@
#Test the number of particles again
particles2.apply_to_stream()
- particles_per_grid2 = [grid.NumberOfParticles for grid in pf.index.grids]
+ particles_per_grid2 = [grid.NumberOfParticles for grid in ds.index.grids]
yield assert_equal, particles_per_grid2, particles1.NumberOfParticles+particles2.NumberOfParticles
- [grid.field_data.clear() for grid in pf.index.grids]
- particles_per_grid2 = [len(grid["particle_position_x"]) for grid in pf.index.grids]
+ [grid.field_data.clear() for grid in ds.index.grids]
+ particles_per_grid2 = [len(grid["particle_position_x"]) for grid in ds.index.grids]
yield assert_equal, particles_per_grid2, particles1.NumberOfParticles+particles2.NumberOfParticles
#Test the uniqueness of tags
- tags = np.concatenate([grid["particle_index"] for grid in pf.index.grids])
+ tags = np.concatenate([grid["particle_index"] for grid in ds.index.grids])
tags.sort()
yield assert_equal, tags, np.arange((np.product(pdims)+num_particles))
# Test that the old particles have zero for the new field
old_particle_temps = [grid["particle_gas_temperature"][:particles_per_grid1[i]]
- for i, grid in enumerate(pf.index.grids)]
+ for i, grid in enumerate(ds.index.grids)]
test_zeros = [np.zeros((particles_per_grid1[i]))
- for i, grid in enumerate(pf.index.grids)]
+ for i, grid in enumerate(ds.index.grids)]
yield assert_equal, old_particle_temps, test_zeros
#Now dump all of these particle fields out into a dict
pdata = {}
- dd = pf.h.all_data()
+ dd = ds.all_data()
for field in new_field_list :
pdata[field] = dd[field]
#Test the "from-list" generator and particle field clobber
- particles3 = FromListParticleGenerator(pf, num_particles+np.product(pdims), pdata)
+ particles3 = FromListParticleGenerator(ds, num_particles+np.product(pdims), pdata)
particles3.apply_to_stream(clobber=True)
#Test the number of particles again
- particles_per_grid3 = [grid.NumberOfParticles for grid in pf.index.grids]
+ particles_per_grid3 = [grid.NumberOfParticles for grid in ds.index.grids]
yield assert_equal, particles_per_grid3, particles1.NumberOfParticles+particles2.NumberOfParticles
- particles_per_grid2 = [len(grid["particle_position_z"]) for grid in pf.index.grids]
+ particles_per_grid2 = [len(grid["particle_position_z"]) for grid in ds.index.grids]
yield assert_equal, particles_per_grid3, particles1.NumberOfParticles+particles2.NumberOfParticles
if __name__=="__main__":
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/tests/test_periodicity.py
--- a/yt/utilities/tests/test_periodicity.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/tests/test_periodicity.py Sun Jun 15 19:50:51 2014 -0700
@@ -29,16 +29,16 @@
# Now test the more complicated cases where we're calculating radii based
# on data objects
- pf = fake_random_pf(64)
+ ds = fake_random_ds(64)
# First we test flattened data
- data = pf.h.all_data()
+ data = ds.all_data()
positions = np.array([data[ax] for ax in 'xyz'])
c = [0.1, 0.1, 0.1]
n_tup = tuple([1 for i in range(positions.ndim-1)])
center = np.tile(np.reshape(np.array(c), (positions.shape[0],)+n_tup),(1,)+positions.shape[1:])
- dist = periodic_dist(positions, center, period, pf.periodicity)
+ dist = periodic_dist(positions, center, period, ds.periodicity)
yield assert_almost_equal, dist.min(), 0.00270632938683
yield assert_almost_equal, dist.max(), 0.863319074398
@@ -47,13 +47,13 @@
yield assert_almost_equal, dist.max(), 1.54531407988
# Then grid-like data
- data = pf.index.grids[0]
+ data = ds.index.grids[0]
positions = np.array([data[ax] for ax in 'xyz'])
c = [0.1, 0.1, 0.1]
n_tup = tuple([1 for i in range(positions.ndim-1)])
center = np.tile(np.reshape(np.array(c), (positions.shape[0],)+n_tup),(1,)+positions.shape[1:])
- dist = periodic_dist(positions, center, period, pf.periodicity)
+ dist = periodic_dist(positions, center, period, ds.periodicity)
yield assert_almost_equal, dist.min(), 0.00270632938683
yield assert_almost_equal, dist.max(), 0.863319074398
diff -r f20d58ca2848 -r 67507b4f8da9 yt/utilities/tests/test_selectors.py
--- a/yt/utilities/tests/test_selectors.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/utilities/tests/test_selectors.py Sun Jun 15 19:50:51 2014 -0700
@@ -1,6 +1,6 @@
import numpy as np
from yt.testing import \
- fake_random_pf, assert_equal, assert_array_less, \
+ fake_random_ds, assert_equal, assert_array_less, \
YTArray
from yt.utilities.math_utils import periodic_dist
@@ -11,8 +11,8 @@
def test_sphere_selector():
# generate fake data with a number of non-cubical grids
- pf = fake_random_pf(64, nprocs=51)
- assert(all(pf.periodicity))
+ ds = fake_random_ds(64, nprocs=51)
+ assert(all(ds.periodicity))
# aligned tests
spheres = [ [0.0, 0.0, 0.0],
@@ -21,10 +21,10 @@
[0.25, 0.75, 0.25] ]
for center in spheres:
- data = pf.sphere(center, 0.25)
+ data = ds.sphere(center, 0.25)
# WARNING: this value has not been externally verified
- dd = pf.h.all_data()
- dd.set_field_parameter("center", pf.arr(center, 'code_length'))
+ dd = ds.all_data()
+ dd.set_field_parameter("center", ds.arr(center, 'code_length'))
n_outside = (dd["radius"] >= 0.25).sum()
assert_equal(data["radius"].size + n_outside, dd["radius"].size)
@@ -32,15 +32,15 @@
centers = np.tile(data.center, data['x'].shape[0]).reshape(
data['x'].shape[0], 3).transpose()
dist = periodic_dist(positions, centers,
- pf.domain_right_edge-pf.domain_left_edge,
- pf.periodicity)
+ ds.domain_right_edge-ds.domain_left_edge,
+ ds.periodicity)
# WARNING: this value has not been externally verified
yield assert_array_less, dist, 0.25
def test_ellipsoid_selector():
# generate fake data with a number of non-cubical grids
- pf = fake_random_pf(64, nprocs=51)
- assert(all(pf.periodicity))
+ ds = fake_random_ds(64, nprocs=51)
+ assert(all(ds.periodicity))
ellipsoids = [ [0.0, 0.0, 0.0],
[0.5, 0.5, 0.5],
@@ -50,12 +50,12 @@
# spherical ellipsoid tests
ratios = 3*[0.25]
for center in ellipsoids:
- data = pf.ellipsoid(center, ratios[0], ratios[1], ratios[2],
+ data = ds.ellipsoid(center, ratios[0], ratios[1], ratios[2],
np.array([1., 0., 0.]), 0.)
data.get_data()
- dd = pf.h.all_data()
- dd.set_field_parameter("center", pf.arr(center, "code_length"))
+ dd = ds.all_data()
+ dd.set_field_parameter("center", ds.arr(center, "code_length"))
n_outside = (dd["radius"] >= ratios[0]).sum()
assert_equal(data["radius"].size + n_outside, dd["radius"].size)
@@ -63,15 +63,15 @@
centers = np.tile(data.center,
data.shape[0]).reshape(data.shape[0], 3).transpose()
dist = periodic_dist(positions, centers,
- pf.domain_right_edge-pf.domain_left_edge,
- pf.periodicity)
+ ds.domain_right_edge-ds.domain_left_edge,
+ ds.periodicity)
# WARNING: this value has not been externally verified
yield assert_array_less, dist, ratios[0]
# aligned ellipsoid tests
ratios = [0.25, 0.1, 0.1]
for center in ellipsoids:
- data = pf.ellipsoid(center, ratios[0], ratios[1], ratios[2],
+ data = ds.ellipsoid(center, ratios[0], ratios[1], ratios[2],
np.array([1., 0., 0.]), 0.)
# hack to compute elliptic distance
@@ -82,19 +82,19 @@
centers = np.zeros((3,data["ones"].shape[0]))
centers[i,:] = center[i]
dist2 += (periodic_dist(positions, centers,
- pf.domain_right_edge-pf.domain_left_edge,
- pf.periodicity)/ratios[i])**2
+ ds.domain_right_edge-ds.domain_left_edge,
+ ds.periodicity)/ratios[i])**2
# WARNING: this value has not been externally verified
yield assert_array_less, dist2, 1.0
def test_slice_selector():
# generate fake data with a number of non-cubical grids
- pf = fake_random_pf(64, nprocs=51)
- assert(all(pf.periodicity))
+ ds = fake_random_ds(64, nprocs=51)
+ assert(all(ds.periodicity))
for i,d in enumerate('xyz'):
for coord in np.arange(0.0,1.0,0.1):
- data = pf.slice(i, coord)
+ data = ds.slice(i, coord)
data.get_data()
v = data[d].to_ndarray()
yield assert_equal, data.shape[0], 64**2
@@ -103,8 +103,8 @@
def test_cutting_plane_selector():
# generate fake data with a number of non-cubical grids
- pf = fake_random_pf(64, nprocs=51)
- assert(all(pf.periodicity))
+ ds = fake_random_ds(64, nprocs=51)
+ assert(all(ds.periodicity))
# test cutting plane against orthogonal plane
for i,d in enumerate('xyz'):
@@ -115,9 +115,9 @@
center = np.zeros(3)
center[i] = coord
- data = pf.slice(i, coord)
+ data = ds.slice(i, coord)
data.get_data()
- data2 = pf.cutting(norm, center)
+ data2 = ds.cutting(norm, center)
data2.get_data()
assert(data.shape[0] == data2.shape[0])
diff -r f20d58ca2848 -r 67507b4f8da9 yt/visualization/base_plot_types.py
--- a/yt/visualization/base_plot_types.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/visualization/base_plot_types.py Sun Jun 15 19:50:51 2014 -0700
@@ -29,11 +29,11 @@
if len(self._axes.images) > 0:
self.image = self._axes.images[0]
if frb.axis < 3:
- DD = frb.pf.domain_width
- xax = frb.pf.coordinates.x_axis[frb.axis]
- yax = frb.pf.coordinates.y_axis[frb.axis]
+ DD = frb.ds.domain_width
+ xax = frb.ds.coordinates.x_axis[frb.axis]
+ yax = frb.ds.coordinates.y_axis[frb.axis]
self._period = (DD[xax], DD[yax])
- self.pf = frb.pf
+ self.ds = frb.ds
self.xlim = viewer.xlim
self.ylim = viewer.ylim
if 'OffAxisSlice' in viewer._plot_type:
diff -r f20d58ca2848 -r 67507b4f8da9 yt/visualization/easy_plots.py
--- a/yt/visualization/easy_plots.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/visualization/easy_plots.py Sun Jun 15 19:50:51 2014 -0700
@@ -58,4 +58,4 @@
else: f = self.axes.plot
self.plot = f(self.profile[x_field], self.profile["CellMassMsun"],
**plot_args)
- self.axes.set_xlabel(data_source.pf.field_info[x_field].get_label())
+ self.axes.set_xlabel(data_source.ds.field_info[x_field].get_label())
diff -r f20d58ca2848 -r 67507b4f8da9 yt/visualization/eps_writer.py
--- a/yt/visualization/eps_writer.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/visualization/eps_writer.py Sun Jun 15 19:50:51 2014 -0700
@@ -286,12 +286,12 @@
data = plot._frb
width = plot.width[0]
if units == None:
- units = get_smallest_appropriate_unit(width, plot.pf)
- _xrange = (0, width * plot.pf[units])
- _yrange = (0, width * plot.pf[units])
+ units = get_smallest_appropriate_unit(width, plot.ds)
+ _xrange = (0, width * plot.ds[units])
+ _yrange = (0, width * plot.ds[units])
_xlog = False
_ylog = False
- axis_names = plot.pf.coordinates.axis_name
+ axis_names = plot.ds.coordinates.axis_name
if bare_axes:
_xlabel = ""
_ylabel = ""
@@ -301,7 +301,7 @@
_xlabel = xlabel
else:
if data.axis != 4:
- xax = plot.pf.coordinates.x_axis[data.axis]
+ xax = plot.ds.coordinates.x_axis[data.axis]
_xlabel = '%s (%s)' % (axis_names[xax], units)
else:
_xlabel = 'Image x (%s)' % (units)
@@ -309,7 +309,7 @@
_ylabel = ylabel
else:
if data.axis != 4:
- yax = plot.pf.coordinates.y_axis[data.axis]
+ yax = plot.ds.coordinates.y_axis[data.axis]
_ylabel = '%s (%s)' % (axis_names[yax], units)
else:
_ylabel = 'Image y (%s)' % (units)
@@ -652,7 +652,7 @@
if isinstance(plot, VMPlot):
proj = "Proj" in plot._type_name and \
plot.data._weight is None
- _zlabel = plot.pf.field_info[plot.axis_names["Z"]].get_label(proj)
+ _zlabel = plot.ds.field_info[plot.axis_names["Z"]].get_label(proj)
_zlabel = _zlabel.replace("_","\;")
_zlog = plot.log_field
_zrange = (plot.norm.vmin, plot.norm.vmax)
@@ -660,9 +660,9 @@
proj = plot._plot_type.endswith("Projection") and \
plot.data_source.weight_field == None
if isinstance(plot, PlotWindow):
- _zlabel = plot.pf.field_info[self.field].get_label(proj)
+ _zlabel = plot.ds.field_info[self.field].get_label(proj)
else:
- _zlabel = plot.data_source.pf.field_info[self.field].get_label(proj)
+ _zlabel = plot.data_source.ds.field_info[self.field].get_label(proj)
_zlabel = _zlabel.replace("_","\;")
_zlog = plot.get_log(self.field)[self.field]
if plot.plots[self.field].zmin == None:
@@ -1169,7 +1169,7 @@
Examples
--------
- >>> pc = PlotCollection(pf)
+ >>> pc = PlotCollection(ds)
>>> p = pc.add_slice('Density',0,use_colorbar=False)
>>> p.set_width(0.1,'kpc')
>>> p1 = pc.add_slice('Temperature',0,use_colorbar=False)
@@ -1281,8 +1281,8 @@
#=============================================================================
#if __name__ == "__main__":
-# pf = load('/Users/jwise/runs/16Jul09_Pop3/DD0019/output_0019')
-# pc = PlotCollection(pf)
+# ds = load('/Users/jwise/runs/16Jul09_Pop3/DD0019/output_0019')
+# pc = PlotCollection(ds)
# p = pc.add_slice('Density',0,use_colorbar=False)
# p.set_width(0.1,'kpc')
# p1 = pc.add_slice('Temperature',0,use_colorbar=False)
diff -r f20d58ca2848 -r 67507b4f8da9 yt/visualization/fixed_resolution.py
--- a/yt/visualization/fixed_resolution.py Sun Jun 15 07:46:08 2014 -0500
+++ b/yt/visualization/fixed_resolution.py Sun Jun 15 19:50:51 2014 -0700
@@ -70,7 +70,7 @@
To make a projection and then several images, you can generate a
single FRB and then access multiple fields:
- >>> proj = pf.proj(0, "Density")
+ >>> proj = ds.proj(0, "Density")
>>> frb1 = FixedResolutionBuffer(proj, (0.2, 0.3, 0.4, 0.5),
(1024, 1024))
>>> print frb1["Density"].max()
@@ -82,7 +82,7 @@
def __init__(self, data_source, bounds, buff_size, antialias = True,
periodic = False):
self.data_source = data_source
- self.pf = data_source.pf
+ self.ds = data_source.ds
self.bounds = bounds
self.buff_size = buff_size
self.antialias = antialias
@@ -90,18 +90,18 @@
self.axis = data_source.axis
self.periodic = periodic
- ds = getattr(data_source, "pf", None)
+ ds = getattr(data_source, "ds", None)
if ds is not None:
ds.plots.append(weakref.proxy(self))
# Handle periodicity, just in case
if self.data_source.axis < 3:
- DLE = self.pf.domain_left_edge
- DRE = self.pf.domain_right_edge
+ DLE = self.ds.domain_left_edge
+ DRE = self.ds.domain_right_edge
DD = float(self.periodic)*(DRE - DLE)
axis = self.data_source.axis
- xax = self.pf.coordinates.x_axis[axis]
- yax = self.pf.coordinates.y_axis[axis]
+ xax = self.ds.coordinates.x_axis[axis]
+ yax = self.ds.coordinates.y_axis[axis]
self._period = (DD[xax], DD[yax])
self._edges = ( (DLE[xax], DRE[xax]), (DLE[yax], DRE[yax]) )
@@ -120,7 +120,7 @@
if hasattr(b, "in_units"):
b = float(b.in_units("code_length"))
bounds.append(b)
- buff = self.pf.coordinates.pixelize(self.data_source.axis,
+ buff = self.ds.coordinates.pixelize(self.data_source.axis,
self.data_source, item, bounds, self.buff_size,
int(self.antialias))
# Need to add _period and self.periodic
@@ -138,7 +138,7 @@
fields = getattr(self.data_source, "fields", [])
fields += getattr(self.data_source, "field_data", {}).keys()
for f in fields:
- if f not in exclude and f[0] not in self.data_source.pf.particle_types:
+ if f not in exclude and f[0] not in self.data_source.ds.particle_types:
self[f]
@@ -179,13 +179,13 @@
def _get_info(self, item):
info = {}
ftype, fname = field = self.data_source._determine_fields(item)[0]
- finfo = self.data_source.pf._get_field_info(*field)
+ finfo = self.data_source.ds._get_field_info(*field)
info['data_source'] = self.data_source.__str__()
info['axis'] = self.data_source.axis
info['field'] = str(item)
info['xlim'] = self.bounds[:2]
info['ylim'] = self.bounds[2:]
- info['length_unit'] = self.data_source.pf.length_unit
+ info['length_unit'] = self.data_source.ds.length_unit
info['length_to_cm'] = info['length_unit'].in_cgs().to_ndarray()
info['center'] = self.data_source.center
@@ -330,10 +330,10 @@
@property
def limits(self):
rv = dict(x = None, y = None, z = None)
- xax = self.pf.coordinates.x_axis[self.axis]
- yax = self.pf.coordinates.y_axis[self.axis]
- xn = self.pf.coordinates.axis_name[xax]
- yn = self.pf.coordinates.axis_name[yax]
+ xax = self.ds.coordinates.x_axis[self.axis]
+ yax = self.ds.coordinates.y_axis[self.axis]
+ xn = self.ds.coordinates.axis_name[xax]
+ yn = self.ds.coordinates.axis_name[yax]
rv[xn] = (self.bounds[0], self.bounds[1])
rv[yn] = (self.bounds[2], self.bounds[3])
return rv
@@ -343,13 +343,13 @@
def __init__(self, data_source, radius, buff_size, antialias = True) :
self.data_source = data_source
- self.pf = data_source.pf
+ self.ds = data_source.ds
self.radius = radius
self.buff_size = buff_size
self.antialias = antialias
self.data = {}
- ds = getattr(data_source, "pf", None)
+ ds = getattr(data_source, "ds", None)
if ds is not None:
ds.plots.append(weakref.proxy(self))
@@ -399,10 +399,10 @@
mylog.info("Making a fixed resolutuion buffer of (%s) %d by %d" % \
(item, self.buff_size[0], self.buff_size[1]))
ds = self.data_source
- width = self.pf.arr((self.bounds[1] - self.bounds[0],
+ width = self.ds.arr((self.bounds[1] - self.bounds[0],
self.bounds[3] - self.bounds[2],
self.bounds[5] - self.bounds[4]))
- buff = off_axis_projection(ds.pf, ds.center, ds.normal_vector,
+ buff = off_axis_projection(ds.ds, ds.center, ds.normal_vector,
width, ds.resolution, item,
weight=ds.weight_field, volume=ds.volume,
no_ghost=ds.no_ghost, interpolated=ds.interpolate
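Finally, a small sketch of building a FixedResolutionBuffer against the renamed attributes; it uses fake_random_ds from yt.testing (as the tests above do) rather than a real dataset, and the bounds and buffer size are arbitrary.

    from yt.testing import fake_random_ds
    from yt.visualization.fixed_resolution import FixedResolutionBuffer

    ds = fake_random_ds(64)
    slc = ds.slice(0, 0.5)                                 # slice along x at x = 0.5
    frb = FixedResolutionBuffer(slc, (0.2, 0.3, 0.4, 0.5), (1024, 1024))
    image = frb["density"]                                 # 1024x1024 regridded array
    print(image.max())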