MPI 3.1 Final Release candidate
Index: chap-io/io-2.tex | |
=================================================================== | |
--- chap-io/io-2.tex (revision 2030) | |
+++ chap-io/io-2.tex (working copy) | |
@@ -2,6 +2,7 @@ | |
% Version of Wed Jun 18 20:15:00 PDT 1997 | |
\chapter{I/O} | |
+\mpitermtitleindex{IO@I/O} | |
\label{chap:io-2} | |
\label{sec:io-2} | |
@@ -55,20 +56,20 @@ | |
\label{subsec:io-2:definitions} | |
\begin{description} | |
-\item[file] | |
+\item[\mpitermdef{file}] | |
An \MPI/ file is an ordered collection of typed data items. | |
\MPI/ supports random or sequential access to any integral set of these items. | |
A file is opened collectively by a group of processes. | |
All collective I/O calls on a file are collective over this group. | |
-\item[displacement] | |
-A file \mpiterm{displacement} is an absolute byte position | |
+\item[\mpitermdef{displacement}] | |
+A file \mpitermni{displacement} is an absolute byte position | |
relative to the beginning of a file. | |
-The displacement defines the location where a \mpiterm{view} begins. | |
+The displacement defines the location where a \mpiterm{view}\mpitermindex{file!view} begins. | |
Note that a ``file displacement'' is distinct from a ``typemap displacement.'' | |
-\item[etype] | |
-An \mpiterm{etype} (\mpiterm{elementary} datatype) | |
+\item[\mpitermdef{etype}] | |
+An \mpitermni{etype} (\mpitermni{elementary} datatype\mpitermdefindex{elementary datatype}) | |
is the unit of data access and positioning. | |
It can be any \MPI/ predefined or derived datatype. | |
Derived etypes can be constructed | |
@@ -86,8 +87,8 @@ | |
a data item of that type, | |
or the extent of that type. | |
-\item[filetype] | |
-A \mpiterm{filetype} is the basis for partitioning a file among processes | |
+\item[\mpitermdefni{filetype}\mpitermdefindex{file!filetype}] | |
+A \mpitermni{filetype} is the basis for partitioning a file among processes | |
and defines a template for accessing the file. | |
A filetype is either a single etype or a derived \MPI/ datatype | |
constructed from multiple instances of the same etype. | |
@@ -97,8 +98,8 @@ | |
The displacements in the typemap of the filetype are not required to be distinct, | |
but they must be non-negative and monotonically nondecreasing. | |
-\item[view] | |
-A \mpiterm{view} defines the current set of data visible | |
+\item[\mpitermdef{view}\mpitermdefindex{file!view}] | |
+A \mpitermni{view} defines the current set of data visible | |
and accessible from an open file as an ordered set of etypes. | |
Each process has its own view of the file, | |
defined by three quantities: | |
@@ -132,8 +133,8 @@ | |
\label{fig:io-comp-filetypes} | |
\end{figure} | |
-\item[offset] | |
-An \mpiterm{offset} is a position | |
+\item[\mpitermdef{offset}\mpitermdefindex{file!offset}] | |
+An \mpitermni{offset} is a position | |
in the file | |
relative to the current view, | |
expressed as a count of etypes. | |
@@ -146,24 +147,24 @@ | |
An ``explicit offset'' is an offset that is used as an argument | |
in explicit data access routines. | |
-\item[file size and end of file] | |
-The \mpiterm{size} of an \MPI/ file is measured in bytes from the | |
+\item[\mpitermdefni{file size}\mpitermdefindex{file!size} and \mpitermdef{end of file}\mpitermdefindex{file!end of file}] | |
+The \mpitermni{size} of an \MPI/ file is measured in bytes from the | |
beginning of the file. A newly created file has a size of zero | |
bytes. Using the size as an absolute displacement gives | |
the position of the byte immediately following the last byte in | |
-the file. For any given view, the \mpiterm{end of file} is the | |
+the file. For any given view, the \mpitermni{end of file} is the | |
offset of the first etype accessible in the current view starting | |
after the last byte in the file. | |
-\item[file pointer] | |
-A \mpiterm{file pointer} is an implicit offset maintained by \MPI/. | |
+\item[\mpitermdefni{file pointer}\mpitermdefindex{file!pointer}] | |
+A \mpitermni{file pointer} is an implicit offset maintained by \MPI/. | |
``Individual file pointers'' are file pointers that are local to | |
each process that opened the file. | |
A ``shared file pointer'' is a file pointer that is shared by | |
the group of processes that opened the file. | |
-\item[file handle] | |
-A \mpiterm{file handle} is an opaque object created by \mpifunc{MPI\_FILE\_OPEN} | |
+\item[\mpitermdefni{file handle}\mpitermdefindex{file!handle}] | |
+A \mpitermni{file handle} is an opaque object created by \mpifunc{MPI\_FILE\_OPEN} | |
and freed by \mpifunc{MPI\_FILE\_CLOSE}. | |
All operations on an open file | |
reference the file through the file handle. | |
@@ -171,6 +172,7 @@ | |
\end{description} | |
\section{File Manipulation} | |
+\mpitermtitleindex{file!manipulation} | |
%========================== | |
\label{sec:io-filecntl} | |
@@ -646,6 +648,7 @@ | |
\end{example} | |
\subsection{File Info} | |
+\mpitermtitleindex{info object!file info} | |
%--------------------- | |
\label{sec:io-info} | |
@@ -867,6 +870,7 @@ | |
\end{description} | |
\section{File Views} | |
+\mpitermtitleindexmainsub{file}{view} | |
%=================== | |
\label{sec:io-view} | |
@@ -926,7 +930,7 @@ | |
\begin{rationale} | |
For some sequential files, | |
such as those corresponding to magnetic tapes or streaming network connections, | |
-the \emph{displacement} may not be meaningful. | |
+the \mpiterm{displacement} may not be meaningful. | |
\const{MPI\_DISPLACEMENT\_CURRENT} allows the view to be changed | |
for these types of files. | |
\end{rationale} | |
@@ -957,7 +961,7 @@ | |
\end{figure} | |
\end{users} | |
-An \mpiterm{etype} (\mpiterm{elementary} datatype) | |
+An \mpiterm{etype} (\mpitermni{elementary} datatype\mpitermindex{elementary datatype}) | |
is the unit of data access and positioning. | |
It can be any \MPI/ predefined or derived datatype. | |
Derived etypes can be constructed | |
@@ -971,7 +975,8 @@ | |
\begin{users} | |
In order to ensure interoperability in a heterogeneous environment, | |
-additional restrictions must be observed when constructing the \mpiarg{etype} | |
+additional restrictions must be observed when constructing the | |
+\mpishortarg{etype} | |
(see \sectionref{sec:io-file-interop}). | |
\end{users} | |
@@ -1057,6 +1062,7 @@ | |
committed state. | |
\section{Data Access} | |
+\mpitermtitleindex{file!data access} | |
%==================== | |
\label{sec:io-access} | |
@@ -1080,49 +1086,49 @@ | |
\tiny%%ALLOWLATEX% | |
\begin{tabular}{|l||l||l|l|} | |
\hline | |
-\textbf{positioning} & \textbf{synchronism} & \multicolumn{2}{c|}\textbf{coordination} \\ | |
+\textbf{positioning} & \textbf{synchronism} & \multicolumn{2}{c|}{\textbf{coordination}} \\ | |
\cline{3-4} | |
- & & \emph{noncollective} & \emph{collective} \\ | |
+ & & \emph{noncollective} & \mpiterm{collective} \\ | |
\hline | |
\hline %------------------------------------------------------------- | |
-\emph{explicit} & \emph{blocking} | |
+\emph{explicit} & \mpiterm{blocking} | |
& \mpifunc{MPI\_FILE\_READ\_AT} & \mpifunc{MPI\_FILE\_READ\_AT\_ALL} \\ | |
\emph{offsets} & | |
& \mpifunc{MPI\_FILE\_WRITE\_AT} & \mpifunc{MPI\_FILE\_WRITE\_AT\_ALL} \\ | |
\cline{2-4} | |
-& \emph{nonblocking} | |
+& \mpiterm{nonblocking} | |
& \mpifunc{MPI\_FILE\_IREAD\_AT} & \mpifunc{MPI\_FILE\_IREAD\_AT\_ALL} \\ | |
& & \mpifunc{MPI\_FILE\_IWRITE\_AT} & \mpifunc{MPI\_FILE\_IWRITE\_AT\_ALL} \\ | |
\cline{2-4} | |
-& \emph{split collective} & {N/A} & \mpifunc{MPI\_FILE\_READ\_AT\_ALL\_BEGIN} \\ | |
+& \mpiterm{split collective} & {N/A} & \mpifunc{MPI\_FILE\_READ\_AT\_ALL\_BEGIN} \\ | |
& & & \mpifunc{MPI\_FILE\_READ\_AT\_ALL\_END} \\ | |
& & & \mpifunc{MPI\_FILE\_WRITE\_AT\_ALL\_BEGIN} \\ | |
& & & \mpifunc{MPI\_FILE\_WRITE\_AT\_ALL\_END} \\ | |
\hline %------------------------------------------------------------- | |
-\emph{individual} & \emph{blocking} | |
+\emph{individual} & \mpiterm{blocking} | |
& \mpifunc{MPI\_FILE\_READ} & \mpifunc{MPI\_FILE\_READ\_ALL} \\ | |
\emph{file pointers} & | |
& \mpifunc{MPI\_FILE\_WRITE} & \mpifunc{MPI\_FILE\_WRITE\_ALL} \\ | |
\cline{2-4} | |
-& \emph{nonblocking} | |
+& \mpiterm{nonblocking} | |
& \mpifunc{MPI\_FILE\_IREAD} & \mpifunc{MPI\_FILE\_IREAD\_ALL} \\ | |
& & \mpifunc{MPI\_FILE\_IWRITE} & \mpifunc{MPI\_FILE\_IWRITE\_ALL} \\ | |
\cline{2-4} | |
-& \emph{split collective} & {N/A} & \mpifunc{MPI\_FILE\_READ\_ALL\_BEGIN} \\ | |
+& \mpiterm{split collective} & {N/A} & \mpifunc{MPI\_FILE\_READ\_ALL\_BEGIN} \\ | |
& & & \mpifunc{MPI\_FILE\_READ\_ALL\_END} \\ | |
& & & \mpifunc{MPI\_FILE\_WRITE\_ALL\_BEGIN} \\ | |
& & & \mpifunc{MPI\_FILE\_WRITE\_ALL\_END} \\ | |
\hline %------------------------------------------------------------- | |
-\emph{shared} & \emph{blocking} | |
+\emph{shared} & \mpiterm{blocking} | |
& \mpifunc{MPI\_FILE\_READ\_SHARED} & \mpifunc{MPI\_FILE\_READ\_ORDERED} \\ | |
\emph{file pointer} & | |
& \mpifunc{MPI\_FILE\_WRITE\_SHARED} & \mpifunc{MPI\_FILE\_WRITE\_ORDERED} \\ | |
\cline{2-4} | |
-& \emph{nonblocking} | |
+& \mpiterm{nonblocking} | |
& \mpifunc{MPI\_FILE\_IREAD\_SHARED} & {N/A} \\ | |
& & \mpifunc{MPI\_FILE\_IWRITE\_SHARED} & \\ | |
\cline{2-4} | |
-& \emph{split collective} & {N/A} & \mpifunc{MPI\_FILE\_READ\_ORDERED\_BEGIN} \\ | |
+& \mpiterm{split collective} & {N/A} & \mpifunc{MPI\_FILE\_READ\_ORDERED\_BEGIN} \\ | |
& & & \mpifunc{MPI\_FILE\_READ\_ORDERED\_END} \\ | |
& & & \mpifunc{MPI\_FILE\_WRITE\_ORDERED\_BEGIN} \\ | |
& & & \mpifunc{MPI\_FILE\_WRITE\_ORDERED\_END} \\ | |
@@ -1147,7 +1153,7 @@ | |
%-------------------------- | |
\MPI/ provides three types of positioning for data access routines: | |
-explicit offsets, individual file pointers, and shared file pointers. | |
+\mpitermdef{explicit offsets}, \mpitermdef{individual file pointers}, and \mpitermdef{shared file pointers}. | |
The different positioning methods may be mixed within the same program | |
and do not affect each other. | |
@@ -1185,14 +1191,14 @@ | |
More formally, | |
\[ | |
- new\_file\_offset = old\_file\_offset + | |
+ \textit{new\_file\_offset} = \textit{old\_file\_offset} + | |
\frac{elements(datatype)}{elements(etype)} \times count | |
\] | |
where $count$ is the number of $datatype$ items to be accessed, | |
$elements(X)$ is the number of predefined datatypes in the typemap of $X$, | |
-and $old\_file\_offset$ is | |
+and \textit{old\_file\_offset} is | |
the value of the implicit offset before the call. | |
-The file position, $new\_file\_offset$, is in terms | |
+The file position, \textit{new\_file\_offset}, is in terms | |
of a count of etypes relative to the current view. | |
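As a worked sketch of the file pointer update rule above (the function and argument names here are illustrative, not part of MPI: `elements_datatype` and `elements_etype` stand in for elements(datatype) and elements(etype) in the formula):

```c
/* Illustrative sketch of the implicit file pointer update:
   new_file_offset = old_file_offset
                     + (elements(datatype) / elements(etype)) * count
   The quotient is the number of etypes covered by one datatype item. */
static long new_file_offset(long old_file_offset,
                            long elements_datatype,
                            long elements_etype,
                            long count)
{
    return old_file_offset
           + (elements_datatype / elements_etype) * count;
}
```

For example, with a view whose etype maps to one predefined datatype and an access datatype built from four such etypes, accessing three items advances the pointer by twelve etypes.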
\subsubsection{Synchronism} | |
@@ -1200,14 +1206,14 @@ | |
\MPI/ supports blocking and nonblocking I/O routines. | |
-A \mpiterm{blocking} I/O call will | |
+A \mpitermni{blocking}\mpitermindex{blocking!I/O} I/O call will | |
not return | |
until the I/O request is completed. | |
-A \mpiterm{nonblocking} I/O call initiates an I/O operation, but does not | |
+A \mpitermni{nonblocking}\mpitermindex{nonblocking!I/O} I/O call initiates an I/O operation, but does not | |
wait for it to complete. Given suitable hardware, this allows the | |
transfer of data out of and into the user's buffer to proceed concurrently with | |
-computation. A separate \mpiterm{request complete} call | |
+computation. A separate \mpitermni{request complete}\mpitermindex{request complete!I/O} call | |
(\mpifunc{MPI\_WAIT}, \mpifunc{MPI\_TEST}, or any of their variants) is | |
needed to complete the I/O request, | |
i.e., to confirm that the data has been read or written and that | |
@@ -1353,6 +1359,8 @@ | |
unless an error is raised (or a read reaches the end of file). | |
\subsection{Data Access with Explicit Offsets} | |
+\mpitermtitleindex{explicit offsets} | |
+\mpitermtitleindex{file!data access!explicit offsets} | |
%--------------------------------------------- | |
\label{sec:io-explicit} | |
@@ -1526,6 +1534,8 @@ | |
\mpifunc{MPI\_FILE\_WRITE\_AT\_ALL}. | |
\subsection{Data Access with Individual File Pointers} | |
+\mpitermtitleindex{individual file pointers} | |
+\mpitermtitleindex{file!data access!individual file pointers} | |
%----------------------------------------------------- | |
\label{sec:io-indiv-ptr} | |
@@ -1868,6 +1878,8 @@ | |
is returned in \mpiarg{disp}. | |
\subsection{Data Access with Shared File Pointers} | |
+\mpitermtitleindex{shared file pointers} | |
+\mpitermtitleindex{file!data access!shared file pointers} | |
%------------------------------------------------- | |
\label{sec:io-shared-ptr} | |
@@ -1984,6 +1996,8 @@ | |
of the \mpifunc{MPI\_FILE\_WRITE\_SHARED} interface. | |
\subsubsection{Collective Operations} | |
+\mpitermtitleindex{collective communication!file data access operations} | |
+\mpitermtitleindex{file!data access!collective operations} | |
%-- - - - - - - - - - - - - - - - - - | |
\label{sec:io-shared-ptr-col} | |
@@ -2064,6 +2078,7 @@ | |
\mpifunc{MPI\_FILE\_WRITE\_SHARED} interface. | |
\subsubsection{Seek} | |
+\mpitermtitleindexmainsub{file!data access}{seek} | |
%-- - - - - - - - - | |
\label{sec:io-shared-ptr-seek} | |
@@ -2140,6 +2155,7 @@ | |
\end{users} | |
\subsection{Split Collective Data Access Routines} | |
+\mpitermtitleindexmainsub{file!data access}{split collective} | |
%----------------------------------------------------- | |
\label{sec:io-split-collective} | |
@@ -2421,6 +2437,7 @@ | |
\mpicppemptybind{MPI::File::Write\_ordered\_end(const~void*~buf)}{void} | |
\section{File Interoperability} | |
+\mpitermtitleindexmainsub{file}{interoperability} | |
%============================== | |
\label{sec:io-file-interop} | |
@@ -2512,6 +2529,7 @@ | |
\begin{description} | |
\item[``native'']\index{CONST:native} | |
+\mpitermdefindex{native -- file data representation}% | |
Data in this representation is stored in a file exactly | |
as it is in memory. | |
The advantage of this data representation is that | |
@@ -2534,6 +2552,7 @@ | |
\end{implementors} | |
\item[``internal'']\index{CONST:internal} | |
+\mpitermdefindex{internal -- file data representation}% | |
This data representation can be used for I/O operations in a homogeneous or | |
heterogeneous environment; the implementation will perform type | |
conversions if necessary. The implementation is free to store data in | |
@@ -2559,6 +2578,7 @@ | |
\end{implementors} | |
\item[``external32'']\index{CONST:external32} | |
+\mpitermdefindex{external32 -- file data representation}% | |
This data representation states that read and write operations | |
convert all data from | |
and to the ``external32'' | |
@@ -2841,6 +2861,7 @@ | |
\subsection{User-Defined Data Representations} | |
+\mpitermtitleindex{user-defined data representations} | |
%--------------------------------------------- | |
\label{sec:io-datarep} | |
@@ -3067,6 +3088,7 @@ | |
\mpiarg{position}. | |
An implementation will only invoke the callback routines in this section | |
+\flushline | |
(\mpiarg{read\_conversion\_fn}, \mpiarg{write\_conversion\_fn}, | |
and \mpiarg{dtype\_file\_extent\_fn}) | |
when one of the read or write routines in \sectionref{sec:io-access}, | |
@@ -3139,6 +3161,7 @@ | |
\end{users} | |
\section{Consistency and Semantics} | |
+\mpitermtitleindex{semantics!file consistency} | |
%================================== | |
\label{sec:io-semantics} | |
@@ -3464,6 +3487,7 @@ | |
semantics set forth in \sectionref{sec:nbcoll}. | |
\subsection{Type Matching} | |
+\mpitermtitleindex{matching!type} | |
%------------------------- | |
The type matching rules for I/O mimic the type matching rules | |
@@ -3545,12 +3569,13 @@ | |
when a file is created (see \sectionref{sec:io-info}). | |
\subsection{File Size} | |
+\mpitermtitleindex{file size} | |
%-------------------------- | |
\label{sec:io-consistency-filesize} | |
The size of a file may be increased by writing to the file after the | |
current end of file. The size may also be changed by calling | |
-\MPI/ \mpiterm{size changing} routines, | |
+\MPI/ \mpitermni{size changing}\mpitermindex{size changing!I/O} routines, | |
such as \mpifunc{MPI\_FILE\_SET\_SIZE}. A call to a size changing routine | |
does not necessarily change the file size. For example, calling | |
\mpifunc{MPI\_FILE\_PREALLOCATE} with a size less than the current size does | |
@@ -3559,7 +3584,7 @@ | |
Consider a set of bytes that has been written to a file since | |
the most recent call to a size changing routine, | |
or since \mpifunc{MPI\_FILE\_OPEN} if no such routine has been called. | |
-Let the \mpiterm{high byte} be the byte | |
+Let the \mpitermni{high byte} be the byte | |
in that set with the largest displacement. The file size | |
is the larger of | |
\begin{itemize} | |
@@ -3627,18 +3652,18 @@ | |
%%ENDHEADER | |
\begin{verbatim} | |
/* Process 0 */ | |
-int i, a[10] ; | |
+int i, a[10]; | |
int TRUE = 1; | |
for ( i=0;i<10;i++) | |
- a[i] = 5 ; | |
+ a[i] = 5; | |
MPI_File_open( MPI_COMM_WORLD, "workfile", | |
- MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh0 ) ; | |
-MPI_File_set_view( fh0, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-MPI_File_set_atomicity( fh0, TRUE ) ; | |
-MPI_File_write_at(fh0, 0, a, 10, MPI_INT, &status) ; | |
-/* MPI_Barrier( MPI_COMM_WORLD ) ; */ | |
+ MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh0 ); | |
+MPI_File_set_view( fh0, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+MPI_File_set_atomicity( fh0, TRUE ); | |
+MPI_File_write_at(fh0, 0, a, 10, MPI_INT, &status); | |
+/* MPI_Barrier( MPI_COMM_WORLD ); */ | |
\end{verbatim} | |
%%HEADER | |
%%LANG: C | |
@@ -3647,14 +3672,14 @@ | |
%%ENDHEADER | |
\begin{verbatim} | |
/* Process 1 */ | |
-int b[10] ; | |
+int b[10]; | |
int TRUE = 1; | |
MPI_File_open( MPI_COMM_WORLD, "workfile", | |
- MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh1 ) ; | |
-MPI_File_set_view( fh1, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-MPI_File_set_atomicity( fh1, TRUE ) ; | |
-/* MPI_Barrier( MPI_COMM_WORLD ) ; */ | |
-MPI_File_read_at(fh1, 0, b, 10, MPI_INT, &status) ; | |
+ MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh1 ); | |
+MPI_File_set_view( fh1, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+MPI_File_set_atomicity( fh1, TRUE ); | |
+/* MPI_Barrier( MPI_COMM_WORLD ); */ | |
+MPI_File_read_at(fh1, 0, b, 10, MPI_INT, &status); | |
\end{verbatim} | |
A user may guarantee that the write on process \constskip{0} | |
precedes the read on process \constskip{1} by imposing temporal order | |
@@ -3675,17 +3700,17 @@ | |
%%ENDHEADER | |
\begin{verbatim} | |
/* Process 0 */ | |
-int i, a[10] ; | |
+int i, a[10]; | |
for ( i=0;i<10;i++) | |
- a[i] = 5 ; | |
+ a[i] = 5; | |
MPI_File_open( MPI_COMM_WORLD, "workfile", | |
- MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh0 ) ; | |
-MPI_File_set_view( fh0, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-MPI_File_write_at(fh0, 0, a, 10, MPI_INT, &status ) ; | |
-MPI_File_sync( fh0 ) ; | |
-MPI_Barrier( MPI_COMM_WORLD ) ; | |
-MPI_File_sync( fh0 ) ; | |
+ MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh0 ); | |
+MPI_File_set_view( fh0, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+MPI_File_write_at(fh0, 0, a, 10, MPI_INT, &status ); | |
+MPI_File_sync( fh0 ); | |
+MPI_Barrier( MPI_COMM_WORLD ); | |
+MPI_File_sync( fh0 ); | |
\end{verbatim} | |
%%HEADER | |
@@ -3695,14 +3720,14 @@ | |
%%ENDHEADER | |
\begin{verbatim} | |
/* Process 1 */ | |
-int b[10] ; | |
+int b[10]; | |
MPI_File_open( MPI_COMM_WORLD, "workfile", | |
- MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh1 ) ; | |
-MPI_File_set_view( fh1, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-MPI_File_sync( fh1 ) ; | |
-MPI_Barrier( MPI_COMM_WORLD ) ; | |
-MPI_File_sync( fh1 ) ; | |
-MPI_File_read_at(fh1, 0, b, 10, MPI_INT, &status ) ; | |
+ MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh1 ); | |
+MPI_File_set_view( fh1, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+MPI_File_sync( fh1 ); | |
+MPI_Barrier( MPI_COMM_WORLD ); | |
+MPI_File_sync( fh1 ); | |
+MPI_File_read_at(fh1, 0, b, 10, MPI_INT, &status ); | |
\end{verbatim} | |
The ``sync-barrier-sync'' construct is required because: | |
\begin{itemize} | |
@@ -3726,16 +3751,16 @@ | |
\begin{verbatim} | |
/* ---------------- THIS EXAMPLE IS ERRONEOUS --------------- */ | |
/* Process 0 */ | |
-int i, a[10] ; | |
+int i, a[10]; | |
for ( i=0;i<10;i++) | |
- a[i] = 5 ; | |
+ a[i] = 5; | |
MPI_File_open( MPI_COMM_WORLD, "workfile", | |
- MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh0 ) ; | |
-MPI_File_set_view( fh0, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-MPI_File_write_at(fh0, 0, a, 10, MPI_INT, &status ) ; | |
-MPI_File_sync( fh0 ) ; | |
-MPI_Barrier( MPI_COMM_WORLD ) ; | |
+ MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh0 ); | |
+MPI_File_set_view( fh0, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+MPI_File_write_at(fh0, 0, a, 10, MPI_INT, &status ); | |
+MPI_File_sync( fh0 ); | |
+MPI_Barrier( MPI_COMM_WORLD ); | |
\end{verbatim} | |
%%HEADER | |
@@ -3745,13 +3770,13 @@ | |
%%ENDHEADER | |
\begin{verbatim} | |
/* Process 1 */ | |
-int b[10] ; | |
+int b[10]; | |
MPI_File_open( MPI_COMM_WORLD, "workfile", | |
- MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh1 ) ; | |
-MPI_File_set_view( fh1, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-MPI_Barrier( MPI_COMM_WORLD ) ; | |
-MPI_File_sync( fh1 ) ; | |
-MPI_File_read_at(fh1, 0, b, 10, MPI_INT, &status ) ; | |
+ MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh1 ); | |
+MPI_File_set_view( fh1, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+MPI_Barrier( MPI_COMM_WORLD ); | |
+MPI_File_sync( fh1 ); | |
+MPI_File_read_at(fh1, 0, b, 10, MPI_INT, &status ); | |
/* ---------------- THIS EXAMPLE IS ERRONEOUS --------------- */ | |
\end{verbatim} | |
@@ -3789,12 +3814,12 @@ | |
\begin{verbatim} | |
int a = 4, b, TRUE=1; | |
MPI_File_open( MPI_COMM_WORLD, "myfile", | |
- MPI_MODE_RDWR, MPI_INFO_NULL, &fh ) ; | |
-MPI_File_set_view( fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-/* MPI_File_set_atomicity( fh, TRUE ) ; Use this to set atomic mode. */ | |
-MPI_File_iwrite_at(fh, 10, &a, 1, MPI_INT, &reqs[0]) ; | |
-MPI_File_iread_at(fh, 10, &b, 1, MPI_INT, &reqs[1]) ; | |
-MPI_Waitall(2, reqs, statuses) ; | |
+ MPI_MODE_RDWR, MPI_INFO_NULL, &fh ); | |
+MPI_File_set_view( fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+/* MPI_File_set_atomicity( fh, TRUE ); Use this to set atomic mode. */ | |
+MPI_File_iwrite_at(fh, 10, &a, 1, MPI_INT, &reqs[0]); | |
+MPI_File_iread_at(fh, 10, &b, 1, MPI_INT, &reqs[1]); | |
+MPI_Waitall(2, reqs, statuses); | |
\end{verbatim} | |
For asynchronous data access operations, \MPI/ specifies | |
that the access occurs at any time between the call to the asynchronous | |
@@ -3818,13 +3843,13 @@ | |
\begin{verbatim} | |
int a = 4, b; | |
MPI_File_open( MPI_COMM_WORLD, "myfile", | |
- MPI_MODE_RDWR, MPI_INFO_NULL, &fh ) ; | |
-MPI_File_set_view( fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-/* MPI_File_set_atomicity( fh, TRUE ) ; Use this to set atomic mode. */ | |
-MPI_File_iwrite_at(fh, 10, &a, 1, MPI_INT, &reqs[0]) ; | |
-MPI_File_iread_at(fh, 10, &b, 1, MPI_INT, &reqs[1]) ; | |
-MPI_Wait(&reqs[0], &status) ; | |
-MPI_Wait(&reqs[1], &status) ; | |
+ MPI_MODE_RDWR, MPI_INFO_NULL, &fh ); | |
+MPI_File_set_view( fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+/* MPI_File_set_atomicity( fh, TRUE ); Use this to set atomic mode. */ | |
+MPI_File_iwrite_at(fh, 10, &a, 1, MPI_INT, &reqs[0]); | |
+MPI_File_iread_at(fh, 10, &b, 1, MPI_INT, &reqs[1]); | |
+MPI_Wait(&reqs[0], &status); | |
+MPI_Wait(&reqs[1], &status); | |
\end{verbatim} | |
If atomic mode is set, either \constskip{2} or \constskip{4} will be read | |
into \variable{b}. Again, \MPI/ does not guarantee sequential consistency | |
@@ -3839,12 +3864,12 @@ | |
\begin{verbatim} | |
int a = 4, b; | |
MPI_File_open( MPI_COMM_WORLD, "myfile", | |
- MPI_MODE_RDWR, MPI_INFO_NULL, &fh ) ; | |
-MPI_File_set_view( fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-MPI_File_iwrite_at(fh, 10, &a, 1, MPI_INT, &reqs[0]) ; | |
-MPI_Wait(&reqs[0], &status) ; | |
-MPI_File_iread_at(fh, 10, &b, 1, MPI_INT, &reqs[1]) ; | |
-MPI_Wait(&reqs[1], &status) ; | |
+ MPI_MODE_RDWR, MPI_INFO_NULL, &fh ); | |
+MPI_File_set_view( fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+MPI_File_iwrite_at(fh, 10, &a, 1, MPI_INT, &reqs[0]); | |
+MPI_Wait(&reqs[0], &status); | |
+MPI_File_iread_at(fh, 10, &b, 1, MPI_INT, &reqs[1]); | |
+MPI_Wait(&reqs[1], &status); | |
\end{verbatim} | |
defines the same ordering as: | |
%%HEADER | |
@@ -3855,10 +3880,10 @@ | |
\begin{verbatim} | |
int a = 4, b; | |
MPI_File_open( MPI_COMM_WORLD, "myfile", | |
- MPI_MODE_RDWR, MPI_INFO_NULL, &fh ) ; | |
-MPI_File_set_view( fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ) ; | |
-MPI_File_write_at(fh, 10, &a, 1, MPI_INT, &status ) ; | |
-MPI_File_read_at(fh, 10, &b, 1, MPI_INT, &status ) ; | |
+ MPI_MODE_RDWR, MPI_INFO_NULL, &fh ); | |
+MPI_File_set_view( fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL ); | |
+MPI_File_write_at(fh, 10, &a, 1, MPI_INT, &status ); | |
+MPI_File_read_at(fh, 10, &b, 1, MPI_INT, &status ); | |
\end{verbatim} | |
Since | |
\begin{itemize} | |
@@ -3874,9 +3899,9 @@ | |
%%SKIP | |
%%ENDHEADER | |
\begin{verbatim} | |
-MPI_File_iwrite_all(fh,...) ; | |
-MPI_File_iread_all(fh,...) ; | |
-MPI_Waitall(...) ; | |
+MPI_File_iwrite_all(fh,...); | |
+MPI_File_iread_all(fh,...); | |
+MPI_Waitall(...); | |
\end{verbatim} | |
In addition, as mentioned in \sectionref{sec:io-semantics-nb-collective}, | |
@@ -3889,10 +3914,10 @@ | |
%%SKIP | |
%%ENDHEADER | |
\begin{verbatim} | |
-MPI_File_write_all_begin(fh,...) ; | |
-MPI_File_iread(fh,...) ; | |
-MPI_Wait(fh,...) ; | |
-MPI_File_write_all_end(fh,...) ; | |
+MPI_File_write_all_begin(fh,...); | |
+MPI_File_iread(fh,...); | |
+MPI_Wait(fh,...); | |
+MPI_File_write_all_end(fh,...); | |
\end{verbatim} | |
Recall that constraints governing consistency and semantics are not | |
@@ -3901,10 +3926,10 @@ | |
%%SKIP | |
%%ENDHEADER | |
\begin{verbatim} | |
-MPI_File_write_all_begin(fh,...) ; | |
-MPI_File_read_all_begin(fh,...) ; | |
-MPI_File_read_all_end(fh,...) ; | |
-MPI_File_write_all_end(fh,...) ; | |
+MPI_File_write_all_begin(fh,...); | |
+MPI_File_read_all_begin(fh,...); | |
+MPI_File_read_all_end(fh,...); | |
+MPI_File_write_all_end(fh,...); | |
\end{verbatim} | |
since split collective operations on the same file handle may not overlap | |
(see \sectionref{sec:io-split-collective}). | |
@@ -3914,6 +3939,7 @@ | |
\section{I/O Error Handling} | |
+\mpitermtitleindex{error handling!I/O} | |
%=========================== | |
\label{sec:io-errhandlers} | |
@@ -3975,6 +4001,7 @@ | |
\section{I/O Error Classes} | |
+\mpitermtitleindex{error handling!I/O} | |
%========================== | |
\label{sec:io-errors} | |
@@ -4100,11 +4127,11 @@ | |
/* buffer initialization */ | |
buffer1 = (float *) | |
- malloc(bufcount*sizeof(float)) ; | |
+ malloc(bufcount*sizeof(float)); | |
buffer2 = (float *) | |
- malloc(bufcount*sizeof(float)) ; | |
- compute_buf_ptr = buffer1 ; /* initially point to buffer1 */ | |
- write_buf_ptr = buffer1 ; /* initially point to buffer1 */ | |
+ malloc(bufcount*sizeof(float)); | |
+ compute_buf_ptr = buffer1; /* initially point to buffer1 */ | |
+ write_buf_ptr = buffer1; /* initially point to buffer1 */ | |
/* DOUBLE-BUFFER prolog: | |
@@ -4163,7 +4190,7 @@ | |
25--49, etc.; see Figure~\ref{fig:io-array-file}). | |
To create the filetypes for each process one could | |
use the following C program | |
-(see \section~\ref{sec:io-const-array}): | |
+(see Section~\ref{sec:io-const-array}): | |
\exindex{MPI\_TYPE\_CREATE\_SUBARRAY}% | |
%%HEADER | |
Index: mpi-sys-macs.tex | |
=================================================================== | |
--- mpi-sys-macs.tex (revision 2030) | |
+++ mpi-sys-macs.tex (working copy) | |
@@ -182,7 +182,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
% | |
% For correct usage of \_ together with pdflatex: | |
-% This macro enables that all "_" (underscore) characters in the pfd | |
+% This macro ensures that all "_" (underscore) characters in the pdf
% file are searchable, and that cut&paste will copy the "_" as underscore. | |
% Without the following macro, the \_ is treated in searches and cut&paste | |
% as a " " (space character). | |
@@ -339,6 +339,7 @@ | |
\makeatother | |
\newcommand{\uu}[1]{\underline{\hyperpage{#1}}} | |
+\newcommand{\bold}[1]{\textbf{\hyperpage{#1}}} | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
% Theorems have \em text; we want \rm. The easiest way to fix this, | |
Index: indextomap.pl | |
=================================================================== | |
--- indextomap.pl (revision 2030) | |
+++ indextomap.pl (working copy) | |
@@ -28,10 +28,15 @@ | |
# Remove trailing spaces | |
$name =~ s/\s+//; | |
#print $pagetype . "\n"; | |
- if ($pagetype eq "hyperindexformat{\\uu}") { | |
+ # Only include MPI Function names in the index map file | |
+ # (including all other terms can cause problems for the name mapping | |
+ # program because some indexed names are too common) | |
+ if ($name =~ /^MPI/ && $pagetype eq "hyperindexformat{\\uu}") { | |
if (defined($nameToURL{$name})) { | |
if ($nameToURL{$name} != $page) { | |
print STDERR "Multiple primary definitions for $name\n"; | |
+ # Use only the first definition | |
+ next | |
} | |
} | |
$nameToURL{$name} = $page; | |
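The guard added above keeps only index entries whose names begin with `MPI`. A minimal C analogue of that check (purely illustrative; the script itself is Perl and uses the regex `/^MPI/`):

```c
#include <string.h>

/* C analogue of the Perl guard  $name =~ /^MPI/ :
   keep an index entry only when its name starts with "MPI". */
static int keep_in_index_map(const char *name)
{
    return strncmp(name, "MPI", 3) == 0;
}
```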
Index: chap-context/context.tex | |
=================================================================== | |
--- chap-context/context.tex (revision 2030) | |
+++ chap-context/context.tex (working copy) | |
@@ -1,4 +1,8 @@ | |
\chapter{Groups, Contexts, Communicators, and Caching} | |
+\mpitermtitleindex{group} | |
+\mpitermtitleindex{context} | |
+\mpitermtitleindex{communicator} | |
+\mpitermtitleindex{caching} | |
\label{sec:context} | |
\label{chap:context} | |
@@ -52,20 +56,20 @@ | |
The corresponding | |
concepts that \MPI/ provides, specifically to support robust libraries, are | |
as follows: | |
-\begin{itemize} \item \mpiterm{Contexts} of communication, | |
-\item \mpiterm{Groups} of processes, | |
-\item \mpiterm{Virtual topologies}, | |
-\item \mpiterm{Attribute caching}, | |
-\item \mpiterm{Communicators}. | |
+\begin{itemize} \item \mpitermdefni{Contexts}\mpitermdefindex{context} of communication, | |
+\item \mpitermdefni{Groups} of processes\mpitermdefindex{group}, | |
+\item \mpitermdefni{Virtual topologies}\mpitermdefindex{virtual topology}, | |
+\item \mpitermdefni{Attribute caching}\mpitermdefindex{attribute!caching}, | |
+\item \mpitermdefni{Communicators}\mpitermdefindex{communicator}. | |
\end{itemize} | |
-\mpiterm{Communicators} (see \cite{communicator,zipcode1,Skj93b}) encapsulate all of | |
+\mpitermdefni{Communicators} (see \cite{communicator,zipcode1,Skj93b}) encapsulate all of | |
these ideas in order to provide the appropriate scope for all communication | |
operations in \MPI/. Communicators are divided into two kinds: | |
intra-communicators for operations within a single group of processes and | |
inter-communicators for operations between two groups of | |
processes. | |
-\paragraph{Caching.} Communicators (see | |
+\paragraph{Caching.}\mpitermdefindex{caching} Communicators (see | |
below) provide a ``caching'' mechanism that allows one to | |
associate new attributes with communicators, on | |
par with \MPI/ built-in | |
@@ -74,7 +78,7 @@ | |
virtual-topology functions described in | |
Chapter~\ref{chap:topol} are likely to be supported this way. | |
-\paragraph{Groups.} Groups | |
+\paragraph{Groups.}\mpitermdefindex{group} Groups | |
define an ordered collection of processes, each with a rank, and it is this | |
group that defines the low-level names for inter-process communication (ranks | |
are used for sending and receiving). Thus, groups define a scope for process | |
@@ -83,14 +87,14 @@ | |
communicators in \MPI/, but only communicators can be used in | |
communication operations. | |
-\paragraph{Intra-communicators.} The most commonly used means for message | |
+\paragraph{Intra-communicators.}\mpitermdefindex{intra-communicator} The most commonly used means for message | |
passing in \MPI/ is via intra-communicators. Intra-communicators contain an | |
instance of a group, contexts of communication for both point-to-point and | |
collective communication, and the ability to include virtual topology and | |
other attributes. | |
These features work as follows: | |
\begin{itemize} | |
-\item \mpiterm{Contexts} provide the ability to have separate safe ``universes'' | |
+\item \mpitermdefni{Contexts}\mpitermdefindex{context} provide the ability to have separate safe ``universes'' | |
of message-passing in \MPI/. A context is akin to an additional | |
tag that differentiates messages. | |
The system manages this differentiation process. | |
@@ -104,16 +108,16 @@ | |
communications are also guaranteed not to interfere with collective | |
communications within a single communicator. | |
-\item \mpiterm{Groups} define the participants in the communication (see above) | |
+\item \mpitermdefni{Groups}\mpitermdefindex{group} define the participants in the communication (see above) | |
of a communicator. | |
-\item A \mpiterm{virtual topology} defines a special mapping of the ranks in a | |
+\item A \mpitermdef{virtual topology} defines a special mapping of the ranks in a | |
group to and from a topology. Special constructors for | |
communicators are defined in Chapter~\ref{chap:topol} to provide | |
this feature. Intra-communicators as described in this chapter do | |
not have topologies. | |
-\item \mpiterm{Attributes} define the local information that the user or | |
+\item \mpitermdefni{Attributes}\mpitermdefindex{attribute} define the local information that the user or | |
library has added to a communicator for later reference. | |
\end{itemize} | |
@@ -131,9 +135,9 @@ | |
\end{users} | |
\paragraph{Inter-communicators.} | |
-The discussion has dealt so far with \mpiterm{intra-communication}: | |
+The discussion has dealt so far with \mpitermdef{intra-communication}: | |
communication | |
-within a group. \MPI/ also supports \mpiterm{inter-communication}: | |
+within a group. \MPI/ also supports \mpitermdef{inter-communication}: | |
communication | |
between two non-overlapping groups. When an application is built by composing | |
several parallel modules, it is convenient to allow one module to communicate | |
@@ -144,7 +148,7 @@ | |
not all processes are preallocated at initialization time. In such a | |
situation, it becomes necessary to support communication across ``universes.'' | |
Inter-communication is supported by objects called | |
-\mpiterm{inter-communicators}. | |
+\mpitermdefni{inter-communicators}\mpitermdefindex{inter-communicator}. | |
These objects bind two groups together with communication contexts shared by | |
both groups. | |
For inter-communicators, these features work as follows: | |
@@ -191,11 +195,13 @@ | |
\subsection{Groups} | |
\label{sec:context:groups} | |
-A \mpiterm{group} is an ordered set of process identifiers (henceforth | |
-processes); processes are implementation-dependent objects. Each | |
-process in a group is associated with an integer \mpiterm{rank}. Ranks are | |
+A \mpitermdef{group} is an ordered set of process identifiers (henceforth | |
+processes); processes are | |
+implementation\hskip0pt-\hskip0pt\relax{}dependent | |
+objects. Each | |
+process in a group is associated with an integer \mpitermdef{rank}. Ranks are | |
contiguous and start from zero. | |
-Groups are represented by opaque \mpiterm{group objects}, and hence cannot | |
+Groups are represented by opaque \mpitermdef{group objects}, and hence cannot | |
be directly transferred from one process to another. A group is used | |
within a communicator to describe the participants in a communication | |
``universe'' and to rank such participants (thus giving them unique names | |
@@ -228,7 +234,7 @@ | |
\subsection{Contexts} | |
\label{sec:context:contexts} | |
-A \mpiterm{context} is a property of communicators (defined next) that allows | |
+A \mpitermdef{context} is a property of communicators (defined next) that allows | |
partitioning of the communication space. A message sent in one context cannot | |
be received in another context. Furthermore, where permitted, collective | |
operations are independent of pending point-to-point operations. | |
@@ -281,7 +287,7 @@ | |
communication, and provides machine-independent process addressing through | |
ranks. | |
-Intra-communicators are represented by opaque \mpiterm{intra-communicator | |
+Intra-communicators are represented by opaque \mpitermdef{intra-communicator | |
objects}, and hence cannot be directly transferred from one process to | |
another. | |
@@ -827,8 +833,8 @@ | |
called the \emph{left} and \emph{right} groups. A process in an | |
intercommunicator is a member of either the left or the right group. From the | |
point of view of that process, the | |
-group that the process is a member of is called the \emph{local} group; the | |
-other group (relative to that process) is the \emph{remote} group. | |
+group that the process is a member of is called the \mpiterm{local group}; the | |
+other group (relative to that process) is the \mpiterm{remote group}. | |
The left and right group labels give us a way to describe the two groups in | |
an intercommunicator that is not relative to any particular process (as the | |
local and remote groups are). | |
@@ -1619,15 +1625,15 @@ | |
{ | |
int me, count, count2; | |
void *send_buf, *recv_buf, *send_buf2, *recv_buf2; | |
- MPI_Group MPI_GROUP_WORLD, grprem; | |
+ MPI_Group group_world, grprem; | |
MPI_Comm commslave; | |
static int ranks[] = {0}; | |
... | |
MPI_Init(&argc, &argv); | |
- MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD); | |
+ MPI_Comm_group(MPI_COMM_WORLD, &group_world); | |
MPI_Comm_rank(MPI_COMM_WORLD, &me); /* local */ | |
- MPI_Group_excl(MPI_GROUP_WORLD, 1, ranks, &grprem); /* local */ | |
+ MPI_Group_excl(group_world, 1, ranks, &grprem); /* local */ | |
MPI_Comm_create(MPI_COMM_WORLD, grprem, &commslave); | |
if(me != 0) | |
@@ -1642,7 +1648,7 @@ | |
MPI_Reduce(send_buf2, recv_buf2, count2, | |
MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD); | |
- MPI_Group_free(&MPI_GROUP_WORLD); | |
+ MPI_Group_free(&group_world); | |
MPI_Group_free(&grprem); | |
MPI_Finalize(); | |
return 0; | |
@@ -1686,14 +1692,14 @@ | |
int me; | |
MPI_Request request[2]; | |
MPI_Status status[2]; | |
- MPI_Group MPI_GROUP_WORLD, subgroup; | |
+ MPI_Group group_world, subgroup; | |
int ranks[] = {2, 4, 6, 8}; | |
MPI_Comm the_comm; | |
... | |
MPI_Init(&argc, &argv); | |
- MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD); | |
+ MPI_Comm_group(MPI_COMM_WORLD, &group_world); | |
- MPI_Group_incl(MPI_GROUP_WORLD, 4, ranks, &subgroup); /* local */ | |
+ MPI_Group_incl(group_world, 4, ranks, &subgroup); /* local */ | |
MPI_Group_rank(subgroup, &me); /* local */ | |
MPI_Comm_create(MPI_COMM_WORLD, subgroup, &the_comm); | |
@@ -1711,7 +1717,7 @@ | |
MPI_Comm_free(&the_comm); | |
} | |
- MPI_Group_free(&MPI_GROUP_WORLD); | |
+ MPI_Group_free(&group_world); | |
MPI_Group_free(&subgroup); | |
MPI_Finalize(); | |
return 0; | |
@@ -1846,7 +1852,7 @@ | |
int main(int argc, char *argv[]) | |
{ | |
int ma, mb; | |
- MPI_Group MPI_GROUP_WORLD, group_a, group_b; | |
+ MPI_Group group_world, group_a, group_b; | |
MPI_Comm comm_a, comm_b; | |
static int list_a[] = {0, 1}; | |
@@ -1860,10 +1866,10 @@ | |
... | |
MPI_Init(&argc, &argv); | |
- MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD); | |
+ MPI_Comm_group(MPI_COMM_WORLD, &group_world); | |
- MPI_Group_incl(MPI_GROUP_WORLD, size_list_a, list_a, &group_a); | |
- MPI_Group_incl(MPI_GROUP_WORLD, size_list_b, list_b, &group_b); | |
+ MPI_Group_incl(group_world, size_list_a, list_a, &group_a); | |
+ MPI_Group_incl(group_world, size_list_b, list_b, &group_b); | |
MPI_Comm_create(MPI_COMM_WORLD, group_a, &comm_a); | |
MPI_Comm_create(MPI_COMM_WORLD, group_b, &comm_b); | |
@@ -1888,7 +1894,7 @@ | |
MPI_Comm_free(&comm_b); | |
MPI_Group_free(&group_a); | |
MPI_Group_free(&group_b); | |
- MPI_Group_free(&MPI_GROUP_WORLD); | |
+ MPI_Group_free(&group_world); | |
MPI_Finalize(); | |
return 0; | |
} | |
@@ -1970,8 +1976,8 @@ | |
All communication described thus far has involved | |
communication between processes that are members of the same group. This type | |
-of communication is called ``int\-ra-com\-mun\-i\-cat\-ion'' and the | |
-communicator used is called an ``intra-communicator,'' as we have noted | |
+of communication is called ``\mpitermdefni{int\-ra-com\-mun\-i\-cat\-ion}\mpitermdefindex{intra-communication}'' and the | |
+communicator used is called an ``\mpitermdef{intra-communicator},'' as we have noted | |
earlier in the chapter. | |
In modular and multi-disciplinary applications, different process groups | |
@@ -1984,10 +1990,10 @@ | |
process group that uses the services of one or more servers. It is again most | |
natural to specify the target process by rank within the target group in these | |
applications. This type of communication is called | |
-``int\-er-com\-mun\-i\-cat\-ion'' and the communicator used is called an | |
-``inter-communicator,'' as introduced earlier. | |
+``\mpitermdefni{int\-er-com\-mun\-i\-cat\-ion}\mpitermdefindex{inter-communication}'' and the communicator used is called an | |
+``\mpitermdef{inter-communicator},'' as introduced earlier. | |
-An int\-er-com\-mun\-i\-cat\-ion is a point-to-point communication | |
+An \mpitermdef{inter-communication} is a point-to-point communication | |
between processes in different groups. The group containing a process that | |
initiates an int\-er-com\-mun\-i\-cat\-ion operation is called the ``local | |
group,'' that is, the sender in a send and the receiver in a receive. The | |
@@ -2055,7 +2061,7 @@ | |
\mpiterm{source} is the rank of the process in the local group. | |
For intra-communicators, \mpiterm{group} is the communicator group | |
(remote=local), \mpiterm{source} is the rank of the process in this group, | |
-and \mpiterm{send context} and \mpiterm{receive context} are identical. | |
+and \mpitermni{send context}\mpitermindex{send!context} and \mpitermni{receive context}\mpitermindex{receive!context} are identical. | |
A group | |
can be | |
represented by a rank-to-absolute-address translation table. | |
@@ -2088,7 +2094,7 @@ | |
Assume that \textbf{Q} posts a receive with an explicit source argument | |
using the inter-communicator. Then \textbf{Q} matches | |
-\mpiterm{receive\_context} to the message context and source argument to the | |
+\textbf{receive\_context} to the message context and source argument to the | |
message source. | |
The same algorithm is appropriate for intra-communicators as well. | |
@@ -2469,10 +2475,11 @@ | |
\end{verbatim} | |
\section{Caching} | |
+\mpitermtitleindex{caching} | |
\label{sec:caching} | |
\MPI/ provides a ``caching'' facility that allows an application to | |
-attach arbitrary pieces of information, called \mpiterm{attributes}, to | |
+attach arbitrary pieces of information, called \mpitermdefni{attributes}\mpitermdefindex{attribute}, to | |
three kinds of \MPI/ objects, communicators, | |
windows, and datatypes. | |
More precisely, the caching | |
@@ -3422,6 +3429,7 @@ | |
\section{Naming Objects} | |
+\mpitermtitleindex{naming objects} | |
\label{sec:ei-naming} | |
There are many occasions on which it would be useful to allow a user | |
@@ -3638,6 +3646,7 @@ | |
\section{Formalizing the Loosely Synchronous Model} | |
+\mpitermtitleindex{loosely synchronous model} | |
\label{sec:formalizing} | |
In this section, we make further statements about the loosely | |
synchronous model, with particular attention to intra-communication. | |
@@ -3666,13 +3675,13 @@ | |
\label{sec:context:models-of-execution} | |
In the loosely synchronous model, transfer of control to a | |
-\mpiterm{parallel procedure} is effected by having each executing process | |
+\mpitermdef{parallel procedure} is effected by having each executing process | |
invoke the procedure. The invocation is a collective operation: it | |
is executed by all processes in the execution group, and invocations | |
are similarly ordered at all processes. However, the invocation need | |
not be synchronized. | |
-We say that a parallel procedure is \emph{active} in a process if the process | |
+We say that a parallel procedure is \mpiterm{active} in a process if the process | |
belongs to a group that may collectively execute the procedure, and | |
some member of that group is currently executing the procedure code. | |
If a parallel procedure is active in a process, then this process may | |
Index: instr.tex | |
=================================================================== | |
--- instr.tex (revision 2030) | |
+++ instr.tex (working copy) | |
@@ -107,6 +107,7 @@ | |
For compatibility with the widest variety of editors, text should be | |
wrapped to fit within 80 columns. Edits should avoid reflowing text as | |
this complicates identifying real changes in the document. | |
+The document follows the conventions and spelling of American English. | |
\subsection{Basic Formatting} | |
@@ -132,9 +133,9 @@ | |
addition, the use of the page reference is often misleading, as the | |
page number will refer to the beginning of the section but the typical | |
use of these is to point to the entire body of the section, which | |
-almost certainly spans multiple pages. | |
-See | |
-Section~\ref{sec:not-to-do} for some examples. | |
+almost certainly spans multiple pages, or to a specific page within | |
+the section, but not necessarily the first page of the section. | |
+See Section~\ref{sec:not-to-do} for some examples. | |
LaTeX defines many environments and many others may be added to | |
LaTeX. To preserve a uniform appearance, use only these environments | |
@@ -354,6 +355,14 @@ | |
consistent style is used in the document. | |
Do not use $\ldots$ for this purpose. | |
+\subsection{\texorpdfstring{\MPI/}{MPI} Terms} | |
+\label{sec:terms} | |
+The \MPI/ document introduces a number of terms, such as ``message'' | |
+and ``send buffer.'' | |
+These should be marked as \verb+\mpitermdef{message}+ where the term | |
+is first used and defined, and as \verb+\mpiterm{message}+ at | |
+subsequent uses. These macros will generate an index entry for each use. | |
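For instance, a passage introducing a term might be marked up as follows (the sentence text here is illustrative, not taken from the standard):

```latex
% At the point where the term is introduced and defined:
A \mpitermdef{message} consists of an envelope and the transferred data.
% At later, non-defining uses:
the incoming \mpiterm{message} is matched against posted receives
```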
+ | |
\subsection{Standard Names} | |
\label{sec:standard-names} | |
@@ -937,9 +946,20 @@ | |
It is incorrect to use an en dash as punctuation, and it is incorrect | |
to use a hyphen in a number range. | |
+\subsection{Using Quotes}\label{sec:using-quotes} | |
+TeX uses the characters \verb+`+ and \verb+'+ for open and close | |
+quotes respectively. | |
+For double quotes, use two of the appropriate quote characters; do \emph{not} use | |
+the double quote character \verb+"+. | |
+ | |
+Because this document uses the standards of American English, | |
+punctuation after a quoted phrase is placed within the quotation. | |
+For example, ``terma,'' ``termb,'' and ``termc.'' | |
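In source form, the conventions above look like this (the phrases are illustrative):

```latex
% Correct: TeX open/close quotes, punctuation placed inside:
a ``caching'' mechanism; the terms ``group,'' ``context,'' and ``rank.''
% Incorrect: the double-quote character, punctuation outside:
a "caching" mechanism; the terms "group", "context", and "rank".
```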
+ | |
+ | |
\subsection{And so on} | |
The above are not the only things to avoid --- the recommendation is | |
-to stick to the commands outline in this document and to contact the | |
-document master/editor if something else is needed. | |
+to stick to the commands outlined in this document and to contact the | |
+document master or editor if something else is needed. | |
\end{document} | |
Index: chap-coll/coll.tex | |
=================================================================== | |
--- chap-coll/coll.tex (revision 2030) | |
+++ chap-coll/coll.tex (working copy) | |
@@ -1,4 +1,6 @@ | |
\chapter{Collective Communication} | |
+\mpitermtitleindex{communication!collective} | |
+\mpitermtitleindex{collective communication} | |
\label{sec:coll} | |
\label{chap:coll} | |
\label{chap:collective-2} | |
@@ -211,13 +213,15 @@ | |
Instead, there is a communicator argument. | |
Groups and communicators are discussed in full detail in Chapter~\ref{chap:context}. | |
For the purposes of this chapter, it is sufficient to know that there | |
-are two types of communicators: \emph{intra-communicators} and \emph{inter-communicators}. | |
+are two types of communicators: \mpitermni{intra-communicators}\mpitermindex{intra-communicator} | |
+and \mpitermni{inter-communicators}\mpitermindex{inter-communicator}. | |
An intracommunicator can be thought of as an identifier for a single group of processes | |
linked with a context. An intercommunicator identifies two distinct groups of processes | |
linked with a context. | |
\subsection{Specifics for Intracommunicator Collective Operations} | |
+\mpitermtitleindex{intra-communicator!collective operations} | |
All processes in the group identified by the intracommunicator must call | |
the collective routine. | |
@@ -252,6 +256,7 @@ | |
\end{users} | |
\subsection{Applying Collective Operations to Intercommunicators} | |
+\mpitermtitleindex{inter-communicator!collective operations} | |
\label{sec:collective-2} | |
\label{sec:MPI-coll} | |
@@ -357,6 +362,7 @@ | |
\end{figure} | |
\subsection{Specifics for Intercommunicator Collective Operations} | |
+\mpitermtitleindex{inter-communicator!collective operations} | |
All processes in both groups identified by the intercommunicator must call | |
the collective routine. | |
@@ -392,6 +398,7 @@ | |
\end{rationale} | |
\section{Barrier Synchronization} | |
+\mpitermtitleindex{barrier synchronization} | |
\label{sec:coll-barrier} | |
\begin{funcdef}{MPI\_BARRIER(comm)} | |
@@ -418,6 +425,7 @@ | |
have entered the call. | |
\section{Broadcast} | |
+\mpitermtitleindex{broadcast} | |
\label{sec:coll-broadcast} | |
\begin{funcdef}{MPI\_BCAST(buffer, count, datatype, root, comm)} | |
@@ -503,6 +511,7 @@ | |
\end{example} | |
\section{Gather} | |
+\mpitermtitleindex{gather} | |
\label{sec:coll-gather} | |
\begin{funcdef}{MPI\_GATHER(sendbuf, sendcount, sendtype, recvbuf, | |
@@ -542,12 +551,13 @@ | |
\begin{mpicodeblock} | |
MPI\_Send(sendbuf, sendcount, sendtype, root , ...), | |
\end{mpicodeblock} | |
-and the | |
+\noindent and the | |
root had executed \mpicode{n} calls to | |
\begin{mpicodeblock} | |
MPI\_Recv(recvbuf+i$\cdot$ recvcount$\cdot$ extent(recvtype), recvcount, recvtype, i,...), | |
\end{mpicodeblock} | |
-where \mpicode{extent(recvtype)} is the type extent obtained from a call to | |
+\noindent where | |
+\mpicode{extent(recvtype)} is the type extent obtained from a call to | |
\mpicode{MPI\_Type\_get\_extent}. | |
An alternative description is that the \mpicode{n} messages sent by the | |
@@ -648,7 +658,7 @@ | |
\begin{mpicodeblock} | |
MPI\_Send(sendbuf, sendcount, sendtype, root, ...), | |
\end{mpicodeblock} | |
-and the root executes \mpicode{n} receives, | |
+\noindent and the root executes \mpicode{n} receives, | |
\begin{mpicodeblock} | |
MPI\_Recv(recvbuf+displs[j]$\cdot$ extent(recvtype), recvcounts[j], | |
recvtype, i, ...). | |
@@ -1118,6 +1128,7 @@ | |
\end{example} | |
\section{Scatter} | |
+\mpitermtitleindex{scatter} | |
\label{sec:coll-scatter} | |
\begin{funcdef}{MPI\_SCATTER(sendbuf, sendcount, sendtype, recvbuf, | |
@@ -1154,7 +1165,7 @@ | |
MPI\_Send(sendbuf+i$\cdot$ sendcount$\cdot$ extent(sendtype), sendcount, | |
sendtype, i,...), | |
\end{mpicodeblock} | |
-and each process executed a receive, | |
+\noindent and each process executed a receive, | |
\begin{mpicodeblock} | |
MPI\_Recv(recvbuf, recvcount, recvtype, i,...). | |
\end{mpicodeblock} | |
@@ -1265,7 +1276,7 @@ | |
MPI\_Send(sendbuf+displs[i]$\cdot$ extent(sendtype), sendcounts[i], | |
sendtype, i,...), | |
\end{mpicodeblock} | |
-and each process executed a receive, | |
+\noindent and each process executed a receive, | |
\begin{mpicodeblock} | |
MPI\_Recv(recvbuf, recvcount, recvtype, i,...). | |
\end{mpicodeblock} | |
@@ -1456,6 +1467,7 @@ | |
\end{figure} | |
\section{Gather-to-all} | |
+\mpitermtitleindex{gather-to-all} | |
\label{sec:coll-allcast} | |
\begin{funcdef}{MPI\_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, | |
@@ -1645,6 +1657,7 @@ | |
\end{example} | |
\section{All-to-All Scatter/Gather} | |
+\mpitermtitleindex{all-to-all} | |
\label{sec:coll-alltoall} | |
\begin{funcdef}{MPI\_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, | |
@@ -1694,10 +1707,10 @@ | |
MPI\_Send(sendbuf+i$\cdot$ sendcount$\cdot$ | |
extent(sendtype),sendcount,sendtype,i, ...), | |
\end{mpicodeblock} | |
-and a receive from every other process | |
+\noindent and a receive from every other process | |
with a call to, | |
\begin{mpicodeblock} | |
-MPI\_Recv(recvbuf+i$\cdot$ recvcount$\cdot$ extent(recvtype),recvcount,recvtype,i,...). | |
+MPI\_Recv(recvbuf+i $\cdot$ recvcount $\cdot$ extent(recvtype),recvcount,recvtype,i,...). | |
\end{mpicodeblock} | |
All arguments | |
@@ -1797,7 +1810,7 @@ | |
\begin{mpicodeblock} | |
MPI\_Send(sendbuf+sdispls[i]$\cdot$ extent(sendtype),sendcounts[i],sendtype,i,...), | |
\end{mpicodeblock} | |
-and received a message from every other process with | |
+\noindent and received a message from every other process with | |
a call to | |
\begin{mpicodeblock} | |
MPI\_Recv(recvbuf+rdispls[i]$\cdot$ extent(recvtype),recvcounts[i],recvtype,i,...). | |
@@ -1906,7 +1919,7 @@ | |
\begin{mpicodeblock} | |
MPI\_Send(sendbuf+sdispls[i],sendcounts[i],sendtypes[i] ,i,...), | |
\end{mpicodeblock} | |
-and received a message from every other process with a call to | |
+\noindent and received a message from every other process with a call to | |
\begin{mpicodeblock} | |
MPI\_Recv(recvbuf+rdispls[i],recvcounts[i],recvtypes[i] ,i,...). | |
\end{mpicodeblock} | |
@@ -1938,6 +1951,7 @@ | |
\end{rationale} | |
\section{Global Reduction Operations} | |
+\mpitermtitleindex{reduction operations} | |
\label{global-reduce} | |
The functions in this section perform a global reduce operation | |
@@ -1959,6 +1973,7 @@ | |
functionality of a reduce and of a scatter operation. | |
\subsection{Reduce} | |
+\mpitermtitleindex{reduce} | |
\label{subsec:coll-reduce} | |
\begin{funcdef}{MPI\_REDUCE(sendbuf, recvbuf, count, datatype, op, | |
@@ -2094,6 +2109,7 @@ | |
buffer arguments are significant at the root. | |
\subsection{Predefined Reduction Operations} | |
+\mpitermtitleindexsubmain{predefined}{reduction operations} | |
\label{coll-predefined-op} | |
The following predefined operations are supplied for \mpifunc{MPI\_REDUCE} | |
@@ -2585,6 +2601,7 @@ | |
\end{rationale} | |
\subsection{User-Defined Reduction Operations} | |
+\mpitermtitleindexsubmain{user-defined}{reduction operations} | |
\label{subsec:coll-user-ops} | |
\begin{funcdef}{MPI\_OP\_CREATE(user\_fn, commute, op)} | |
@@ -2612,7 +2629,7 @@ | |
commutative and associative. If \mpiarg{commute} $=$ \mpicode{false}, | |
then the order of operands is fixed and is defined to be in ascending, process | |
rank order, beginning with process zero. The order of evaluation can be | |
-changed, talking advantage of the associativity of the operation. If | |
+changed, taking advantage of the associativity of the operation. If | |
\mpiarg{commute} $=$ \mpicode{true} then the order of evaluation can be changed, | |
taking advantage of commutativity and associativity. | |
@@ -2637,7 +2654,7 @@ | |
to \mpifunc{MPI\_REDUCE}. | |
The user reduce function should be written such that the following | |
holds: | |
-Let \mpicode{u[0], $\ldots$, u[len-1]} be the \mpiarg{len} elements in the | |
+Let \mpicode{u[0], $\ldots$ , u[len-1]} be the \mpiarg{len} elements in the | |
communication buffer described by the arguments \mpiarg{invec, len} | |
and \mpiarg{datatype} when the function is invoked; | |
let \mpicode{v[0], $\ldots$ , v[len-1]} be \mpiarg{len} elements in the | |
@@ -2652,7 +2669,7 @@ | |
Informally, we can think of | |
\mpiarg{invec} and \mpiarg{inoutvec} as arrays of \mpiarg{len} elements that | |
\mpiarg{user\_fn} | |
-is combining. The result of the reduction over-writes values in | |
+is combining. The result of the reduction overwrites values in | |
\mpiarg{inoutvec}, hence the name. Each invocation of the function results in | |
the pointwise evaluation of the reduce operator on \mpiarg{len} | |
elements: | |
@@ -2883,6 +2900,7 @@ | |
\end{example} | |
\subsection{All-Reduce} | |
+\mpitermtitleindex{all-reduce} | |
\label{subsec:coll-all-reduce} | |
\MPI/ includes | |
@@ -2979,6 +2997,7 @@ | |
\end{example} | |
\subsection{Process-Local Reduction} | |
+\mpitermtitleindex{reduction operations!process-local} | |
\label{subsec:coll-process-local-reduction} % Sect. 5.9.7 p.173 NEWsection | |
The functions in this section are of importance to library implementors | |
@@ -3028,6 +3047,7 @@ | |
\section{Reduce-Scatter} | |
+\mpitermtitleindex{reduce-scatter} | |
\label{sec:coll-reduce-scatter} | |
\MPI/ includes variants of the reduce operations where the result is scattered | |
@@ -3182,8 +3202,10 @@ | |
\end{rationale} | |
\section{Scan} | |
+\mpitermtitleindexmainsub{reduction operations}{scan} | |
\label{sec:coll-scan} | |
\subsection{Inclusive Scan} | |
+\mpitermtitleindexsubmain{inclusive}{scan} | |
\begin{funcdef}{MPI\_SCAN(sendbuf, recvbuf, count, datatype, op, comm)} | |
\funcarg{\IN}{sendbuf}{starting address of send buffer (choice)} | |
@@ -3224,6 +3246,7 @@ | |
This operation is invalid for intercommunicators. | |
\subsection{Exclusive Scan} | |
+\mpitermtitleindexsubmain{exclusive}{scan} | |
\label{subsec:coll-exscan} | |
\label{coll-exscan} % Sect. 5.11.2 p.175 newlabel | |
@@ -3395,6 +3418,7 @@ | |
\section{Nonblocking Collective Operations} | |
+\mpitermtitleindex{collective communication!nonblocking} | |
\label{sec:nbcoll} | |
As described in Section~\ref{sec:pt2pt-nonblock}, performance of many | |
applications can be improved by overlapping communication and | |
@@ -3542,6 +3566,7 @@ | |
\subsection{Nonblocking Barrier Synchronization} | |
+\mpitermtitleindex{barrier synchronization!nonblocking} | |
\label{sec:nbcoll-ibarrier} | |
\begin{funcdef}{MPI\_IBARRIER(comm , request)} | |
@@ -3575,6 +3600,7 @@ | |
\subsection{Nonblocking Broadcast} | |
+\mpitermtitleindex{broadcast!nonblocking} | |
\label{sec:nbcoll-ibroadcast} | |
\begin{funcdef}{MPI\_IBCAST(buffer, count, datatype, root, comm, request)} | |
@@ -3629,6 +3655,7 @@ | |
\subsection{Nonblocking Gather} | |
+\mpitermtitleindex{gather!nonblocking} | |
\label{sec:nbcoll-igather} | |
\begin{funcdef2}{MPI\_IGATHER(sendbuf, sendcount, sendtype, recvbuf, | |
@@ -3694,6 +3721,7 @@ | |
Section~\ref{sec:coll-gather}). | |
\subsection{Nonblocking Scatter} | |
+\mpitermtitleindex{scatter!nonblocking} | |
\label{sec:nbcoll-iscatter} | |
\begin{funcdef2}{MPI\_ISCATTER(sendbuf, sendcount, sendtype, recvbuf, | |
@@ -3756,6 +3784,7 @@ | |
\subsection{Nonblocking Gather-to-all} | |
+\mpitermtitleindex{gather-to-all!nonblocking} | |
\label{sec:nbcoll-iallcast} | |
\begin{funcdef2}{MPI\_IALLGATHER(sendbuf, sendcount, sendtype, recvbuf, | |
@@ -3815,6 +3844,7 @@ | |
\subsection{Nonblocking All-to-All Scatter/Gather} | |
+\mpitermtitleindex{all-to-all!nonblocking} | |
\label{sec:nbcoll-ialltoall} | |
\begin{funcdef}{MPI\_IALLTOALL(sendbuf, sendcount, sendtype, recvbuf, | |
@@ -3912,6 +3942,7 @@ | |
Section~\ref{sec:coll-alltoall}). | |
\subsection{Nonblocking Reduce} | |
+\mpitermtitleindex{reduce!nonblocking} | |
\label{subsec:nbcoll-ireduce} | |
\begin{funcdef}{MPI\_IREDUCE(sendbuf, recvbuf, count, datatype, op, | |
@@ -3961,6 +3992,7 @@ | |
\subsection{Nonblocking All-Reduce} | |
+\mpitermtitleindex{all-reduce!nonblocking} | |
\label{subsec:nbcoll-all-reduce} | |
\begin{funcdef}{MPI\_IALLREDUCE(sendbuf, recvbuf, count, datatype, op, comm, request)} | |
@@ -3990,6 +4022,7 @@ | |
\subsection{Nonblocking Reduce-Scatter with Equal Blocks} | |
+\mpitermtitleindex{reduce-scatter!nonblocking} | |
\label{sec:nbcoll-reduce-scatter-block} | |
\begin{funcdef}{MPI\_IREDUCE\_SCATTER\_BLOCK(sendbuf, recvbuf, recvcount, | |
@@ -4015,6 +4048,7 @@ | |
\subsection{Nonblocking Reduce-Scatter} | |
+\mpitermtitleindex{reduce-scatter!nonblocking} | |
\label{sec:nbcoll-reduce-scatter} | |
\begin{funcdef}{MPI\_IREDUCE\_SCATTER(sendbuf, recvbuf, recvcounts, | |
@@ -4044,6 +4078,7 @@ | |
\subsection{Nonblocking Inclusive Scan} | |
+\mpitermtitleindex{inclusive scan!nonblocking} | |
\label{subsec:nbcoll-iscan} | |
\begin{funcdef}{MPI\_ISCAN(sendbuf, recvbuf, count, datatype, op, comm, request)} | |
@@ -4068,6 +4103,7 @@ | |
Section~\ref{sec:coll-scan}). | |
\subsection{Nonblocking Exclusive Scan} | |
+\mpitermtitleindex{exclusive scan!nonblocking} | |
\label{subsec:nbcoll-iexscan} | |
@@ -4093,6 +4129,7 @@ | |
Section~\ref{subsec:coll-exscan}). | |
\section{Correctness} | |
+\mpitermtitleindex{collective communication!correctness} | |
\label{coll:correct} | |
A correct, portable program must invoke collective communications so | |
Index: chap-terms/terms-2.tex | |
=================================================================== | |
--- chap-terms/terms-2.tex (revision 2030) | |
+++ chap-terms/terms-2.tex (working copy) | |
@@ -60,10 +60,11 @@ | |
\item | |
The names of certain actions have been standardized. In | |
-particular, \mpiterm{Create} creates a new object, \mpiterm{Get} | |
-retrieves information about an object, \mpiterm{Set} sets | |
-this information, \mpiterm{Delete} deletes information, | |
-\mpiterm{Is} asks whether or not an object has a certain property. | |
+particular, \mpitermdefni{Create}\mpitermdefindex{create -- in function names} | |
+creates a new object, \mpitermdefni{Get}\mpitermdefindex{get -- in function names} | |
+retrieves information about an object, \mpitermdefni{Set}\mpitermdefindex{set -- in function names} sets | |
+this information, \mpitermdefni{Delete}\mpitermdefindex{delete -- in function names} deletes information, | |
+\mpitermdefni{Is}\mpitermdefindex{is -- in function names} asks whether or not an object has a certain property. | |
\end{enumerate} | |
@@ -71,8 +72,8 @@ | |
some \MPI/ functions (that were defined during the \MPII/ process) | |
violate these rules | |
in several cases. The most common exceptions are the omission | |
-of the \mpiterm{Class} name from the routine and the omission of | |
-the \mpiterm{Action} where one can be inferred. | |
+of the \mpitermdefni{Class}\mpitermdefindex{class -- in function names} name from the routine and the omission of | |
+the \mpitermdefni{Action}\mpitermdefindex{action -- in function names} where one can be inferred. | |
\mpi/ identifiers are limited to 30 characters (31 with the profiling | |
interface). This is done to avoid exceeding the limit on some | |
@@ -189,38 +190,38 @@ | |
terms are used. | |
\begin{description} | |
-\item[\mpiterm{nonblocking}] A procedure is nonblocking if it may return before the associated | |
+\item[\mpitermdef{nonblocking}] A procedure is nonblocking if it may return before the associated | |
operation completes, and before the user is allowed to reuse | |
resources (such as buffers) specified in the call. | |
The word complete is used with respect to operations and any associated requests and/or | |
-communications. An \mpiterm{operation completes} when the user is allowed | |
+communications. An \mpitermdef{operation completes}\mpitermdefindex{completes -- operation} when the user is allowed | |
to reuse resources, and any output buffers have been updated. | |
-\item[\mpiterm{blocking}] A procedure is blocking if return from the procedure indicates the user | |
+\item[\mpitermdef{blocking}] A procedure is blocking if return from the procedure indicates the user | |
is allowed to reuse resources specified in the call. | |
-\item[\mpiterm{local}] | |
+\item[\mpitermdef{local}] | |
A procedure is local if completion of the procedure depends only on the | |
local executing process. | |
-\item[\mpiterm{non-local}] | |
+\item[\mpitermdef{non-local}] | |
A procedure is non-local if completion of the operation may require | |
the execution of some \MPI/ procedure on another process. Such an | |
operation may require | |
communication occurring with another user process. | |
-\item[\mpiterm{collective}] | |
+\item[\mpitermdef{collective}] | |
A procedure is collective if all processes in a process group need to invoke the procedure. A | |
collective call may or may not be synchronizing. | |
Collective calls over the same communicator | |
must be executed in the same order by all members of the process | |
group. | |
-\item[\mpiterm{predefined}] | |
+\item[\mpitermdefni{predefined}\mpitermdefindex{predefined datatype}] | |
A predefined datatype is a datatype with a predefined (constant) name | |
(such as \consti{MPI\_INT}, \consti{MPI\_FLOAT\_INT}, or \consti{MPI\_PACKED}) | |
or a datatype constructed with \mpifunc{MPI\_TYPE\_CREATE\_F90\_INTEGER}, | |
\mpifunc{MPI\_TYPE\_CREATE\_F90\_REAL}, or | |
-\mpifunc{MPI\_TYPE\_CREATE\_F90\_COMPLEX}. The former are \mpiterm{named} | |
-whereas the latter are \mpiterm{unnamed}. | |
-\item[\mpiterm{derived}] | |
+\mpifunc{MPI\_TYPE\_CREATE\_F90\_COMPLEX}. The former are \mpitermdefni{named}\mpitermdefindex{named datatype} | |
+whereas the latter are \mpitermdefni{unnamed}\mpitermdefindex{unnamed datatype}. | |
+\item[\mpitermdefni{derived}\mpitermdefindex{derived datatype}] | |
A derived datatype is any datatype that is not predefined. | |
-\item[\mpiterm{portable}] | |
+\item[\mpitermdefni{portable}\mpitermdefindex{portable datatype}] | |
A datatype is portable if it is a predefined datatype, or it is derived | |
from a portable datatype using only the type constructors | |
\mpifunc{MPI\_TYPE\_CONTIGUOUS}, \mpifunc{MPI\_TYPE\_VECTOR}, | |
@@ -242,7 +243,7 @@ | |
These displacements are unlikely to be chosen correctly if they fit | |
data layout on one memory, but are used for data layouts on another | |
process, running on a processor with a different architecture. | |
-\item[\mpiterm{equivalent}] | |
+\item[\mpitermdefni{equivalent}\mpitermdefindex{equivalent datatypes}] | |
Two datatypes are equivalent if they appear to have been created with | |
the same sequence of calls (and arguments) and thus have the same | |
typemap. Two equivalent datatypes do not necessarily have the same | |
@@ -254,14 +255,15 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{Opaque Objects} | |
+\mpitermtitleindex{opaque objects} | |
\label{terms:opaque-objects} | |
-\MPI/ manages \mpiterm{system memory} that is used for buffering | |
+\MPI/ manages \mpitermdef{system memory}\mpitermdefindex{memory!system} that is used for buffering | |
messages and for storing internal representations of various \MPI/ objects | |
such as groups, communicators, datatypes, etc. | |
This memory is not directly accessible to the user, and objects stored | |
-there are \mpiterm{opaque}: their size and shape is not visible to the | |
-user. Opaque objects are accessed via \mpiterm{handles}, which exist in | |
+there are \mpitermdefni{opaque}: their size and shape are not visible to the
+user. Opaque objects are accessed via \mpitermdef{handles}, which exist in
user space. \MPI/ procedures that operate on opaque objects are | |
passed handle arguments to access these objects. | |
In addition to their use by \MPI/ calls for object access, handles can | |
@@ -413,6 +415,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{Array Arguments} | |
+\mpitermtitleindex{array arguments} | |
\label{subsec:array-arguments} | |
An \MPI/ call may need an argument that is an array of opaque objects, | |
@@ -431,6 +434,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{State} | |
+\mpitermtitleindex{state} | |
\MPI/ procedures use at various places arguments with \emph{state} types. The | |
values of such a data type are all identified by names, and no operation is | |
@@ -441,6 +445,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{Named Constants} | |
+\mpitermtitleindex{constants} | |
\label{subsec:named-constants} | |
\MPI/ procedures sometimes assign a special meaning to a special value of a | |
@@ -500,23 +505,19 @@ | |
\end{obeylines} | |
The constants that cannot be used in initialization expressions or | |
-assignments in Fortran are: | |
+assignments in Fortran are as follows: | |
\begin{obeylines} | |
-\tt%%ALLOWLATEX% | |
- MPI\_BOTTOM | |
- MPI\_STATUS\_IGNORE | |
- MPI\_STATUSES\_IGNORE | |
- MPI\_ERRCODES\_IGNORE | |
- MPI\_IN\_PLACE | |
- MPI\_ARGV\_NULL | |
- MPI\_ARGVS\_NULL | |
- MPI\_UNWEIGHTED | |
- MPI\_WEIGHTS\_EMPTY | |
+\sf\small%%ALLOWLATEX% | |
+ MPI\_BOTTOM\cdeclindex{MPI\_BOTTOM} | |
+ MPI\_STATUS\_IGNORE\cdeclindex{MPI\_STATUS\_IGNORE} | |
+ MPI\_STATUSES\_IGNORE\cdeclindex{MPI\_STATUSES\_IGNORE} | |
+ MPI\_ERRCODES\_IGNORE\cdeclindex{MPI\_ERRCODES\_IGNORE} | |
+ MPI\_IN\_PLACE\cdeclindex{MPI\_IN\_PLACE} | |
+ MPI\_ARGV\_NULL\cdeclindex{MPI\_ARGV\_NULL} | |
+ MPI\_ARGVS\_NULL\cdeclindex{MPI\_ARGVS\_NULL} | |
+ MPI\_UNWEIGHTED\cdeclindex{MPI\_UNWEIGHTED} | |
+ MPI\_WEIGHTS\_EMPTY\cdeclindex{MPI\_WEIGHTS\_EMPTY} | |
\end{obeylines} | |
-\cdeclindex{MPI\_BOTTOM}\cdeclindex{MPI\_STATUS\_IGNORE}% | |
-\cdeclindex{MPI\_STATUSES\_IGNORE}\cdeclindex{MPI\_ERRCODES\_IGNORE}% | |
-\cdeclindex{MPI\_IN\_PLACE}\cdeclindex{MPI\_ARGV\_NULL}% | |
-\cdeclindex{MPI\_ARGVS\_NULL}\cdeclindex{MPI\_UNWEIGHTED}% | |
\begin{implementors} | |
In Fortran the implementation of these special constants may require the | |
@@ -535,6 +536,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{Choice} | |
+\mpitermtitleindex{choice} | |
\label{sub:choice} | |
\MPI/ functions sometimes use arguments with a \emph{choice} (or union) data | |
@@ -559,11 +561,15 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{Absolute Addresses and Relative Address Displacements} | |
+\mpitermtitleindexsubmain{absolute}{addresses} | |
+\mpitermtitleindex{relative displacement} | |
+\mpitermtitleindex{addresses!relative displacement} | |
\label{subsec:displacement} | |
-Some \MPI/ procedures use \emph{address} arguments that represent an absolute | |
-address in the calling program, | |
-or relative displacement arguments that represent differences of two absolute addresses. | |
+Some \MPI/ procedures use \mpitermni{address} arguments that represent an | |
+\mpitermni{absolute address} in the calling program, | |
+or \mpitermni{relative displacement} | |
+arguments that represent differences of two absolute addresses. | |
The datatype of such arguments | |
\cdeclmainindex{MPI\_Aint}% | |
is \type{MPI\_Aint} in C and \ftype{INTEGER | |
@@ -583,6 +589,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{File Offsets} | |
+\mpitermtitleindexmainsub{file}{offset} | |
For I/O there is a need to give the size, displacement, and offset | |
into a file. These quantities can easily be larger than 32 bits which | |
@@ -599,6 +606,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{Counts} | |
+\mpitermtitleindex{counts} | |
\label{subsec:count} | |
As described above, \MPI/ defines types (e.g., \type{MPI\_Aint}) to | |
@@ -630,6 +638,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\section{Language Binding} | |
+\mpitermtitleindex{language binding} | |
\label{subsec:lang} | |
\label{subsec:binding} | |
@@ -662,6 +671,8 @@ | |
where any of the letters are either upper or lower case. | |
\subsection{Deprecated and Removed Names and Functions} | |
+\mpitermtitleindex{deprecated names and functions} | |
+\mpitermtitleindex{removed names and functions} | |
\label{sec:deprecated} | |
A number of chapters refer to deprecated or replaced \MPI/ constructs. | |
These are | |
@@ -769,6 +780,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{Fortran Binding Issues} | |
+\mpitermtitleindex{Fortran -- language binding} | |
\label{sec:fortran-binding-issues} | |
Originally, | |
@@ -786,9 +798,7 @@ | |
capitals. Programs must not declare names, e.g., for | |
variables, subroutines, | |
functions, parameters, derived types, abstract interfaces, or modules, | |
-beginning with the prefix \code{MPI\_}, | |
-with the exception of \code{MPI\_} routines written by the user | |
-to make use of the profiling interface. | |
+beginning with the prefix \code{MPI\_}. | |
To avoid | |
conflicting with the profiling interface, programs must also avoid subroutines and functions with the prefix \code{PMPI\_}. | |
This is mandated to avoid possible name collisions. | |
@@ -827,6 +837,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{C Binding Issues} | |
+\mpitermtitleindex{C -- language binding} | |
\label{sec:c-binding-issues} | |
We use the ISO C | |
@@ -838,8 +849,6 @@ | |
beginning with | |
any prefix of the form \code{MPI\_}, | |
where any of the letters are either upper or lower case. | |
-An exception are \code{MPI\_} routines written by the user to | |
-make use of the profiling interface. | |
To support the profiling interface, programs must not declare | |
functions with names beginning with any prefix of the form \code{PMPI\_}, | |
where any of the letters are either upper or lower case. | |
@@ -863,6 +872,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{Functions and Macros} | |
+\mpitermtitleindex{macros} | |
\label{sec:macros} | |
An implementation is allowed to implement | |
@@ -885,6 +895,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\section{Processes} | |
+\mpitermtitleindex{processes} | |
An \MPI/ program consists of autonomous processes, executing their own | |
code, in | |
@@ -920,6 +931,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\section{Error Handling} | |
+\mpitermtitleindex{error handling} | |
\MPI/ provides the user with reliable message transmission. | |
A message sent is always received | |
@@ -936,12 +948,12 @@ | |
Similarly, \MPI/ itself provides no mechanisms for | |
handling processor failures. | |
-Of course, \MPI/ programs may still be erroneous. A \mpiterm{program error} can | |
+Of course, \MPI/ programs may still be erroneous. A \mpitermdef{program error}\mpitermdefindex{error handling!program error} can | |
occur when an \MPI/ call is made with an incorrect argument (non-existing | |
destination in a send operation, buffer too small in a receive | |
operation, etc.). | |
This type of error would occur in any implementation. | |
-In addition, a \mpiterm{resource error} may occur when a program exceeds the amount | |
+In addition, a \mpitermdef{resource error}\mpitermdefindex{error handling!resource error} may occur when a program exceeds the amount | |
of available system resources (number of pending messages, system buffers, | |
etc.). The occurrence of this type of error depends on the amount of | |
available resources in the system and the resource allocation mechanism used; | |
@@ -1072,6 +1084,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\subsection{Interaction with Signals} | |
+\mpitermtitleindex{signals} | |
\MPI/ does not specify the interaction of processes with signals and does | |
not require that \MPI/ be signal safe. The | |
Index: chap-intro/intro.tex | |
=================================================================== | |
--- chap-intro/intro.tex (revision 2030) | |
+++ chap-intro/intro.tex (working copy) | |
@@ -290,10 +290,11 @@ | |
\section{Background of \texorpdfstring{\MPIIIIDOTI/}{MPI-3.1}} | |
\MPIIIIDOTI/ is a minor update to the \MPI/ standard. Most of the updates | |
are corrections and clarifications to the standard, | |
-especially for the Fortran binding. New functions added | |
-include routines to portably manipulate \code{MPI\_Aint} values, nonblocking | |
+especially for the Fortran bindings. New functions added | |
+include routines to manipulate \code{MPI\_Aint} values in a portable manner, nonblocking | |
collective I/O routines, and routines to get the index value by name for | |
-the \mpiskipfunc{MPI\_T} performance and control variables. | |
+\mpiskipfunc{MPI\_T} performance and control variables. | |
+A general index was also added. | |
\section{Who Should Use This Standard?} | |
Index: chap-topol/topol.tex | |
=================================================================== | |
--- chap-topol/topol.tex (revision 2030) | |
+++ chap-topol/topol.tex (working copy) | |
@@ -1,4 +1,5 @@ | |
\chapter{Process Topologies} | |
+\mpitermtitleindex{topologies} | |
\label{sec:topol} | |
\label{chap:topol} | |
@@ -58,6 +59,7 @@ | |
\end{rationale} | |
\section{Virtual Topologies} | |
+\mpitermtitleindexsubmain{virtual}{topology} | |
The communication pattern of a set of processes can be represented by a | |
graph. The nodes | |
represent processes, | |
@@ -110,8 +112,11 @@ | |
\section{Overview of the Functions} | |
\label{subsec:topol-overview} | |
-MPI supports three topology types: Cartesian, | |
-graph, and distributed graph. The function \mpifunc{MPI\_CART\_CREATE} | |
+MPI supports three topology types: | |
+\mpitermdefni{Cartesian}\mpitermdefindex{Cartesian -- topology}\mpitermdefindex{topology!Cartesian}, | |
+\mpitermdefni{graph}\mpitermdefindex{graph -- topology}\mpitermdefindex{topology!graph}, and | |
+\mpitermdefni{distributed graph}\mpitermdefindex{distributed graph -- topology}\mpitermdefindex{topology!distributed graph}. | |
+The function \mpifunc{MPI\_CART\_CREATE} | |
is used to create Cartesian topologies, the function | |
\mpifunc{MPI\_GRAPH\_CREATE} is used to create graph topologies, and the | |
functions \mpifunc{MPI\_DIST\_GRAPH\_CREATE\_ADJACENT} and | |
@@ -206,6 +211,8 @@ | |
\label{subsec:topol-construct} | |
\subsection{Cartesian Constructor} | |
+\mpitermtitleindex{Cartesian -- topology} | |
+\mpitermtitleindex{topology!Cartesian} | |
\label{subsec:topol-cartesian-constructor} | |
\begin{funcdef}{MPI\_CART\_CREATE(comm\_old, ndims, dims, periods, reorder, comm\_cart)} | |
@@ -294,6 +301,8 @@ | |
\end{example} | |
\subsection{Graph Constructor} | |
+\mpitermtitleindex{graph -- topology} | |
+\mpitermtitleindex{topology!graph} | |
\label{subsec:topol-graph-constructor} | |
\begin{funcdef}{MPI\_GRAPH\_CREATE(comm\_old, nnodes, index, edges, reorder, comm\_graph)} | |
@@ -428,6 +437,8 @@ | |
\end{implementors} | |
\subsection{Distributed Graph Constructor} | |
+\mpitermtitleindex{distributed graph -- topology} | |
+\mpitermtitleindex{topology!distributed graph} | |
\label{subsec:topol-distgraph-constructor} % Sect. 7.5.3a p.247 | |
\mpifunc{MPI\_GRAPH\_CREATE} requires that each process passes the | |
@@ -1412,6 +1423,7 @@ | |
% The full section title is too long for the running head | |
\section[Neighborhood Collective Communication]{Neighborhood Collective Communication on Process Topologies} | |
+\mpitermtitleindexsubmain{neighborhood}{collective communication} | |
\label{sec:sparsecoll} | |
MPI process topologies specify a communication graph, but they | |
@@ -1910,6 +1922,7 @@ | |
operations as described in Section~\ref{sec:nbcoll}. | |
\subsection{Nonblocking Neighborhood Gather} | |
+\mpitermtitleindex{neighborhood collective communication!nonblocking} | |
\begin{funcdef}{MPI\_INEIGHBOR\_ALLGATHER(sendbuf, sendcount, sendtype, | |
recvbuf, recvcount, recvtype, comm, request)} | |
Index: chap-inquiry/inquiry.tex | |
=================================================================== | |
--- chap-inquiry/inquiry.tex (revision 2030) | |
+++ chap-inquiry/inquiry.tex (working copy) | |
@@ -12,6 +12,7 @@ | |
\label{sec:inquiry-impl} | |
\subsection{Version Inquiries} | |
+\mpitermtitleindex{version inquiries} | |
\label{subsec:inquiry-version} | |
In order to cope with changes to the \MPI/ Standard, there are both compile-time | |
and run-time ways to determine which version of the standard is in use in the | |
@@ -52,8 +53,8 @@ | |
\mpifunc{MPI\_GET\_VERSION} can be called | |
before \mpifunc{MPI\_INIT} and after \mpifunc{MPI\_FINALIZE}. | |
-This function is callable from threads without restriction, | |
-see Section~\ref{sec:ei-threads}. | |
+This function must always be thread-safe, as defined in | |
+Section~\ref{sec:ei-threads}. | |
Valid (\const{MPI\_VERSION}, \const{MPI\_SUBVERSION}) pairs in | |
this and previous versions of the \MPI/ standard | |
are (3,1), (3,0), (2,2), (2,1), (2,0), and (1,2). | |
@@ -93,10 +94,11 @@ | |
\mpifunc{MPI\_GET\_LIBRARY\_VERSION} can be called | |
before \mpifunc{MPI\_INIT} and after \mpifunc{MPI\_FINALIZE}. | |
-This function is callable from threads without restriction, | |
-see Section~\ref{sec:ei-threads}. | |
+This function must always be thread-safe, as defined in | |
+Section~\ref{sec:ei-threads}. | |
\subsection{Environmental Inquiries} | |
+\mpitermtitleindex{environmental inquiries} | |
\label{subsec:inquiry-inquiry} | |
A set of attributes that describe the execution environment are attached to | |
@@ -136,6 +138,7 @@ | |
The required parameter values are discussed in more detail below: | |
\subsubsection{Tag Values} | |
+\mpitermtitleindex{tag values} | |
Tag values range from \code{0} to the value returned for \const{MPI\_TAG\_UB}, | |
inclusive. | |
These values are guaranteed to be unchanging during the execution of an \MPI/ | |
@@ -150,17 +153,19 @@ | |
of \const{MPI\_COMM\_WORLD}. | |
\subsubsection{Host Rank} | |
-The value returned for \const{MPI\_HOST} gets the rank of the \mpiterm{HOST} process in the group associated | |
+\mpitermtitleindex{host rank} | |
+The value returned for \const{MPI\_HOST} is the rank of the \mpitermni{HOST} process in the group associated
with communicator \const{MPI\_COMM\_WORLD}, if there is such. | |
\const{MPI\_PROC\_NULL} is returned if there is no host. | |
\MPI/ does not specify what it | |
-means for a process to be a \mpiterm{HOST}, nor does it requires that a \mpiterm{HOST} | |
+means for a process to be a \mpitermni{HOST}, nor does it require that a \mpitermni{HOST}
exists. | |
The attribute \const{MPI\_HOST} has the same value on all processes | |
of \const{MPI\_COMM\_WORLD}. | |
\subsubsection{IO Rank} | |
+\mpitermtitleindex{IO rank} | |
The value returned for \const{MPI\_IO} is the rank of a processor that can | |
provide language-standard I/O facilities. For Fortran, this means that all of | |
the Fortran I/O operations are supported (e.g., \code{OPEN}, \code{REWIND}, | |
@@ -187,6 +192,7 @@ | |
\end{users} | |
\subsubsection{Clock Synchronization} | |
+\mpitermtitleindex{clock synchronization} | |
\label{subsubsec:inquiry-clock-sync} | |
The value returned for \const{MPI\_WTIME\_IS\_GLOBAL} is 1 if clocks | |
@@ -210,6 +216,7 @@ | |
processes of \const{MPI\_COMM\_WORLD}. | |
\subsubsection{Inquire Processor Name} | |
+\mpitermtitleindex{processor name} | |
\begin{funcdef}{MPI\_GET\_PROCESSOR\_NAME( name, resultlen )} | |
\funcarg{\OUT}{name}{A unique specifier for the actual (as | |
opposed to virtual) node.} | |
@@ -259,6 +266,7 @@ | |
\end{users} | |
\section{Memory Allocation} | |
+\mpitermtitleindex{memory!allocation} | |
\label{sec:misc-memalloc} | |
In some systems, message-passing and remote-memory-access (\RMA/) operations | |
@@ -455,6 +463,7 @@ | |
\end{example} | |
\section{Error Handling} | |
+\mpitermtitleindex{error handling} | |
\label{sec:errorhandler} | |
An \MPI/ implementation cannot or may choose not to handle some errors | |
@@ -462,7 +471,7 @@ | |
exceptions or traps, such as floating point errors or access | |
violations. | |
The set of errors that are handled by \MPI/ is implementation-dependent. | |
-Each such error generates an \mpiterm{\MPI/ exception}. | |
+Each such error generates an \mpitermdefni{\MPI/ exception}\mpitermdefindex{exception}. | |
The above text takes precedence over any text on error handling within this | |
document. Specifically, text that states that errors \emph{will} be handled | |
@@ -595,6 +604,7 @@ | |
%new stuff collecting types of error handlers by Marc | |
\subsection{Error Handlers for Communicators} | |
+\mpitermtitleindex{error handling!error handlers} | |
\label{subsec:inquiry-errhdlr-comm} | |
\begin{funcdef}{MPI\_COMM\_CREATE\_ERRHANDLER(comm\_errhandler\_fn, errhandler)} | |
@@ -603,7 +613,8 @@ | |
\end{funcdef} | |
\cdeclmainindex{MPI\_Errhandler}% | |
-\mpibind{MPI\_Comm\_create\_errhandler(MPI\_Comm\_errhandler\_function~*comm\_errhandler\_fn, MPI\_Errhandler~*errhandler)} | |
+%% No tie in the first argument; it makes line breaking impossible
+\mpibind{MPI\_Comm\_create\_errhandler(MPI\_Comm\_errhandler\_function *comm\_errhandler\_fn, MPI\_Errhandler~*errhandler)} | |
\mpifnewbind{MPI\_Comm\_create\_errhandler(comm\_errhandler\_fn, errhandler, ierror) \fargs PROCEDURE(MPI\_Comm\_errhandler\_function) :: comm\_errhandler\_fn \\ TYPE(MPI\_Errhandler), INTENT(OUT) :: errhandler \\ INTEGER, OPTIONAL, INTENT(OUT) :: ierror} | |
\mpifbind{MPI\_COMM\_CREATE\_ERRHANDLER(COMM\_ERRHANDLER\_FN, ERRHANDLER, IERROR)\fargs EXTERNAL COMM\_ERRHANDLER\_FN \\ INTEGER ERRHANDLER, IERROR} | |
@@ -703,7 +714,8 @@ | |
\end{funcdef} | |
\cdeclindex{MPI\_Errhandler}% | |
-\mpibind{MPI\_Win\_create\_errhandler(MPI\_Win\_errhandler\_function~*win\_errhandler\_fn, MPI\_Errhandler~*errhandler)} | |
+%% No tie in the first argument; it makes line breaking impossible
+\mpibind{MPI\_Win\_create\_errhandler(MPI\_Win\_errhandler\_function *win\_errhandler\_fn, MPI\_Errhandler~*errhandler)} | |
\mpifnewbind{MPI\_Win\_create\_errhandler(win\_errhandler\_fn, errhandler, ierror) \fargs PROCEDURE(MPI\_Win\_errhandler\_function) :: win\_errhandler\_fn \\ TYPE(MPI\_Errhandler), INTENT(OUT) :: errhandler \\ INTEGER, OPTIONAL, INTENT(OUT) :: ierror} | |
\mpifbind{MPI\_WIN\_CREATE\_ERRHANDLER(WIN\_ERRHANDLER\_FN, ERRHANDLER, IERROR) \fargs EXTERNAL WIN\_ERRHANDLER\_FN \\ INTEGER ERRHANDLER, IERROR} | |
@@ -777,7 +789,8 @@ | |
\end{funcdef} | |
\cdeclindex{MPI\_Errhandler}% | |
-\mpibind{MPI\_File\_create\_errhandler(MPI\_File\_errhandler\_function~*file\_errhandler\_fn, MPI\_Errhandler~*errhandler)} | |
+%% No tie in the first argument; it makes line breaking impossible
+\mpibind{MPI\_File\_create\_errhandler(MPI\_File\_errhandler\_function *file\_errhandler\_fn, MPI\_Errhandler~*errhandler)} | |
\mpifnewbind{MPI\_File\_create\_errhandler(file\_errhandler\_fn, errhandler, ierror) \fargs PROCEDURE(MPI\_File\_errhandler\_function) :: file\_errhandler\_fn \\ TYPE(MPI\_Errhandler), INTENT(OUT) :: errhandler \\ INTEGER, OPTIONAL, INTENT(OUT) :: ierror} | |
\mpifbind{MPI\_FILE\_CREATE\_ERRHANDLER(FILE\_ERRHANDLER\_FN, ERRHANDLER, IERROR)\fargs EXTERNAL FILE\_ERRHANDLER\_FN \\ INTEGER ERRHANDLER, IERROR} | |
@@ -899,6 +912,7 @@ | |
\end{rationale} | |
\section{Error Codes and Classes} | |
+\mpitermtitleindex{error handling!error codes and classes} | |
\label{sec:ei-error-classes} | |
The error codes returned by \MPI/ are left entirely to the | |
implementation (with the | |
@@ -1083,6 +1097,8 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\section{Error Classes, Error Codes, and Error Handlers} | |
+\mpitermtitleindex{error handling!error codes and classes} | |
+\mpitermtitleindex{error handling!error handlers} | |
\label{sec:ei-error} | |
Users may want to write a layered library on top of an existing \MPI/ | |
@@ -1325,6 +1341,7 @@ | |
\section{Timers and Synchronization} | |
+\mpitermtitleindex{timers and synchronization} | |
\MPI/ defines a timer. A timer is specified even though it is not | |
``message-passing,'' because timing parallel programs is important in | |
``performance debugging'' and because existing timers (both in POSIX | |
@@ -1387,6 +1404,7 @@ | |
\section{Startup} | |
+\mpitermtitleindex{startup} | |
\label{sec:inquiry-startup} | |
\label{sec:misc-init} | |
@@ -1446,7 +1464,7 @@ | |
about the execution environment by querying the predefined info object | |
\const{MPI\_INFO\_ENV}. | |
The following keys are predefined for this object, corresponding to the | |
-arguments of \mpifunc{MPI\_COMM\_SPAWN} or of \mpifunc{mpiexec}: | |
+arguments of \mpifunc{MPI\_COMM\_SPAWN} or of \mpifunc{mpiexec}\mpitermindex{mpiexec}: | |
\begin{description} | |
\item[\infokey{command}] Name of program executed. | |
\item[\infokey{argv}] Space separated arguments to command. | |
@@ -1744,8 +1762,8 @@ | |
called does not affect the behavior of \mpifunc{MPI\_INITIALIZED}. | |
It is one of the few routines that may be called before | |
\mpifunc{MPI\_INIT} is called. | |
-This function is callable from threads without restriction, | |
-see Section~\ref{sec:ei-threads}. | |
+This function must always be thread-safe, as defined in | |
+Section~\ref{sec:ei-threads}. | |
\begin{funcdef}{MPI\_ABORT(comm, errorcode)} | |
\funcarg{\IN}{comm}{communicator of tasks to abort} | |
@@ -1783,10 +1801,12 @@ | |
\begin{users} | |
Whether the \mpiarg{errorcode} is returned from the executable or from the | |
\mpifuncindex{mpiexec}% | |
+\mpitermindex{mpiexec}% | |
\MPI/ process startup mechanism (e.g., \code{mpiexec}), is an aspect of quality | |
of the \MPI/ library but not mandatory. | |
\end{users} | |
\mpifuncindex{mpiexec}% | |
+\mpitermindex{mpiexec}% | |
\begin{implementors} | |
Where possible, a high-quality implementation will try to return the | |
\mpiarg{errorcode} from the \MPI/ process startup mechanism | |
@@ -1794,6 +1814,7 @@ | |
\end{implementors} | |
\subsection{Allowing User Functions at Process Termination} | |
+\mpitermtitleindex{user functions at process termination} | |
\label{subsec:inquiry-startup-userfunc} | |
There are times in which it would be convenient to have actions happen | |
@@ -1830,6 +1851,7 @@ | |
\subsection{Determining Whether \texorpdfstring{\mpi/}{MPI} Has Finished} | |
+\mpitermtitleindex{finished} | |
One of the goals of \mpi/ was to allow for layered libraries. In | |
order for a library to do this cleanly, it needs to know if \mpi/ is | |
@@ -1854,8 +1876,8 @@ | |
This routine returns \mpiarg{true} if \mpifunc{MPI\_FINALIZE} has completed. | |
It is valid to call \mpifunc{MPI\_FINALIZED} | |
before \mpifunc{MPI\_INIT} and after \mpifunc{MPI\_FINALIZE}. | |
-This function is callable from threads without restriction, | |
-see Section~\ref{sec:ei-threads}. | |
+This function must always be thread-safe, as defined in | |
+Section~\ref{sec:ei-threads}. | |
\begin{users} | |
\mpi/ is ``active'' and it is thus safe to call \mpi/ functions if | |
@@ -1869,6 +1891,7 @@ | |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | |
\section{Portable \texorpdfstring{\MPI/}{MPI} Process Startup} | |
+\mpitermtitleindex{startup!portable} | |
A number of implementations of \mpi/ provide a startup command for \MPI/ programs | |
that is of the form | |
@@ -1893,6 +1916,7 @@ | |
In order that the ``standard'' command not be confused with existing | |
practice, which is not standard and not portable among implementations, | |
\mpifuncindex{mpirun}% | |
+\mpitermindex{mpirun}% | |
\mpifuncindex{mpiexec}% | |
instead of \code{mpirun} \MPI/ specifies \code{mpiexec}. | |
@@ -1906,6 +1930,7 @@ | |
+\mpitermdefindex{mpiexec}% | |
It is suggested that\mpifuncmainindex{mpiexec} | |
%%HEADER | |
%%SKIP | |
Index: MAKE-FUNC-INDEX | |
=================================================================== | |
--- MAKE-FUNC-INDEX (revision 2030) | |
+++ MAKE-FUNC-INDEX (working copy) | |
@@ -39,6 +39,8 @@ | |
touch temp | |
chmod +w temp | |
+create_index 'General Index' '-e' '\{TERM:' 's/TERM://' \ | |
+ 'This index mainly lists terms of the \MPI/ specification. The underlined page numbers refer to the definitions or parts of the definitions of the terms. Boldface numbers mark section titles.'
create_index 'Examples Index' '-e' '\{EXAMPLES:' 's/EXAMPLES://' \ | |
'This index lists code examples throughout the text. Some examples are referred to by content; others are listed by the major \MPI/ function that they are demonstrating. \MPI/ functions listed in all capital letter are Fortran examples; \MPI/ functions listed in mixed case are C examples.' | |
create_index 'MPI Constant and Predefined Handle Index' '-e' '\{CONST:MPI[^|}]*[ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789][|}]' 's/CONST://' \ | |
Index: Makefile | |
=================================================================== | |
--- Makefile (revision 2030) | |
+++ Makefile (working copy) | |
@@ -505,7 +505,7 @@ | |
# function definition within the standard. Without this step, there is no | |
# change to the text | |
-bindinglinks: mpi-report.idx \ | |
+bindinglinks: mpi-report.idx indextomap.pl \ | |
appLang-CNames.tex appLang-FNames.tex appLang-F2008Names.tex | |
if [ -x $(MAPNAMES) ] ; then \ | |
./indextomap.pl mpi-report.idx > map.cit ; \ | |
Index: mpi-user-macs.tex | |
=================================================================== | |
--- mpi-user-macs.tex (revision 2030) | |
+++ mpi-user-macs.tex (working copy) | |
@@ -347,9 +347,39 @@ | |
% | |
% Use mpiterm when introducing a term that you want emphasized and indexed. | |
% Use mpitermni for terms that should not be indexed (ni = not indexed) | |
-\def\mpiterm#1{\emph{#1}\index{#1}} | |
-\def\mpitermni#1{\emph{#1}} | |
+% Place the index first to allow emph to (possibly) correct spacing. | |
+% mpitermdefindex produces only an index entry. The combination of ni and | |
+% index allows index entries that differ from the text. | |
+% References to section titles are marked with titleindex. | |
+% | |
+% An attempt was made to detect whether a term was the first use. However, | |
+% the presense of \_ in some of the terms caused the simple code (using | |
+% \csname #1 \endcsname) to fail, and code to sanitize the argument is | |
+% complex. So we fell back to the simple choice here. | |
+\def\mpiterm#1{\index{TERM:#1}\emph{#1}} | |
+\def\mpitermni#1{\index{TERMnoindex:#1}\emph{#1}} | |
+\def\mpitermindex#1{\index{TERM:#1}} | |
+\def\mpitermdef#1{\textbf{#1}\index{TERM:#1|uu}} | |
+\def\mpitermdefni#1{\textbf{#1}\index{TERMnoindex:#1|uu}} | |
+\def\mpitermdefindex#1{\index{TERM:#1|uu}} | |
+% Special macro for lb\_marker and ub\_marker, which are terms of a very
+% special category and should be printed with sf
+\def\mpiublb#1{\textsf{#1}\index{TERM:#1}} | |
+ | |
+\def\mpitermtitleindex#1{\index{TERM:#1|bold}} | |
+\def\mpitermtitleindexsubmain#1#2{\index{TERM:#1 #2|bold}\index{TERM:#2!#1|bold}} | |
+% e.g. \mpitermtitleindexsubmain{Point-to-Point}{Communication} | |
+% results in: Point-to-Point Communication, 23 | |
+% Communication, | |
+% Point-to-Point, 23 | |
+\def\mpitermtitleindexmainsub#1#2{\index{TERM:#1!#2|bold}\index{TERM:#2|bold}} | |
+% e.g. \mpitermtitleindexmainsub{Message}{Envelope} | |
+% results in: Message | |
+% Envelope, 27 | |
+% Envelope, 27 | |
+ | |
+ | |
% | |
% Use flushline to force a linebreak without right justifying the line. | |
\def\flushline{\hfill\hbox{}\linebreak} | |
@@ -505,8 +535,10 @@ | |
%% | |
%% Language-independent code block environment | |
+%% This needs to be used with care. If it is used inline, follow it with | |
+%% \noindent to keep TeX from starting a new paragraph. | |
\newenvironment{mpicodeblock}{\ifvmode\else\par\fi\vspace{\codeSpace}% | |
-\noindent\sf}{} | |
+\noindent\sf\quad}{\ifvmode\else\par\fi\vspace{\codeSpace}} | |
%% | |
%% Use \XXX/ for a ``function name'' wildcard | |
Index: chap-changes/changes.tex | |
=================================================================== | |
--- chap-changes/changes.tex (revision 2030) | |
+++ chap-changes/changes.tex (working copy) | |
@@ -1,4 +1,5 @@ | |
\chapter{Change-Log} | |
+\mpitermtitleindex{change-log} | |
\label{sec:change} | |
\label{chap:change} | |
@@ -63,7 +64,8 @@ | |
\MPIIIIDOTO/ Chapters 3-17, Annex A.3 on page 707, and Example 5.21 on page 187. | |
\newline | |
Within the \code{mpi\_f08} Fortran support method, \code{BIND(C)} was removed from | |
-all \code{SUBROUTINE}, \code{FUNCTION}, and \code{ABSTRACT INTERFACE} definitions. | |
+all \flushline | |
+\code{SUBROUTINE}, \code{FUNCTION}, and \code{ABSTRACT INTERFACE} definitions. | |
% 02.--- MPI-3.0-erratum Ticket 415 | |
\item | |
@@ -130,11 +132,14 @@ | |
% 09.--- MPI-3.0-erratum Ticket 362 | |
\item | |
-Section~\ref{sec:winalloc} on page~\pageref{sec:winalloc}, and | |
+Section~\ref{chap:one-side-2:win_create} on page~\pageref{chap:one-side-2:win_create}, and | |
\MPIIIIDOTO/ Section~11.2.2 on page 407. | |
\newline | |
-The \infokey{same\_size} info key can be used with all window flavors. | |
+The \infokey{same\_size} info key can be used with all window flavors, | |
+and requires that all processes in the process group of the communicator | |
+have provided this info key with the same value. | |
+ | |
% 10.--- MPI-3.0-erratum Ticket 350 | |
\item | |
Section~\ref{sec:1sided-accumulate} on page~\pageref{sec:1sided-accumulate}, and | |
@@ -268,7 +273,7 @@ | |
\mpifunc{MPI\_QUERY\_THREAD}, \mpifunc{MPI\_IS\_THREAD\_MAIN}, | |
\mpifunc{MPI\_GET\_VERSION}, and \mpifunc{MPI\_GET\_LIBRARY\_VERSION} | |
are callable from threads without restriction (in the sense of | |
-MPI\_THREAD\_MULTIPLE), irrespective of the actual level of thread support | |
+\const{MPI\_THREAD\_MULTIPLE}), irrespective of the actual level of thread support | |
provided, in the case where the implementation supports threads. | |
% 03.--- MPI-3.1 Ticket 369 | |
@@ -282,8 +287,10 @@ | |
Sections~\ref{sec:io-explicit} and~\ref{sec:io-indiv-ptr} | |
on pages~\pageref{sec:io-explicit} and~\pageref{sec:io-indiv-ptr}. | |
\newline | |
-Added \mpifunc{MPI\_File\_iread\_at\_all}, \mpifunc{MPI\_File\_iwrite\_at\_all}, | |
-\mpifunc{MPI\_File\_iread\_all}, and \mpifunc{MPI\_File\_iwrite\_all} | |
+%% WDG - Corrected to refer to the language-neutral names, as required | |
+%% by the standard and the mpifunc macro | |
+Added \mpifunc{MPI\_FILE\_IREAD\_AT\_ALL}, \mpifunc{MPI\_FILE\_IWRITE\_AT\_ALL}, | |
+\mpifunc{MPI\_FILE\_IREAD\_ALL}, and \mpifunc{MPI\_FILE\_IWRITE\_ALL} | |
% 05.--- MPI-3.1 Ticket 378 | |
\item | |
@@ -372,9 +379,9 @@ | |
\item | |
Sections~\ref{subsec:pt2pt-messagedata}, | |
\ref{coll-predefined-op}, | |
-\ref{subsec:ext32} Table \ref{table:io:extsizes}, | |
+\ref{subsec:ext32} Table~\ref{table:io:extsizes}, | |
%-REMOVED-C++ \ref{sec:c++datatypes} Table \ref{tab:cpp-basic-datatypes}, | |
-and Annex \ref{subsec:annexa-const} | |
+and Annex~\ref{subsec:annexa-const} | |
on pages~\pageref{subsec:pt2pt-messagedata}, | |
\pageref{coll-predefined-op}, | |
\pageref{table:io:extsizes}, | |
@@ -511,7 +518,7 @@ | |
% 06.--- MPI-3.0 Ticket 265, 1st entry | |
\item | |
-Sections \ref{subsec:count}, \ref{subsec:pt2pt-messagedata}, | |
+Sections~\ref{subsec:count}, \ref{subsec:pt2pt-messagedata}, | |
\ref{table:pttopt:datatypes:c_f}, \ref{coll-predefined-op}, | |
on pages~\pageref{subsec:count}, \pageref{subsec:pt2pt-messagedata}, | |
\pageref{table:pttopt:datatypes:c_f}, \pageref{coll-predefined-op}, | |
@@ -525,8 +532,8 @@ | |
\pageref{subsec:pt2pt-true-extent}, % MPI_TYPE_GET_TRUE_EXTENT_X | |
\pageref{subsec:pt2pt-datatypeuse}, % MPI_GET_EXTENTS_X | |
\pageref{func:mpi-status-set-elements-x}, % MPI_STATUS_SET_ELEMENTS_X | |
-and Annex | |
-\ref{subsec:annexa-const} on page \pageref{subsec:annexa-const}. | |
+and | |
+Annex~\ref{subsec:annexa-const} on page \pageref{subsec:annexa-const}. | |
\newline | |
New inquiry functions, \mpifunc{MPI\_TYPE\_SIZE\_X}, \mpifunc{MPI\_TYPE\_GET\_EXTENT\_X}, | |
\mpifunc{MPI\_TYPE\_GET\_TRUE\_EXTENT\_X}, and \mpifunc{MPI\_GET\_ELEMENTS\_X}, return their results | |
Index: chap-pt2pt/pt2pt.tex | |
=================================================================== | |
--- chap-pt2pt/pt2pt.tex (revision 2030) | |
+++ chap-pt2pt/pt2pt.tex (working copy) | |
@@ -1,4 +1,5 @@ | |
\chapter{Point-to-Point Communication} | |
+\mpitermtitleindexsubmain{point-to-point}{communication} | |
\label{sec:pt2pt} | |
\label{chap:pt2pt} | |
@@ -7,8 +8,8 @@ | |
Sending and receiving of messages by processes is the basic \MPI/ | |
communication mechanism. | |
-The basic point-to-point communication operations are \mpiterm{send} and | |
-\mpiterm{receive}. Their use is illustrated in the example below. | |
+The basic point-to-point communication operations are \mpitermdef{send} and | |
+\mpitermdef{receive}. Their use is illustrated in the example below. | |
%%HEADER | |
%%LANG: C | |
@@ -40,26 +41,27 @@ | |
In this example, process zero (\code{myrank = 0}) sends a message to process one | |
using the | |
-\mpiterm{send} operation \mpifunc{MPI\_SEND}. The | |
-operation specifies a \mpiterm{send buffer} in the sender memory from which the | |
+\mpitermdef{send} operation \mpifunc{MPI\_SEND}. The | |
+operation specifies a \mpitermdefni{send buffer}\mpitermdefindex{send!buffer} in the sender memory from which the | |
message data is taken. In the example above, the send buffer consists of the | |
-storage containing the variable \mpiterm{message} in the memory of process zero. | |
+storage containing the variable \mpiarg{message}\mpitermindex{message} in the memory of process zero. | |
The location, size and type of the send buffer are specified by the first three | |
parameters of the send operation. The message sent will contain the 13 | |
characters of this variable. | |
-In addition, the send operation associates an \mpiterm{envelope} with the | |
+In addition, the send operation associates an \mpitermdef{envelope} with the | |
message. This envelope specifies the message destination and contains | |
-distinguishing information that can be used by the \mpiterm{receive} operation to | |
+distinguishing information that can be used by the \mpitermdef{receive} operation to | |
select a particular message. | |
The last three parameters of the send operation, along with the rank of the | |
sender, | |
specify the envelope for the message sent. | |
Process one (\code{myrank = 1}) receives this message with the | |
-\mpiterm{receive} operation \mpifunc{MPI\_RECV}. | |
+\mpitermdef{receive} operation \mpifunc{MPI\_RECV}. | |
The message to be received is selected according to the value of its | |
-envelope, and the message data is stored into the \mpiterm{receive | |
-buffer}. In the example above, the receive buffer consists of the storage | |
-containing the string \code{message} in the memory of process one. | |
+envelope, and the message data is stored into the | |
+\mpitermdefni{receive buffer}\mpitermdefindex{receive!buffer}. | |
+In the example above, the receive buffer consists of the storage | |
+containing the string \mpiarg{message} in the memory of process one. | |
The first three parameters of the receive operation specify the location, size | |
and type of the receive buffer. The next three | |
parameters are used for selecting the incoming message. The last parameter is | |
@@ -78,6 +80,7 @@ | |
\section{Blocking Send and Receive Operations} | |
\label{sec:pt2pt-basicsendrecv} | |
\subsection{Blocking Send} | |
+\mpitermtitleindex{send} | |
\label{subsec:pt2pt-basicsend} | |
The syntax of the blocking send operation is given below. | |
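The blocking send and receive just described can be sketched as a complete C program, close to the standard's own opening example (the 13-character message and ranks match the surrounding text; tag value 99 is arbitrary):

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char message[20];
    int myrank;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {        /* code for process zero */
        strcpy(message, "Hello, there");
        /* buffer, count, and type are the first three arguments;
           dest, tag, and comm form the envelope */
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, 1, 99, MPI_COMM_WORLD);
    } else if (myrank == 1) { /* code for process one */
        MPI_Recv(message, 20, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("received: %s\n", message);
    }
    MPI_Finalize();
    return 0;
}
```

Run with two or more processes, e.g. `mpiexec -n 2 ./a.out`; processes with rank greater than one fall through both branches and simply finalize.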
@@ -101,6 +104,7 @@ | |
The blocking semantics of this call are described in Section~\ref{sec:pt2pt-modes}. | |
\subsection{Message Data} | |
+\mpitermtitleindexmainsub{message}{data} | |
\label{subsec:pt2pt-messagedata} | |
@@ -306,12 +310,14 @@ | |
\subsection{Message Envelope} | |
+\mpitermtitleindex{message!envelope} | |
+\mpitermtitleindex{envelope} | |
\label{subsec:pt2pt-envelope} | |
In addition to the data part, messages carry information that can be used to | |
distinguish messages and selectively receive them. This information consists | |
of a fixed number of fields, which we collectively call | |
-the \mpiterm{message envelope}. These fields are | |
+the \mpitermdefni{message envelope}. These fields are | |
\begin{center} | |
source \\ | |
destination \\ | |
@@ -336,7 +342,7 @@ | |
described in Chapter~\ref{chap:environment}. \MPI/ requires that | |
\mpicode{UB} be no less than 32767. | |
-The \mpiarg{comm} argument specifies the \mpiterm{communicator} that is used for | |
+The \mpiarg{comm} argument specifies the \mpitermdef{communicator} that is used for | |
the send operation. | |
Communicators are explained in Chapter~\ref{chap:context}; below is a brief | |
summary of their usage. | |
@@ -348,7 +354,7 @@ | |
sent, and messages sent in different contexts do not interfere. | |
The communicator also specifies the set of processes that share this | |
-communication context. This \mpiterm{process group} | |
+communication context. This \mpitermdef{process group} | |
is ordered and processes are identified by their | |
rank within this group. Thus, the range of valid values for \mpiarg{dest} is | |
$\{0, \ldots, n-1\} \cup \{\const{MPI\_PROC\_NULL}\}$, where $n$ is the number of | |
@@ -386,6 +392,7 @@ | |
\end{implementors} | |
\subsection{Blocking Receive} | |
+\mpitermtitleindex{receive} | |
\label{subsec:pt2pt-basicreceive} | |
@@ -474,7 +481,7 @@ | |
with that same communicator (remote process group, for intercommunicators). | |
Thus, the range of valid values for the | |
\mpiarg{source} argument is | |
-\{$0,\ldots,n-1\}\cup\{\const{MPI\_ANY\_SOURCE}\},\cup\{\const{MPI\_PROC\_NULL}\}$, where | |
+\{$0,\ldots,n-1\}\cup\{\const{MPI\_ANY\_SOURCE}\}\cup\{\const{MPI\_PROC\_NULL}\}$, where | |
$n$ is the number of processes in this group. | |
Note the asymmetry between send and receive operations: A receive | |
@@ -503,6 +510,7 @@ | |
\sectionref{sec:pt2pt-nullproc}. | |
\subsection{Return Status} | |
+\mpitermtitleindex{status} | |
\label{subsec:pt2pt-status} | |
The source or tag of a received message may not be known if wildcard | |
@@ -667,6 +675,7 @@ | |
\mpifunc{MPI\_RECV} operations described in this section. | |
\subsection{Passing \texorpdfstring{\const{MPI\_STATUS\_IGNORE}}{MPI\_STATUS\_IGNORE} for Status} | |
+\mpitermtitleindex{status!ignore} | |
\label{sec:pt2pt-status-ignore} | |
Every call to \mpifunc{MPI\_RECV} includes a \mpiarg{status} argument, wherein | |
@@ -738,6 +747,7 @@ | |
\section{Data Type Matching and Data Conversion} | |
\label{sec:pt2pt-typematch} | |
\subsection{Type Matching Rules} | |
+\mpitermtitleindexsubmain{type}{matching} | |
\label{subsec:pt2pt-typematch} | |
One can think of message transfer as consisting of the following three phases. | |
@@ -978,6 +988,8 @@ | |
\subsection{Data Conversion} | |
+\mpitermtitleindex{data conversion} | |
+\mpitermtitleindex{conversion} | |
\label{subsec:pt2pt-conversion} | |
One of the goals of \MPI/ is to support parallel computations across | |
@@ -1076,9 +1088,10 @@ | |
\section{Communication Modes} | |
+\mpitermtitleindexmainsub{communication}{modes} | |
\label{sec:pt2pt-modes} | |
The send call described in Section~\ref{subsec:pt2pt-basicsend} | |
-is \mpiterm{blocking}: | |
+is \mpitermdef{blocking}: | |
it does not return until the message data | |
and envelope have been safely stored away so that the sender is | |
free to modify | |
@@ -1098,7 +1111,7 @@ | |
The send call described in Section~\ref{subsec:pt2pt-basicsend} | |
uses | |
-the \mpiterm{standard} communication mode. In this mode, | |
+the \mpitermdef{standard} communication mode. In this mode, | |
it is up to \MPI/ to decide whether outgoing | |
messages will be buffered. \MPI/ may | |
buffer outgoing messages. In such a case, the send call may complete | |
@@ -1130,7 +1143,7 @@ | |
There are three additional communication modes. | |
-A \mpiterm{buffered} mode send operation can be started whether or not a | |
+A \mpitermdef{buffered} mode send operation can be started whether or not a | |
matching receive has been posted. | |
It may complete before a matching receive is posted. However, unlike | |
the standard send, this operation is \mpiterm{local}, and its | |
@@ -1142,7 +1155,7 @@ | |
Buffer allocation by the user may be required for the buffered mode to be | |
effective. | |
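A minimal sketch of the buffer-allocation pattern for buffered mode (the function name and the single-int message are illustrative; the attach/detach calls are the standard interface):

```c
#include <mpi.h>
#include <stdlib.h>

/* Buffered-mode send with user-supplied buffer space.
   Assumes MPI_Init has been called and dest is a valid rank in comm. */
void bsend_sketch(int dest, MPI_Comm comm)
{
    int data = 42, bufsize;
    void *buf;

    /* Space for one MPI_INT message plus the per-message overhead. */
    MPI_Pack_size(1, MPI_INT, comm, &bufsize);
    bufsize += MPI_BSEND_OVERHEAD;
    buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);

    /* Local completion: the message is buffered if no receive is posted. */
    MPI_Bsend(&data, 1, MPI_INT, dest, 0, comm);

    /* Detach blocks until all buffered messages have been transmitted. */
    MPI_Buffer_detach(&buf, &bufsize);
    free(buf);
}
```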
-A send that uses the \mpiterm{synchronous} mode can be started whether or | |
+A send that uses the \mpitermdef{synchronous} mode can be started whether or | |
not a matching receive was posted. However, the send will complete | |
successfully only if a matching receive is posted, and the | |
receive operation has started to receive the message sent by the | |
@@ -1158,7 +1171,7 @@ | |
at either end before both processes rendezvous at the | |
communication. A send executed in this mode is \mpiterm{non-local}. | |
-A send that uses the \mpiterm{ready} communication mode | |
+A send that uses the \mpitermdef{ready} communication mode | |
may be started \emph{only} if the matching receive is already posted. | |
Otherwise, the operation is erroneous and its outcome is undefined. | |
On some systems, this allows the removal of a hand-shake | |
@@ -1307,6 +1320,7 @@ | |
\section{Semantics of Point-to-Point Communication} | |
+\mpitermtitleindex{semantics!point-to-point communication} | |
\label{sec:pt2pt-semantics} | |
A valid \MPI/ implementation guarantees certain general properties of | |
@@ -1415,7 +1429,7 @@ | |
\paragraph*{Fairness} | |
-\MPI/ makes no guarantee of \emph{fairness} in the handling of | |
+\MPI/ makes no guarantee of \mpiterm{fairness} in the handling of | |
communication. Suppose that a send is posted. Then it is possible | |
that the destination process repeatedly posts a receive that matches this | |
send, yet the message is never received, because it is each time overtaken by | |
@@ -1602,6 +1616,7 @@ | |
\end{users} | |
\section{Buffer Allocation and Usage} | |
+\mpitermtitleindex{buffer allocation} | |
\label{sec:pt2pt-buffer} | |
A user may specify a buffer to be used for buffering messages sent in buffered | |
@@ -1783,6 +1798,7 @@ | |
\end{itemize} | |
\section{Nonblocking Communication} | |
+\mpitermtitleindex{nonblocking} | |
\label{sec:pt2pt-nonblock} | |
One can improve performance on many systems by overlapping | |
@@ -1790,23 +1806,23 @@ | |
where communication can be executed autonomously by an intelligent | |
communication controller. Light-weight threads are one mechanism for | |
achieving such overlap. An alternative mechanism that often leads to | |
-better performance is to use \mpiterm{nonblocking communication}. A | |
-nonblocking \mpiterm{send start} call initiates the send operation, but does not | |
+better performance is to use \mpitermdefni{nonblocking communication}\mpitermdefindex{nonblocking!communication}. A | |
+nonblocking \mpitermdefni{send start}\mpitermdefindex{send!start} call initiates the send operation, but does not | |
complete it. The send start call | |
can | |
return before the message was copied out of the send buffer. | |
-A separate \mpiterm{send complete} | |
+A separate \mpitermdefni{send complete}\mpitermdefindex{send!complete} | |
call is needed to complete the communication, i.e., to verify that the | |
data has been copied out of the send buffer. With | |
suitable hardware, the transfer of data out of the sender memory | |
may proceed concurrently with computations done at the sender after | |
the send was initiated and before it completed. | |
-Similarly, a nonblocking \mpiterm{receive start call} initiates the receive | |
+Similarly, a nonblocking \mpitermdefni{receive start call}\mpitermdefindex{receive!start call} initiates the receive | |
operation, but does not complete it. The call | |
can | |
return before | |
-a message is stored into the receive buffer. A separate \mpiterm{receive | |
-complete} call | |
+a message is stored into the receive buffer. A separate | |
+\mpitermdefni{receive complete}\mpitermdefindex{receive!complete} call | |
is needed to complete the receive operation and verify that the data has | |
been received into the receive buffer. | |
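The start/complete split described above can be sketched as a symmetric exchange (the function name and buffer setup are illustrative; posting the receive before the send avoids unintended buffering):

```c
#include <mpi.h>

/* Overlap computation with communication: nonblocking start calls
   (MPI_Irecv/MPI_Isend) followed by a completion call (MPI_Waitall). */
void exchange_sketch(double *sendbuf, double *recvbuf, int count,
                     int peer, MPI_Comm comm)
{
    MPI_Request reqs[2];
    MPI_Status  stats[2];

    MPI_Irecv(recvbuf, count, MPI_DOUBLE, peer, 0, comm, &reqs[0]);
    MPI_Isend(sendbuf, count, MPI_DOUBLE, peer, 0, comm, &reqs[1]);

    /* ... computation touching neither buffer may proceed here ... */

    MPI_Waitall(2, reqs, stats);  /* complete both operations */
}
```

Until completion, neither buffer may be read or updated, as the semantics rules in this chapter require.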
With suitable hardware, the transfer of data into the receiver memory | |
@@ -1820,7 +1836,7 @@ | |
\mpiterm{ready}. These carry | |
the same meaning. | |
Sends of all modes, \mpiterm{ready} excepted, can be started whether a matching | |
-receive has been posted or not; a nonblocking \mpiterm{ready} | |
+receive has been posted or not; a nonblocking \mpitermdefni{ready}\mpitermdefindex{ready!nonblocking} | |
send can be started only if | |
a matching receive is posted. In all cases, the send start call | |
is local: it returns immediately, irrespective of the | |
@@ -1836,7 +1852,7 @@ | |
send buffer. | |
It may carry additional meaning, depending on the send mode. | |
-If the send mode is \mpiterm{synchronous}, then the | |
+If the send mode is \mpitermdefni{synchronous}\mpitermdefindex{synchronous!nonblocking}, then the | |
send can complete only if a matching receive has started. That | |
is, a receive has | |
been posted, and has been matched with the send. In this case, | |
@@ -1846,13 +1862,13 @@ | |
``knows'' the transfer will complete, but before the receiver ``knows'' the | |
transfer will complete.) | |
-If the send mode is \mpiterm{buffered} then the | |
+If the send mode is \mpitermdefni{buffered}\mpitermdefindex{buffered!nonblocking} then the | |
message must be buffered if there is no pending receive. In this case, | |
the send-complete | |
call is local, and must succeed irrespective of the status of a matching | |
receive. | |
-If the send mode is \mpiterm{standard} then the send-complete call may | |
+If the send mode is \mpitermdefni{standard}\mpitermdefindex{standard!nonblocking} then the send-complete call may | |
return before a matching receive | |
is posted, | |
if the message is buffered. On the other hand, the | |
@@ -1892,9 +1908,10 @@ | |
\end{users} | |
\subsection{Communication Request Objects} | |
+\mpitermtitleindexmainsub{nonblocking}{request objects} | |
\label{subsec:pt2pt-commobject} | |
-Nonblocking communications use opaque \mpiterm{request} objects to | |
+Nonblocking communications use opaque \mpitermdefni{request} objects to | |
identify communication operations and match the operation that | |
initiates the communication with the operation that terminates it. | |
These are system objects that are accessed via a handle. | |
@@ -1906,12 +1923,13 @@ | |
information about the status of the pending communication operation. | |
\subsection{Communication Initiation} | |
+\mpitermtitleindexmainsub{nonblocking}{initiation} | |
\label{subsec:pt2pt-commstart} | |
We use the same naming conventions as for blocking communication: a | |
prefix of \mpicode{B}, \mpicode{S}, or \mpicode{R} is used for | |
-\mpiterm{buffered}, \mpiterm{synchronous} or \mpiterm{ready} mode. | |
-In addition a prefix of \mpicode{I} (for \mpiterm{immediate}) indicates | |
+\mpitermdef{buffered}, \mpitermdef{synchronous}, or \mpitermdef{ready} mode. | |
+In addition, a prefix of \mpicode{I} (for \mpitermdef{immediate}) indicates | |
that the call is nonblocking. | |
\begin{funcdef}{MPI\_ISEND(buf, count, datatype, dest, tag, comm, request)} | |
@@ -2067,6 +2085,7 @@ | |
\subsection{Communication Completion} | |
+\mpitermtitleindexmainsub{nonblocking}{completion} | |
\label{subsec:pt2pt-commend} | |
The functions \mpifunc{MPI\_WAIT} and \mpifunc{MPI\_TEST} are used to complete a | |
@@ -2077,7 +2096,7 @@ | |
of the send buffer unchanged). It does not indicate that the | |
message has been received, | |
rather, it may have been buffered by the communication | |
-subsystem. However, if a \mpiterm{synchronous} | |
+subsystem. However, if a \mpitermdef{synchronous} | |
mode send was used, the completion of the | |
send operation indicates that a matching receive was initiated, and that the | |
message will eventually be received by this matching receive. | |
@@ -2089,16 +2108,16 @@ | |
course, that the send was initiated). | |
We shall use the following terminology: | |
-A \mpiterm{null} handle is a handle with | |
+A \mpitermdef{null handle} is a handle with | |
value\flushline | |
\const{MPI\_REQUEST\_NULL}. | |
A persistent | |
-request and the handle to it are \mpiterm{inactive} | |
+request and the handle to it are \mpitermdef{inactive} | |
if the request is not associated with any ongoing | |
communication (see \sectionref{sec:pt2pt-persistent}). | |
-A handle is \mpiterm{active} if it is neither null nor inactive. | |
+A handle is \mpitermdef{active} if it is neither null nor inactive. | |
An | |
-\mpiterm{empty} status is a status which is set to return \mpiarg{tag =} | |
+\mpitermdef{empty} status is a status which is set to return \mpiarg{tag =} | |
\const{MPI\_ANY\_TAG}, \mpiarg{source =} \const{MPI\_ANY\_SOURCE}, \mpiarg{error =} | |
\const{MPI\_SUCCESS}, and is also internally configured so that calls to | |
\mpifunc{MPI\_GET\_COUNT}, \mpifunc{MPI\_GET\_ELEMENTS}, and \mpifunc{MPI\_GET\_ELEMENTS\_X} return | |
@@ -2333,6 +2352,7 @@ | |
\end{example} | |
\subsection{Semantics of Nonblocking Communications} | |
+\mpitermtitleindex{semantics!nonblocking communications} | |
\label{subsec:pt2pt-semantics} | |
@@ -2437,6 +2457,8 @@ | |
send. | |
\subsection{Multiple Completions} | |
+\mpitermtitleindex{multiple completions} | |
+\mpitermtitleindex{completion!multiple} | |
\label{subsec:pt2pt-multiple} | |
It is convenient to be able to wait for the completion of any, some, or all the | |
@@ -2677,7 +2699,7 @@ | |
\mpibind{MPI\_Waitsome(int~incount, MPI\_Request~array\_of\_requests[], int~*outcount, int~array\_of\_indices[], MPI\_Status~array\_of\_statuses[])} | |
\mpifnewbind{MPI\_Waitsome(incount, array\_of\_requests, outcount, array\_of\_indices, array\_of\_statuses, ierror) \fargs INTEGER, INTENT(IN) :: incount \\ TYPE(MPI\_Request), INTENT(INOUT) :: array\_of\_requests(incount) \\ INTEGER, INTENT(OUT) :: outcount, array\_of\_indices(*) \\ TYPE(MPI\_Status) :: array\_of\_statuses(*) \\ INTEGER, OPTIONAL, INTENT(OUT) :: ierror} | |
-\mpifbind{MPI\_WAITSOME(INCOUNT, ARRAY\_OF\_REQUESTS, OUTCOUNT, ARRAY\_OF\_INDICES,\\\ \ \ \ ARRAY\_OF\_STATUSES, IERROR)\fargs INTEGER INCOUNT, ARRAY\_OF\_REQUESTS(*), OUTCOUNT, ARRAY\_OF\_INDICES(*), ARRAY\_OF\_STATUSES(MPI\_STATUS\_SIZE,*), IERROR} | |
+\mpifbind{MPI\_WAITSOME(INCOUNT, ARRAY\_OF\_REQUESTS, OUTCOUNT, ARRAY\_OF\_INDICES,\\\ \ \ \ ARRAY\_OF\_STATUSES, IERROR)\fargs INTEGER INCOUNT, ARRAY\_OF\_REQUESTS(*), OUTCOUNT, ARRAY\_OF\_INDICES(*),\\\ \ \ \ ARRAY\_OF\_STATUSES(MPI\_STATUS\_SIZE,*), IERROR} | |
\mpicppemptybind{MPI::Request::Waitsome(int~incount, MPI::Request~array\_of\_requests[], int~array\_of\_indices[], MPI::Status~array\_of\_statuses[])}{static int} | |
\mpicppemptybind{MPI::Request::Waitsome(int~incount, MPI::Request~array\_of\_requests[], int~array\_of\_indices[])}{static int} | |
@@ -2748,7 +2770,7 @@ | |
\mpifunc{MPI\_WAITSOME} will | |
block until a communication completes, if it was | |
passed a list that contains at least one active handle. Both calls fulfill a | |
-\mpiterm{fairness} requirement: If a request for a receive repeatedly | |
+\mpitermdef{fairness} requirement: If a request for a receive repeatedly | |
appears in a list of requests passed to \mpifunc{MPI\_WAITSOME} or | |
\mpifunc{MPI\_TESTSOME}, and a matching send has been posted, then the receive | |
will eventually succeed, unless the send is satisfied by another receive; and | |
@@ -2867,6 +2889,7 @@ | |
\subsection{Non-destructive Test of \texorpdfstring{\mpiarg{status}}{status}} | |
+\mpitermtitleindex{status!test} | |
\label{subsec:pt2pt-teststatus} | |
This call is useful for accessing the information associated with a | |
@@ -2921,6 +2944,7 @@ | |
gracefully. | |
\subsection{Probe} | |
+\mpitermtitleindex{probe} | |
\begin{funcdef}{MPI\_IPROBE(source, tag, comm, flag, status)} | |
\funcarg{\IN}{source}{rank of source or \const{MPI\_ANY\_SOURCE} (integer)} | |
@@ -3135,6 +3159,8 @@ | |
\end{implementors} | |
\subsection{Matching Probe} | |
+\mpitermtitleindex{matching probe} | |
+\mpitermtitleindex{probe, matching} | |
\label{sec:matching-probe} | |
The function \mpifunc{MPI\_PROBE} checks for incoming messages without | |
@@ -3239,6 +3265,7 @@ | |
\mpifunc{MPI\_PROBE} and \mpifunc{MPI\_IPROBE}. | |
\subsection{Matched Receives} | |
+\mpitermtitleindex{matched receives} | |
\label{sec:matched-receive} | |
The functions \mpifunc{MPI\_MRECV} and \mpifunc{MPI\_IMRECV} receive | |
@@ -3335,6 +3362,7 @@ | |
\end{implementors} | |
\subsection{Cancel} | |
+\mpitermtitleindex{cancel} | |
\label{sec:cancel} | |
\begin{funcdef}{MPI\_CANCEL(request)} | |
@@ -3447,12 +3475,13 @@ | |
\section{Persistent Communication Requests} | |
+\mpitermtitleindex{persistent communication requests} | |
\label{sec:pt2pt-persistent} | |
Often a communication with the same argument list is repeatedly | |
executed within the inner loop of a parallel computation. In such a | |
situation, it may be possible to optimize the communication by | |
-binding the list of communication arguments to a \mpiterm{persistent} communication | |
+binding the list of communication arguments to a \mpitermdefni{persistent} communication | |
request once and, then, repeatedly using | |
the request to initiate and complete messages. The | |
persistent request thus created can be thought of as a | |
@@ -3687,9 +3716,9 @@ | |
rule is followed, then the functions | |
described in this section will be invoked | |
in a sequence of the form, | |
-\( | |
+\[ | |
\textbf{Create \ (Start \ Complete)$^*$ \ Free} | |
-\) | |
+\] | |
where | |
$*$ indicates zero or more repetitions. | |
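The Create (Start Complete)$^*$ Free sequence can be sketched with a persistent send request (the function name, iteration count, and buffer arguments are illustrative):

```c
#include <mpi.h>

/* Persistent-request life cycle: Create (Start Complete)* Free.
   Assumes buf holds count doubles and dest is a valid rank in comm. */
void persistent_sketch(double *buf, int count, int dest, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Status  status;
    int iter;

    MPI_Send_init(buf, count, MPI_DOUBLE, dest, 0, comm, &req); /* Create   */
    for (iter = 0; iter < 100; iter++) {
        /* ... update buf for this iteration ... */
        MPI_Start(&req);                                        /* Start    */
        MPI_Wait(&req, &status);                                /* Complete */
    }
    MPI_Request_free(&req);                                     /* Free     */
}
```

Binding the argument list once and reusing it amortizes the setup cost over every iteration of the loop.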
If the same communication object is used in several concurrent | |
@@ -3725,9 +3754,10 @@ | |
\section{Send-Receive} | |
+\mpitermtitleindex{send-receive} | |
\label{sec:pt2pt-sendrecv} | |
-The \mpiterm{send-receive} operations combine in one call the sending of a | |
+The \mpitermdefni{send-receive} operations combine in one call the sending of a | |
message to one destination and the receiving of another message, from | |
another process. The two (source and destination) are possibly the same. | |
A send-receive operation is | |
@@ -3774,7 +3804,7 @@ | |
\cdeclindex{MPI\_Status}% | |
\mpibind{MPI\_Sendrecv(const~void~*sendbuf, int~sendcount, MPI\_Datatype~sendtype, int~dest, int~sendtag, void~*recvbuf, int~recvcount, MPI\_Datatype~recvtype, int~source, int~recvtag, MPI\_Comm~comm, MPI\_Status~*status)} | |
-\mpifnewbind{MPI\_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status, ierror) \fargs TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf \\ TYPE(*), DIMENSION(..) :: recvbuf \\ INTEGER, INTENT(IN) :: sendcount, dest, sendtag, recvcount, source, recvtag \\ TYPE(MPI\_Datatype), INTENT(IN) :: sendtype, recvtype \\ TYPE(MPI\_Comm), INTENT(IN) :: comm \\ TYPE(MPI\_Status) :: status \\ INTEGER, OPTIONAL, INTENT(OUT) :: ierror} | |
+\mpifnewbind{MPI\_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status, ierror) \fargs TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf \\ TYPE(*), DIMENSION(..) :: recvbuf \\ INTEGER, INTENT(IN) :: sendcount, dest, sendtag, recvcount, source,\\\ \ \ \ recvtag \\ TYPE(MPI\_Datatype), INTENT(IN) :: sendtype, recvtype \\ TYPE(MPI\_Comm), INTENT(IN) :: comm \\ TYPE(MPI\_Status) :: status \\ INTEGER, OPTIONAL, INTENT(OUT) :: ierror} | |
\mpifbind{MPI\_SENDRECV(SENDBUF, SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVBUF, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS, IERROR)\fargs <type> SENDBUF(*), RECVBUF(*) \\ INTEGER SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVCOUNT, RECVTYPE,\\\ \ \ \ SOURCE, RECVTAG, COMM, STATUS(MPI\_STATUS\_SIZE), IERROR} | |
\mpicppemptybind{MPI::Comm::Sendrecv(const void~*sendbuf, int~sendcount, const MPI::Datatype\&~sendtype, int~dest, int~sendtag, void~*recvbuf, int~recvcount, const~MPI::Datatype\&~recvtype, int~source, int~recvtag, MPI::Status\&~status) const}{void} | |
\mpicppemptybind{MPI::Comm::Sendrecv(const void~*sendbuf, int~sendcount, const MPI::Datatype\&~sendtype, int~dest, int~sendtag, void~*recvbuf, int~recvcount, const~MPI::Datatype\&~recvtype, int~source, int~recvtag) const}{void} | |
@@ -3826,6 +3856,7 @@ | |
\end{implementors} | |
\section{Null Processes} | |
+\mpitermtitleindex{null processes} | |
\label{sec:pt2pt-nullproc} | |
In many instances, it is convenient to specify a ``dummy'' source or | |
Index: chap-binding/binding-2.tex | |
=================================================================== | |
--- chap-binding/binding-2.tex (revision 2030) | |
+++ chap-binding/binding-2.tex (working copy) | |
@@ -1,8 +1,11 @@ | |
\chapter{Language Bindings} | |
+\mpitermtitleindex{language binding} | |
\label{sec:binding-2} | |
\label{chap:binding-2} | |
\section{Fortran Support} | |
+\mpitermtitleindex{Fortran support} | |
+\mpitermtitleindex{Fortran -- language binding} | |
\subsection{Overview} | |
\label{f90:overview} | |
@@ -124,6 +127,7 @@ | |
Section~\ref{sec:f90-problems:comparison-with-C} compares the Fortran problems with those in C. | |
\subsection{Fortran Support Through the \texorpdfstring{\code{mpi\_f08}}{mpi\_f08} Module} | |
+\mpitermtitleindex{mpi\_f08 module -- Fortran support} | |
\label{f90:mpif08} | |
An \MPI/ implementation providing a Fortran interface must | |
@@ -290,6 +294,7 @@ | |
\end{rationale} | |