MPI: The Complete Reference
Contents
Introduction
The Goals of MPI
Who Should Use This Standard?
What Platforms are Targets for Implementation?
What is Included in MPI?
What is Not Included in MPI?
Version of MPI
MPI Conventions and Design Choices
Document Notation
Procedure Specification
Semantic Terms
Processes
Types of MPI Calls
Opaque Objects
Named Constants
Choice Arguments
Language Binding
Fortran 77 Binding Issues
C Binding Issues
Point-to-Point Communication
Introduction and Overview
Blocking Send and Receive Operations
Blocking Send
Send Buffer and Message Data
Message Envelope
Comments on Send
Blocking Receive
Receive Buffer
Message Selection
Return Status
Comments on Receive
Datatype Matching and Data Conversion
Type Matching Rules
Type MPI_CHARACTER
Data Conversion
Comments on Data Conversion
Semantics of Blocking Point-to-Point
Buffering and Safety
Multithreading
Order
Progress
Fairness
Example - Jacobi iteration
Send-Receive
Null Processes
Nonblocking Communication
Request Objects
Posting Operations
Completion Operations
Examples
Freeing Requests
Semantics of Nonblocking Communications
Order
Progress
Fairness
Buffering and Resource Limitations
Comments on Semantics of Nonblocking Communications
Multiple Completions
Probe and Cancel
Persistent Communication Requests
Communication-Complete Calls with Null Request Handles
Communication Modes
Blocking Calls
Nonblocking Calls
Persistent Requests
Buffer Allocation and Usage
Model Implementation of Buffered Mode
Comments on Communication Modes
User-Defined Datatypes and Packing
Introduction
Introduction to User-Defined Datatypes
Datatype Constructors
Contiguous
Vector
Hvector
Indexed
Hindexed
Struct
Use of Derived Datatypes
Commit
Deallocation
Relation to count
Type Matching
Message Length
Address Function
Lower-bound and Upper-bound Markers
Absolute Addresses
Pack and Unpack
Derived Datatypes vs Pack/Unpack
Collective Communications
Introduction and Overview
Operational Details
Communicator Argument
Barrier Synchronization
Broadcast
Example Using MPI_BCAST
Gather
Examples Using MPI_GATHER
Gather: Vector Variant
Examples Using MPI_GATHERV
Scatter
An Example Using MPI_SCATTER
Scatter: Vector Variant
Examples Using MPI_SCATTERV
Gather to All
An Example Using MPI_ALLGATHER
Gather to All: Vector Variant
All to All Scatter/Gather
All to All: Vector Variant
Global Reduction Operations
Reduce
Predefined Reduce Operations
MINLOC and MAXLOC
All Reduce
Reduce-Scatter
Scan
User-Defined Operations for Reduce and Scan
The Semantics of Collective Communications
Communicators
Introduction
Division of Processes
Avoiding Message Conflicts Between Modules
Extensibility by Users
Safety
Overview
Groups
Communicator
Communication Domains
Compatibility with Current Practice
Group Management
Group Accessors
Group Constructors
Group Destructors
Communicator Management
Communicator Accessors
Communicator Constructors
Communicator Destructor
Safe Parallel Libraries
Caching
Introduction
Caching Functions
Intercommunication
Introduction
Intercommunicator Accessors
Intercommunicator Constructors
Process Topologies
Introduction
Virtual Topologies
Overlapping Topologies
Embedding in MPI
Cartesian Topology Functions
Cartesian Constructor Function
Cartesian Convenience Function: MPI_DIMS_CREATE
Cartesian Inquiry Functions
Cartesian Translator Functions
Cartesian Shift Function
Cartesian Partition Function
Cartesian Low-level Functions
Graph Topology Functions
Graph Constructor Function
Graph Inquiry Functions
Graph Information Functions
Low-level Graph Functions
Topology Inquiry Functions
An Application Example
Environmental Management
Implementation Information
Environmental Inquiries
Tag Values
Host Rank
I/O Rank
Clock Synchronization
Timers and Synchronization
Initialization and Exit
Error Handling
Error Handlers
Error Codes
Interaction with Executing Environment
Independence of Basic Runtime Routines
Interaction with Signals in POSIX
The MPI Profiling Interface
Requirements
Discussion
Logic of the Design
Miscellaneous Control of Profiling
Examples
Profiler Implementation
MPI Library Implementation
Systems with Weak Symbols
Systems without Weak Symbols
Complications
Multiple Counting
Linker Oddities
Multiple Levels of Interception
Conclusions
Design Issues
Why is MPI so big?
Should we be concerned about the size of MPI?
Why does MPI not guarantee buffering?
Portable Programming with MPI
Dependency on Buffering
Collective Communication and Synchronization
Ambiguous Communications and Portability
Heterogeneous Computing with MPI
MPI Implementations
Extensions to MPI
References
About this document ...
Jack Dongarra
Fri Sep 1 06:16:55 EDT 1995