The following subroutines support the initialization and termination of data parallel tasks. They should be called on entry to and on exit from a data parallel task.
subroutine HPF_TASK_INIT ()
subroutine HPF_TASK_EXIT ()
Calling these routines is not mandatory, but it enables additional runtime checks. The routines can verify at run time that the tasks of the current context are really mapped to disjoint processor subgroups. Furthermore, at the end it can be verified that there are no pending messages between the tasks.
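As a minimal sketch, a task body might bracket its work with these two calls; the subroutine name and its contents are hypothetical:

```fortran
subroutine my_task ()         ! hypothetical data parallel task
   call HPF_TASK_INIT ()      ! optional: enables runtime checks
   ! ... data parallel work of this task ...
   call HPF_TASK_EXIT ()      ! optional: checks for pending messages
end subroutine my_task
```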
The following subroutines return the size (the number of data parallel tasks in the current context) and the rank of the calling task.
subroutine HPF_TASK_SIZE (size)
   integer, intent(out) :: size

subroutine HPF_TASK_RANK (rank)
   integer, intent(out) :: rank
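A task can use these routines, for example, to select task-specific work; the variable names are arbitrary:

```fortran
integer :: size, rank
call HPF_TASK_SIZE (size)   ! number of tasks in the current context
call HPF_TASK_RANK (rank)   ! rank of the calling task
```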
To send data (scalars, arrays, or array sections), the task identifier of the target task must be specified. The tag argument is currently ignored.
subroutine HPF_SEND (data, dest, tag, order)
   <type>, dimension <>, intent(in) :: data
   integer, intent(in) :: dest
   integer, intent(in), optional :: tag
   integer, dimension(:), intent(in), optional :: order
The ORDER argument must be of type integer, of rank one, and of size equal to the rank of DATA. Its elements must be a permutation of (1, ..., n), where n is the rank of DATA. If the ORDER argument is present, the axes of the data are permuted accordingly.
call HPF_SEND (data=arr, dest=pid, order=(/1, 3, 2/))
call HPF_SEND (data=TRANSPOSE (arr, order=(/1, 3, 2/)), dest=pid)
Receiving data is similar. Every send must have a matching receive. The tag argument is ignored.
subroutine HPF_RECV (data, source, tag)
   <type>, intent(out) :: data
   integer, intent(in), optional :: source
   integer, intent(in), optional :: tag
The source argument is optional; in this way it is possible to receive a message from an arbitrary task.
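The two forms can be sketched as follows; the buffer name and size are arbitrary:

```fortran
real, dimension (100) :: buf

! accept a message from any task (SOURCE omitted)
call HPF_RECV (data=buf)

! accept a message from task 0 only
call HPF_RECV (data=buf, source=0)
```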
The implementation of point-to-point communication between data parallel tasks results in communication between the processors of the two processor subgroups involved. If distributed data is exchanged, the mapping information must also be exchanged (descriptor exchange).
The following restrictions are given:
The following routines help to avoid the descriptor exchange when the same communication schedule is used several times.
subroutine HPF_SEND_INIT (data, dest, request, tag, order)
   integer, intent(in) :: dest
   <type>, intent(in) :: data
   integer, intent(out) :: request
   integer, intent(in), optional :: tag
   integer, dimension(:), intent(in), optional :: order

subroutine HPF_RECV_INIT (data, source, request, tag)
   integer, intent(in) :: source
   <type>, intent(out) :: data
   integer, intent(out) :: request
   integer, intent(in), optional :: tag

subroutine HPF_TASK_COMM (request)
   integer, intent(in) :: request
Note: The routines HPF_xxx_INIT and HPF_TASK_COMM are not available in local subroutines.
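A typical use, sketched here with hypothetical names and sizes, sets up the schedule once and then reuses it in every iteration, so the descriptor exchange happens only once:

```fortran
integer :: req, i
integer, parameter :: nsteps = 10      ! hypothetical iteration count
real, dimension (100, 100) :: a

call HPF_SEND_INIT (data=a, dest=1, request=req)   ! build schedule once
do i = 1, nsteps
   ! ... update a ...
   call HPF_TASK_COMM (req)   ! reuse schedule, no descriptor exchange
end do
```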
Collective communication as in MPI can also be useful for HPF tasks; in particular, the broadcast of data and the barrier have proved very useful. Note that the context of these operations is the current task context: a barrier synchronizes the tasks of the current context, not the processors within a task.
subroutine HPF_BCAST (data, root)
   <type>, intent(inout) :: data
   integer, intent(in), optional :: root

subroutine HPF_BARRIER ()
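For example, one task might distribute a parameter array to all tasks of the context and then synchronize; the array name and size are arbitrary:

```fortran
real, dimension (8) :: coeff

call HPF_BCAST (data=coeff, root=0)   ! task 0 broadcasts to all tasks
call HPF_BARRIER ()                   ! synchronize the tasks of the context
```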
Sending and receiving of distributed data must be assumed to be blocking. When shift operations are executed across a chain of tasks, or when two tasks exchange data, the sends and receives must be ordered correctly (e.g. even tasks send first and then receive, odd tasks receive first and then send) to prevent cyclic dependencies that may lead to deadlock. When a combined send-receive routine is used, the system takes care of these issues.
subroutine HPF_SEND_RECV (send_data, dest, recv_data, source, send_tag, recv_tag, order)
   integer, intent(in) :: dest, source
   <type>, intent(in) :: send_data
   <type>, intent(out) :: recv_data
   integer, intent(in), optional :: send_tag
   integer, intent(in), optional :: recv_tag
   integer, dimension(:), intent(in), optional :: order
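A cyclic shift across the tasks of the current context can then be written without manual ordering of sends and receives; this sketch assumes zero-based task ranks, and the buffer names are arbitrary:

```fortran
integer :: rank, size, left, right
real, dimension (100) :: out_buf, in_buf

call HPF_TASK_SIZE (size)
call HPF_TASK_RANK (rank)
right = mod (rank + 1, size)          ! neighbor in shift direction
left  = mod (rank - 1 + size, size)   ! neighbor in opposite direction

! every task sends to the right and receives from the left
call HPF_SEND_RECV (send_data=out_buf, dest=right, &
                    recv_data=in_buf, source=left)
```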