= notes =
# first assignment (MPI) handed out next week
# accounts for the UMIACS bug cluster and for the SMP will be emailed next week
# check the readings page for the question-submission assignment
#: 2-4 questions
#: due by 6 PM the day before lecture

= PVM =
== provides a simple, free, portable parallel environment ==
== runs on everything ==
: parallel hardware: SMPs, MPPs, vector machines
: networks of workstations: ATM, Ethernet
::* UNIX machines and PCs running the Win32 API
: works on a heterogeneous collection of machines
::* handles type conversion as needed

== provides two things ==
: message passing library
::* point-to-point messages
::* synchronization: barriers, reductions
: OS support
::* process creation (pvm_spawn)


= PVM environment (UNIX) =
== one PVMD (daemon) per machine ==
: all processes communicate through pvmd (by default)

== any number of application processes per node ==

= PVM message passing =
== all messages have tags ==
: an integer to identify the message
: defined by the user

== messages are constructed, then sent ==
: pvm_pk{int,char,float}(*var, count, stride)
: pvm_unpk{int,char,float} to unpack

== all processes are named based on task ids (tids) ==
: local/remote processes are the same

== primary message passing functions ==
: pvm_send(tid,tag)
: pvm_recv(tid, tag): tid identifies the sender; a tid of -1 matches a message from any task
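
A minimal send/receive sketch (not from the lecture; the function name exchange, the tag value 42, and the peer variable are invented for illustration):

<pre>
#include <pvm3.h>

/* Pack an int array, send it with a user-chosen tag, then block for
   the reply. Assumes `peer` holds the other task's tid. */
void exchange(int peer)
{
    int data[4] = {1, 2, 3, 4};
    int tag = 42;                  /* user-chosen message tag */

    pvm_initsend(PvmDataDefault);  /* start a fresh send buffer */
    pvm_pkint(data, 4, 1);         /* pack 4 ints, stride 1 (no gap) */
    pvm_send(peer, tag);           /* point-to-point send */

    pvm_recv(peer, tag);           /* block until a matching message arrives */
    pvm_upkint(data, 4, 1);        /* unpack the reply in the same order */
}
</pre>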


= PVM process control =
== creating a process ==
: pvm_spawn(task,argv,flag,where,ntask,tids)
: task is name of program to start
: flag and where provide control of where tasks are started
: ntask determines how many copies are started
: program must be installed on each target machine
: returns number of tasks actually started
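
A hedged sketch of the call (the program name "worker" and the wrapper function are invented; PvmTaskDefault is the standard flag that lets PVM pick the hosts, so the where argument is ignored):

<pre>
#include <stdio.h>
#include <pvm3.h>

/* Start 4 copies of a hypothetical "worker" binary anywhere in the
   virtual machine; "worker" must be installed on every target host. */
int start_workers(int *tids)
{
    int started = pvm_spawn("worker", NULL, PvmTaskDefault, "", 4, tids);
    if (started < 4)
        fprintf(stderr, "only %d of 4 tasks started\n", started);
    return started;                /* number of tasks actually started */
}
</pre>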

== ending a task ==
: pvm_exit()
: does not kill the process; it only detaches it from the PVM virtual machine

== info functions ==
: pvm_mytid(): returns the calling process's task id


= PVM group operations =
== group is the unit of communication ==
: a collection of one or more processes
: processes join group with pvm_joingroup("<group name>")
: each process in the group has a unique instance id
::* pvm_joingroup returns it; pvm_gettid("<group name>", instance) maps an instance id back to a tid

== barrier ==
: can involve a subset of the processes in the group
: pvm_barrier("<group name>", count)

== reduction operations ==
: pvm_reduce( void (*func)(), void *data, int count, int datatype, int msgtag, char *group, int rootinst)
::* result is returned to rootinst node
::* does not block (returns once the local buffer is valid; it does not wait for the root to receive the result)
: pre-defined funcs: PvmMin, PvmMax, PvmSum, PvmProduct
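
A sketch of a global sum using the predefined PvmSum (the group name "workers", the tag 7, and the group size NPROCS are invented for illustration):

<pre>
#include <pvm3.h>

#define NPROCS 4   /* assumed group size */

/* Every member of group "workers" contributes one int; the member
   with instance number 0 (rootinst) receives the sum in `value`. */
int global_sum(int value)
{
    pvm_joingroup("workers");
    pvm_barrier("workers", NPROCS);   /* wait until all members joined */
    pvm_reduce(PvmSum, &value, 1, PVM_INT, 7, "workers", 0);
    /* only instance 0's buffer is guaranteed to hold the sum */
    return value;
}
</pre>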


= PVM performance issues =
== messages have to go through PVMD ==
: the direct-route option avoids this extra hop (see the sketch below)
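
The request itself is a single standard call, sketched here inside a wrapper function:

<pre>
#include <pvm3.h>

void use_direct_routing(void)
{
    /* Ask for direct task-to-task TCP links so messages bypass the
       pvmd hop; PVM falls back to daemon routing if a link fails. */
    pvm_setopt(PvmRoute, PvmRouteDirect);
}
</pre>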

== packing messages ==
: semantics imply a copy (at least)
: extra function call to pack messages

== heterogeneous support ==
: information is sent in a machine-independent format
: has a short-circuit option for known homogeneous communication
::* data is then passed in native format

= sample PVM program =
Fragment from main:
<pre>
/* fragment from main: message[], tids[], friendTid, okSpawn, i,
   MESSAGESIZE, ITERATIONS, MYNAME, and msgid are assumed to be
   declared/defined earlier */
myGroupNum = pvm_joingroup("ping-pong");  /* returns instance number */
mytid = pvm_mytid();
if (myGroupNum == 0) {                    /* first member: spawn a partner */
    pvm_catchout(stdout);                 /* collect children's output */
    okSpawn = pvm_spawn(MYNAME, argv, 0, "", 1, &friendTid);
    if (okSpawn != 1) {
        printf("can't spawn\n");
        pvm_exit();
        exit(1);
    }
} else {                                  /* spawned copy: parent is the friend */
    friendTid = pvm_parent();
    tids[0] = friendTid;
    tids[1] = mytid;
}
pvm_barrier("ping-pong", 2);              /* wait until both have joined */

if (myGroupNum == 0) {
    for (i = 0; i < MESSAGESIZE; i++) {
        message[i] = '1';
    }
}

for (i = 0; i < ITERATIONS; i++) {
    if (myGroupNum == 0) {                /* ping: send, then await the echo */
        pvm_initsend(PvmDataDefault);
        pvm_pkint(message, MESSAGESIZE, 1);  /* stride 1 indicates no gap */
        pvm_send(friendTid, msgid);
        pvm_recv(friendTid, msgid);
        pvm_upkint(message, MESSAGESIZE, 1);
    } else {                              /* pong: receive, then echo back */
        pvm_recv(friendTid, msgid);
        pvm_upkint(message, MESSAGESIZE, 1);
        pvm_initsend(PvmDataDefault);
        pvm_pkint(message, MESSAGESIZE, 1);
        pvm_send(friendTid, msgid);
    }
}
pvm_exit();
exit(0);
</pre>