CPSC 424: Parallel Programming. Robert Bjornson, Nicholas Carriero, David Gelernter. T-Th, 1-2:15. ("teacher of record" = Gelernter)

Our real topic is "coordinated programming" or "software ensembles": we're interested in all programs that incorporate many asynchronous communicating processes. These might be "parallel programs" (conventionally, software ensembles designed for high performance), "distributed programs" (whose processes run on physically widespread nodes; a client & server on the Web form an ad hoc "distributed program"), or ensembles that use multiple asynchronous processes in the interests of clarity, modularity, the gluing-together of heterogeneous modules (written in different languages, for example), and so on.

The course covers two separate but closely related topics: (1) parallel, distributed and "coordinated" programs in principle: how and why such programs are developed, and how and why the field itself developed historically.

(2) How to write such programs in practice. The course will be taught by Robert Bjornson, Nicholas Carriero and David Gelernter; Bjornson and Carriero will focus mainly on (2), Gelernter on (1).

For the hands-on part of the course, students will download several systems and develop a series of programs on their own machines. A true parallel or distributed program executes on many processors simultaneously, but such programs are logically identical to others in which many processes or tasks are multiplexed on one processor.

Assignments will be based mainly on Python together with Network Spaces (NWS), with the help of Twisted. Python is a high-level interpreted language, a "rapid application development" environment. Network Spaces is a comparably high-level coordination language; NWS, that is, makes it possible to write distributed and parallel applications using Python (and other programming languages). NWS is a light-weight, open source system that supports multi-language applications & includes a Web interface for program visualization & monitoring. Twisted is a network programming framework written in Python that makes it convenient to create and support many concurrently-executing processes. All these environments run on Windows, Mac and Linux.
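The core NWS idea, named variables in a shared space that processes coordinate through with blocking operations, can be illustrated in miniature. The toy below is a single-process analogue written with the standard library; the operation names (store, fetch, find) follow NWS's vocabulary, but this is not the NWS API, which talks to a separate network server.

```python
# A toy, in-process analogue of NWS-style coordination (NOT the real
# NWS API): named variables in a shared space, with a blocking,
# destructive fetch and a blocking, non-destructive find.
import threading
from collections import defaultdict, deque

class ToyWorkSpace:
    """Single-process stand-in for a network space."""
    def __init__(self):
        self._vars = defaultdict(deque)
        self._cond = threading.Condition()

    def store(self, name, value):
        """Add a value under `name`; wake any waiting readers."""
        with self._cond:
            self._vars[name].append(value)
            self._cond.notify_all()

    def fetch(self, name):
        """Block until `name` has a value; remove and return it."""
        with self._cond:
            while not self._vars[name]:
                self._cond.wait()
            return self._vars[name].popleft()

    def find(self, name):
        """Block until `name` has a value; return it without removing."""
        with self._cond:
            while not self._vars[name]:
                self._cond.wait()
            return self._vars[name][0]

ws = ToyWorkSpace()
ws.store("greeting", "hello")
assert ws.find("greeting") == "hello"   # still in the space
assert ws.fetch("greeting") == "hello"  # now removed
```

The blocking semantics are what make such a space a coordination mechanism: a consumer that fetches a not-yet-stored variable simply waits until some producer stores it.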

The course has 4 phases, with 1 and 4 short, 2 and 3 longer. 1: Introduction. 2: Design principles in parallel and distributed programming. 3: Parallel and distributed programming in practice. 4: Parallel languages, hardware platforms, adaptive parallelism, parallel web servers and related topics: history, current practice and possible futures. During phase 2, lectures of 55-60 minutes will be followed by practical topics for 15-20 minutes. During phase 3, lectures of 55-60 minutes will be followed by phase 4 topics for 15-20 minutes. In other words: the first part of phase 3 runs concurrently with phase 2; the first part of phase 4 runs concurrently with phase 3. Parallelism in action.

Prerequisite: students must be competent programmers.

Reading for part 2 will be from Carriero and Gelernter, How to Write Parallel Programs, which will be available either for copying or downloading. Reading for part 3 (users' and programming manuals) will be downloaded with the systems themselves.

Requirements: a series of programming assignments using Python-NWS and Twisted and a final exam.

Schedule

Jan 16, 18: Introduction. (Our only goal on 1/16 is to describe the course and answer questions; on 1/18, introduction to the subject matter.)

Jan 23–Feb 15. Main focus: design principles for parallel and distributed programming; principles of performance analysis in parallel programming. (When you convert an ordinary serial “uniprocessor” program for parallel execution on many processors, how do you measure performance gain?) Secondary focus: introduction to Python and Twisted, including practical details (where to find the systems & documentation, etc). We plan to introduce a simple programming exercise & follow it during the course from initial serial version through several types of parallel solution.
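The standard answer to the performance-gain question is speedup (serial runtime divided by parallel runtime) and efficiency (speedup divided by the number of processors). A minimal sketch, with made-up illustrative timings rather than real measurements:

```python
# Speedup and efficiency: the two standard measures of performance
# gain when a serial program is converted for parallel execution.
# The runtimes below are hypothetical numbers, not measurements.

def speedup(t_serial, t_parallel):
    """How many times faster the parallel version runs."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Fraction of the ideal p-fold speedup actually achieved."""
    return speedup(t_serial, t_parallel) / p

t1, t8 = 100.0, 20.0         # hypothetical: 100 s serial, 20 s on 8 workers
s = speedup(t1, t8)          # 5.0: five times faster...
e = efficiency(t1, t8, 8)    # 0.625: ...but only 62.5% of the ideal 8x
print(s, e)
```

The gap between speedup and the ideal p-fold gain (here 5x achieved vs. 8x ideal) is exactly what the performance-analysis lectures will account for: serial fractions, communication and coordination overhead, and load imbalance.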

Feb 20-April 12 (NB 2 wks off in March). Main focus: The NWS coordination language; implementing the design frameworks discussed earlier; the technique called “do it yourself data parallelism”; analysis of real-world parallel apps; a look at message-passing coordination languages (such as MPI and PVM); point-to-point and client-server software architectures. Secondary focus: the evolution of distributed and parallel programming languages, multi-processor machines and the internet and Web.
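The master/worker structure underlying "do it yourself data parallelism" can be sketched with standard-library queues standing in for NWS operations; the chunking scheme and the work function below are illustrative assumptions, not the course's assignments.

```python
# A sketch of master/worker data parallelism using standard-library
# queues (in a course assignment, NWS operations would play this role).
# Workers repeatedly grab a chunk of data, compute a partial result,
# and hand it back; the master combines the partials.
import threading
import queue

def worker(tasks, results):
    while True:
        chunk = tasks.get()
        if chunk is None:                       # poison pill: stop
            break
        results.put(sum(x * x for x in chunk))  # stand-in computation

tasks, results = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for t in threads:
    t.start()

# Master: split the data into chunks, hand them out, collect partials.
data = list(range(100))
chunks = [data[i:i + 10] for i in range(0, len(data), 10)]
for chunk in chunks:
    tasks.put(chunk)
for _ in threads:
    tasks.put(None)                             # one pill per worker
for t in threads:
    t.join()

total = sum(results.get() for _ in chunks)
assert total == sum(x * x for x in data)
print(total)
```

Note the load-balancing property: workers pull chunks as they finish, so a slow worker simply takes fewer chunks. This self-scheduling behavior is the heart of the "do it yourself" approach.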

April 17-26: "Adaptive parallelism" (apps that gain and lose processors as they run); parallel web servers and parallel large-scale search; future directions (including the "empty computer model" in which all digital assets are stored in distributed form on multiple internet servers, and others).
