Block Mapping - A Compiler Driven I/O Management Study
Wendy Zhang
Department of Computer Information Systems
Southern University at New Orleans
New Orleans, LA, U.S.A.
Ernst L. Leiss
Computer Science Department
University of Houston
Houston, TX, U.S.A.
Most scientific programs have input and output data sets too large for main memory, requiring either out-of-core programming or reliance on virtual memory management (VMM). VMM is not an effective approach because it easily results in a substantial performance reduction. In contrast, compiler-driven I/O management allows a program's data sets to be retrieved in parts, called blocks; the out-of-core program defines the partitioning of the data sets and controls the I/O. This study shows that block mapping I/O management may improve the performance of out-of-core problems by one order of magnitude or more. It also shows that the block size chosen to partition the data sets plays an important role in compiler-driven I/O management.
KEY WORDS: I/O management, block mapping, VMM, out-of-core, compiler
1. Introduction
Most significant scientific programs’ data sets exceed the available memory; thus if one wants to avoid virtual memory management (which often suffers a severe performance penalty), these programs must be executed out-of-core, an approach that requires the explicit movement of data between main memory and secondary storage.
In the last twenty years, processor speed has increased faster than memory speed and disk access speed. Thus, it is important to use the memory hierarchy efficiently and to construct I/O-minimal programs.
A Virtual Memory Management (VMM) system combines special-purpose memory-management hardware and operating system software to provide the illusion of an unlimited amount of main memory [1]. VMM systems use a static policy for storage management. As pointed out in [3], this uniform-page automatic transfer method is convenient to use, but the programmers are no longer fully in control of the program. VMM systems perform poorly on many scientific applications [6].
COMANCHE (an acronym for COmpiler MANaged caCHE) is a compiler-driven runtime I/O management system [6]: a compiler combined with a user-level runtime system that replaces VMM for applications with out-of-core data sets. More details about COMANCHE are given in Section 2.
Blocking or tiling [4][8] is a well-known technique that improves the data locality of numerical algorithms. Blocking can be used to achieve locality for different levels of the memory hierarchy. [7][2] describe a locality optimization algorithm that applies a combination of loop interchange, skewing, reversal, and tiling to improve the data locality of loop nests. [3] describes an I/O-minimal algorithm, one for which, given a fixed amount of space, the number of retrievals of blocks from external memory is minimal. [3] also points out that the notion of I/O minimality depends on the block size available in memory.
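To illustrate the general idea (this sketch is ours, not Comanche-specific), a loop nest can be tiled so that each B x B block of an array is fully reused while it resides in fast memory, instead of striding across whole rows and columns; the tile size B is a tunable parameter:

```c
#include <stddef.h>

/* Blocked (tiled) matrix transpose: dst = src^T for an n x n array.
   Processing the array in B x B tiles keeps each tile resident in
   cache (or, for out-of-core data, in main memory) while it is
   reused, rather than touching every row and column per element. */
void transpose_blocked(double *dst, const double *src, size_t n, size_t B)
{
    for (size_t ii = 0; ii < n; ii += B)          /* tile row    */
        for (size_t jj = 0; jj < n; jj += B)      /* tile column */
            for (size_t i = ii; i < ii + B && i < n; i++)
                for (size_t j = jj; j < jj + B && j < n; j++)
                    dst[j * n + i] = src[i * n + j];
}
```

The same restructuring applies to matrix multiplication, which is the subject of Section 3.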
Out-of-core matrix multiplication is used in this paper to study the importance of blocking. An I/O-minimal algorithm for matrix multiplication is sketched, which forms the basis of this work. In Section 3, we discuss the minimal amount of space required to carry out the computation as well as the I/O scheme used with the COMANCHE system.
In testing the proposed I/O minimal matrix multiplication algorithm, it has become clear that the size of the blocks for the sub-matrices plays a very important role. In Section 4, we will compare the performance for different block sizes. In Section 5, we draw conclusions from the results of our study.
2. COMANCHE
Comanche [6] is a runtime system. It is written in C and uses the standard I/O library calls fopen, fclose, fread, fwrite, and fseek. The current Comanche system runs under the RedHat 5.0 Linux operating system on a PC with a PentiumPro (160 MHz or faster) processor and 64 MB or more of RAM.
The standard entity in the Comanche runtime is a two-dimensional array. Higher dimensions can be supported by translating their accesses into two-dimensional accesses. Data are assumed to be in row-major layout on disk. The ideal array is a square matrix.
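Row-major layout means that element (i, j) of an array with n columns lies at linear index i*n + j on disk. A small helper sketching this mapping (our illustration, not Comanche code):

```c
#include <stddef.h>

/* Byte offset on disk of element (i, j) of a row-major array of
   doubles with n columns; suitable for fseek-style positioning. */
long rowmajor_offset(long i, long j, long n)
{
    return (i * n + j) * (long)sizeof(double);
}
```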
Comanche is based on a compiler-managed cache where the main memory accommodates the cache. The compiler has unique information about the data needs of the program, the size and shape of arrays, and the access patterns of the program. Note that it is this information that typically is not available to the operating system, which is customarily charged with the management of I/O calls. The out-of-core program is restructured to move data automatically between main memory and disk.
There are two structures declared in the Comanche system. One is a block structure used in buffers to keep a block of data.
typedef struct block_s
{
double *data;     /* block contents in memory                */
int flag;         /* read-only / write-only / read-write     */
int refcnt;       /* attach reference count                  */
int rowindex;     /* block row position within the matrix    */
int columnindex;  /* block column position within the matrix */
int nrow;         /* number of rows in this block            */
int ncol;         /* number of columns in this block         */
int blockno;      /* block number on disk                    */
} Block;
The other structure is a matrix structure. This structure holds the information of the matrix on disk and has buffers to hold several blocks in memory.
typedef struct array_s
{
int nblocks;      /* number of blocks in the matrix       */
int nelems;       /* number of elements in the matrix     */
int bsize;        /* block size                           */
int elesize;      /* element size in bytes                */
FILE *fp;         /* file holding the matrix on disk      */
long offset;      /* offset of the matrix within the file */
char *name;       /* file name                            */
int *map;         /* mapping from block number to buffer  */
int nbuffers;     /* number of in-memory buffers          */
Block **buffers;  /* buffers holding blocks in memory     */
int victim;       /* next candidate buffer for eviction   */
int flag;
int total_mem;    /* total memory in use                  */
} Matrix;
Besides these structures, there are several functions in Comanche. Two major functions are attach_block and release_block which are used to perform the block mapping.
When the data set is too large to fit into memory, VMM maps the array into a one-dimensional vector and cuts that vector up into fixed-size blocks (pages). Comanche, by contrast, either takes a row of the array and cuts it up into blocks, so that a block corresponds to (part of) a single row, or cuts the array up into sub-matrix blocks; the partitioning is under the control of the resulting out-of-core program. The attach_block and release_block functions are used to improve data locality and thereby achieve better performance than VMM.
The attach_block function tells the runtime system that a block is to be mapped into memory; an address to the cached data is then returned. The runtime system will not reclaim memory that has been attached as long as it remains attached, no matter how much time has passed since the block was last referenced. The out-of-core program manages the lifetime of mapped data and ensures that the number of attach operations will not over-fill the available memory before the data's subsequent release.
Read-only, write-only, and read-write flags are passed to these functions. If the flag is write-only, attach_block allocates the buffer space for that particular block; there is no need to read data from disk. If the flag permits writing, the runtime system must write the block back to disk when it is released.
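The contract described above can be sketched as a small self-contained model: a fixed pool of buffers with reference counts, where an attached block is pinned until released. The names, sizes, and logic below are our illustration of the protocol, not Comanche's actual implementation:

```c
#include <stdlib.h>

#define NBUFFERS 2
#define BSIZE    4            /* elements per block (toy size) */

/* One cached block: which disk block it holds and how many
   outstanding attaches pin it in memory. */
typedef struct {
    double data[BSIZE];
    int    blockno;           /* cached block number, -1 = free */
    int    refcnt;            /* > 0 means pinned               */
} Buf;

static Buf pool[NBUFFERS] = { { {0}, -1, 0 }, { {0}, -1, 0 } };

/* Pin a block in memory and return its buffer, or NULL if every
   buffer is already pinned (the caller attached too many blocks
   without releasing them -- the over-fill case the text warns of). */
double *attach_block(int blockno)
{
    for (int i = 0; i < NBUFFERS; i++)       /* already cached? */
        if (pool[i].blockno == blockno) {
            pool[i].refcnt++;
            return pool[i].data;
        }
    for (int i = 0; i < NBUFFERS; i++)       /* reuse an unpinned buffer */
        if (pool[i].refcnt == 0) {
            pool[i].blockno = blockno;       /* a real system reads from disk here */
            pool[i].refcnt = 1;
            return pool[i].data;
        }
    return NULL;                             /* pool over-full */
}

/* Unpin a block; once refcnt reaches zero its buffer may be reused
   (and, for writable blocks, written back to disk). */
void release_block(int blockno)
{
    for (int i = 0; i < NBUFFERS; i++)
        if (pool[i].blockno == blockno && pool[i].refcnt > 0)
            pool[i].refcnt--;
}
```

Note how the model enforces the key property: an attached block is never a victim, so the out-of-core program must balance attaches and releases to stay within the buffer pool.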
3. Block matrix multiplication
The object of our study of the Comanche runtime system is matrix multiplication. Assume that there are three matrices A, B, and C of size (N, N) on disk. We will multiply A by B and store the result in matrix C. Storing matrices A, B, and C in memory requires 3N^2 space. If N is large, the total amount of main memory available is less than the required 3N^2. Since the data cannot be entirely loaded into memory, the problem becomes out-of-core.
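To make the space requirement concrete (the figures here are our example, not the paper's): with N = 4096 and 8-byte doubles, the three matrices need 3 * 4096^2 * 8 bytes = 384 MiB, far beyond the 64 MB machine described in Section 2.

```c
#include <stdint.h>

/* Total bytes needed to hold A, B, and C (all N x N doubles)
   entirely in core: 3 * N^2 elements of sizeof(double) bytes. */
uint64_t core_bytes(uint64_t n)
{
    return 3 * n * n * (uint64_t)sizeof(double);
}
```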
The traditional way of coding this in C is something like this:
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
    {
        C[i][j] = 0.0;
        for (k = 0; k < N; k++)
            C[i][j] += A[i][k] * B[k][j];
    }