More information is attainable faster.


A message-passing library that makes it possible to extract optimum performance from both workstation and personal computer clusters, as well as from large massively parallel supercomputers, has been developed by researchers at the U.S. Department of Energy's Ames (Iowa) Laboratory. The library, called MP_Lite, supports and enhances the basic capabilities that most software programs require to communicate between computers.

Although MP_Lite could be scaled up easily, its objective is not to provide all the capabilities of the full Message Passing Interface (MPI) standard. MPI is a widely used model that standardizes the syntax and functionality of message-passing programs, providing a uniform interface between the application and the underlying communication network. Parallel libraries that implement the full MPI standard ease programming by reducing duplicated work, such as defining consistent data structures, data layouts, and methods that implement key algorithms.

"Our goal with MP_Lite is to illustrate how to get better performance in a portable and user-friendly manner and to understand exactly where any inefficiencies in the MPI standard may be coming from," explains David Turner, an Ames Laboratory assistant scientist and the principal investigator on the MP_Lite project. He notes that the MP_Lite library is smaller and much easier to work with than full MPI libraries. "It's ideal for performing message-passing research that may eventually be used to improve full MPI implementations and possibly influence the MPI standard."

Turner says that it was "mainly frustration" that led him to...
