Search results for: MPI library

Number of results: 24871

2005
Horacio González-Vélez

This is an initial case study exploring the application of algorithmic skeletons to abstract low-level interprocess communication in MPI. The main purpose is to illustrate the competitive performance of the skeletal approach compared with pure MPI, whilst providing an abstraction with reusability advantages. This initial work involves the implementation ...
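As a rough illustration of the skeletal idea (not this paper's actual library; the function and parameter names below are made up), a map-style skeleton can hide MPI_Scatter/MPI_Gather behind a single call:

    #include <mpi.h>

    /* Hypothetical "map" skeleton: scatter an array from the root, apply f on
     * each rank's slice, gather results back on the root. Callers never touch
     * MPI_Scatter/MPI_Gather directly. Assumes n divides evenly, for brevity. */
    static void skeleton_map(double (*f)(double), double *in, double *out,
                             int n, MPI_Comm comm)
    {
        int size;
        MPI_Comm_size(comm, &size);

        int chunk = n / size;
        double local[chunk];

        MPI_Scatter(in, chunk, MPI_DOUBLE, local, chunk, MPI_DOUBLE, 0, comm);
        for (int i = 0; i < chunk; i++)
            local[i] = f(local[i]);
        MPI_Gather(local, chunk, MPI_DOUBLE, out, chunk, MPI_DOUBLE, 0, comm);
    }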

2004
W. Gropp

We describe an architecture for the runtime environment for parallel applications as a prelude to describing how parallel applications might interface to their environment in a portable way. We propose extensions to the Message-Passing Interface (MPI) Standard that provide for dynamic process management, including spawning of new processes by a running application and connection to existing pro...
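The dynamic process management proposed here later appeared in the MPI-2 standard; a minimal sketch using the standard MPI_Comm_spawn call (the ./worker executable name is hypothetical) looks like this:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Spawn 4 copies of a hypothetical "worker" executable at runtime;
         * the resulting intercommunicator links parent and children. */
        MPI_Comm workers;
        int errcodes[4];
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &workers, errcodes);

        /* Broadcast a parameter to the spawned group over the
         * intercommunicator (parent side uses MPI_ROOT). */
        int param = 42;
        MPI_Bcast(&param, 1, MPI_INT, MPI_ROOT, workers);

        MPI_Finalize();
        return 0;
    }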

2002
Tomas Plachetka

PVM (currently version 3.4), as well as many current MPI implementations, forces application programmers to use active polling (also known as busy waiting) in larger parallel programs. This serious problem is related to the thread-unsafety of these communication libraries. While the MPI specification is very careful in this respect, the implementations are not. We present a new mechanism of interrup...
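For context, the active polling criticized here looks roughly like the first routine below, which spins on MPI_Iprobe and burns a CPU core; the blocking MPI_Recv in the second routine yields the same result and is what a thread-safe or interrupt-driven library lets a program rely on (both routines are illustrative sketches, not the paper's mechanism):

    #include <mpi.h>

    /* Busy-waiting style: repeatedly probe until a message shows up. */
    void recv_polling(int *buf, MPI_Comm comm)
    {
        int flag = 0;
        MPI_Status status;
        while (!flag)   /* spins, wasting a CPU core */
            MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &flag, &status);
        MPI_Recv(buf, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
                 comm, MPI_STATUS_IGNORE);
    }

    /* Blocking style: the library may sleep until the message arrives. */
    void recv_blocking(int *buf, MPI_Comm comm)
    {
        MPI_Recv(buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 comm, MPI_STATUS_IGNORE);
    }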

Journal: J. Parallel Distrib. Comput., 2005
Justin Gus Hurwitz Wu-chun Feng

Recent work with 10-Gigabit Ethernet (10GbE) network adapters has demonstrated good performance in TCP/IP-based local- and wide-area networks (LANs and WANs). In the present work we evaluate host-based 10GbE adapters in a system-area network (SAN) in support of a cluster. This evaluation focuses on the performance of the message-passing interface (MPI) when running over a 10GbE interconnec...
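A typical micro-benchmark behind such an evaluation is an MPI ping-pong between two ranks; a minimal sketch (not the authors' actual harness; message size and iteration count are arbitrary) is:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Ping-pong between ranks 0 and 1: times round trips for a fixed-size
     * message, from which latency and bandwidth are derived. */
    int main(int argc, char **argv)
    {
        enum { SIZE = 1 << 20, ITERS = 100 };
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        char *buf = malloc(SIZE);

        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("avg round trip: %g us\n", (t1 - t0) / ITERS * 1e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }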

1995
William Gropp Ewing L. Lusk

We describe an architecture for the runtime environment for parallel applications as a prelude to describing how parallel applications might interface to their environment in a portable way. We propose extensions to the Message-Passing Interface (MPI) Standard that provide for dynamic process management, including spawning of new processes by a running application and connection to existing proces...

1999
Rolf Rabenseifner

This paper presents an automatic counter instrumentation and profiling module added to the MPI library on Cray T3E and SGI Origin2000 systems. A detailed summary of the hardware performance counters and the MPI calls of any MPI production program is gathered during execution and written in MPI_Finalize to a special syslog file. The user can get the same information in a different file. Statistical su...
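Counter modules of this kind typically sit on the standard MPI profiling (PMPI) interface; a toy sketch that counts MPI_Send calls and reports them at MPI_Finalize (not the actual Cray/SGI module) might look like:

    #include <mpi.h>
    #include <stdio.h>

    /* Toy counter module using the PMPI profiling interface: the wrapper
     * intercepts MPI_Send, counts calls, and prints the total in MPI_Finalize. */
    static long send_calls = 0;

    int MPI_Send(const void *buf, int count, MPI_Datatype dt,
                 int dest, int tag, MPI_Comm comm)
    {
        send_calls++;
        return PMPI_Send(buf, count, dt, dest, tag, comm);
    }

    int MPI_Finalize(void)
    {
        int rank;
        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        fprintf(stderr, "rank %d: %ld MPI_Send calls\n", rank, send_calls);
        return PMPI_Finalize();
    }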

2007
Rajeev Thakur William Gropp

MPI (the Message Passing Interface) continues to be the dominant programming model for parallel machines of all sizes, from small Linux clusters to the largest parallel supercomputers such as IBM Blue Gene/L and Cray XT3. Although the MPI standard was released more than 10 years ago and a number of implementations of MPI are available from both vendors and research groups, MPI implementations s...

2014
Sadaf Alam Ugo Varetto

Recently, MPI implementations have been extended to support accelerator devices such as the Intel Many Integrated Core (MIC) and NVIDIA GPUs. This has been accomplished by changes to different levels of the software stacks and MPI implementations. In order to evaluate the performance and scalability of accelerator-aware MPI libraries, we developed portable micro-benchmarks to identify factors that influence ef...
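The property such benchmarks exercise is that an accelerator-aware MPI accepts device buffers directly; a minimal sketch assuming a CUDA-aware MPI build (the function name and sizes are illustrative) is:

    #include <mpi.h>
    #include <cuda_runtime.h>

    /* With a CUDA-aware MPI library, device pointers can be passed straight
     * to MPI calls; the library stages or uses GPUDirect underneath. */
    void exchange_on_gpu(int rank, size_t bytes)
    {
        void *d_buf;
        cudaMalloc(&d_buf, bytes);

        if (rank == 0)
            MPI_Send(d_buf, (int)bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, (int)bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        cudaFree(d_buf);
    }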

1998
Delphine Stéphanie Goujon Martial Michel Jasper Peeters Judith Ellen Devaney

This article describes two software tools, AutoMap and AutoLink, that facilitate the use of data structures in MPI. AutoMap is a program that parses a file of user-defined data structures and generates new MPI types out of basic and previously defined MPI datatypes. Our software tool automatically handles specialized error checking related to memory mapping. AutoLink is an MPI library that allows...
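The types AutoMap emits correspond to MPI's standard derived-datatype machinery; a hand-written equivalent for a small, made-up user-defined struct would be:

    #include <mpi.h>
    #include <stddef.h>

    /* Hypothetical user-defined structure. */
    typedef struct {
        int    id;
        double coords[3];
    } particle_t;

    /* Build an MPI datatype describing particle_t, the kind of code a tool
     * like AutoMap generates automatically from the C declaration. */
    MPI_Datatype make_particle_type(void)
    {
        int          blocklens[2] = { 1, 3 };
        MPI_Aint     displs[2]    = { offsetof(particle_t, id),
                                      offsetof(particle_t, coords) };
        MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };
        MPI_Datatype dtype;

        MPI_Type_create_struct(2, blocklens, displs, types, &dtype);
        MPI_Type_commit(&dtype);
        return dtype;
    }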

1999
Edgar Gabriel Michael Resch

This paper presents an implementation of the Message Passing Interface called PACX-MPI. The major goal of the library is to support heterogeneous metacomputing for MPI applications by clustering MPPs and PVPs. The key concept of the library is a daemon concept. In this paper we focus on two aspects of this library. First, we show the importance of using optimized algorithms ...

[Chart: number of search results per publication year]