Search results for: parallel workstation

Number of results: 231,183

1998
Robert Haimes Kirk E. Jordan

This paper discusses the combined use of MPI and PVM in the parallel visualization toolkit, pV3. The implementation provides for efficient co-processed, distributed parallel visualization of large-scale 3D time-dependent simulations. The primary goals of pV3 include the ability to handle large-scale transient 3D simulations, to take full advantage of available hardware encompassing both paralle...

1994
Yoshio Tanaka Shogo Matsui Atsushi Maeda Masakazu Nakanishi

Garbage collection (GC) normally causes pauses in execution. Parallel GC has great potential for real-time (non-disruptive) processing. A traditional parallel mark-and-sweep GC algorithm has, however, well-known disadvantages. In this paper, we propose a new GC scheme called Partial Marking GC (PMGC), which is a variant of generational GC. We implemented a Lisp interpreter with PMGC on a genera...
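The snippet above contrasts PMGC with the traditional mark-and-sweep baseline. PMGC's partial-marking details are truncated in the snippet, so the following is only a minimal sequential sketch of the classic mark-and-sweep algorithm it improves on; the `Obj` class and names are illustrative, not from the paper.

```python
# Minimal stop-the-world mark-and-sweep sketch (the traditional baseline
# the abstract refers to; PMGC itself is not shown here).

class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []       # outgoing references to other objects
        self.marked = False

def mark(roots):
    """Mark phase: flag every object reachable from the roots."""
    stack = list(roots)
    while stack:
        o = stack.pop()
        if not o.marked:
            o.marked = True
            stack.extend(o.refs)

def sweep(heap):
    """Sweep phase: keep marked objects, reclaim the rest."""
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False     # reset flags for the next GC cycle
    return live

a, b, c, d = (Obj(n) for n in "abcd")
a.refs.append(b)
b.refs.append(c)             # d is unreachable from the root a
mark([a])
heap = sweep([a, b, c, d])   # surviving objects: a, b, c
```

The whole heap must be traversed with the mutator paused, which is exactly the disruptive pause the abstract says PMGC aims to reduce.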

2003
Brian Greskamp

In a closely watched match in 1997, the parallel chess supercomputer Deep Blue defeated then world champion Garry Kasparov 3½ to 2½. The fruits of Moore’s law combined with steadily improving chess algorithms now allow programs such as Deep Fritz to challenge human grandmasters while running on nothing more than a commodity 4-way SMP workstation. This paper provides a short overview of th...
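The "steadily improving chess algorithms" mentioned above are built around game-tree search. As a minimal illustration (not Deep Blue's parallel alpha-beta implementation, which the snippet truncates), here is a sequential negamax sketch over a toy game tree with hypothetical static leaf values:

```python
# Minimal negamax game-tree search sketch. Leaf values are from the
# perspective of the player to move at that node; the tree is a toy
# example, not a chess position.

def negamax(node, depth):
    """Return the best achievable score for the player to move."""
    if depth == 0 or not node.get("children"):
        return node["value"]
    return max(-negamax(child, depth - 1) for child in node["children"])

tree = {"children": [
    {"children": [{"value": 3}, {"value": -2}]},
    {"children": [{"value": 5}, {"value": 9}]},
]}
best = negamax(tree, 2)   # root picks the branch whose worst reply is best
```

Chess engines layer alpha-beta pruning, transposition tables, and (in Deep Blue's case) massive parallel search on top of this core recursion.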

1998
Bjarne Geir Herland Michael Eberl Hermann Hellwagner

This paper describes the design of a common message passing layer for implementing both MPI and PVM over the SCI interconnect in a workstation or PC cluster. The design focuses on obtaining low latency. The message layer encapsulates all necessary knowledge of the underlying interconnect and operating system. Yet, we claim that it can be used to implement such different message passing librar...

1999
Bernd Jung Hans-Peter Lenhof Peter Müller Christine Rüb

We have developed and implemented parallel algorithms for the molecular dynamics simulation of synthetic polymer chains. Our package has been specifically designed for distributed-memory machines like the widespread Cray T3E, but it can also be used on clusters of workstations and on a single workstation (i.e., it also runs sequentially). The target molecules are single synthetic polymer chains...

2000
Masato Oguchi Masaru Kitsuregawa

Personal computer/workstation (PC/WS) clusters are promising candidates for future high-performance computers because of their good scalability and cost/performance ratio. Data-intensive applications, such as data mining and ad hoc query processing in databases, are considered very important for massively parallel processors, as well as conventional scientific calculations. Thus, investigating...

Journal: IEEE Micro, 1995
Ruby B. Lee

A minimalistic set of multimedia instructions introduced into PA-RISC microprocessors implements SIMD-MIMD parallelism with insignificant changes to the underlying microprocessor. Thus, a software video decoder attains MPEG video and audio decompression and playback at real-time rates of 30 frames per second, on an entry-level workstation. Our general-purpose parallel subword instructions can a...
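The subword parallelism described above treats one machine word as several independent narrow lanes. As a minimal sketch of the idea (plain SWAR arithmetic in Python, not the actual PA-RISC instruction set; the function and mask names are hypothetical), here is a lane-wise add of four 8-bit values packed into one 32-bit word:

```python
# SWAR (SIMD-within-a-register) sketch: add four 8-bit lanes packed in a
# 32-bit word in one pass, without carries crossing lane boundaries.

MASK_HI = 0x80808080  # high bit of each 8-bit lane
MASK_LO = 0x7F7F7F7F  # low 7 bits of each lane

def paradd(a: int, b: int) -> int:
    """Add four packed 8-bit lanes; each lane wraps modulo 256."""
    low = (a & MASK_LO) + (b & MASK_LO)  # low 7 bits add without lane spill
    return low ^ ((a ^ b) & MASK_HI)     # fold the high bits back in

def pack(b3, b2, b1, b0):
    """Pack four bytes into one 32-bit word, most significant first."""
    return (b3 << 24) | (b2 << 16) | (b1 << 8) | b0

r = paradd(pack(10, 200, 255, 1), pack(20, 100, 1, 2))
# lanes: 10+20=30, (200+100)%256=44, (255+1)%256=0, 1+2=3
```

A hardware subword-add instruction does this in a single cycle, which is what makes software MPEG decoding at 30 frames per second feasible on a general-purpose workstation.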

1997
Hsin-Chou Chi Chih-Tsung Tang

Interconnection networks with irregular topologies (or irregular networks) are ideal communication subsystems for workstation clusters owing to their incremental scalability. While many deadlock-free routing schemes have been proposed for regular networks such as the mesh, torus, and hypercube, they cannot be applied to irregular networks. This paper presents a cost-effective routing architecture, ...

2002
M. El-Shenawee C. Rappaport D. Jiang W. Meleis D. Kaeli

The computational solution of large-scale linear systems of equations necessitates the use of fast algorithms but is also greatly enhanced by employing parallelization techniques. The objective of this work is to demonstrate the speedup achieved by the MPI (Message Passing Interface) parallel implementation of the Steepest Descent Fast Multipole Method (SDFMM). Although this algorithm has alrea...

2007
Clemens Grelck Frank Penczek Kai Trojahner

We present the design and implementation of CAOS, a domain-specific high-level programming language for the parallel simulation of extended cellular automata. CAOS allows scientists to specify complex simulations with limited programming skills and effort. Yet the CAOS compiler generates efficiently executable code that automatically harnesses the potential of contemporary multi-core processors,...
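The snippet truncates before showing any CAOS syntax, so as a generic illustration of the kind of local-update simulation such a language targets, here is a plain-Python synchronous cellular-automaton step using Conway's Game of Life rule on a toroidal grid (not CAOS code):

```python
# Generic synchronous cellular-automaton update step (Game of Life rule,
# toroidal boundary). Each cell's next state depends only on its local
# neighbourhood, which is what makes such simulations easy to parallelize.

def step(grid):
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            n = sum(grid[(r + dr) % h][(c + dc) % w]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))      # 8 wrapped neighbours
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

# A "blinker" oscillates with period 2 on a 5x5 torus:
blinker = [[0] * 5 for _ in range(5)]
blinker[2][1] = blinker[2][2] = blinker[2][3] = 1
```

Because every `new[r][c]` is independent of the others, the outer loops can be split across cores, which is the parallelism a CA compiler can exploit automatically.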
