Search results for: parallelization

Number of results: 7666

2006
Ali Cevahir, Cevdet Aykanat, Ata Turk, Berkant Barla Cambazoglu

A power method formulation, which efficiently handles the problem of dangling pages, is investigated for parallelization of PageRank computation. Hypergraph-partitioning-based sparse matrix partitioning methods can be successfully used for efficient parallelization. However, the preprocessing overhead due to hypergraph partitioning, which must be repeated often due to the evolving nature of the...
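The power-method iteration with dangling-page handling can be sketched in pure Python: in each step, the rank mass held by pages with no out-links is redistributed uniformly. The toy graph, damping factor, and tolerance below are illustrative assumptions, not values from the paper (which concerns a parallel, hypergraph-partitioned implementation).

```python
# Hedged sketch of the power method for PageRank with dangling-page
# handling.  The graph format (dict of out-link lists), the damping
# factor d, and the tolerance are illustrative choices.

def pagerank(links, d=0.85, tol=1e-10, max_iter=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        # Mass held by dangling pages (no out-links) is spread uniformly.
        dangling = sum(rank[p] for p in pages if not links[p])
        new = {p: (1.0 - d) / n + d * dangling / n for p in pages}
        for p in pages:
            out = links[p]
            if out:
                share = d * rank[p] / len(out)
                for q in out:
                    new[q] += share
        if sum(abs(new[p] - rank[p]) for p in pages) < tol:
            rank = new
            break
        rank = new
    return rank
```

For example, `pagerank({"a": ["b"], "b": ["a", "c"], "c": []})` treats `c` as a dangling page; the returned ranks still sum to 1 because its mass is recycled each iteration.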

Journal: :Journal of Systems and Software 2017
Olaf Neugebauer, Michael Engel, Peter Marwedel

Future low-end embedded systems will make an increased use of heterogeneous MPSoCs. To utilize these systems efficiently, methods and tools are required that support the extraction and implementation of parallelism typically found in embedded applications. Ideally, large amounts of existing legacy code should be reused and ported to these new systems. Existing parallelization infrastructures, h...

2013
Abdelhakim AitZai, Mourad Boudhar, Adel Dabah

In this paper, we studied the parallelization of an exact method to solve the job shop scheduling problem with blocking (JSB). We used a graph-theoretic model exploiting alternative graphs. We have proposed an original parallelization technique for performing a parallel computation in the various branches of the search tree. This technique is implemented on a computer network, where t...

Journal: :J. Heuristics 2002
Ahmad A. Al-Yamani, Sadiq M. Sait, Habib Youssef, Hassan Barada

In this paper, we present the parallelization of tabu search on a network of workstations using PVM. Two parallelization strategies are integrated: a functional decomposition strategy and a multi-search threads strategy. In addition, a domain decomposition strategy is implemented probabilistically. The performance of each strategy is observed and analyzed. The goal of parallelization is to speed up th...
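The multi-search-threads strategy above (independent searches launched in parallel from different seeds, keeping the best result found by any of them) can be sketched as follows. The bit-flip neighborhood and toy maximization objective are illustrative stand-ins for the paper's actual problem, and Python threads replace PVM tasks.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Hedged sketch of the multi-search-threads strategy: several
# independent tabu searches run in parallel with different random
# seeds, and the best solution from any thread is kept.  The toy
# objective (maximize the number of ones in a bit string) and the
# single-bit-flip neighborhood are illustrative assumptions.

def tabu_search(seed, n_bits=16, iters=200, tabu_tenure=5):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    cost = lambda s: sum(s)  # toy objective
    best, best_cost = x[:], cost(x)
    tabu = {}  # bit index -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            # Non-tabu moves, plus aspiration: a tabu move that would
            # improve on the best solution so far is still allowed.
            if tabu.get(i, -1) < it or c > best_cost:
                candidates.append((c, i, y))
        c, i, x = max(candidates)  # best admissible single-bit flip
        tabu[i] = it + tabu_tenure
        if c > best_cost:
            best, best_cost = x[:], c
    return best_cost, best

def multi_search(n_threads=4):
    # "Multi-search threads": independent searches, best result wins.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = list(pool.map(tabu_search, range(n_threads)))
    return max(results)
```

Each thread explores its own trajectory; only the final best-of-all reduction couples them, which is what makes this strategy almost embarrassingly parallel.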

1993
C. L. McCreary, J. J. Thompson, D. H. Gill, T. J. Smith

Automated parallelization of source code is a goal on which many researchers in parallel computing have focused. The increasing availability of parallel computers, the difficulty of creating good parallel programs, and the vast amount of existing serial source code all contribute to the need for automated means of parallelization. This paper centers on the issues of partitioning and scheduling ...

This study develops and analyzes preconditioned Krylov subspace methods to solve linear systems arising from discretization of the time-independent space-fractional models. First, we apply shifted Grunwald formulas to obtain a stable finite difference approximation to fractional advection-diffusion equations. Then, we employ two preconditioned iterative methods, namely, the preconditioned gen...
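The Grünwald weights that appear in such shifted finite difference formulas, g_k = (-1)^k C(α, k), satisfy the simple recurrence g_0 = 1, g_k = (1 - (α + 1)/k) g_{k-1}. A minimal sketch, with the fractional order α and the number of weights as assumed inputs (the paper's actual discretization and preconditioners are not reproduced here):

```python
def grunwald_weights(alpha, n):
    """First n Grunwald-Letnikov weights g_k = (-1)^k * C(alpha, k),
    computed via the recurrence g_0 = 1, g_k = (1 - (alpha+1)/k) * g_{k-1}."""
    w = [1.0]
    for k in range(1, n):
        w.append((1.0 - (alpha + 1.0) / k) * w[-1])
    return w
```

As a sanity check, for α = 1 the weights reduce to the first-order backward difference stencil [1, -1, 0, 0, ...], and for any α the second weight is −α.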

Journal: :CoRR 2011
Alaa Ismail Elnashar

Running parallel applications requires special and expensive processing resources to obtain the required results within a reasonable time. Before parallelizing a serial application, some analysis should be carried out to decide whether it will benefit from parallelization. In this paper we discuss the issue of the speedup gained from parallelization using the Message Passing Interface...

1997
Marcus Dormanns, Walter Sprangers, Hubert Ertl, Thomas Bemmerl

We describe a programming interface for parallel computing on NUMA (NonUniform Memory Access) shared memory machines. Although interest in this architecture is rapidly growing and more and more hardware manufacturers offer products of this type, there is still a lack of parallelization support. We developed SMI, the Shared Memory Interface, and implemented it as a library on an SCI-coupled ...

2011
Benoît Pradelle, Alain Ketterlin, Philippe Clauss

This paper describes a system that applies automatic parallelization techniques to binary code. The system works by raising raw executable code to an intermediate representation that exhibits all memory accesses and relevant register definitions, but outlines detailed computations that are not relevant for parallelization. It then uses an off-the-shelf polyhedral parallelizer, first applying ap...

Journal: :IJHPCA 2003
Rolf Rabenseifner, Gerhard Wellein

Most HPC systems are clusters of shared memory nodes. Parallel programming must combine distributed memory parallelization on the node interconnect with shared memory parallelization inside each node. The hybrid MPI+OpenMP programming model is compared with pure MPI, compiler-based parallelization, and other parallel programming models on hybrid architectures. The paper focuses on b...
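The two-level hybrid decomposition described above can be mimicked purely for illustration: the outer pool below stands in for MPI ranks (one per shared-memory node) and the inner pool for OpenMP threads within a node. Real hybrid codes use MPI and OpenMP; plain Python thread pools are used here only to show the structure of the decomposition.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of hybrid (two-level) parallel decomposition.
# Outer level ~ MPI ranks across nodes; inner level ~ OpenMP threads
# inside one node.  This is a structural analogy, not HPC code.

def split(lo, hi, parts):
    """Partition the half-open range [lo, hi) into at most `parts` chunks."""
    step = (hi - lo + parts - 1) // parts
    return [(s, min(hi, s + step)) for s in range(lo, hi, step)]

def node_sum(lo, hi, n_threads):
    # "OpenMP" level: shared-memory workers inside one node.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return sum(pool.map(lambda c: sum(range(*c)), split(lo, hi, n_threads)))

def hybrid_sum(n, n_nodes=2, n_threads=4):
    # "MPI" level: the global range is first partitioned across nodes.
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        return sum(pool.map(lambda c: node_sum(*c, n_threads),
                            split(0, n, n_nodes)))
```

The design point the paper studies is exactly this split: coarse-grained domain decomposition across the interconnect, fine-grained loop-level parallelism within each node's shared memory.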
