Abstract: High-performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the RADAR Cross Section (RCS) of the targets being available in complex scenarios. Representing the RCS using tables generated from EM simulations is often cumbersome, leading to large storage requirements. In this paper, we propose a spherical-harmonic-based anisotropic scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least-squares problem with a special sparsity constraint. We solve this problem using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical-harmonic-based scatterer model can effectively represent the RCS data of complex targets.
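The abstract's modified Orthogonal Matching Pursuit is not detailed here; as a hedged illustration, the greedy loop of standard OMP for a sparsity-constrained least-squares problem is sketched below. The dictionary matrix `A` (columns as candidate scatterer responses) and the measurement vector `y` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit sketch: find a k-sparse x with A @ x ~= y.

    A : (m, n) dictionary; each column is one candidate scatterer response
    y : (m,) measured samples (e.g. RCS data)
    k : number of atoms (scatterers) to select, k >= 1
    """
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Greedy step: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected coefficients jointly by least squares.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x
```

The re-fitting step (solving a least-squares problem restricted to the current support) is what distinguishes OMP from plain matching pursuit; a "modified" OMP as in the abstract would typically alter the selection rule or the support structure.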
Abstract: The main goal of this article is to describe the online
flood monitoring and prediction system Floreon+ primarily developed
for the Moravian-Silesian region in the Czech Republic and the basic
process it uses for running automatic rainfall-runoff and
hydrodynamic simulations along with their calibration and
uncertainty modeling. Executing such a process sequentially takes a long time, which is not acceptable in the online scenario, so a high performance computing environment is proposed for all parts of the process to shorten their duration. Finally, a case study on the Ostravice River catchment is presented that shows actual durations and the gains from the parallel implementation.
Abstract: Cloud virtualization technologies are becoming more and more prevalent, and cloud users often face the problem of how to access virtualized remote desktops easily over the web without installing special clients. To resolve this issue, we took advantage of HTML5 technology and developed a web-based remote desktop. It permits users to access terminals running in our cloud platform from anywhere. We implemented a sketch of a web interface following the cloud computing concept that seeks to enable collaboration and communication among users for high performance computing. Given the development of remote desktop virtualization, the user's desktop can be shifted from the traditional PC environment to the cloud platform, where it is stored on a remote virtual machine rather than locally. This proposed effort has the potential to provide an efficient, resilient and elastic environment for online cloud services. This is also made possible by low administrative costs as well as relatively inexpensive end-user terminals and reduced energy expenses.
Abstract: Virtualization and high performance computing have been discussed from a performance perspective in recent publications. We present and discuss a flexible and efficient approach to the management of virtual clusters. A virtual machine management tool is extended to function as a fabric for cluster deployment and management. We show how features such as saving the state of a running cluster can be used to avoid disruption. We also compare our approach to the traditional methods of cluster deployment and present benchmarks which illustrate the efficiency of our approach.
Abstract: I/O workload is a critical and important factor in analyzing I/O patterns and file system performance. However, tracing I/O operations on the fly in a distributed parallel file system is non-trivial due to collection overhead and the large volume of data. In this paper, we design and implement a parallel file system logging method for high performance computing using a shared-memory-based multi-layer scheme. It minimizes overhead by reducing logging-operation response time and provides an efficient post-processing scheme through shared memory. A separate logging server can collect sequential logs from multiple clients in a cluster through packet communication. Implementation and evaluation results show the low overhead and high scalability of this architecture for high performance parallel logging analysis.
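The buffering idea behind such a scheme can be sketched as follows. The actual system uses shared memory between processes; this single-process stand-in with a lock-protected deque is only a hypothetical illustration of the pattern (clients pay a cheap append on the I/O path, a collector drains records in batches off the critical path).

```python
import threading
from collections import deque

class LogBuffer:
    """Bounded in-memory log buffer: producers (file-system clients) append
    with minimal latency; a collector drains records in batches."""

    def __init__(self, capacity=4096):
        # With maxlen set, the oldest records are dropped if the collector lags,
        # bounding memory instead of blocking the I/O path.
        self._buf = deque(maxlen=capacity)
        self._lock = threading.Lock()

    def append(self, record):
        # Fast path executed on every traced I/O operation.
        with self._lock:
            self._buf.append(record)

    def drain(self):
        # Collector path: take everything accumulated so far in one batch.
        with self._lock:
            records = list(self._buf)
            self._buf.clear()
        return records
```

In a real multi-layer design the deque would be replaced by a shared-memory ring buffer, and the drained batches would be shipped to the separate logging server over the network.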
Abstract: The in-core memory requirement is a bottleneck in solving large three-dimensional Navier-Stokes finite element problem formulations using sparse direct solvers. An out-of-core solution strategy is a viable alternative that reduces the in-core memory requirements while solving large-scale problems. This study evaluates the performance of various out-of-core sequential solvers based on multifrontal or supernodal techniques in the context of finite element formulations for three-dimensional problems on a Windows platform. Three different solvers, HSL_MA78, MUMPS and PARDISO, are compared. The performance of these solvers is evaluated on a 64-bit machine with 16GB RAM for a finite element formulation of flow through a rectangular channel. It is observed that, using the out-of-core PARDISO solver, relatively large problems can be solved. The implementation of Newton and modified Newton iteration is also discussed.
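The distinction between Newton and modified Newton iteration mentioned at the end can be sketched as follows; this is a minimal dense-matrix illustration under assumed names, with `np.linalg.solve` standing in for the sparse (possibly out-of-core) factorization that the paper's FE context would use.

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50, modified=False):
    """Solve F(x) = 0 by Newton iteration: x <- x - J(x)^{-1} F(x).

    With modified=True the Jacobian is evaluated once at x0 and reused,
    trading quadratic for linear convergence but avoiding the repeated
    factorizations that dominate cost for large sparse FE systems.
    """
    x = np.asarray(x0, dtype=float)
    J0 = J(x) if modified else None
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jk = J0 if modified else J(x)
        x = x - np.linalg.solve(Jk, Fx)
    return x
```

For an out-of-core solver the saving is even larger than in-core, since a reused factorization also avoids re-reading and re-writing factor blocks on disk.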
Abstract: In the past few years there has been a change in the view of high performance applications and parallel computing. Initially such applications were targeted at dedicated parallel machines. Recently the trend has shifted towards building meta-applications composed of several modules that exploit heterogeneous platforms and employ hybrid forms of parallelism. The aim of this paper is to propose a model of virtual parallel computing. A virtual parallel computing system provides a flexible object-oriented software framework that makes it easy for programmers to write various parallel applications.
Abstract: Grid computing is a high performance computing environment for solving large-scale computational applications. Grid computing involves resource management, job scheduling, security, information management and so on. Job scheduling is a fundamental and important issue in achieving high performance in grid computing systems. However, designing an efficient scheduler and implementing it is a big challenge. In grid computing, job scheduling algorithms need further improvement to group light-weight (small) jobs into coarse-grained jobs, which reduces communication time and processing time and enhances resource utilization. This grouping strategy considers the processing power, memory size and bandwidth requirements of each job to reflect a real grid system. The experimental results demonstrate that the proposed scheduling algorithm reduces the processing time of jobs more efficiently than other algorithms.
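The grouping strategy can be illustrated with a hedged sketch: small jobs are packed into groups until a group's total length matches the capacity of the target resource. The function name, the `resource_mips` capacity measure, and the `granularity` interval are illustrative assumptions; the paper's actual strategy also weighs memory and bandwidth requirements.

```python
def group_jobs(jobs, resource_mips, granularity):
    """Pack small jobs into coarse-grained groups.

    jobs          : list of job lengths (e.g. in million instructions)
    resource_mips : processing power of the target resource (MIPS)
    granularity   : hypothetical scheduling interval (seconds)

    A group is closed once adding the next job would exceed the amount of
    work the resource can finish in one interval, so each group is sent as
    a single coarse-grained job, cutting per-job communication overhead.
    """
    capacity = resource_mips * granularity
    groups, current, total = [], [], 0
    for job_len in jobs:
        if total + job_len > capacity and current:
            groups.append(current)
            current, total = [], 0
        current.append(job_len)
        total += job_len
    if current:
        groups.append(current)
    return groups
```

A bandwidth-aware variant would additionally close a group when its total input/output data size exceeds what the link to the resource can transfer within the interval.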
Abstract: This paper presents the benchmarking results and performance evaluation of different clusters built at the National Center for High-Performance Computing in Taiwan. The performance of the processor, memory subsystem and interconnect is a critical factor in the overall performance of high performance computing platforms. The evaluation compares different system architectures and software platforms. Most supercomputers use HPL to benchmark their system performance, in accordance with the requirements of the TOP500 list. In this paper we consider system memory access factors that affect benchmark performance, such as processor and memory performance. We hope this work will provide useful information for the future development and construction of cluster systems.