Abstract: This paper describes an effective solution to the task of remote monitoring of super-extended objects (oil and gas pipelines, railways, national borders). The suggested solution is based on the principle of simultaneous monitoring of the seismoacoustic and optical/infrared physical fields. The principle of simultaneous monitoring of these fields is not new, but in contrast to the known solutions, the suggested approach allows super-extended objects to be monitored at very limited operational cost. So-called C-OTDR (Coherent Optical Time Domain Reflectometer) systems are used to monitor the seismoacoustic field, and far-CCTV systems are used to monitor the optical/infrared field. Simultaneous processing of the data provided by both systems allows target activities appearing in the vicinity of the monitored objects to be detected and classified effectively. The results of practical use have shown the high effectiveness of the suggested approach.
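The fusion of the two physical fields can be pictured as requiring agreement between the two detection channels. The following sketch is purely illustrative (the function name, event representation, and time window are assumptions, not the paper's algorithm): an alarm is confirmed only when the seismoacoustic and optical/infrared channels both report a detection within a short time window.

```python
# Illustrative sketch, not the paper's method: confirm an alarm only when
# both the seismoacoustic channel and the optical/infrared channel report
# a detection within `window_s` seconds of each other.  Event timestamps
# and the window size are hypothetical.

def fuse_detections(seismic_events, optical_events, window_s=5.0):
    """Return seismic timestamps corroborated by an optical detection."""
    confirmed = []
    for t_seis in seismic_events:
        if any(abs(t_seis - t_opt) <= window_s for t_opt in optical_events):
            confirmed.append(t_seis)
    return confirmed
```

Requiring agreement between two independent physical fields is what suppresses single-channel false alarms such as wind noise or lighting changes.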
Abstract: This paper presents the design and prototype implementation of an intelligent data processing framework for ubiquitous sensor networks. Much focus is put on how to handle the sensor data stream, as well as on the interoperability between low-level sensor data and application clients. Our framework first provides systematic middleware that mediates the interaction between the application layer and low-level sensors, filtering and integrating a great volume of sensor data to create value-added context information. Then, an agent-based architecture is proposed for real-time data distribution, efficiently forwarding a specific event to the appropriate application registered in the directory service via an open interface. The prototype implementation demonstrates that our framework can host a sophisticated application on a ubiquitous sensor network and can autonomously evolve into new middleware, taking advantage of promising technologies such as software agents, XML, cloud computing, and the like.
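The event-forwarding idea behind the directory service can be sketched minimally as a registry mapping event types to subscribed applications. The class and method names below are illustrative assumptions, not the framework's actual API.

```python
# Minimal sketch of directory-service event forwarding: applications
# register an interest in an event type, and incoming sensor events are
# forwarded only to the matching subscribers.  Names are hypothetical.

class DirectoryService:
    def __init__(self):
        self._registry = {}  # event type -> list of subscriber callbacks

    def register(self, event_type, callback):
        """An application registers interest in one event type."""
        self._registry.setdefault(event_type, []).append(callback)

    def dispatch(self, event_type, payload):
        """Forward an event only to the applications registered for it."""
        for callback in self._registry.get(event_type, []):
            callback(payload)
```

Decoupling producers from consumers this way is what lets new applications be hosted without changing the sensor-side code.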
Abstract: A high-performance computer includes a fast processor and millions of bytes of memory. During data processing, huge amounts of information are shuffled between the memory and the processor. Because of its small size and high speed, the cache has become a common feature of high-performance computers. Enhancing cache performance has proved essential in speeding up cache-based computers. Most enhancement approaches can be classified as either software-based or hardware-controlled. Cache performance is quantified in terms of the hit ratio or miss ratio. In this paper, we optimize cache performance by enhancing the cache hit ratio. Optimum cache performance is obtained by modifying the cache hardware so that the tags of missed lines are rejected quickly from the hit-or-miss comparison stage, and thus a low hit time for the wanted line in the cache is achieved. In the proposed technique, which we call Even-Odd Tabulation (EOT), the cache lines coming from main memory into the cache are classified into two types, even-tag lines and odd-tag lines, depending on the least significant bit (LSB) of their tags. This division is exploited by the EOT technique to reject mismatched line tags in very little time compared to the time spent by the main comparator in the cache, giving an optimum hit time for the wanted cache line. Simulation results show the high performance of the EOT technique compared to the familiar mapping technique FAM.
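The even/odd selection logic can be modelled in software, even though the paper implements it in cache hardware. The following sketch (class and method names are assumptions for illustration) splits stored tags by their least significant bit, so a lookup only ever compares against the half of the tags whose parity matches.

```python
# Hedged software model of the even/odd tag idea: tags are partitioned by
# their least significant bit, so tags of the opposite parity are rejected
# without reaching the main comparison at all.  The paper realizes this in
# cache hardware; this class only models the selection logic.

class EOTTagStore:
    def __init__(self):
        self.tags = {0: set(), 1: set()}  # keyed by tag LSB

    def insert(self, tag):
        self.tags[tag & 1].add(tag)

    def lookup(self, tag):
        # Tags whose parity differs never enter the comparison stage.
        return tag in self.tags[tag & 1]
```

On average half of the candidate tags are filtered out by a single-bit check before any full-width tag comparison is performed, which is the source of the reduced hit time the abstract claims.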
Abstract: A sensor network consists of densely deployed sensor nodes. Energy optimization is one of the most important aspects of sensor application design, and data acquisition and aggregation techniques for in-network data processing should be energy efficient. Due to the cross-layer design and the resource-limited and noisy nature of wireless sensor networks (WSNs), it is challenging to study the performance of these systems in a realistic setting. In this paper, we propose optimizing queries by aggregating data and exploiting data redundancy to reduce energy consumption without requiring all sensed data, and using the directed diffusion communication paradigm to achieve power savings, robust communication, and in-network data processing. To estimate per-node power consumption, the PowerTOSSIM mica2 energy model is used, which provides scalable and accurate results. The performance analysis shows that the proposed methods outperform the existing methods with respect to energy consumption in wireless sensor networks.
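The energy saving from in-network aggregation comes from forwarding one summary instead of every raw reading. A minimal sketch of that idea (the aggregate format and function names are assumptions, not the paper's protocol): an intermediate node combines its children's readings into a single (mean, count) message, and partial aggregates can be merged losslessly further up the routing tree.

```python
# Illustrative sketch of in-network aggregation: a node sends one
# (mean, count) pair instead of all raw readings, cutting radio traffic,
# which dominates a sensor node's energy budget.  The message format is
# hypothetical.

def aggregate(readings):
    """Combine a node's child readings into one (mean, count) message."""
    count = len(readings)
    return sum(readings) / count, count

def merge(agg_a, agg_b):
    """Merge two partial aggregates at the next hop without loss."""
    (m_a, n_a), (m_b, n_b) = agg_a, agg_b
    n = n_a + n_b
    return (m_a * n_a + m_b * n_b) / n, n
```

Carrying the count alongside the mean is what makes the merge exact: the sink recovers the same overall mean it would have computed from all raw readings.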
Abstract: In order to assess optical fiber reliability under different environmental and stress conditions, series of tests are performed simulating the overlap of controlled, varying chemical and mechanical factors. Each series of tests may be compared using statistical processing, e.g. Weibull plots. Because of the large amount of data to be treated, a software application has proved useful for interpreting selected series of experiments as a function of the factors considered. The current paper presents a software application used for the storage, modelling, and interpretation of experimental data gathered from optical fiber testing. The present paper deals strictly with the software part of the project (regarding the modelling, storage, and processing of user-supplied data).
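The Weibull processing mentioned above can be sketched as follows. This is a generic textbook procedure, not necessarily the application's implementation: failure values are ranked, plotting positions are assigned with Bernard's median-rank approximation, and the Weibull shape m and scale x0 are estimated by least squares on the linearized plot ln(-ln(1-F)) = m ln(x) - m ln(x0).

```python
# Generic Weibull-plot fit (a common textbook recipe; the paper's software
# may differ): median-rank plotting positions plus ordinary least squares
# on the linearized Weibull coordinates.
import math

def weibull_fit(failures):
    """Estimate Weibull shape and scale from a list of failure values."""
    xs = sorted(failures)
    n = len(xs)
    pts = []
    for i, x in enumerate(xs, start=1):
        f = (i - 0.3) / (n + 0.4)          # Bernard's median-rank approximation
        pts.append((math.log(x), math.log(-math.log(1.0 - f))))
    mean_u = sum(u for u, _ in pts) / n
    mean_v = sum(v for _, v in pts) / n
    shape = (sum((u - mean_u) * (v - mean_v) for u, v in pts)
             / sum((u - mean_u) ** 2 for u, _ in pts))
    scale = math.exp(mean_u - mean_v / shape)
    return shape, scale
```

On a Weibull plot the data fall on a straight line when the Weibull model holds, so comparing test series reduces to comparing fitted slopes (shape) and intercepts (scale).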
Abstract: Field mapping of an active volcano, particularly in the Torrid Zone, is usually hampered by several problems such as steep terrain and poor atmospheric conditions. In this paper we present a simple solution to this problem through a combination of Synthetic Aperture Radar (SAR) and geostatistical methods. With this combination, we could reduce the speckle effect in the SAR data and then estimate the roughness distribution of the pyroclastic flow deposits. The main purpose of this study is to accurately detect the spatial distribution of new pyroclastic flow deposits, termed P-zones, using the β° data from two RADARSAT-1 SAR level-0 scenes. A single scene of Hyperion data and field observations were used for cross-validation of the SAR results. Mt. Merapi in central Java, Indonesia, was chosen as the study site, and the eruptions of May-June 2006 were examined. The P-zones were found on the western and southern flanks. The area size and the longest flow distance were calculated as 2.3 km² and 6.8 km, respectively. The grain-size variation of the P-zones, from fine to coarse deposits, was mapped in detail with respect to the C-band wavelength of 5.6 cm.
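Speckle is multiplicative noise that must be smoothed before backscatter values are interpreted. The paper uses geostatistical methods for this; the toy filter below merely stands in for the general idea of local smoothing of a SAR backscatter image, and is not the method the study applied.

```python
# Toy illustration of speckle smoothing (NOT the paper's geostatistical
# approach): a plain 3x3 moving-average filter over a 2D backscatter grid,
# leaving border pixels untouched.

def mean_filter(image):
    """Apply a 3x3 moving average to a 2D list of pixel values."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = [image[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = sum(window) / 9.0
    return out
```

Geostatistical interpolation such as kriging goes further than a fixed window by weighting neighbors according to the spatial correlation structure of the data, which is why it preserves real roughness variation better than simple averaging.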
Abstract: Data stream analysis is the process of computing various summaries and derived values from large amounts of data that are continuously generated at a rapid rate. The nature of a stream does not allow each data element to be revisited. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms, which must balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. They can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of the data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is both statistically transformed to a smaller size and its characteristics computationally approximated. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed through a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
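The two ingredients named above, single-pass data reduction and Monte Carlo approximation, can be sketched generically. This is not the paper's EMR method: reservoir sampling is a standard stand-in for horizontal reduction, and the mean is a stand-in for the summary being approximated.

```python
# Generic sketch of the two ingredients, not the paper's EMR method:
# reservoir sampling reduces the stream to k elements in a single pass
# (each element is kept with probability k/i at step i), and a summary
# statistic is then approximated from the retained sample.
import random

def reservoir_sample(stream, k, rng):
    """Keep a uniform random sample of k elements from a one-pass stream."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

def approximate_mean(sample):
    """Estimate the stream mean from the retained sample."""
    return sum(sample) / len(sample)
```

Because each element is seen exactly once and the memory footprint is fixed at k, this respects the no-revisit and timeliness constraints described in the abstract.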
Abstract: METIS, the Multi Element Telescope for Imaging and Spectroscopy, is a coronagraph aboard the European Space Agency's Solar Orbiter mission aimed at observing the solar corona via both VIS and UV/EUV narrow-band imaging and spectroscopy. METIS, with its multi-wavelength capabilities, will study in detail the physical processes responsible for coronal heating and the origin and properties of the slow and fast solar wind. The METIS electronics will collect and process scientific data by means of its detectors' proximity electronics, the digital front-end subsystem electronics, and the MPPU, the Main Power and Processing Unit, hosting a space-qualified processor, memories, and rad-hard FPGAs acting as digital controllers. This paper reports on the overall METIS electronics architecture and data processing capabilities, conceived to address all the scientific issues as a trade-off between requirements and allocated resources, just before the Preliminary Design Review, an ESA milestone in April 2012.
Abstract: As many scientific applications require large-scale data processing, the importance of parallel I/O has been increasingly recognized. Collective I/O is one of the notable features of parallel I/O, enabling application programmers to easily handle large data volumes. In this paper we measured and analyzed the performance of the original collective I/O and of the subgroup method, a way of using MPI collective I/O effectively. From the experimental results, we found that the subgroup method showed good performance for small data sizes.
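The partitioning behind the subgroup method can be sketched in plain Python without MPI. The assumption here (our illustration, not the paper's specification) is that ranks are split into consecutive fixed-size groups, each of which then performs its collective I/O call independently rather than all N ranks synchronizing in one call.

```python
# Sketch of the subgrouping idea: partition ranks 0..n_ranks-1 into
# consecutive groups so each group runs its collective I/O independently.
# The consecutive layout and fixed group size are illustrative assumptions.

def subgroups(n_ranks, group_size):
    """Partition ranks into consecutive subgroups of at most group_size."""
    return [list(range(start, min(start + group_size, n_ranks)))
            for start in range(0, n_ranks, group_size)]
```

Smaller groups mean less synchronization and communication overhead per collective call, which is consistent with the abstract's finding that the subgroup method pays off at small data sizes, where that overhead dominates the actual I/O time.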
Abstract: Various mechanisms providing mutual exclusion and thread synchronization can be used to support parallel processing within a single computer. Instead of using locks, semaphores, barriers, or other traditional approaches, in this paper we focus on alternative ways of making better use of modern multithreaded architectures and preparing hash tables for concurrent access. Hash structures are used to demonstrate and compare two entirely different approaches (rule-based cooperation and hardware synchronization support) with an efficient parallel implementation using traditional locks. The comparison includes implementation details, performance ranking, and scalability issues. We aim at understanding the effects the parallelization schemes have on the execution environment, with special focus on the memory system and memory access characteristics.
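To make the design space concrete, here is one common way of preparing a hash table for concurrent access: lock striping. This sketch is illustrative and is neither of the paper's two alternative approaches; it is closer to the traditional-lock baseline, refined so that threads touching different stripes do not serialize on a single global lock.

```python
# Illustrative lock-striped hash table (a refinement of the traditional-
# lock baseline, not the paper's rule-based or hardware-supported scheme):
# each stripe of buckets has its own lock, so updates to different stripes
# proceed in parallel.
import threading

class StripedHashTable:
    def __init__(self, n_stripes=16, n_buckets=256):
        self.buckets = [[] for _ in range(n_buckets)]
        self.locks = [threading.Lock() for _ in range(n_stripes)]

    def _stripe(self, bucket_index):
        # Buckets are mapped onto stripes round-robin.
        return self.locks[bucket_index % len(self.locks)]

    def put(self, key, value):
        b = hash(key) % len(self.buckets)
        with self._stripe(b):
            bucket = self.buckets[b]
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))

    def get(self, key):
        b = hash(key) % len(self.buckets)
        with self._stripe(b):
            for k, v in self.buckets[b]:
                if k == key:
                    return v
        return None
```

Striping trades memory (one lock per stripe instead of one global lock) for reduced contention; how far that helps in practice depends on exactly the memory-system effects the abstract sets out to study.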