Abstract: Manufacturing industries face a crucial change as products and processes are required to be easily and efficiently reconfigurable and reusable. To stay competitive and flexible, situations also demand the global distribution of enterprises, which requires the implementation of efficient communication strategies. A prototype system called the “Broadcaster” has been developed under the assumption that the control environment description has been engineered using the component-based system paradigm. This prototype distributes information to a number of globally distributed partners by adopting a circular-based data processing mechanism. The work highlighted in this paper covers the implementation of this mechanism in the manufacturing domain. The proposed solution enables real-time remote propagation of machine information to a number of distributed supply chain client resources, such as an HMI, VRML-based 3D views and remote client instances, regardless of their distribution or underlying mechanisms. The approach is presented together with a set of evaluation results. The authors' main concentration is on the reliability and performance of the adopted approach. Performance is evaluated in terms of the response times taken to process the data in this domain and compared with an alternative data processing implementation, the linear queue mechanism. Based on the evaluation results obtained, the authors justify the benefits achieved from the proposed implementation and highlight further research work to be carried out.
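As an illustration of the data-processing comparison described above, the following Python sketch contrasts a fixed-size circular (ring) buffer with a simple linear FIFO queue for holding machine-state updates before they are pushed to distributed clients. It is a minimal sketch under assumed names and sizes; the class and timing code are hypothetical and do not reflect the Broadcaster's actual implementation.

from collections import deque
import time

class RingBuffer:
    """Fixed-capacity circular buffer: the oldest entries are overwritten when full."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0          # next write position
        self.count = 0

    def push(self, item):
        self.buf[self.head] = item
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

def timed_fill(n):
    ring, queue = RingBuffer(1024), deque()
    t0 = time.perf_counter()
    for i in range(n):
        ring.push(("machine_state", i))          # circular mechanism: bounded memory
    t1 = time.perf_counter()
    for i in range(n):
        queue.append(("machine_state", i))       # linear queue: grows without bound
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1

print(timed_fill(100_000))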
Abstract: As a result of the daily workflow in the design
and development departments of companies, databases containing huge
numbers of 3D geometric models are generated. For a
given problem, engineers create CAD drawings based on their design
ideas and evaluate the performance of the resulting design, e.g. by
computational simulations. Usually, new geometries are built either
by utilizing and modifying sets of existing components or by adding
single newly designed parts to a more complex design.
The present paper addresses the two facets of acquiring
components from large design databases automatically and providing
a reasonable overview of the parts to the engineer. A unified
framework based on the topographic non-negative matrix
factorization (TNMF) is proposed, which solves both aspects
simultaneously. First, meaningful components are extracted from a
given database into a parts-based representation in an unsupervised
manner. Second, the extracted components are organized and
visualized on square-lattice 2D maps. It is shown on the example of
turbine-like geometries that these maps efficiently provide a well-structured
overview of the database content and, at the same time,
define a measure of spatial similarity, allowing easy access to and
reuse of components in the process of design development.
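For readers unfamiliar with the underlying decomposition, the Python sketch below shows a plain non-negative matrix factorization of a matrix of vectorized (e.g. voxelized) geometries into parts and activations using scikit-learn; the topographic variant (TNMF) used in the paper additionally constrains the components to lie on a 2D lattice, which is not reproduced here. The data shapes and names are illustrative assumptions.

import numpy as np
from sklearn.decomposition import NMF

# X: one row per design, one column per voxel of a rasterized geometry (synthetic stand-in)
rng = np.random.default_rng(0)
X = rng.random((200, 4096))

model = NMF(n_components=25, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)      # activations: how strongly each part appears in each design
H = model.components_           # parts-based representation: 25 non-negative components

# each row of H can be reshaped back to the voxel grid and inspected as a candidate part
part0 = H[0].reshape(16, 16, 16)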
Abstract: Riprap is mostly used in river engineering to prevent erosion by flows
down steep slopes. A total of 53 stability tests
performed on angular riprap with a median stone size ranging from
15 to 278 mm and slope ranging from 1 to 40% are used in this study.
The existing equations for predicting the median size of angular
stones are checked for accuracy against the available data.
Predictions of the median size using these equations are not
satisfactory; the results deviate by more than ±20% from the observed
values. A multivariable power regression analysis is performed to
propose a new equation relating the median size with unit discharge,
bed slope, riprap thickness and coefficient of uniformity. The
proposed relationship satisfactorily predicts the median angular stone
size within ±20% error. Further, the required size of a rounded stone
is larger than that of an angular stone for the same unit discharge, and the
ratio increases with unit discharge and also with the embankment slope
of the riprap.
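A multivariable power regression of the kind described can be fitted by ordinary least squares in log space. The sketch below uses synthetic arrays for median size d50, unit discharge q, bed slope S, relative thickness t and coefficient of uniformity Cu; the variable names and exponents are illustrative assumptions and do not reproduce the paper's actual coefficients.

import numpy as np

# synthetic observations standing in for the 53 stability tests (illustrative only)
rng = np.random.default_rng(1)
n = 53
q  = rng.uniform(0.05, 1.0, n)       # unit discharge (m^2/s)
S  = rng.uniform(0.01, 0.40, n)      # bed slope (-)
t  = rng.uniform(1.5, 3.0, n)        # riprap thickness / d50 (-)
Cu = rng.uniform(1.2, 2.5, n)        # coefficient of uniformity (-)
d50 = 0.3 * q**0.6 * S**0.4 * t**-0.2 * Cu**0.1 * rng.lognormal(0.0, 0.05, n)

# fit d50 = a * q^b1 * S^b2 * t^b3 * Cu^b4 by linear regression on logarithms
X = np.column_stack([np.ones(n), np.log(q), np.log(S), np.log(t), np.log(Cu)])
coef, *_ = np.linalg.lstsq(X, np.log(d50), rcond=None)
a, b = np.exp(coef[0]), coef[1:]
pred = a * q**b[0] * S**b[1] * t**b[2] * Cu**b[3]
print("fitted exponents:", b, "max relative error:", np.max(np.abs(pred - d50) / d50))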
Abstract: In a context of tightening energy constraints, the transport sector,
which accounts for a large share of worldwide energy demand, has to be
improved to decrease energy consumption and global warming impacts.
In a situation where demand for
long-distance, high-speed travel keeps increasing, high-speed trains offer many
advantages, as they consume significantly less energy than road or air
transport.
At the project phase of new rail infrastructure, it is nowadays
important to accurately characterize the energy that will be induced
by its operation phase, in addition to other more classical criteria such as
construction costs and travel time.
The consumption models currently available in the literature for estimating the railway
operation phase are obsolete or not accurate enough to take into
account the newest train or railway technologies.
In this paper, an updated consumption model for high-speed trains is
proposed, based on experimental data obtained from full-scale tests
performed on a new high-speed line. The assessment of the model
is achieved by identifying train parameters and measured power
consumption for more than one hundred train routes. Perspectives
on using this updated model to accurately assess
the energy impact of future railway infrastructure are then discussed.
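One common way to identify such train parameters, given here as an assumption rather than the authors' actual procedure, is to fit a Davis-type running-resistance law R(v) = A + Bv + Cv^2 to measured traction power P ≈ R(v)·v over a run and then integrate the identified power along a speed profile. The sketch below uses synthetic speed and power samples.

import numpy as np

# synthetic speed (m/s) and traction power (W) samples from one run (illustrative only)
rng = np.random.default_rng(0)
v = rng.uniform(30.0, 90.0, 500)
P = (4000.0 + 60.0 * v + 6.5 * v**2) * v + rng.normal(0.0, 2e4, v.size)

# identify A, B, C in P = (A + B*v + C*v^2) * v by linear least squares
X = np.column_stack([v, v**2, v**3])
(A, B, C), *_ = np.linalg.lstsq(X, P, rcond=None)
print(f"A={A:.1f} N, B={B:.2f} N.s/m, C={C:.3f} N.s^2/m^2")

# energy over a route is the time integral of the identified power along a speed profile
dt = 1.0                                  # s, sampling step of a hypothetical profile
profile = np.linspace(0.0, 83.0, 3600)    # speed profile of a one-hour run
energy_J = np.sum((A + B * profile + C * profile**2) * profile * dt)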
Abstract: Recently, much research has been conducted on
security for wireless sensor networks and ubiquitous computing.
Security issues such as authentication and data integrity are major
requirements for constructing sensor network systems. The Advanced
Encryption Standard (AES) is considered one of the candidate
algorithms for data encryption in wireless sensor networks. In this
paper, we present a hardware architecture for implementing a low-power
AES crypto module. Our low-power AES crypto module has an
optimized architecture of the data encryption unit and key schedule unit,
which could be applicable to wireless sensor networks. We also detail
the low-power design methods used to design our low-power AES crypto
module.
Abstract: In the framework of adaptive parametric modelling of images, we propose in this paper a new technique based on the Chandrasekhar fast adaptive filter for texture characterization. An Auto-Regressive (AR) linear model of the texture is obtained by scanning the image row by row and modelling the data with an adaptive Chandrasekhar linear filter. The characterization efficiency of the obtained model is compared with that of the model adapted with the Least Mean Square (LMS) 2-D adaptive algorithm and with co-occurrence method features. The comparison criterion is based on the computation of a characterization degree using the ratio of "between-class" variances to "within-class" variances of the estimated coefficients. Extensive experiments show that the coefficients estimated with the Chandrasekhar adaptive filter give better results in texture discrimination than those estimated by the other algorithms, even in a noisy context.
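To make the adaptive AR modelling concrete, the sketch below estimates AR coefficients on a row-scanned image with the LMS algorithm, which the paper uses as its baseline; the Chandrasekhar fast recursions themselves are more involved and are not reproduced here. The array names and step size are illustrative assumptions.

import numpy as np

def lms_ar_coefficients(image, order=4, mu=1e-3):
    """Estimate AR coefficients of a texture by scanning the image row by row."""
    x = image.astype(float).ravel()              # row-by-row scan
    x = (x - x.mean()) / (x.std() + 1e-12)       # normalize grey levels
    w = np.zeros(order)
    for n in range(order, x.size):
        past = x[n - order:n][::-1]              # most recent sample first
        e = x[n] - w @ past                      # prediction error
        w += mu * e * past                       # LMS coefficient update
    return w

# the resulting coefficient vectors, one per texture patch, feed the
# between-class / within-class variance ratio used for discrimination
rng = np.random.default_rng(0)
print(lms_ar_coefficients(rng.random((64, 64))))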
Abstract: Semantic query optimization consists in restricting the
search space in order to reduce the set of objects of interest for a
query. This paper presents an indexing method based on UB-trees
and a static analysis of the constraints associated with the views of the
database and with any constraints expressed on attributes. The result of
the static analysis is a partitioning of the object space into disjoint
blocks. Through Space Filling Curve (SFC) techniques, each
fragment (block) of the partition is assigned a unique identifier,
enabling the efficient indexing of fragments by UB-trees. The search
space corresponding to a range query is restricted to a subset of the
blocks of the partition. This approach has been developed in the
context of a KB-DBMS but it can be applied to any relational
system.
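The fragment identifiers produced by the space-filling curve can be illustrated with a Morton (Z-order) encoding, one common SFC choice; the sketch below interleaves the bits of two attribute coordinates to obtain a single key under which a block could be indexed in a UB-tree. The two-attribute layout and bit width are assumptions for illustration.

def morton_key(x, y, bits=16):
    """Interleave the bits of two attribute coordinates into one Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)         # even bit positions from x
        key |= ((y >> i) & 1) << (2 * i + 1)     # odd bit positions from y
    return key

# each block of the partition gets a unique identifier; a range query is answered by
# visiting only the keys whose blocks intersect the query rectangle
print(morton_key(3, 5))   # -> 39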
Abstract: Downward turbulent bubbly flows in pipes were
modeled using computational fluid dynamics tools. The
hydrodynamics, phase distribution and turbulent structure of two-phase
air-water flow in a 57.15 mm diameter, 3.06 m long
vertical pipe were modeled using the 3-D Eulerian-Eulerian
multiphase flow approach. Void fraction, liquid velocity and
turbulent fluctuation profiles were calculated and compared against
experimental data. CFD results are in good agreement with
experimental data.
Abstract: Virtualization-based server consolidation has been
proven to be an ideal technique to solve the server sprawl problem by
consolidating multiple virtualized servers onto a few physical servers
leading to improved resource utilization and return on investment. In
this paper, we solve this problem by using existing servers, which are
heterogeneous and diversely preferred by IT managers. Five practical
consolidation rules are introduced, and a decision model is proposed to
optimally allocate source services to physical target servers while
maximizing the average resource utilization and preference value. Our
model can be regarded as a multi-objective multi-dimensional
bin-packing (MOMDBP) problem with constraints, which is strongly
NP-hard. An improved grouping genetic algorithm (GGA) is
introduced for the problem. Extensive simulations were performed and
the results are given.
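As a point of reference for the packing formulation, the sketch below implements a simple multi-dimensional first-fit-decreasing heuristic in Python; it is only a baseline under assumed demand and capacity vectors and does not reproduce the grouping genetic algorithm or the five consolidation rules of the paper.

def first_fit_decreasing(services, capacity):
    """Place services (resource demand vectors) onto the fewest servers of equal capacity."""
    bins = []                                        # one load vector per opened server
    for demand in sorted(services, key=sum, reverse=True):
        for load in bins:
            if all(l + d <= c for l, d, c in zip(load, demand, capacity)):
                for i, d in enumerate(demand):
                    load[i] += d
                break
        else:
            bins.append(list(demand))                # open a new server
    return bins

# demands and capacities given as (CPU, memory) fractions -- illustrative values
services = [(0.5, 0.3), (0.4, 0.6), (0.2, 0.2), (0.7, 0.1), (0.3, 0.5)]
print(first_fit_decreasing(services, capacity=(1.0, 1.0)))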
Abstract: Extensive rainfall disaggregation approaches have been developed and applied in climate change impact studies such as flood risk assessment and urban storm water management. In this study, five rainfall models capable of disaggregating daily rainfall data into hourly data were investigated for the rainfall record at Changi Airport, Singapore. The objectives of this study were (i) to study the temporal characteristics of hourly rainfall in Singapore, and (ii) to evaluate the performance of various disaggregation models. The models used included: (i) the Rectangular Pulse Poisson Model (RPPM), (ii) the Bartlett-Lewis Rectangular Pulse Model (BLRPM), (iii) the Bartlett-Lewis model with 2 cell types (BL2C), (iv) the Bartlett-Lewis Rectangular model with cell depth distribution dependent on duration (BLRD), and (v) the Neyman-Scott Rectangular Pulse Model (NSRPM). All of these models were fitted using hourly rainfall data from 1980 to 2005 obtained from the Changi meteorological station. The study results indicated that a weighting scheme with weights inversely proportional to the variance delivers more accurate outputs for fitting rainfall patterns in tropical areas, and that BLRPM performed relatively better than the other disaggregation models.
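The inverse-variance weighting scheme mentioned above can be expressed as a weighted objective for fitting a disaggregation model's summary statistics to the observed ones; the sketch below is a generic formulation under assumed statistic vectors, not the exact objective of any of the five models.

import numpy as np

def weighted_objective(model_stats, obs_stats, obs_var):
    """Sum of squared errors with weights inversely proportional to the observed variance."""
    w = 1.0 / np.asarray(obs_var, dtype=float)
    diff = np.asarray(model_stats, dtype=float) - np.asarray(obs_stats, dtype=float)
    return float(np.sum(w * diff**2))

# e.g. mean, variance and lag-1 autocorrelation of hourly depths (illustrative numbers)
print(weighted_objective([0.21, 1.9, 0.45], [0.20, 2.1, 0.48], [0.001, 0.25, 0.01]))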
Abstract: The problems associated with wind predictions of the
WAsP model in complex terrain have been the target of several
studies over the last decade. In this paper, the influence of surrounding
orography on the accuracy of wind data analysis over a terrain is
investigated. For the case study, a site with complex surrounding
orography is considered. This site is located in Manjil, one of the
windiest cities of Iran. To evaluate the wind regime at the site
precisely, one year of wind data measurements from two meteorological
masts are used. To validate the results obtained from WAsP,
cross-prediction between the two masts is performed. The analysis
reveals that the WAsP model can estimate the wind speed behavior
accurately. In addition, results show that this software can be used
for predicting the wind regime in flat sites with complex surrounding
orography.
Abstract: A synchronous network-on-chip using wormhole packet switching
and supporting guaranteed-completion best-effort with low-priority (LP)
and high-priority (HP) wormhole packet delivery service is presented in
this paper. Both the proposed LP and HP message services deliver good
quality of service in terms of lossless packet completion and in-order message
data delivery. However, the LP message service does not guarantee a minimal
completion bound. The HP packets will use 100% of the bandwidth of
their reserved links if they are injected from the source node at the
maximum injection rate. Hence, the HP service is suitable for small messages
(less than a hundred bytes); otherwise, the other HP and LP messages that
also require the links will experience relatively high latency depending on the
size of the HP message. The LP packets are routed using a minimal adaptive
routing algorithm, while the HP packets are routed using a non-minimal adaptive routing
algorithm. Therefore, an additional 3-bit field identifying the packet type
is introduced in the packet headers to classify and determine the type
of service committed to the packet. Our NoC prototypes have also been
synthesized using a 180-nm CMOS standard-cell technology to evaluate the
cost of implementing the combination of both services.
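The 3-bit packet-type field can be pictured with a small bit-packing sketch; the header layout below (destination address above a type field in the low bits) is purely an assumption for illustration and not the NoC's documented header format.

TYPE_BITS = 3
TYPE_MASK = (1 << TYPE_BITS) - 1           # 0b111

# hypothetical packet-type codes for the two service classes
LP_MINIMAL, HP_NONMINIMAL = 0b001, 0b100

def pack_header(dest_addr, packet_type):
    """Place the 3-bit type field in the low bits, the destination address above it."""
    return (dest_addr << TYPE_BITS) | (packet_type & TYPE_MASK)

def packet_type_of(header):
    return header & TYPE_MASK

h = pack_header(dest_addr=0x2A, packet_type=HP_NONMINIMAL)
assert packet_type_of(h) == HP_NONMINIMAL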
Abstract: Microarrays have become effective, broadly used tools in biological and medical research to address a wide range of problems, including the classification of disease subtypes and tumors. Many statistical methods are available for analyzing and systematizing these complex data into meaningful information, and one of the main goals in analyzing gene expression data is the detection of samples or genes with similar expression patterns. In this paper, we assess and compare the performance of several clustering methods under different data preprocessing strategies, including normalization and noise removal. We also evaluate each of these clustering methods with validation measures for both simulated data and real gene expression data. Consequently, the clustering methods commonly used in microarray data analysis are affected by normalization and by the degree of noise in the datasets.
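A minimal version of such a comparison, on synthetic data and using k-means with the silhouette validation measure from scikit-learn, might look as follows; the clustering methods, normalization strategies and validation measures evaluated in the paper are broader than this sketch.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# 100 samples x 500 genes with 4 synthetic expression groups
X = rng.normal(size=(100, 500)) + np.repeat(np.arange(4), 25)[:, None]

for name, data in [("raw", X), ("normalized", StandardScaler().fit_transform(X))]:
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(data)
    print(name, round(silhouette_score(data, labels), 3))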
Abstract: In this paper, a two-dimensional (2D) numerical
model for the simulation of tidal currents in the Persian Gulf is presented.
The model is based on the depth-averaged shallow water equations,
which assume a hydrostatic pressure distribution. The continuity
equation and two momentum equations including the effects of bed
friction, the Coriolis effects and wind stress have been solved. To
integrate the 2D equations, the Alternating Direction Implicit (ADI)
technique has been used. The equations were discretized with the
finite volume method on a rectangular mesh. To validate the
model, a dam-break case study with an analytical
solution is selected and the comparison is made. After that, the
capability of the model in simulating tidal currents in a real field is
demonstrated by modeling the current behavior in the Persian Gulf. The
tidal fluctuations in the Hormuz Strait drive the tidal currents in
the study area. Therefore, the water surface oscillation data at
Hengam Island in the Hormuz Strait are used as the model input data.
The model is checked against measured water surface elevations at
Assaluye port. The acceptable agreement between the computed and
measured results demonstrates the ability of the model to simulate
marine hydrodynamics.
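For reference, one common form of the depth-averaged shallow water equations solved here, written as a generic formulation (the exact friction and wind-stress terms used in the paper may differ), is:

\begin{align}
\frac{\partial \eta}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} &= 0,\\
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} - fv &= -g\frac{\partial \eta}{\partial x} - \frac{g\,u\sqrt{u^2+v^2}}{C^2 h} + \frac{\tau_{wx}}{\rho h},\\
\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + fu &= -g\frac{\partial \eta}{\partial y} - \frac{g\,v\sqrt{u^2+v^2}}{C^2 h} + \frac{\tau_{wy}}{\rho h},
\end{align}

where $\eta$ is the water surface elevation, $h$ the total depth, $(u,v)$ the depth-averaged velocities, $f$ the Coriolis parameter, $C$ the Chezy friction coefficient, and $(\tau_{wx},\tau_{wy})$ the wind shear stresses.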
Abstract: This paper deals with heterogeneous autoregressive
models of realized volatility (HAR-RV models) on high-frequency
data of stock indices in the USA. Its aim is to capture the behavior of
three groups of market participants trading on a daily, weekly and
monthly basis and assess their role in predicting the daily realized
volatility. The benefits of this work lie mainly in the application of
heterogeneous autoregressive models of realized volatility to stock
indices in the USA, with the particular aim of analyzing the impact of the
global financial crisis on the forecasting performance of the applied models.
We use three data sets: the first from the period before the global
financial crisis, 2006-2007; the second from
the period when the global financial crisis fully hit the U.S. financial
market, 2008-2009; and the last defined over
2010-2011. The model output indicates that estimated realized
volatility in the market is very much determined by daily traders and
in some cases excludes the impact of those market participants who
trade on a monthly basis.
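The standard HAR-RV specification underlying the three market-participant groups, written here in its usual textbook form (the paper's exact variant may differ slightly), is

RV_{t+1}^{(d)} = \beta_0 + \beta_d RV_t^{(d)} + \beta_w RV_t^{(w)} + \beta_m RV_t^{(m)} + \varepsilon_{t+1},
\qquad
RV_t^{(w)} = \frac{1}{5}\sum_{i=0}^{4} RV_{t-i}^{(d)},
\qquad
RV_t^{(m)} = \frac{1}{22}\sum_{i=0}^{21} RV_{t-i}^{(d)},

where the daily, weekly and monthly realized-volatility terms proxy the three trading horizons.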
Abstract: Data mining is the extraction of
knowledge from the large sets of data generated by
various data processing activities. Frequent Pattern Mining
is a very important task in data mining. The previous
approaches applied to generate frequent sets generally adopt
candidate generation and pruning techniques to
satisfy the desired objective. This paper shows how
the different approaches achieve the objective of frequent pattern
mining, along with the complexities required to perform the
job. This paper also examines a hardware approach based on cache
coherence to improve the efficiency of the above process. The
process of data mining is helpful in the generation of support
systems that can help in Management, Bioinformatics,
Biotechnology, Medical Science, Statistics, Mathematics,
Banking, Networking and other computer-related
applications. This paper proposes the use of both the upward and
downward closure properties for the extraction of frequent itemsets,
which reduces the total number of scans required for the
generation of candidate sets.
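To make the candidate-generation-and-pruning idea concrete, the following Python sketch is a minimal Apriori-style miner that applies the downward closure property when pruning candidates; it illustrates the classical approach discussed above, not the hardware-assisted or combined upward/downward-closure method the paper proposes.

from itertools import combinations

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    items = {i for t in transactions for i in t}
    levels = [{frozenset([i]) for i in items if support(frozenset([i])) >= min_support}]
    k = 1
    while levels[-1]:
        # candidate generation: join frequent k-itemsets into (k+1)-itemsets
        candidates = {a | b for a in levels[-1] for b in levels[-1] if len(a | b) == k + 1}
        # pruning by downward closure: every k-subset must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in levels[-1] for s in combinations(c, k))}
        levels.append({c for c in candidates if support(c) >= min_support})
        k += 1
    return [set(s) for level in levels for s in level]

print(apriori([{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}], min_support=0.5))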
Abstract: We propose a method for the discrimination and
classification of ovarian tissue as benign, malignant or normal
using independent component analysis and neural networks. The
method was tested on a proteomic pattern set from a database using
radial basis function neural networks. The best performance was
obtained with probabilistic neural networks, resulting in a 99% success
rate, with 98% specificity and 100% sensitivity.
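A skeletal version of the pipeline, using scikit-learn's FastICA for the independent components and an MLP classifier as a stand-in for the radial basis function and probabilistic networks named above, might look as follows on synthetic data; it illustrates the structure of the method rather than reproducing the reported results.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 300))                 # synthetic proteomic patterns
y = rng.integers(0, 3, size=150)                # 0=normal, 1=benign, 2=malignant (synthetic labels)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
clf.fit(ica.fit_transform(Xtr), ytr)            # classify in the independent-component space
print("accuracy:", clf.score(ica.transform(Xte), yte))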
Abstract: In this paper, we propose an effective relay
communication scheme for layered video transmission as an alternative
way to make the most of limited resources in a wireless communication
network where loss often occurs. Relaying brings stable multimedia
services to end clients compared to multiple description coding
(MDC). Also, retransmitting only parity data for one or more
video layers, generated by a channel coder, from the relay device to the end client is
paramount to robustness against loss. Using these
methods in resource-constrained environments, such as real-time user
created content (UCC) with layered video transmission, can provide
high-quality services even in a poor communication environment.
Minimal services are also possible. The mathematical analysis shows
that the proposed method reduces the GOP loss rate
compared to MDC and to a raptor code without relay. The GOP loss rate of the proposed method
is about zero, while MDC and the raptor code without relay have GOP
loss rates of 36% and 70%, respectively, at a 10% frame loss rate.
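The role of parity retransmission can be illustrated with a simple k-of-n erasure-coding calculation: a GOP carried in n packets, of which any k suffice, survives losses with the binomial probability computed below. The packet counts are illustrative assumptions and do not correspond to the exact MDC or raptor configurations analyzed in the paper.

from math import comb

def gop_success_probability(n, k, loss_rate):
    """Probability that at least k of n packets arrive, with i.i.d. losses."""
    p = loss_rate
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k, n + 1))

# a GOP split into 30 source packets; the relay adds 6 parity packets (hypothetical numbers)
print("GOP loss, no parity:  ", 1 - gop_success_probability(30, 30, 0.10))
print("GOP loss, with parity:", 1 - gop_success_probability(36, 30, 0.10))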
Abstract: The microarray technique allows the simultaneous measurement of the expression levels of thousands of mRNAs. By mining these data one can identify the dynamics of the gene expression time series. By recourse to principal component analysis, we uncover the circadian rhythmic patterns underlying the gene expression profiles from the cyanobacterium Synechocystis. We applied PCA to reduce the dimensionality of the data set. Examination of the components also provides insight into the underlying factors measured in the experiments. Our results suggest that all of the rhythmic content of the data can be reduced to three main components.
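A minimal PCA reduction of a time-by-gene expression matrix, as a sketch of the analysis described (with synthetic data in place of the Synechocystis profiles), can be written as:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.arange(48)                                   # 48 hourly time points (synthetic)
rhythm = np.sin(2 * np.pi * t / 24.0)               # 24 h circadian component
X = np.outer(rhythm, rng.normal(size=500)) + 0.1 * rng.normal(size=(48, 500))

pca = PCA(n_components=3)
scores = pca.fit_transform(X)                       # time courses of the main components
print("variance explained:", pca.explained_variance_ratio_)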
Abstract: This paper considers inference under progressive type II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used for estimating the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtain Bayes estimators using conjugate priors for the shape and scale parameters. When the two parameters are unknown, closed-form expressions of the Bayes estimators cannot be obtained, so we use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator is obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we compare these estimators with the maximum likelihood estimators using a Monte Carlo simulation study.
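For reference, a common parameterization of the compound Rayleigh distribution with shape $\alpha$ and scale $\beta$ (the paper's notation may differ) gives the density, reliability and hazard functions as

f(t) = 2\alpha\beta^{\alpha}\, t\, (\beta + t^2)^{-(\alpha+1)}, \qquad
R(t) = \left(\frac{\beta}{\beta + t^2}\right)^{\alpha}, \qquad
h(t) = \frac{f(t)}{R(t)} = \frac{2\alpha t}{\beta + t^2}, \qquad t > 0.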