Abstract: In the past decade, artificial neural networks (ANNs)
have been regarded as an instrument for problem-solving and
decision-making; indeed, they have already delivered substantial
efficiency and effectiveness improvements in industries and businesses.
In this paper, Back-Propagation Neural Networks (BPNs) are
modularized to demonstrate the performance of the collaborative
forecasting (CF) function of a Collaborative Planning, Forecasting and
Replenishment (CPFR®) system. CPFR maintains the balance between
sufficient product supply and the necessary customer demand in a
Supply and Demand Chain (SDC). Several classical standard BPNs
are grouped, coordinated and exploited for easy implementation of
the proposed modular ANN framework based on the topology of an
SDC. Each individual BPN is applied as a modular tool to perform the
task of forecasting the SKU (stock-keeping unit) levels that are
managed and supervised at a POS (point of sale), a wholesaler, and a
manufacturer in an SDC. The proposed modular BPN-based CF
system is exemplified and experimentally verified using numerous
datasets from the simulated SDC. The experimental results showed that a
complex CF problem can be divided into a group of simpler
sub-problems based on the individual independent trading partners
distributed over the SDC, and that its SKU forecasting accuracy was
satisfactory when the forecast values were compared with the original
simulated SDC data. The primary task of implementing an autonomous CF
involves the study of supervised ANN learning methodology, which
aims at making "knowledgeable" decisions for the best SKU sales planning
and stock management.
Abstract: In this paper, a neural tree (NT) classifier having a
simple perceptron at each node is considered. A new concept for
making a balanced tree is applied in the learning algorithm of the
tree. At each node, if the perceptron classification is not accurate and
the split is unbalanced, the perceptron is replaced by a new one that
separates the training set so that almost equal numbers of patterns
fall into each of the classes. Moreover, each perceptron is trained only
on the classes that are present at the respective node, ignoring the other
classes. Splitting nodes are introduced into the neural tree architecture
to divide the training set when the current perceptron node repeats
the same classification as its parent node. A new error function based
on the depth of the tree is introduced to reduce the computational
time for training a perceptron. Experiments are performed to
evaluate the efficiency, and encouraging results are obtained in terms of
accuracy and computational cost.
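The node behaviour described above can be sketched in a minimal form (the toy data and function names are illustrative, not the authors' implementation): a simple perceptron is trained at a node, and the balance of the split it induces is measured, which is the quantity the learning algorithm would use to decide whether the perceptron should be replaced.

```python
# Illustrative sketch: a perceptron node of a neural tree splits the
# training set; a split is "balanced" when the two children receive
# roughly equal numbers of patterns.

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Train a simple perceptron; y must contain labels -1 or +1."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi)) + b
            pred = 1 if s >= 0 else -1
            if pred != yi:  # classic perceptron update on mistakes only
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def split_balance(X, w, b):
    """Fraction of patterns sent to the 'positive' child of the node."""
    pos = sum(1 for xi in X
              if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0)
    return pos / len(X)

# Toy linearly separable data: two clusters along the first axis.
X = [[-2.0, 0.5], [-1.5, -0.3], [-1.0, 0.1],
     [1.0, 0.2], [1.5, -0.4], [2.0, 0.3]]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_perceptron(X, y)
balance = split_balance(X, w, b)   # 0.5 = perfectly balanced split
```

A balance far from 0.5 would flag the node as a candidate for replacement in the tree-building loop.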
Abstract: In this paper we propose an NLP-based method for
ontology population from texts and apply it to semi-automatically
instantiate a generic knowledge base (generic domain ontology) in
the risk management domain. The approach is semi-automatic and
relies on domain-expert intervention for validation. The proposed
approach relies on a set of instance recognition rules based on
syntactic structures, and on the predicative power of verbs in the
instantiation process. It is not domain-dependent, since it relies
heavily on linguistic knowledge.
A description of an experiment performed on a part of the
ontology of the PRIMA project (supported by the European
Community) is given. A first validation of the method is carried out by
populating this ontology with Chemical Fact Sheets from the
Environmental Protection Agency. The results of this experiment
conclude the paper and support the hypothesis that relying on the
predicative power of verbs in the instantiation process improves
performance.
Abstract: The successful implementation of Service-Oriented Architecture (SOA) is not confined to information technology systems; it requires changes across the whole enterprise. In order to align IT with the business, the enterprise needs adequate and measurable methods. The adoption of SOA creates new problems with regard to measuring and analysing performance. In fact, the enterprise should investigate to what extent the development of services will increase the value of the business. Every business needs to measure how well SOA adoption aligns with enterprise goals. Moreover, precise performance metrics, combined with advanced evaluation methodologies, should be defined as a solution. The aim of this paper is to present a systematic methodology for designing a measurement system at the technical and business levels, so that: (1) measurement metrics are determined precisely, and (2) the results can be analysed by mapping the identified metrics to the measurement tools.
Abstract: Shape memory alloy (SMA) actuators have found a
wide range of applications due to their unique properties such as high
force, small size, light weight and silent operation. This paper presents
the development of a compact SMA actuator and its cooling system in
one unit. The actuator is developed for a multi-fingered hand. It
consists of nickel-titanium (Nitinol) SMA wires in a compact arrangement.
The new arrangement insulates the SMA wires from the human body by
housing them in a heat sink, and uses a thermoelectric device to reject
heat and improve the actuator's performance. The study uses
optimization methods to select the geometrical parameters of the SMA
wires and the material of the heat sink. The experimental work
implements the actuator prototype and measures its response.
Abstract: Spare parts inventory management is one of the major
areas of inventory research. Analysis of recent literature showed that
an approach integrating spare parts classification, demand
forecasting, and stock control policies is essential; however, adoption
of this integrated approach remains limited. This work presents an integrated
framework for spare parts inventory management and an Excel-based
application developed to implement the proposed
framework. A multi-criteria analysis has been used for spare parts
classification. Forecasting of the intermittent demand for spare parts has
been incorporated into the application using three different
forecasting models, namely the normal distribution, exponential
smoothing, and Croston's method. The application is also capable of
running with different inventory control policies. To illustrate the
performance of the proposed framework and the developed
application, the framework is applied to different items at a service
organization. The results achieved are presented and possible areas
for future work are highlighted.
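Of the three forecasting models listed, Croston's method is the one designed specifically for intermittent demand. A minimal sketch, with an illustrative smoothing constant and demand series (not data from the paper):

```python
# A minimal sketch of Croston's method for intermittent demand.

def croston(demand, alpha=0.1):
    """Return the per-period forecast after processing the series.

    Demand sizes (z) and inter-demand intervals (p) are smoothed
    separately; the forecast is z / p.
    """
    z = p = None        # smoothed size and interval, initialised lazily
    q = 1               # periods since the last non-zero demand
    for d in demand:
        if d > 0:
            if z is None:            # first non-zero demand initialises
                z, p = d, q
            else:                    # exponential smoothing updates
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
    return None if z is None else z / p

forecast = croston([0, 3, 0, 0, 2, 0, 4, 0, 0, 0, 5])
```

Smoothing the demand sizes and the intervals between non-zero demands separately is what distinguishes Croston's method from plain exponential smoothing applied to the raw series.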
Abstract: Support vector regression (SVR) has been regarded
as a state-of-the-art method for approximation and regression. The
importance of the kernel function, the so-called admissible support
vector kernel (SV kernel) in SVR, has motivated many studies
on its composition. The Gaussian kernel (RBF) is regarded as the
"best" choice of SV kernel for non-experts in SVR, although
there is no evidence, except for its superior performance in some
practical applications, to support this claim. It is well known that a
reproducing kernel (R.K.) is also an SV kernel which possesses many
important properties, e.g. positive definiteness, the reproducing property,
and the ability to compose complex R.K.s from simpler ones. However,
only a limited number of R.K.s have explicit forms, and consequently few
quantitative comparison studies exist in practice. In this paper, two R.K.s,
i.e. SV kernels, composed by the sum and product of a translation-invariant
kernel in a Sobolev space are proposed. An exploratory
study on the performance of SVR based on the general R.K. is presented
through a systematic comparison to that of RBF using multiple
criteria and synthetic problems. The results show that the R.K. is
an equivalent or even better SV kernel than RBF for problems
with more input variables (more than 5, especially more than 10) and
higher nonlinearity.
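The closure properties behind composing kernels by sums and products can be checked numerically. The sketch below uses Gaussian kernels as stand-ins (not the paper's Sobolev-space reproducing kernels) and verifies that the composed Gram matrices remain positive semi-definite, i.e. admissible:

```python
# Numerical check that sums and (elementwise) products of admissible SV
# kernels yield admissible kernels: their Gram matrices stay PSD.

import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def gram(kernel, X):
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))           # 20 random points in 5 dimensions

K1 = gram(rbf, X)
K2 = gram(lambda x, y: rbf(x, y, gamma=0.1), X)

# Sum and elementwise (Schur) product of valid Gram matrices.
K_sum, K_prod = K1 + K2, K1 * K2

def is_psd(K, tol=1e-8):
    return np.min(np.linalg.eigvalsh(K)) > -tol

psd_ok = all(is_psd(K) for K in (K1, K2, K_sum, K_prod))
```

The product case is the Schur product theorem in action; the small tolerance absorbs floating-point noise in the eigenvalues.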
Abstract: This paper focuses on developing an integrated,
reliable and detailed model for ultra-large wind turbines, and on
studying the performance of vector control on large wind
turbines. With the advance of power electronics technology, the direct-driven
multi-pole radial-flux PMSG (Permanent Magnet Synchronous
Generator) has proven to be a good choice for wind turbine
manufacturers. To study wind energy conversion systems, it is
important to develop a wind turbine simulator that is able to reproduce
realistic, validated conditions that occur in real multi-megawatt wind
turbines. Three different packages are used to simulate this model,
namely TurbSim, FAST and Simulink. TurbSim is a full-field wind
simulator developed by the National Renewable Energy Laboratory
(NREL). The wind turbine's mechanical parts are modeled by the FAST
(Fatigue, Aerodynamics, Structures and Turbulence) code, which is
also developed by NREL. Simulink is used to model the PMSG, the full-scale
back-to-back IGBT converters, and the grid.
Abstract: In this paper, the gain spectrum of an EDFA has been broadened by implementing the HTE configuration for the S and C bands. Using this configuration, an amplification bandwidth of 76 nm, ranging from 1479 nm to 1555 nm, with a peak gain of 26 dB has been obtained.
Abstract: Although face recognition seems an easy task for
humans, automatic face recognition is much more challenging
due to variations in time, illumination and pose. In this paper, the
influence of time-lapse on visible and thermal images is examined.
Orthogonal moment invariants are used as a feature extractor to
analyze the effect of time-lapse on thermal and visible images and the
results are compared with conventional Principal Component
Analysis (PCA). A new triangle square ratio criterion is employed
instead of Euclidean distance to enhance the performance of nearest
neighbor classifier. The results of this study indicate that the ideal
feature vectors can be represented with high discrimination power
due to the global characteristic of orthogonal moment invariants.
Moreover, the effect of time-lapse is reduced, enhancing
the accuracy of face recognition considerably in comparison with
PCA. Furthermore, our experimental results based on moment
invariants and the triangle square ratio criterion show that the proposed
approach achieves a recognition rate that is, on average, 13.6% higher
than that of PCA.
Abstract: As test costs in today's semiconductor industry can account for up to 50 percent of total production costs, efficient test error detection becomes more and more important. In this paper, we present a new machine learning approach to test error detection that should provide faster recognition of test system faults as well as improved test error recall. The key idea is to learn a classifier ensemble that detects typical test error patterns in wafer test results immediately after these tests finish. Since test error detection has not yet been discussed in the machine learning community, we define the central problem-relevant terms and provide an analysis of important domain properties. Finally, we present comparative studies reflecting the failure detection performance of three individual classifiers and three ensemble methods based upon them. As base classifiers we chose a decision tree learner, a support vector machine and a Bayesian network, while the compared ensemble methods were simple and weighted majority vote as well as stacking. For the evaluation, we used cross-validation and a specially designed practical simulation. By implementing our approach in a semiconductor test department for the observation of two products, we demonstrated its practical applicability.
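The two majority-vote combination schemes compared in the study can be sketched as follows (the base classifiers here are stand-in vote lists, not the decision tree, SVM and Bayesian network actually used):

```python
# Toy sketch of simple and weighted majority voting over binary labels
# 0 (test passes) / 1 (test error detected).

def majority_vote(predictions):
    """Unweighted vote: label 1 wins if more than half the voters say 1."""
    return int(sum(predictions) * 2 > len(predictions))

def weighted_majority_vote(predictions, weights):
    """Each classifier's vote counts proportionally to its weight."""
    score = sum(w for p, w in zip(predictions, weights) if p == 1)
    return int(score * 2 > sum(weights))

# Three hypothetical base classifiers disagree on a wafer-test pattern:
votes = [1, 0, 1]
simple = majority_vote(votes)                              # 2 of 3 say "error"
weighted = weighted_majority_vote(votes, [0.2, 0.9, 0.3])  # trusted one says "pass"
```

With weights reflecting per-classifier reliability, the weighted vote can overrule a numerical majority, which is the usual motivation for preferring it over the simple scheme.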
Abstract: With increasing complexity in electronic systems
there is a need for system level anomaly detection and fault isolation.
Anomaly detection based on vector similarity to a training set is applied
in this paper through two approaches: one that preserves the original
information, Mahalanobis Distance (MD), and another that
compresses the data into its principal components, Projection Pursuit
Analysis. These methods have been used to detect deviations in
system performance from normal operation and for critical parameter
isolation in multivariate environments. The study evaluates the
detection capability of each approach on a set of test data with known
faults against a baseline set of data representative of such “healthy"
systems.
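The MD-based approach can be sketched as follows, assuming a baseline of "healthy" multivariate data; the data and the two probe points are illustrative only:

```python
# Minimal sketch of Mahalanobis-distance anomaly detection: learn mean and
# covariance from healthy baseline data, then score test vectors by their
# distance from that distribution.

import numpy as np

def fit_md(train):
    mu = train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(train, rowvar=False))
    return mu, cov_inv

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(500, 3))   # baseline "healthy" data
mu, cov_inv = fit_md(healthy)

normal_point = np.zeros(3)                      # near the training mean
faulty_point = np.array([8.0, -8.0, 8.0])       # far outside the baseline

d_normal = mahalanobis(normal_point, mu, cov_inv)
d_faulty = mahalanobis(faulty_point, mu, cov_inv)
```

A threshold on the distance (e.g. from a chi-squared quantile) would turn these scores into a detector; unlike Euclidean distance, MD accounts for the correlations between parameters, which matters in the multivariate setting the abstract describes.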
Abstract: True integration of multimedia services over wired or
wireless networks increases productivity and effectiveness in
today's networks. The IP Multimedia Subsystem (IMS) is a Next Generation
Network architecture for providing multimedia services over fixed
or mobile networks. This paper proposes an extended SIP-based QoS
Management architecture for IMS services over underlying IP access
networks. To guarantee end-to-end QoS for IMS services in the
interconnection backbone, SIP-based proxy modules are introduced
to support QoS provisioning and to reduce the handoff disruption
time over IP access networks. In our approach these SIP modules
implement a combination of DiffServ and MPLS QoS mechanisms
to assure guaranteed QoS for real-time multimedia services. To
guarantee QoS over the access networks, the SIP modules make QoS
resource reservations in advance to provide the best QoS to IMS users
over heterogeneous networks. To obtain more reliable multimedia
services, our approach uses SCTP instead of UDP as the transport
for SIP, owing to its multi-streaming feature. This architecture
enables QoS provisioning for IMS roaming users, differentiating the IMS
network from other common IP networks for the transmission of real-time
multimedia services. To validate our approach, simulation
models are developed on a small scale. The results show that our
approach yields comparable performance for efficient delivery of
IMS services over heterogeneous IP access networks.
Abstract: This paper describes the gain and noise performance
of a discrete Raman amplifier as a function of fiber length and
signal input power for different pump configurations. The simulation
has been carried out using OptiSystem 7.0 simulation software at a signal
wavelength of 1550 nm and a pump wavelength of 1450 nm. The
results show that the gain is higher with bidirectional pumping than with
counter-pumping; that the gain changes with increasing fiber length
while the noise figure remains the same for short fiber lengths; and that
the gain saturates differently for different pumping configurations at
different fiber lengths and signal power levels.
Abstract: Testing is an activity required in both the
development and maintenance phases of the software development life
cycle, and integration testing (IT) is an important part of it. Integration
testing is based on the specification and functionality of the software
and can thus be called a black-box testing technique. Its purpose is
to test the integration between software
components. In function or system testing, by contrast, the concern is
with overall behavior: whether the software meets its functional
specifications or performance characteristics, and how well the software
and hardware work together. This explains the importance and necessity
of IT, whose emphasis is on the interactions between modules and
their interfaces. Software errors should be discovered early, during
IT, to reduce the cost of correction. This paper introduces a new type
of integration error and presents an overview of integration testing
techniques, comparing the techniques and identifying
which technique detects what type of error.
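As an illustration of the distinction drawn above, a small (entirely hypothetical) integration test exercises two components together through their interface, rather than testing each in isolation:

```python
# Hypothetical example: OrderService depends on Inventory, and the
# integration test targets the interaction across that interface.

import unittest

class Inventory:
    def __init__(self):
        self._stock = {}
    def add(self, sku, qty):
        self._stock[sku] = self._stock.get(sku, 0) + qty
    def remove(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            raise ValueError("insufficient stock")
        self._stock[sku] -= qty

class OrderService:
    """Depends on Inventory -- the integration under test."""
    def __init__(self, inventory):
        self.inventory = inventory
    def place_order(self, sku, qty):
        self.inventory.remove(sku, qty)   # interface call between modules
        return {"sku": sku, "qty": qty}

class OrderInventoryIntegrationTest(unittest.TestCase):
    def test_order_reduces_stock(self):
        inv = Inventory()
        inv.add("A1", 10)
        order = OrderService(inv).place_order("A1", 4)
        self.assertEqual(order["qty"], 4)
        self.assertEqual(inv._stock["A1"], 6)

    def test_order_rejected_when_out_of_stock(self):
        inv = Inventory()
        with self.assertRaises(ValueError):
            OrderService(inv).place_order("A1", 1)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(OrderInventoryIntegrationTest))
```

An integration error here would be a mismatch at the `remove` interface (wrong argument order, wrong exception contract), which unit tests of either class alone could miss.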
Abstract: We propose a downlink multiple-input multiple-output
(MIMO) multi-carrier code division multiple access (MC-CDMA)
system with an adaptive beamforming algorithm for smart
antennas. The algorithm used in this paper is based on Least
Mean Squares (LMS), with pilot channel estimation (PCE) and a
zero-forcing equalizer (ZFE) at the receiver, requiring a reference
signal but no channel knowledge. MC-CDMA is studied in a
multiple-antenna context in order to efficiently exploit the robustness
against multipath effects and the multi-user flexibility of MC-CDMA, and
the channel diversity offered by MIMO systems, for mobile radio
channels. Computer simulations, considering a multipath Rayleigh
fading channel with inter-symbol interference, are presented to verify
the performance. The simulation results show that the
scheme achieves good performance in a multi-user system.
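The LMS update at the heart of such an algorithm, driven by a known pilot/reference signal and requiring no channel knowledge, can be sketched as follows (step size, channel taps and signals are illustrative, not the paper's simulation setup):

```python
# Minimal LMS sketch: the weights are adapted from the error between a
# known reference and the filter output, with no channel knowledge.

import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """Adapt FIR weights so that w . x(n) tracks the reference d(n)."""
    w = np.zeros(n_taps)
    errors = []
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1 : n + 1][::-1]  # most recent sample first
        e = d[n] - w @ xn                     # error vs. pilot/reference
        w = w + mu * e * xn                   # LMS weight update
        errors.append(e)
    return w, np.array(errors)

rng = np.random.default_rng(2)
pilot = rng.choice([-1.0, 1.0], size=2000)    # known reference symbols
h = np.array([1.0, 0.4, 0.2, 0.1])            # unknown channel causing ISI
received = np.convolve(pilot, h)[: len(pilot)]

# Identification configuration: input = pilot, reference = received signal,
# so the weights converge toward the unknown channel taps.
w, errors = lms(pilot, received)
final_mse = float(np.mean(errors[-200:] ** 2))
```

The same update rule serves equalisation or beamforming by swapping which signal plays the role of input and which the reference.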
Abstract: This study examined the effects of eight weeks of
whole-body vibration training (WBVT) on vertical and decuple jump
performance in handball athletes. Sixteen collegiate Level I handball
athletes volunteered for this study. They were divided equally into a
control group and an experimental group (EG). During the period of the
study, all athletes underwent the same handball-specific training, but
the EG received additional WBVT (amplitude: 2 mm, frequency: 20 -
40 Hz) three times per week for eight consecutive weeks. The vertical
jump performance was evaluated according to the maximum height of
squat jump (SJ) and countermovement jump (CMJ). Single factor
ANCOVA was used to examine the differences in each parameter
between the groups after training with the pretest values as a covariate.
Statistical significance was set at p < .05. After 8 weeks of WBVT, the
EG had significantly improved the maximal height of the SJ (40.92 ± 2.96
cm vs. 48.40 ± 4.70 cm, F = 5.14, p < .05) and the maximal height of the
CMJ (47.25 ± 7.48 cm vs. 52.20 ± 6.25 cm, F = 5.31, p < .05). Eight weeks
of additional WBVT could improve vertical and decuple jump
performance in handball athletes. Enhanced motor unit
synchronization and firing rates, facilitated muscular contraction
stretch-shortening cycle, and improved lower extremity
neuromuscular coordination could account for these enhancements.
Abstract: It is observed that the Weighted least-square (WLS)
technique, including the modifications, results in equiripple error
curve. The resultant error as a percent of the ideal value is highly
non-uniformly distributed over the range of frequencies for which the
differentiator is designed. The present paper proposes a modification
to the technique so that the optimization procedure results in lower
maximum relative error compared to the ideal values. Simulation
results for first order as well as higher order differentiators are given
to illustrate the excellent performance of the proposed method.
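A plain WLS differentiator design, the baseline such a modification acts on, can be sketched as follows (the grid, band edge and filter order are illustrative, and the paper's relative-error weighting is not reproduced, only a hook for a weight function):

```python
# Least-squares design of a Type-III (antisymmetric) FIR digital
# differentiator: fit the amplitude sum_k c_k sin(k*w) to the ideal
# response |H(w)| = w over the design band.

import numpy as np

def wls_differentiator(n_coef=10, band=0.8 * np.pi, grid=512, weight=None):
    """Return coefficients, the frequency grid, and the absolute error."""
    w = np.linspace(1e-3, band, grid)            # design grid (avoid w = 0)
    A = np.sin(np.outer(w, np.arange(1, n_coef + 1)))
    d = w                                        # ideal differentiator response
    wt = np.ones_like(w) if weight is None else weight(w)
    c, *_ = np.linalg.lstsq(A * wt[:, None], d * wt, rcond=None)
    err = A @ c - d                              # absolute error on the grid
    return c, w, err

c, w, err = wls_differentiator()
max_abs_err = float(np.max(np.abs(err)))
max_rel_err = float(np.max(np.abs(err) / w))   # the quantity the paper targets
```

Passing a frequency-dependent `weight` (e.g. one emphasising low frequencies, where the ideal value is small) is the lever a relative-error-oriented modification would use.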
Abstract: This study examines perception of environmental
approach in small and medium-sized enterprises (SMEs) – the
process by which firms integrate environmental concern into
business. Based on a review of the literature, the paper combines a
focus on environmental issues with reflections from a case study in
the Czech Republic. Two themes of corporate environmentalism are
discussed – corporate environmental orientation and corporate
stances toward environmental concerns. It provides theoretical
material on greening organizational culture that is helpful in
understanding the response of contemporary business to
environmental problems. We integrate theoretical predictions with
empirical findings. Scales to measure these
themes are tested in a survey of managers in 229 Czech firms, using
in-depth questioning. The research question was
derived and answered in the context of the corresponding literature
and the research conducted. The case study showed that the environmental
approach varies considerably (depending on the size of the firm) within the
SME sector. The results of the empirical mapping demonstrate
Czech companies' approach to the environment, define the problem
areas, and pinpoint the main limitations on the expansion of
environmental practices. We contribute to the debate for recognition of
the particular role of environmental issues in business reality.
Abstract: The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications that require high-speed computation. RNS is an integer, non-weighted number system; it supports parallel, carry-free, high-speed and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple-Valued Logic (MVL) and residue number arithmetic. If the number of levels used to represent MVL signals is chosen to be consistent with the moduli that create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. There are two concerns in applying this number system: achieving the highest possible speed and the largest dynamic range. These goals conflict: enlarging the dynamic range reduces the speed at the same time. To achieve the highest performance, the "One-Hot Residue Number System" (OHRNS) has been considered; in this implementation the propagation delay equals only one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows on the order of m². In real applications this is often impractical. In this paper, by combining Multiple-Valued Logic and the One-Hot Residue Number System, we present a new method that resolves both of these problems. We present a novel design of an OHRNS-based adder circuit. The circuit is usable with Multiple-Valued Logic moduli and, in comparison with other RNS designs, considerably reduces the transistor count and power consumption.
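The carry-free residue arithmetic underlying the design, together with the one-hot rotation that realises a modular addition, can be sketched as follows (the moduli and values are illustrative; the transistor-level adder itself is not modelled):

```python
# Residue arithmetic and the one-hot rotation idea, sketched in Python.

MODULI = (3, 5, 7)            # pairwise coprime; dynamic range = 3*5*7 = 105

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Each digit is added modulo its own base: no carries propagate
    # between moduli, which is the source of the RNS speed advantage.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Reconstruct the integer (search form of the Chinese Remainder Theorem)."""
    total = 1
    for m in MODULI:
        total *= m
    return next(x for x in range(total) if to_rns(x) == r)

def one_hot(d, m):
    """MVL/one-hot encoding: digit d of modulus m as an m-line vector."""
    return tuple(int(i == d) for i in range(m))

def one_hot_add(ha, db, m):
    # In OHRNS, adding db is a cyclic rotation of the one-hot lines --
    # realised in hardware by a barrel-shifter-like path.
    return tuple(ha[(i - db) % m] for i in range(m))

s = from_rns(rns_add(to_rns(17), to_rns(29)))   # 17 + 29 = 46 < 105, exact
rotated = one_hot_add(one_hot(2, 5), 4, 5)      # (2 + 4) mod 5 = 1
```

The m-line one-hot encoding is what drives the order-m² transistor cost the abstract mentions: each mod-m digit needs m signal lines and an m-way rotation network.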