Abstract: This paper proposes a novel model for short-term load
forecasting (STLF) in the electricity market. The model is composed
of several neural networks whose input data are processed using a
wavelet technique, and it is implemented as a simulation program
written in MATLAB. The prior electricity demand data are treated as
time series and decomposed into several wavelet coefficient series
using the wavelet transform technique known as the Non-decimated
Wavelet Transform (NWT); this technique is used for its ability to
extract hidden patterns from time series data. The wavelet
coefficient series are used to train the neural networks (NNs) and
serve as the NN inputs for electricity load prediction. The Scaled
Conjugate Gradient (SCG) algorithm is used as the learning algorithm
for the NNs. To obtain the final forecast, the outputs from the NNs
are recombined using the same wavelet technique. The model was
evaluated with the electricity load data of the Electronic
Engineering Department of Mandalay Technological University in
Myanmar. The simulation results showed that the model is capable of
producing reasonable forecasting accuracy in STLF.
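The decomposition/recombination step can be illustrated with a minimal one-level non-decimated Haar transform (a sketch only; the actual wavelet filter and number of decomposition levels are not given in the abstract):

```python
# One-level non-decimated Haar decomposition. No downsampling is performed,
# so each coefficient series has the same length as the input -- the property
# that distinguishes the non-decimated (stationary) transform.
def ndwt_haar(x):
    """Return (approximation, detail) series with circular boundary handling."""
    n = len(x)
    approx = [(x[i] + x[(i + 1) % n]) / 2.0 for i in range(n)]
    detail = [(x[i] - x[(i + 1) % n]) / 2.0 for i in range(n)]
    return approx, detail

def indwt_haar(approx, detail):
    """Exact recombination: x[i] = approx[i] + detail[i]."""
    return [a + d for a, d in zip(approx, detail)]
```

In the paper's pipeline, each coefficient series would feed a separate NN, and the NN outputs would then be recombined analogously to `indwt_haar`.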
Abstract: Computers are being integrated into various aspects of
everyday human life, in different shapes and with different
abilities. This fact has intensified the demand for software
development technologies that are: 1) portable, 2) adaptable, and 3)
simple to develop. This problem is also known as the Pervasive
Computing Problem (PCP), which can be addressed in different ways,
each with its own pros and cons; Context-Oriented Programming (COP)
is one of the methods to address the PCP.
In this paper, a design for a COP framework, a context-aware
framework, is presented which eliminates the weak points of a
previous design based on interpreted languages, while bringing the
power of compiled languages to the implementation of these
frameworks. The key point of this improvement is combining COP with
Dependency Injection (DI) techniques. Both the old and new
frameworks are analyzed to show their advantages and disadvantages.
Finally, a simulation of both designs is presented, indicating that
the practical results agree with the theoretical analysis, with the
new design running almost 8 times faster.
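The combination of COP and DI can be sketched as follows (a minimal illustration; the `Container` API here is an assumption made for the sketch, not the framework described in the paper):

```python
# COP via DI: the container holds one binding per (service, context) pair,
# so a context switch changes behaviour without modifying client code.
class Container:
    def __init__(self):
        self._bindings = {}  # (service, context) -> implementation

    def bind(self, service, context, impl):
        self._bindings[(service, context)] = impl

    def resolve(self, service, context):
        # Prefer a context-specific binding; fall back to the default (None).
        key = (service, context)
        if key in self._bindings:
            return self._bindings[key]
        return self._bindings[(service, None)]

# Client code depends only on the injected service, never on the context.
def greeting(container, context):
    return container.resolve("greet", context)()

c = Container()
c.bind("greet", None, lambda: "hello")    # default behaviour
c.bind("greet", "mobile", lambda: "hi")   # context-specific layer
```

The design point is that context dispatch happens at injection time rather than being interpreted at every call, which is one way a compiled-language framework can avoid the overhead of the interpreted design.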
Abstract: This study proposes three methods to evaluate the Tokyo
Cap and Trade Program when emissions trading is performed virtually
among enterprises, focusing on carbon dioxide (CO2), the only
emitted greenhouse gas that tends to increase.
The first method clarifies the optimum reduction rate for the highest
cost benefit, the second discusses emissions trading among enterprises
through market trading, and the third verifies long-term emissions
trading during the term of the plan (2010-2019), checking the validity
of emissions trading partly using Geographic Information Systems
(GIS). The findings of this study can be summarized in the following
three points.
1. Since the total cost benefit is greatest at a 44% reduction rate,
   it is possible to set the rate higher than that of the Tokyo Cap
   and Trade Program to obtain a greater total cost benefit.
2. At a 44% reduction rate, among 320 enterprises, 8 purchasing
   enterprises and 245 sales enterprises gain profits from emissions
   trading, and 67 enterprises perform voluntary reductions without
   conducting emissions trading. Therefore, to further promote
   emissions trading, it is necessary to increase the sales volume
   of emissions trading, in addition to the number of sales
   enterprises, by increasing the number of purchasing enterprises.
3. Compared to short-term emissions trading, few enterprises benefit
   in each year under the long-term emissions trading of the Tokyo
   Cap and Trade Program: at most, only 81 enterprises can gain
   profits from emissions trading in FY 2019. Therefore, by setting
   the reduction rate higher, it is necessary to increase the number
   of enterprises that participate in emissions trading and benefit
   from the restraint of CO2 emissions.
Abstract: The Shanghai Cooperation Organization (SCO) is one of the successful outcomes of China's foreign policy since the end of the Cold War. The expansion of multilateral ties all over the world through institutional strategies such as the SCO identifies China as a more constructive power. The SCO became a new model of cooperation that was formed on the remains of the collapsed Soviet system and predetermined China's geopolitical role in the region. As a fast-developing, effective regional mechanism, the SCO today has a greater external impact on the international system and forms a new type of interaction for promoting China's grand strategy of 'peaceful rise'.
Abstract: This paper describes a system-level SoC energy
consumption estimation method based on the dynamic behavior of
embedded software in the early stages of SoC development. A major
problem in SoC development is development rework caused by
unreliable energy consumption estimation at the early stages. The
energy consumption of an SoC used in embedded systems is strongly
affected by the dynamic behavior of the software. At the early
stages of SoC development, modeling with a high level of abstraction
is required for both the dynamic behavior of the software and the
behavior of the SoC. We estimate the energy consumption by a UML
model-based simulation. The proposed method is applied to an actual
embedded system in an MFP. The energy consumption estimation of the
SoC is more accurate than with conventional methods, and the
proposed method is promising for reducing the chance of development
rework in SoC development.
Abstract: In this article, we propose a new method for the project
selection problem using the fuzzy AHP and TOPSIS techniques. After
reviewing four common methods for comparing investment alternatives
(net present value, rate of return, benefit-cost analysis and
payback period), we use them as criteria in the AHP tree. In this
methodology, we first calculate the weight of each criterion using
the Analytical Hierarchy Process improved with fuzzy set theory.
Then, by implementing the TOPSIS algorithm, the projects are
assessed. The obtained results are tested on a numerical example.
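The TOPSIS ranking step can be sketched as follows (a crisp, minimal version; the paper's fuzzy-AHP weighting is replaced here by given weights, which is an assumption made for illustration):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix: rows = alternatives, columns = criteria.
    benefit[j]: True if larger values of criterion j are better."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(n)]
    d_pos = [math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n))) for i in range(m)]
    d_neg = [math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n))) for i in range(m)]
    # Closeness coefficient in [0, 1]; higher is better.
    return [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]
```

A project that dominates on every criterion receives a closeness coefficient of 1, and a dominated one receives 0.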
Abstract: Groups where the discrete logarithm problem (DLP) is believed to be intractable have proved to be inestimable building blocks for cryptographic applications. They are at the heart of numerous protocols such as key agreements, public-key cryptosystems, digital signatures, identification schemes, publicly verifiable secret sharings, hash functions and bit commitments. The search for new groups with an intractable DLP is therefore of great importance. The goal of this article is to study elliptic curves over the ring Fq[ε], with Fq a finite field of order q and with the relation εⁿ = 0, n ≥ 3. The motivation for this work came from the observation that several practical discrete logarithm-based cryptosystems, such as ElGamal and the elliptic curve cryptosystems, could be adapted to such groups. We first describe these curves defined over a ring. Then, we study their algorithmic properties, proposing effective implementations for representing the elements and computing the group law. In another article, we study their cryptographic properties, an attack on the elliptic discrete logarithm problem, and a new cryptosystem over these curves.
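Arithmetic in the underlying ring can be illustrated with a minimal sketch (the toy parameters p = 7 and n = 3 are assumptions for illustration; the paper's actual element representation and group law are not reproduced here):

```python
# Arithmetic in F_p[eps]/(eps^n): elements are coefficient lists
# [a0, a1, ..., a_{n-1}] standing for a0 + a1*eps + ... + a_{n-1}*eps^(n-1).
P, N = 7, 3  # assumed toy parameters: F_7 with eps^3 = 0

def radd(a, b):
    return [(x + y) % P for x, y in zip(a, b)]

def rmul(a, b):
    c = [0] * N
    for i in range(N):
        for j in range(N - i):  # terms with i + j >= N vanish since eps^N = 0
            c[i + j] = (c[i + j] + a[i] * b[j]) % P
    return c
```

The truncation in `rmul` is exactly the relation εⁿ = 0: products of high-order terms disappear, which is what makes the ring non-trivial as a base for curve arithmetic.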
Abstract: In recent years, everything is trending toward
digitalization, and with the rapid development of Internet
technologies, digital media needs to be transmitted conveniently
over the network. Attacks, misuse and unauthorized access to
information are of great concern today, which makes the protection
of documents transmitted through digital media a priority. This
urges us to devise new data hiding techniques to protect and secure
data of vital significance. In this respect, steganography often
comes to the fore as a tool for hiding information. Steganography is
a process that involves hiding a message in an appropriate carrier,
such as an image or audio file. The word is of Greek origin and
means "covered or hidden writing". The goal of steganography is
covert communication: the carrier can be sent to a receiver without
anyone except the authenticated receiver knowing of the existence of
the hidden information. A considerable amount of work on
steganography has been carried out by different researchers. In this
work, the authors propose a novel steganographic method for hiding
information within the spatial domain of a grayscale image. The
proposed approach works by selecting the embedding pixels using a
mathematical function, finding the 8-neighborhood of each selected
pixel, and mapping each bit of the secret message to a neighbor
pixel's coordinate position in a specified manner. Before embedding,
a check is performed to determine whether the selected pixel or its
neighbors lie on the boundary of the image. This solution is
independent of the nature of the data to be hidden and produces a
stego image with minimal degradation.
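A minimal sketch of this style of embedding is shown below (the selection function and bit-to-neighbor mapping here are simplified placeholders, not the authors' exact scheme, and a short message is assumed so that neighborhoods need not be kept disjoint):

```python
# Neighbourhood-based spatial-domain embedding: pixels are chosen by a toy
# selection function, and each message bit replaces the least significant bit
# of one of the 8 neighbours of a selected pixel.
NEIGH = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def select_pixels(h, w, step=3):
    """Toy selection function: every `step`-th interior pixel, row-major.
    Interior-only, so a selected pixel's neighbours never cross the boundary
    (this is the boundary check the abstract mentions, in its simplest form)."""
    coords = [(r, c) for r in range(1, h - 1) for c in range(1, w - 1)]
    return coords[::step]

def embed(img, bits):
    """img: 2-D list of grey levels (0..255); modified in place."""
    it = iter(bits)
    for r, c in select_pixels(len(img), len(img[0])):
        for dr, dc in NEIGH:
            b = next(it, None)
            if b is None:
                return img
            img[r + dr][c + dc] = (img[r + dr][c + dc] & ~1) | b
    return img

def extract(img, nbits):
    out = []
    for r, c in select_pixels(len(img), len(img[0])):
        for dr, dc in NEIGH:
            if len(out) == nbits:
                return out
            out.append(img[r + dr][c + dc] & 1)
    return out
```

Because only LSBs of scattered neighbor pixels change, each altered grey level moves by at most 1, which is the sense in which degradation stays minimal.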
Abstract: The performance of millimeter-wave (mm-wave) multiband
orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband
(UWB) signal generation using a frequency quadrupling technique,
with transmission over fiber, is experimentally investigated. The
frequency quadrupling is achieved using only one Mach-Zehnder
modulator (MZM) biased at the maximum transmission (MATB) point. At
the output, a frequency-quadrupled signal is obtained and then sent
to a second MZM, which is used for MB-OFDM UWB signal modulation. In
this work, we demonstrate a 30-GHz mm-wave wireless signal that
carries three-band OFDM UWB signals, and error vector magnitude
(EVM) is used to analyze the transmission quality. It is found that
our proposed technique leads to an improvement of 3.5 dB in EVM at
40% local oscillator (LO) modulation in comparison with the
technique using two cascaded MZMs biased at the minimum transmission
(MITB) point.
Abstract: The least mean square (LMS) algorithm is one of the most
well-known algorithms for mobile communication systems due to its
implementation simplicity. However, its main limitation is a
relatively slow convergence rate. In this paper, a booster based on
the concept of Markov chains is proposed to speed up the convergence
rate of LMS algorithms. The nature of Markov chains makes it
possible to exploit past information in the updating process.
Moreover, since the transition matrix has a smaller variance than
the weight itself by the central limit theorem, the weight
transition matrix converges faster than the weight itself.
Accordingly, the proposed Markov-chain-based booster can track
variations in signal characteristics while accelerating the
convergence rate of LMS algorithms. Simulation results show that,
when the Markov-chain-based booster is applied, the LMS algorithm
converges faster and approaches the Wiener solution more closely,
while the mean square error is also remarkably reduced.
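For reference, the baseline LMS update that the booster accelerates can be sketched as follows (a standard textbook LMS; the Markov-chain booster itself is not specified in the abstract):

```python
# Standard LMS adaptive filter: w is nudged along the instantaneous gradient
# mu * e[n] * u[n] at every sample.
def lms(x, d, order=2, mu=0.02):
    """Adapt FIR weights so that w . [x[n-1], ..., x[n-order]] tracks d[n].
    Returns the final weights and the per-sample error sequence."""
    w = [0.0] * order
    errs = []
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                   # most recent sample first
        y = sum(wi * ui for wi, ui in zip(w, u))   # filter output
        e = d[n] - y                               # instantaneous error
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
        errs.append(e)
    return w, errs
```

With a noiseless, realizable reference (d[n] = 0.5 x[n-1] below), the weights converge to the Wiener solution [0.5, 0]; the booster's claim is that the same convergence can be reached in fewer iterations.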
Abstract: In the classical buckling analysis of rectangular plates
subjected to the concurrent action of shear and uniaxial forces, the
Euler shear buckling stress is generally evaluated separately, so
that no influence of the in-plane tensile or compressive forces on
the shear buckling coefficient is taken into account.
In this paper, the buckling problem of simply supported rectangular
plates under the combined action of shear and uniaxial forces is
discussed from the beginning, in order to obtain new design formulas
for the shear buckling coefficient that take into account the
presence of uniaxial forces.
Furthermore, as the classical expression of the shear buckling
coefficient for simply supported rectangular plates is only a
"rough" approximation, the exact one being defined by a system of
intersecting curves, the convergence and accuracy of the classical
solution are also analyzed.
Finally, as the evaluation of the Euler shear buckling stress is a
very important topic for a variety of structures (e.g., ship
structures), two numerical applications are carried out in order to
highlight the role of the uniaxial stresses in the plating scantling
procedures and the accuracy of the proposed formulas.
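For context, the classical approximation referred to above is the standard shear buckling coefficient of classical plate theory for simply supported plates (not this paper's new formulas), with aspect ratio \(\alpha = a/b\):

```latex
% Classical elastic shear buckling: critical stress and coefficient
\tau_{cr} = k_s \,\frac{\pi^2 E}{12\,(1-\nu^2)} \left(\frac{t}{b}\right)^2,
\qquad
k_s \approx
\begin{cases}
5.34 + \dfrac{4.00}{\alpha^2}, & \alpha \ge 1,\\[6pt]
4.00 + \dfrac{5.34}{\alpha^2}, & \alpha \le 1,
\end{cases}
```

where \(E\) is Young's modulus, \(\nu\) Poisson's ratio, \(t\) the plate thickness and \(a \times b\) the plate dimensions; the paper's contribution is to replace this pure-shear \(k_s\) with formulas that depend on the uniaxial stresses as well.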
Abstract: Many agent-oriented software engineering methodologies
have been proposed for software development; however, their
application is still limited due to their lack of maturity.
Evaluating the strengths and weaknesses of these methodologies plays
an important role in improving them and in developing new, stronger
methodologies. This paper presents an evaluation framework for
agent-oriented methodologies which addresses six major areas:
concepts, notation, process, pragmatics, support for software
engineering, and marketability. The framework is then used to
evaluate the Gaia methodology, to identify its strengths and
weaknesses, and to demonstrate the framework's ability to improve
agent-oriented methodologies by detecting their weaknesses in
detail.
Abstract: Extensive use of the Internet, coupled with the marvelous
growth in e-commerce and m-commerce, has created a huge demand for
information security. The Secure Socket Layer (SSL) protocol is the
most widely used security protocol on the Internet that meets this
demand. It provides protection against eavesdropping, tampering and
forgery. The cryptographic algorithms RC4 and HMAC have been in use
for achieving security services like confidentiality and
authentication in SSL. However, recent attacks against RC4 and HMAC
have raised questions about the confidence placed in these
algorithms. Hence, two novel cryptographic algorithms, MAJE4 and
MACJER-320, have been proposed as substitutes for them. The focus of
this work is to demonstrate the performance of these new algorithms
and to suggest them as dependable alternatives for providing the
security services needed in SSL. The performance evaluation has been
done by practical implementation.
Abstract: In this paper, we propose a single-sample-path-based
algorithm with state aggregation to optimize the average reward of
singularly perturbed Markov reward processes (SPMRPs) with
large-scale state spaces. It is assumed that such a reward process
depends on a set of parameters. Unlike general Markov chains,
SPMRPs have their own hierarchical structure. Based on this special
structure, our algorithm can reduce the computational load of
performance optimization. Moreover, our method can be applied
online, because it evolves along with the simulated sample path.
Compared with the original algorithm applied to general MRPs, a new
gradient formula for the average reward performance metric of
SPMRPs is introduced, which is proved in the Appendix. Based on
these gradients, the schedule of the iteration algorithm, which
relies on a single sample path, is presented. Finally, a special
case in which the parameters only dominate the disturbance matrices
is analyzed, and a precise comparison is given between our algorithm
and the earlier ones aimed at these problems in general Markov
reward processes; when applied to SPMRPs, our method converges
faster in these cases. Furthermore, to illustrate the practical
value of SPMRPs, a simple example of multiprogramming in computer
systems is presented and simulated. Corresponding to this practical
model, the physical meaning of SPMRPs in networks of queues is
clarified.
Abstract: One of the determinants of a firm's prosperity is the
customers' perceived service quality and satisfaction. While service
quality is wide in scope and consists of various dimensions, there
may be differences in the relative importance of these dimensions in
affecting customers' overall satisfaction with service quality.
Identifying the relative rank of the different dimensions of service
quality is very important in that it can help managers find out
which service dimensions have a greater effect on customers' overall
satisfaction. Such an insight will consequently lead to more
effective resource allocation, ultimately resulting in higher levels
of customer satisfaction. Despite its criticality, this issue has
not received enough attention so far. Therefore, using a sample of
240 bank customers in Iran, an artificial neural network is
developed to address this gap in the literature. As customers'
evaluation of service quality is a subjective process, artificial
neural networks, as a brain metaphor, appear to have the potential
to model such a complicated process. Proposing a neural network that
is able to predict customers' overall satisfaction with service
quality with a promising level of accuracy is the first contribution
of this study. In addition, prioritizing the service quality
dimensions affecting customers' overall satisfaction, by means of a
sensitivity analysis of the neural network, is the second important
finding of this paper.
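The sensitivity-analysis step can be sketched generically as follows (a perturbation-based ranking; the paper's network architecture and exact sensitivity measure are not specified in the abstract, so `model` here is a hypothetical stand-in for the trained network):

```python
# Perturbation-based sensitivity analysis: each input dimension is nudged up
# and down, and the mean absolute change in the prediction is used as that
# dimension's importance score.
def sensitivity(model, samples, delta=0.1):
    n_inputs = len(samples[0])
    scores = []
    for j in range(n_inputs):
        total = 0.0
        for x in samples:
            up = list(x); up[j] += delta
            dn = list(x); dn[j] -= delta
            total += abs(model(up) - model(dn))
        scores.append(total / len(samples))
    return scores

def rank_dimensions(model, samples):
    """Indices of the input dimensions, most influential first."""
    s = sensitivity(model, samples)
    return sorted(range(len(s)), key=lambda j: -s[j])
```

Applied to a trained satisfaction model, the resulting ranking is exactly the kind of prioritization of service quality dimensions the study reports.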
Abstract: We propose a fast and robust hierarchical face detection system which finds and localizes face images with a cascade of classifiers. Three modules contribute to the efficiency of our detector. First, heterogeneous feature descriptors are exploited to enrich the feature types and feature numbers for face representation. Second, a PSO-Adaboost algorithm is proposed to efficiently select discriminative features from a large pool of available features and reinforce them into the final ensemble classifier. Compared with the standard exhaustive Adaboost for feature selection, the new PSO-Adaboost algorithm reduces the training time by up to a factor of 20. Finally, a three-stage hierarchical classifier framework is developed for rapid background removal. In particular, candidate face regions are detected more quickly by using a large window size in the first stage. Nonlinear SVM classifiers are used instead of decision stump functions in the last stage to remove those remaining complex non-face patterns that cannot be rejected in the previous two stages. Experimental results show our detector achieves superior performance on the CMU+MIT frontal face dataset.
Abstract: In this paper, we propose a new approach to query-by-humming, focusing on an MP3 song database. Since melody representation is much more difficult for MP3 songs than for symbolic performance data, we choose to extract feature descriptors from the vocal part of the songs. Our approach is based on signal filtering, sub-band spectral processing, MDCT coefficient analysis and peak energy detection, ignoring the background music as much as possible. Finally, we apply a dual dynamic programming algorithm for feature similarity matching. Experiments demonstrate its online performance in precision and efficiency.
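The "dual dynamic programming" variant is not detailed in the abstract; as a stand-in, the following sketches the classic dynamic-time-warping (DTW) matching commonly used to align a hummed pitch contour with a stored melody:

```python
# Classic DTW distance: minimum cumulative |a[i]-b[j]| cost over all
# monotone, continuous alignments of the two feature sequences. Tempo
# variations in humming show up as repeated/stretched frames, which DTW
# absorbs at zero extra cost when the pitches match.
def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Ranking the database by ascending `dtw(query, song)` gives the similarity-matching step of a query-by-humming pipeline.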
Abstract: This paper presents the design of the electronic system
for the respiratory signal, including the processing stages,
followed by the transmission and reception of this signal and,
finally, its display. The processing of this signal is added to the
ECG and temperature signals implemented last year. Under this
scheme, it is proposed that the blood pressure signal also be
conditioned in the future on the same final printed circuit board.
Abstract: A bond graph model of a hydroelectric plant is proposed.
Some structural properties of the bond graph are used to analyze the
system, and the structural controllability of the hydroelectric
plant is described. In addition, the steady state of the state
variables is obtained by applying the bond graph in a derivative
causality assignment. Finally, simulation results of the system are
shown.
Abstract: Public health surveillance systems focus on outbreak detection and the data sources used. Variations or aberrations in the frequency distribution of health data, compared to historical data, are often used to detect outbreaks. It is important that new techniques be developed to improve the detection rate, thereby reducing the wastage of resources in public health. Thus, the objective is to develop a technique that applies frequent mining and outlier mining to outbreak detection. Fourteen datasets from the UCI repository were tested with the proposed technique, and the effectiveness of each technique was measured by a t-test. The overall performance shows that DTK can be used to detect outliers within frequent datasets. In conclusion, the anomaly-based frequent-outlier outbreak detection technique can be used to identify outliers within frequent datasets.