Abstract: A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor, and the machine environment. By correlating the noise components with the measured machining signal, the components of interest, which were less interfered with by noise, could be extracted. The filtered signal is therefore more reliable to analyse in terms of noise content than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal. A larger scattering space and a higher value of Z∞ indicated that the signal was heavily corrupted by noise. This method can be utilised as a proactive tool for evaluating the noise content of a signal. The evaluation of noise content, as well as its elimination, is very important, especially for machining fault diagnosis. The Z-notch filtering technique extracted the noise components from the measured machining signal with high efficiency: even though the measured signal was exposed to severe noise disruption, the signal generated by the interaction between the cutting tool and the workpiece could still be acquired. Therefore, the noise interference that could alter the original signal features and consequently deteriorate the useful sensory information can be eliminated.
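The abstract does not give the Z-notch filter's equations, but the principle it describes, correlating a separately measured noise reference with the machining signal to extract the less-contaminated components, is the classic adaptive noise cancellation setup. Below is a minimal sketch using an LMS adaptive filter as a stand-in; the filter length, step size, and signal model are illustrative assumptions, not the paper's method:

```python
import math
import random

def lms_noise_cancel(d, x, mu=0.01, taps=4):
    """Adaptive noise cancellation stand-in for the Z-notch idea.
    d: measured signal (useful signal + correlated noise)
    x: noise reference (e.g. a microphone near the hydraulic pump)
    Returns e[n] = d[n] - y[n], the cleaned signal estimate."""
    w = [0.0] * taps                                  # adaptive weights
    cleaned = []
    for n in range(len(d)):
        xv = [x[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, xv))     # noise estimate
        e = d[n] - y                                  # cleaned sample
        cleaned.append(e)
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, xv)]  # LMS update
    return cleaned

# Hypothetical demo: a "cutting" tone buried in noise correlated with a reference.
random.seed(1)
N = 2000
ref = [random.choice((-1.0, 1.0)) for _ in range(N)]            # noise reference
noise = [0.9 * ref[n] + 0.4 * (ref[n - 1] if n else 0.0) for n in range(N)]
tool = [0.5 * math.sin(0.05 * n) for n in range(N)]             # useful signal
measured = [tool[n] + noise[n] for n in range(N)]
cleaned = lms_noise_cancel(measured, ref)
```

After the weights converge, the error output tracks the tool signal while the reference-correlated noise is subtracted out, which is the behaviour the abstract attributes to its correlation-based filtering.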
Abstract: Designing an efficient algorithm for each specific problem has long been the standard approach. An alternative approach, orthogonal to this one, is reduction: for a given problem, the reduction approach studies how to convert the original problem into subproblems. This paper proposes a formal modeling language to support this reduction approach. We show three examples from the wide area of learning problems. The benefit is fast prototyping of algorithms for a given new problem.
Abstract: Signal compression algorithms have made impressive progress. These algorithms are continuously improved with new tools and aim to reduce, on average, the number of bits necessary to represent the signal while minimizing the reconstruction error. This article proposes the compression of the Arabic speech signal by a hybrid method combining the wavelet transform and linear prediction. The adopted approach rests, on one hand, on decomposing the original signal with analysis filters, followed by the compression stage, and, on the other hand, on applying order-5 linear prediction to the compressed signal coefficients. The aim of this approach is the estimation of the prediction error, which is then coded and transmitted; the decoding operation is used to reconstitute the original signal. An adequate choice of the filter bank used in the transform is necessary to increase the compression rate while inducing a distortion that is imperceptible from an auditory point of view.
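The pipeline described, analysis filters followed by order-5 linear prediction with the prediction error as the quantity to be coded, can be sketched as follows. Haar filters stand in for the unspecified filter bank, and Levinson-Durbin computes the order-5 predictor; the test signal and all parameters are illustrative assumptions, not the paper's configuration:

```python
import math
import random

def haar_analysis(x):
    """One-level analysis filter bank: approximation (low-pass) and
    detail (high-pass) coefficients, using the Haar wavelet."""
    s = math.sqrt(2)
    approx = [(x[2*i] + x[2*i+1]) / s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / s for i in range(len(x) // 2)]
    return approx, detail

def lpc(x, order=5):
    """Order-p linear predictor via autocorrelation + Levinson-Durbin.
    Returns coefficients a so that x[n] ~ sum(a[j] * x[n-1-j])."""
    r = [sum(x[i] * x[i+k] for i in range(len(x) - k)) for k in range(order + 1)]
    a, e = [], r[0]
    for i in range(order):
        acc = r[i+1] - sum(a[j] * r[i-j] for j in range(i))
        k = acc / e
        a = [a[j] - k * a[i-1-j] for j in range(i)] + [k]
        e *= (1 - k * k)
    return a

def residual(x, a):
    """Prediction error: the quantity that would be coded and transmitted."""
    p = len(a)
    return [x[n] - sum(a[j] * x[n-1-j] for j in range(p)) for n in range(p, len(x))]

# Illustrative input: a correlated (AR) signal standing in for speech.
random.seed(0)
x = [0.0] * 400
for n in range(1, 400):
    x[n] = 0.9 * x[n-1] + random.gauss(0, 0.1)
approx, detail = haar_analysis(x)
a = lpc(approx, order=5)
err = residual(approx, a)
```

For a correlated signal the residual carries far less energy than the subband itself, which is why coding the prediction error rather than the coefficients reduces the bit budget.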
Abstract: The social force model, which belongs to microscopic pedestrian studies, has been considered preeminent by many researchers due to its main feature of reproducing the self-organized phenomena that result from pedestrian dynamics. The preferred force, a measure of a pedestrian's motivation to adapt his actual velocity to his desired velocity, is an essential term on which the model was built. This force has gone through several stages of development: first, Helbing and Molnar (1995) modeled the original force for the normal situation; second, Helbing and his co-workers (2000) extended the force to panic situations by introducing a panic parameter; third, Lakoba and Kaup (2005) provided pedestrians with a degree of intelligence by incorporating aspects of decision-making capability. In this paper, the authors analyze the most important extensions of the model regarding the preferred force and compare the different factors of these extensions. Furthermore, to enhance the decision-making ability of the pedestrians, they introduce additional features, such as a familiarity factor, into the preferred force to make it more representative of what actually happens in reality.
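The two published stages mentioned above have standard forms: the driving term f = (v0·e - v)/τ of Helbing and Molnar (1995), and the panic blending of Helbing et al. (2000), where the desired direction is mixed with the neighbours' average direction through a panic parameter p. A minimal sketch of those two terms (2-D vectors as plain tuples; the familiarity factor proposed in this paper is not modeled here):

```python
import math

def preferred_force(v, v0, e, tau):
    """Helbing-Molnar driving term: relax the actual velocity v toward
    the desired speed v0 along the desired direction e, over time tau."""
    return tuple((v0 * e[k] - v[k]) / tau for k in range(2))

def panic_direction(e_own, e_avg, p):
    """Helbing et al. (2000) panic blending: mix the individual desired
    direction with the neighbours' average direction using the panic
    parameter p in [0, 1], then renormalize."""
    mix = [(1 - p) * e_own[k] + p * e_avg[k] for k in range(2)]
    norm = math.hypot(mix[0], mix[1])
    return tuple(c / norm for c in mix)
```

With p = 0 the pedestrian follows its own goal (the normal situation); with p = 1 it herds entirely with its neighbours, which is the panic limit.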
Abstract: The objective of this research is to study principal component analysis (PCA) for the classification of 67 soil samples collected from different agricultural areas in the western part of Thailand. Six soil properties were measured on the samples and used as the original variables, and PCA was applied to reduce their number. A model based on the first two principal components accounts for 72.24% of the total variance. Score plots of the first two principal components were used to map the samples onto agricultural areas divided into horticulture, field crops, and wetland. The results showed relationships between soil properties and agricultural areas, and PCA was shown to be a useful tool for classifying agricultural areas based on soil properties.
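The reduction step can be sketched without any statistics library: the code below extracts the first two principal components of a small synthetic table by power iteration with deflation on the covariance matrix. The data are made up for illustration; the paper's 67 samples and six soil properties are not reproduced here:

```python
import math
import random

def pca_top2(X):
    """Return the top-2 principal components, their variances, and the
    total variance, via power iteration + deflation on the covariance."""
    n, p = len(X), len(X[0])
    means = [sum(row[i] for row in X) / n for i in range(p)]
    Xc = [[row[i] - means[i] for i in range(p)] for row in X]
    C = [[sum(Xc[k][i] * Xc[k][j] for k in range(n)) / (n - 1)
          for j in range(p)] for i in range(p)]
    total_var = sum(C[i][i] for i in range(p))
    comps, variances = [], []
    for _ in range(2):
        v = [random.random() for _ in range(p)]
        for _ in range(300):                       # power iteration
            w = [sum(C[i][j] * v[j] for j in range(p)) for i in range(p)]
            norm = math.sqrt(sum(c * c for c in w))
            v = [c / norm for c in w]
        lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(p))
                  for i in range(p))               # Rayleigh quotient
        comps.append(v)
        variances.append(lam)
        C = [[C[i][j] - lam * v[i] * v[j] for j in range(p)]  # deflate
             for i in range(p)]
    return comps, variances, total_var

# Synthetic "soil table": two variables track a common factor, one is noise.
random.seed(42)
X = [[t + random.gauss(0, 0.1), 2 * t + random.gauss(0, 0.1), random.gauss(0, 0.1)]
     for t in [random.gauss(0, 1) for _ in range(60)]]
comps, variances, total = pca_top2(X)
```

For such data the first two components absorb nearly all the variance, which is the same property the abstract exploits when it retains a two-component model explaining 72.24% of the variance.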
Abstract: The paper proposes an approach for the design of modular systems based on an original technique for modeling and formulating combinatorial optimization problems. The proposed approach is described using the example of personal computer configuration design. It takes into account the existing compatibility restrictions between the modules and can be extended and modified to reflect different functional and user requirements. The developed design modeling technique is used to formulate single-objective nonlinear mixed-integer optimization tasks. The practical applicability of the developed approach is numerically tested on the basis of real module data. Solutions of the formulated optimization tasks define the optimal configuration of the system that satisfies all compatibility restrictions and user requirements.
Abstract: The traveling salesman problem (TSP) is hard to solve when the number of cities and routes becomes large. A frequency graph is constructed to tackle the problem: it maintains the topological relationships of the original weighted graph, and the numbers on its edges are the frequencies of those edges accumulated from local optimal Hamiltonian paths. The simplest kind of local optimal Hamiltonian path is computed based on the four-vertices-and-three-lines inequality. A search algorithm is given to find the optimal Hamiltonian circuit based on the frequency graph. The experiments show that the method can find the optimal Hamiltonian circuit within several trials.
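The frequency graph construction can be sketched as follows: for every 4-vertex subset, the minimum-weight Hamiltonian path on those four vertices (three edges) is found by enumeration, and each of its edges has its frequency incremented. This is our reading of the abstract; the paper's exact inequality test and tie-breaking may differ:

```python
import itertools

def frequency_graph(n, w):
    """n: number of vertices 0..n-1; w(a, b): symmetric edge weight.
    Returns {edge: frequency} accumulated over all 4-vertex subsets."""
    freq = {}
    for quad in itertools.combinations(range(n), 4):
        best_cost, best_edges = None, None
        for perm in itertools.permutations(quad):
            if perm[0] > perm[-1]:
                continue                        # skip reversed duplicates
            edges = list(zip(perm, perm[1:]))   # the path's three edges
            cost = sum(w(a, b) for a, b in edges)
            if best_cost is None or cost < best_cost:
                best_cost, best_edges = cost, edges
        for a, b in best_edges:
            key = (min(a, b), max(a, b))
            freq[key] = freq.get(key, 0) + 1
    return freq

# Toy instance: five cities on a line, so optimal local paths follow the line.
pts = [0, 1, 2, 3, 4]
freq = frequency_graph(5, lambda a, b: abs(pts[a] - pts[b]))
```

Edges that belong to good tours (here the consecutive pairs on the line) accumulate high frequency, while a long chord such as (0, 4) never appears, which is what makes the frequency graph a useful guide for the subsequent circuit search.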
Abstract: This paper proposes a copyright protection scheme for color images using secret sharing and wavelet transform. The scheme contains two phases: the share image generation phase and the watermark retrieval phase. In the generation phase, the proposed scheme first converts the image into the YCbCr color space and creates a special sampling plane from the color space. Next, the scheme extracts the features from the sampling plane using the discrete wavelet transform. Then, the scheme employs the features and the watermark to generate a principal share image. In the retrieval phase, an expanded watermark is first reconstructed using the features of the suspect image and the principal share image. Next, the scheme reduces the additional noise to obtain the recovered watermark, which is then verified against the original watermark to examine the copyright. The experimental results show that the proposed scheme can resist several attacks such as JPEG compression, blurring, sharpening, noise addition, and cropping. The accuracy rates are all higher than 97%.
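The scheme's concrete details (sampling plane, DWT level, share construction) are not given in the abstract, so the sketch below only illustrates the generic secret-sharing pattern such schemes use: a binary feature map is extracted from low-frequency Haar coefficients, and the principal share is the XOR of the features with the watermark, so the features of an unattacked suspect image recover the watermark exactly. Every concrete choice here is an assumption, not the paper's algorithm:

```python
def haar_ll(img):
    """One-level 2-D Haar low-low band of a grayscale image (even dims)."""
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 2.0
             for j in range(w // 2)] for i in range(h // 2)]

def feature_bits(img):
    """Binary features: LL coefficient above / below the median."""
    ll = [c for row in haar_ll(img) for c in row]
    med = sorted(ll)[len(ll) // 2]
    return [1 if c > med else 0 for c in ll]

def make_share(features, watermark):
    """Principal share = features XOR watermark (secret-sharing step)."""
    return [f ^ m for f, m in zip(features, watermark)]

def retrieve(suspect_img, share):
    """Recovered watermark = features(suspect) XOR share."""
    return [f ^ s for f, s in zip(feature_bits(suspect_img), share)]
```

Because the watermark is never embedded in the pixels, the image is unmodified; robustness comes from the feature bits being stable under mild attacks, so the XOR with the stored share still yields a recognizable watermark.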
Abstract: Recent developments in computing and communication technology permit users to access multimedia documents with a variety of devices (PCs, PDAs, mobile phones...) having heterogeneous capabilities. This diversification of devices has created the need to adapt multimedia documents according to their execution contexts. A semantic framework for multimedia document adaptation based on conceptual neighborhood graphs was previously proposed; in this framework, adaptation consists of finding another specification that satisfies the target constraints while remaining as close as possible to the initial document. In this paper, we propose a new way of building the conceptual neighborhood graphs to best preserve the proximity between the adapted and the original documents and to deal with more elaborate relation models, by integrating relation relaxation graphs that make it possible to handle the delays and the distances defined within the relations.
Abstract: The potential advantages of technology in educational contexts have required defining the boundaries of formal and informal learning. The increasing opportunity for ubiquitous learning through technological support has raised the question of how to discover the potential of individuals in spontaneous environments such as social networks, which in turn relates to the question of the purposes for which social networks are being used. Social networks provide various advantages in educational contexts, such as collaboration, knowledge sharing, common interests, active participation, and reflective thinking. Consequently, the purpose of this study is to propose a new model that can determine the factors affecting the adoption of social network applications for use in educational contexts. In developing the model proposal, the existing adoption and diffusion models were reviewed; because education differs in nature from other organizations, taking an original perspective was considered more suitable than applying other diffusion or acceptance models unchanged. In the proposed model, social factors, perceived ease of use, perceived usefulness, and innovativeness are the four direct constructs that affect the adoption process. Facilitating conditions, image, subjective norms, and community identity are incorporated into the model as antecedents of these four direct constructs.
Abstract: Logistics is the part of the supply chain process that plans, implements, and controls the efficient and effective forward and reverse flow and storage of goods, services, and related information between the point of origin and the point of consumption in order to meet customer requirements. This research aims to investigate the current status and future direction of the use of Information Technology (IT) for logistics, focusing on Supply Chain Management (SCM) and E-Commerce adoption in Johor. The research therefore focuses on the types of technology being adopted and on the factors, benefits, and barriers affecting innovation in SCM and E-Commerce technology adoption among Logistics Service Providers (LSPs). A mailed questionnaire survey was conducted to collect data from 265 logistics companies in Johor. The research revealed that SCM technology adoption among LSPs was high, as they had adopted SCM technology in various business processes and perceived a high level of benefit from SCM adoption, whereas E-Commerce technology adoption among LSPs was relatively low.
Abstract: As originally designed for wired networks, the TCP (Transmission Control Protocol) congestion control mechanism is triggered into action when packet loss is detected. The implicit assumption that packet loss is mostly due to network congestion does not hold well in a Mobile Ad Hoc Network (MANET), where there is a comparatively high likelihood of packet loss due to channel errors, node mobility, and so on. Such non-congestion packet loss, when handled by the congestion control mechanism, causes poor TCP performance in MANETs. In this study, we continue to investigate the impact of the interaction between transport protocols and on-demand routing protocols on the performance and stability of 802.11 multihop networks. We evaluate the important wireless networking events that cause routing changes and propose a cross-layer method to delay unnecessary routing changes; it only requires adding a sensitivity parameter α, which represents the on-demand routing protocol's reaction to MAC-layer link failure. Our proposal is applicable to the plain 802.11 networking environment, and the simulation results show that this method can remarkably improve the stability and performance of TCP without any modification of the TCP and MAC protocols.
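The abstract does not define how α gates routing changes; one plausible reading is a persistence threshold, where a MAC-layer link-failure indication triggers a route change only after α consecutive failure signals, so transient losses are absorbed. A minimal sketch of that interpretation (the function name and event encoding are hypothetical):

```python
def filtered_route_changes(events, alpha):
    """events: MAC-layer link signals, 'fail' or 'ok', in arrival order.
    Issue a routing change only after alpha consecutive failures;
    returns the indices where a change is actually triggered."""
    triggered = []
    streak = 0
    for i, ev in enumerate(events):
        if ev == 'fail':
            streak += 1
            if streak >= alpha:     # failure persisted: react
                triggered.append(i)
                streak = 0
        else:
            streak = 0              # transient loss: suppress the reaction
    return triggered
```

With α = 1 this degenerates to standard on-demand behaviour, reacting to every reported failure; larger α trades reaction speed for route stability, which is the trade-off the abstract exploits to stabilize TCP.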
Abstract: This study proposes a materials procurement contract model to which the zero-cost collar option is applied for hedging price fluctuation risks in construction. The material contract model based on the collar option consists of the call option striking zone of the construction company (the buyer), following a materials price increase, and the put option striking zone of the material vendor (the supplier), following a materials price decrease. The study first determines the call option strike price Xc of the construction company by a simple approach, using the profit predicted at the project starting point, and then determines the put option strike price Xp that has an identical option value, which completes the zero-cost material contract. The analysis results indicate that the cost saving of the construction company increased as Xc decreased, because the critical level of the steel materials price increase was set at a low level. However, as Xc decreased, the Xp of a put option with an identical option value gradually increased; while the cost saving grew, the rising Xp increased the construction company's risk of loss when the steel materials price decreased. Meanwhile, the cost saving for the construction company did not vary with volatility. This result originates in the zero-cost feature of the two-way collar option contract: in the case of a regular one-way option, the transaction cost has to be subtracted from the cost saving, and that transaction cost originates from an option value that fluctuates with volatility, so the cost saving of the one-way option is affected by volatility. Even though the collar option with zero transaction cost cuts the connection between volatility and cost saving, there remains a risk of the put option being exercised.
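The key computation, finding the put strike Xp whose option value equals that of the call at Xc so the collar nets to zero cost, can be sketched with a standard pricing model. The abstract's own "simple approach" is not specified, so Black-Scholes pricing and bisection are stand-in assumptions here:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(S, K, r, sigma, T, kind):
    """Black-Scholes European option price (stand-in pricing model)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if kind == 'call':
        return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def zero_cost_put_strike(S, Xc, r, sigma, T):
    """Bisect for Xp with put(Xp) == call(Xc): the zero-cost collar.
    The put value is increasing in its strike, so bisection applies."""
    target = bs_price(S, Xc, r, sigma, T, 'call')
    lo, hi = 1e-8 * S, 10.0 * S
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_price(S, mid, r, sigma, T, 'put') < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Lowering Xc raises the call's value, so the matching Xp must rise to keep the collar at zero cost, which reproduces the trade-off the abstract describes between cost saving and downside risk.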
Abstract: In recent years, the use of vector variance as a measure of multivariate variability has received much attention in a wide range of statistics. This paper deals with a more economical measure of multivariate variability, defined as the vector variance minus all duplicated elements. For high-dimensional data, this increases computational efficiency by almost 50% compared with the original vector variance. Its sampling distribution is investigated to make its application possible.
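Vector variance is commonly defined as VV(Σ) = tr(ΣᵀΣ), the sum of squares of all p² entries of the covariance matrix; since Σ is symmetric, each off-diagonal square appears twice. Reading "minus all duplicated elements" as dropping that repetition leaves p(p+1)/2 terms, roughly halving the work for large p. A sketch of that reading (our interpretation, not necessarily the paper's exact statistic):

```python
def vector_variance(S):
    """VV = tr(S'S): sum of squares of all p*p entries of covariance S."""
    p = len(S)
    return sum(S[i][j] ** 2 for i in range(p) for j in range(p))

def economical_vv(S):
    """Same information from the upper triangle only (incl. diagonal):
    p*(p+1)/2 terms instead of p*p."""
    p = len(S)
    return sum(S[i][j] ** 2 for i in range(p) for j in range(i, p))
```

Nothing is lost by the reduction: the two quantities are related exactly by VV = 2·econ − Σᵢ sᵢᵢ².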
Abstract: The behavior of Radial Basis Function (RBF) Networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance based learners, for this purpose. Five Instance-Based Reduction Techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two Radial Basis Function Networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracies and the latter requires smaller training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN) rather than pure reduction techniques produce the best results in terms of classification accuracy. The results show that these networks require smaller training time than that of RBF-ALL and higher classification accuracy than that of PNN. Thus, using ALLKNN and ENN to select center points gives better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial especially in the presence of noise in the original training sets.
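Of the noise-filtering techniques highlighted, ENN (Edited Nearest Neighbor) is the simplest: an instance is kept only if the majority label of its k nearest neighbors matches its own. A minimal sketch of using the ENN survivors as RBF center candidates (toy 2-D data; the paper's datasets and the RBF training itself are not reproduced):

```python
import math
from collections import Counter

def enn_centers(X, y, k=3):
    """Keep instance i iff the majority label of its k nearest neighbors
    (excluding i) matches y[i]; the survivors become the candidate
    RBF center set."""
    keep = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        neigh = sorted(
            (math.dist(xi, xj), yj)
            for j, (xj, yj) in enumerate(zip(X, y)) if j != i
        )[:k]
        majority = Counter(lbl for _, lbl in neigh).most_common(1)[0][0]
        if majority == yi:
            keep.append(i)
    return keep
```

A mislabeled point sitting inside the opposite cluster is voted out by its neighbors, which is why ENN-selected centers avoid placing basis functions on noise while keeping almost all clean instances.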
Abstract: The Eulerian numerical method is proposed to analyze explosions in tunnels. Based on this method, an original software package, M-MMIC2D, was developed in the Cµ programming language. With this software, the explosion problem in a tunnel with three expansion chambers is numerically simulated, and the results are found to be in full agreement with the observed experimental data.
Abstract: The culture of riding heavy motorcycles originates from advanced countries, mainly in Europe, North America, and Japan. Heavy duty motorcycle riders are different from people who view motorcycles as a convenient means of transportation: they regard riding as a kind of enjoyment and a mark of refined taste. The activities of riding heavy duty motorcycles have formed a distinctive landscape in Taiwan. Previous studies of motorcycle culture in Taiwan focused on motorcycles with an engine displacement under 50 cc. This study examines heavy duty motorcycles with an engine displacement over 550 cc and explores where their attractiveness lies. To find the attractiveness of heavy duty motorcycles, the study adopts the Miryoku Engineering (Preference-Based Design) approach, proceeding in two steps. First, by organizing the material obtained from expert interviews, the Evaluation Grid Method (EGM) was applied to find the structure of attractiveness; the attractive styles identified are eye-dazzling, leisure, classic, and racing-competitive. Second, Quantification Theory Type I analysis was adopted as a tool for analyzing the importance of attractiveness, and the relationship between style and attractive parts was also discussed. The results could contribute to design and research development in the heavy duty motorcycle industry in Taiwan.
Abstract: High Performance Work Systems (HPWS) generally have positive impacts on employees by increasing their commitment in the workplace, while some have argued that they actually have considerable negative impacts by increasing the likelihood of strain caused by the stress and intensity of such workplaces. Do stressful workplaces hamper employee commitment? The author has tried to find the answer by exploring linkages between HPWS practices and their impact on employees in Japanese organizations, asking how negative outcomes such as job intensity and workplace and job stressors can influence the different forms of employees' commitment and thus hinder their performance. Design: A closed-ended questionnaire survey was conducted among 16 large, medium, and small Japanese companies from diverse industries around Chiba, Saitama, and Ibaraki Prefectures and in Tokyo, from October 2008 to February 2009. The questionnaires addressed non-managerial employees' perceptions of HPWS practices, their behavior, and their working-life experiences in their workplaces. A total of 227 samples are used for analysis in the study. Methods: Correlations, MANCOVA, and SEM path analysis using the AMOS software are used for data analysis. Findings: Average non-managerial perception of HPWS adoption is significantly but negatively correlated with both workplace stressors and continuance commitment, but positively correlated with job intensity and with affective, occupational, and normative commitment across workplaces in Japan. The SEM path analysis shows significant indirect relationships between stressors and employees' affective and normative organizational commitment. Intensity also has a significant indirect effect on occupational commitment. HPWS has an additive effect on all the outcome variables.
Limitations: The sample size in this study cannot be representative of the entire population of non-managerial employees in Japan. There were no respondents from the automobile, pharmaceutical, or finance industries. The survey coincided with a period when Japan, like most other countries, was undergoing a recession, so biases could not be ruled out completely. Caution must be taken in interpreting the results, as they cannot be generalized, and the path analysis cannot provide the complete causality of the interlinkages between the variables used in the study. Originality: There have been limited studies on the linkages between HPWS adoption and its impact on employees' behavior and commitment in Japanese workplaces. This study may provide ingredients for further research on HRM policies and practices and their linkages with different forms of employees' commitment.
Abstract: This paper proposes stochastic tabu search (STS) for improving the measurement scheme for power system state estimation. If the original measurement scheme is not observable, a minimum number of additional measurements is added to the system by STS so that there is no critical measurement pair. Random bit-flipping and bit-exchanging perturbations are used to generate the neighborhood solutions in STS. The Pδ observability concept is used to determine network observability. Test results on a 10-bus system and the IEEE 14- and 30-bus systems show that STS can improve the original measurement scheme to be observable without critical measurement pairs. Moreover, the results of STS are superior to those of deterministic tabu search (DTS) in terms of the best-solution hit rate.
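The two STS neighborhood moves named above, random bit flipping (toggle one measurement in or out of the scheme) and bit exchanging (swap a placed measurement with an unplaced one), can be sketched over a binary placement vector. The observability test itself (the Pδ concept) is outside this sketch:

```python
import random

def flip_neighbor(sol, rng):
    """Toggle one randomly chosen bit: add or remove one measurement."""
    s = list(sol)
    i = rng.randrange(len(s))
    s[i] ^= 1
    return s

def exchange_neighbor(sol, rng):
    """Swap a random 1-bit with a random 0-bit: relocate a measurement
    while keeping the total number of measurements constant."""
    s = list(sol)
    ones = [i for i, b in enumerate(s) if b == 1]
    zeros = [i for i, b in enumerate(s) if b == 0]
    if ones and zeros:
        s[rng.choice(ones)] = 0
        s[rng.choice(zeros)] = 1
    return s
```

The flip move explores schemes of different sizes (needed when seeking the minimum number of additions), while the exchange move explores placements at a fixed measurement count.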
Abstract: This paper introduces a technique for distortion estimation in image watermarking using Genetic Programming (GP). The distortion is estimated by treating the problem of obtaining a distorted watermarked signal from the original watermarked signal as a function regression problem. This function regression problem is solved using GP, where the original watermarked signal is considered the independent variable. The GP-based distortion estimation scheme is tested against the Gaussian attack and the JPEG compression attack. We used Gaussian attacks of different strengths by changing the standard deviation, and the JPEG compression attack is likewise varied by adding various distortions. Experimental results demonstrate that the proposed technique is able to detect the watermark even under strong distortions and is more robust against attacks.