Abstract: In this study, a new criterion for determining the number of classes into which an image should be segmented is proposed. This criterion is based on discriminant analysis for measuring the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms for recursively segmenting the image into the determined number of classes are proposed. The proposed methods can automatically and correctly segment objects with various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
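The separability criterion described here is in the spirit of Otsu's discriminant analysis. As a minimal illustrative sketch, not the authors' actual criterion, the two-class case (function name and stopping behaviour are assumptions) can be written as the ratio of between-class to total variance:

```python
import numpy as np

def otsu_separability(pixels):
    """Split pixel intensities into two classes at the threshold that
    maximizes between-class variance, and return (threshold, separability),
    where separability = between-class variance / total variance (in [0, 1])."""
    pixels = np.asarray(pixels, dtype=float).ravel()
    total_var = pixels.var()
    if total_var == 0.0:
        return pixels[0], 0.0            # a single class: no separability
    best_t, best_between = None, -1.0
    for t in np.unique(pixels)[:-1]:     # candidate thresholds
        lo, hi = pixels[pixels <= t], pixels[pixels > t]
        w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
        between = w0 * w1 * (lo.mean() - hi.mean()) ** 2
        if between > best_between:
            best_t, best_between = t, between
    return best_t, best_between / total_var
```

A separability close to 1 indicates a clean split, so a recursive scheme could keep splitting a class while the separability of the best split stays above some threshold.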
Abstract: Due to a high unemployment rate among local people
and a high reliance on expatriate workers, the governments in the
Gulf Co-operation Council (GCC) countries have been implementing
programmes of localisation (replacing foreign workers with GCC
nationals). These programmes have been successful in the public
sector but much less so in the private sector. However, there are now
insufficient jobs for locals in the public sector and the onus to provide
employment has fallen on the private sector. This paper is concerned
with a work-in-progress study (certain elements are complete but not
the whole study) investigating the effective
implementation of localisation policies in four- and five-star hotels in
the Kingdom of Saudi Arabia (KSA) and the United Arab Emirates
(UAE). The purpose of the paper is to identify the research gap, and
to present the need for the research. Further, it will explain how this
research was conducted.
Studies of localisation in the GCC countries are under-represented
in scholarly literature. Currently, the hotel sectors in KSA and UAE
play an important part in the countries’ economies. However, the
total proportion of Saudis working in the hotel sector in KSA is
slightly under 8%, and in the UAE, the hotel sector remains highly
reliant on expatriates. There is therefore a need for research on
strategies to enhance the implementation of the localisation policies
in general and in the hotel sector in particular.
Further, despite the importance of the hotel sector to their
economies, there remains a dearth of research into the
implementation of localisation policies in this sector. Indeed, as far as
the researchers are aware, there is no study examining localisation in
the hotel sector in KSA, and few in the UAE. This represents a
considerable research gap.
Regarding how the research was carried out, a multiple case study
strategy was used. The four- and five-star hotel sector in KSA is one
of the cases, while the four- and five-star hotel sector in the UAE is
the other case. Four- and five-star hotels in KSA and the UAE were
chosen as these countries have the longest established localisation
policies of all the GCC states and there are more hotels of these
classifications in these countries than in any of the other Gulf
countries. A literature review was carried out to underpin the
research. The empirical data were gathered in three phases. In order
to gain a pre-understanding of the issues pertaining to the research
context, Phase I involved eight unstructured interviews with officials
from the Saudi Commission for Tourism and Antiquities (three
interviewees); the Saudi Human Resources Development Fund (one);
the Abu Dhabi Tourism and Culture Authority (three); and the Abu
Dhabi Development Fund (one).
In Phase II, a questionnaire was administered to 24 managers and
24 employees in four- and five-star hotels in each country to obtain
their beliefs, attitudes, opinions, preferences and practices concerning
localisation.
Unstructured interviews were carried out in Phase III with six
managers in each country in order to allow them to express opinions
that may not have been explored in sufficient depth in the
questionnaire. The interviews in Phases I and III were analysed using
thematic analysis and SPSS will be used to analyse the questionnaire
data.
It is recommended that future research be undertaken on a larger
scale, with a larger sample taken from all over KSA and the UAE
rather than from only four cities (i.e., Riyadh and Jeddah in KSA and
Abu Dhabi and Sharjah in the UAE), as was the case in this research.
Abstract: In the present article, nonlinear vibration analysis of
single layer graphene sheets is presented and the effect of small
length scale is investigated. Using Hamilton's principle, the three
coupled nonlinear equations of motion are obtained based on the von
Karman geometrical model and Eringen's theory of nonlocal
continuum. The solutions of free nonlinear vibration, based on a
one-term mode shape, are found for both simply supported and clamped
graphene sheets. A complete analysis of graphene sheets with
movable as well as immovable in-plane conditions is also carried out.
The results obtained herein are compared with those available in the
literature for classical isotropic rectangular plates and excellent
agreement is seen. Also, the nonlinear effects are presented as
functions of geometric properties and small scale parameter.
Abstract: Eight difference schemes and five limiters are applied to the numerical computation of the Riemann problem. The resolution of discontinuities produced by each scheme is compared. Numerical dissipation and its estimation are discussed. The results show that the numerical dissipation of each scheme is vital to improving the scheme's accuracy and stability. The MUSCL methodology is an effective approach to increasing computational efficiency and resolution. Limiters should be selected appropriately by balancing compressive and diffusive performance.
Abstract: This paper presents the application of a signal intensity
independent similarity criterion for rigid and non-rigid body
registration of binary objects. The criterion is defined as the
weighted ratio image of two images. The ratio is computed on a
voxel-per-voxel basis and weighting is performed by setting the ratios
between signal and background voxels to a standard high value. The
mean squared value of the weighted ratio is computed over the union
of the signal areas of the two images and it is minimized using the
Chebyshev polynomial approximation.
Abstract: This paper introduces two decoders for binary linear
codes based on metaheuristics. The first one uses a genetic algorithm
and the second is based on a combination of a genetic algorithm with
a feed-forward neural network. The decoder based on genetic
algorithms (DAG) applied to BCH and convolutional codes gives good
performance compared to the Chase-2 and Viterbi algorithms respectively,
and reaches the performance of OSD-3 for some Quadratic
Residue (QR) codes. This algorithm is less complex for linear
block codes of large block length; furthermore, its performance
can be improved by tuning the decoder's parameters, in particular the
number of individuals per population and the number of generations.
In the second algorithm, the search space, in contrast to DAG, which
was limited to the codeword space, now covers the whole binary
vector space. It avoids a great number of coding operations
by using a neural network. This greatly reduces the complexity of
the decoder while maintaining comparable performance.
Abstract: It has often been said that the strength of any country
resides in the strength of its industrial sector, and progress in
industrial society has been accomplished by the creation of new
technologies. Developments have been facilitated by the increasing
availability of advanced manufacturing technology (AMT). In
addition, the implementation of AMT requires careful planning at all
levels of the organization to ensure that the implementation will
achieve the intended goals. Justification and implementation of
AMT involve decisions that are crucial for practitioners regarding the
survival of business in today's uncertain manufacturing world. This
paper assists industrial managers in considering all the important
criteria for successful AMT implementation when purchasing new
technology. Concurrently, this paper classifies the tangible benefits
of a technology, which are evaluated by addressing both cost and
time dimensions, and the intangible benefits, which are evaluated by
addressing technological, strategic, social and human issues, to
identify and create awareness of the essential elements in the AMT
implementation process and the necessary actions to take before
implementing AMT.
Abstract: Fuzzy C-means Clustering algorithm (FCM) is a
method that is frequently used in pattern recognition. It has the
advantage of giving good modeling results in many cases, although
it is not capable of specifying the number of clusters by itself. In the
FCM algorithm, most researchers fix the weighting exponent (m) to a
conventional value of 2, which might not be appropriate for all
applications. Consequently, the main objective of this paper is to use
the subtractive clustering algorithm to provide the optimal number of
clusters needed by FCM algorithm by optimizing the parameters of
the subtractive clustering algorithm by an iterative search approach
and then to find an optimal weighting exponent (m) for the FCM
algorithm. In order to get an optimal number of clusters, the iterative
search approach is used to find the optimal single-output Sugeno-type
Fuzzy Inference System (FIS) model by optimizing the
parameters of the subtractive clustering algorithm that give minimum
least square error between the actual data and the Sugeno fuzzy
model. Once the number of clusters is optimized, then two
approaches are proposed to optimize the weighting exponent (m) in
the FCM algorithm, namely, the iterative search approach and the
genetic algorithms. The above-mentioned approach is tested on the
generated data from the original function and optimal fuzzy models
are obtained with minimum error between the real data and the
obtained fuzzy models.
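For concreteness, the core FCM iteration in which the weighting exponent m appears can be sketched as below. This is the standard alternating-update algorithm, not the paper's optimized variant; the function signature and defaults are illustrative assumptions.

```python
import numpy as np

def fcm(data, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means: alternate membership and centre updates.
    The weighting exponent m (> 1) controls how fuzzy the clusters are."""
    rng = np.random.default_rng(seed)
    x = np.asarray(data, dtype=float).reshape(len(data), -1)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(n_iter):
        um = u ** m                          # m weights each membership
        centres = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centres[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                # avoid division by zero
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        u = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centres, u
```

As m approaches 1 the memberships become hard (crisp K-means-like); larger m makes them fuzzier, which is why a fixed m = 2 may not suit every dataset.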
Abstract: This study aimed at assessing whether and to what extent moral judgment and behaviour were: 1. situation-dependent; 2. selectively dependent on cognitive and affective components; 3. influenced by gender and age; 4. reciprocally congruent. In order to achieve these aims, four different types of moral dilemmas were constructed and five types of thinking were presented for each of them, representing five possible ways to evaluate the situation. The judgment criteria included selfishness, altruism, sense of justice, and the conflict between selfishness and the two moral issues. The participants were 250 unpaid volunteers (50% male; 50% female) belonging to two age-groups: young people and adults. The study entailed a 2 (gender) x 2 (age-group) x 5 (type of thinking) x 4 (situation) mixed design: the first two variables were between-subjects, the others were within-subjects. Results showed that: 1. moral judgment and behaviour are at least partially affected by the type of situation and by interpersonal variables such as gender and age; 2. moral reasoning depends in a similar manner on cognitive and affective factors; 3. there is no gender polarity between the ethic of justice and the ethic of care/altruism; 4. moral reasoning and behaviour are perceived as reciprocally congruent even though their congruence decreases with a more objective assessment. These results are discussed in the light of contrasting theories on morality.
Abstract: Study of fire and explosion is very important mainly
in oil and gas industries due to several accidents which have been
reported in the past and present. In this work, we have investigated
the flammability of bio oil vapour mixtures. Such mixtures may
contribute to fire during the storage and transportation process. A bio
oil sample derived from palm kernel shell was analysed using Gas
Chromatography Mass Spectrometry (GC-MS) to examine the
composition of the sample. Mole fractions of 12 selected
components in the liquid phase were obtained from the GC-FID data
and used to calculate mole fractions of components in the gas phase
via the modified Raoult's law. Lower Flammability Limits (LFLs) and
Upper Flammability Limits (UFLs) for individual components were
obtained from published literature. However, the stoichiometric
concentration method was used to calculate the flammability limits
of those components whose flammability limit values are not
available in the literature. The LFL and UFL values for the mixture
were calculated using the Le Chatelier equation. The LFLmix and
UFLmix values were used to construct a flammability diagram and
subsequently used to determine the flammability of the mixture. The
findings of this study can be used to propose a suitable inherently
safer method to prevent the flammable mixture from occurring and
to minimize the loss of property, business, and life due to fire
accidents in bio oil production.
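The Le Chatelier mixing rule used for LFLmix (and applied in the same form to UFLmix) is simple enough to sketch; the mole fractions are assumed to be on a fuel-only basis, and the example values below are hypothetical, not the paper's GC data:

```python
def le_chatelier(mole_fracs, limits):
    """Le Chatelier mixing rule: L_mix = 1 / sum(y_i / L_i), where y_i is
    the fuel-basis mole fraction of component i and L_i its flammability
    limit (LFL or UFL, in vol%)."""
    total = sum(mole_fracs)
    ys = [y / total for y in mole_fracs]     # normalise to fuel basis
    return 1.0 / sum(y / L for y, L in zip(ys, limits))

# Hypothetical equimolar binary mixture with LFLs of 2 vol% and 4 vol%:
lfl_mix = le_chatelier([0.5, 0.5], [2.0, 4.0])   # about 2.67 vol%
```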
Abstract: Classifying biomedical literature is a difficult and
challenging task, especially when a large number of biomedical
articles should be organized into a hierarchical structure. In this paper,
we present an approach for classifying a collection of biomedical text
abstracts downloaded from Medline database with the help of
ontology alignment. To accomplish our goal, we construct two types
of hierarchies, the OHSUMED disease hierarchy and the Medline
abstract disease hierarchies from the OHSUMED dataset and the
Medline abstracts, respectively. Then, we enrich the OHSUMED
disease hierarchy before adapting it to ontology alignment process for
finding probable concepts or categories. Subsequently, we compute
the cosine similarity between the vectors of the probable concepts (in
the "enriched" OHSUMED disease hierarchy) and the vectors of the
Medline abstract disease hierarchies. Finally, we assign a category to
the new Medline abstracts based on the similarity score. The results obtained
from the experiments show the performance of our proposed approach
for hierarchical classification is slightly better than the performance of
the multi-class flat classification.
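The similarity-based assignment step can be illustrated with a minimal sketch using term-frequency bag-of-words vectors; the actual feature weighting used in the paper is not specified here, so the representation below is an assumption:

```python
import math
from collections import Counter

def cosine_similarity(a_tokens, b_tokens):
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_category(abstract_tokens, concept_vectors):
    """Assign the concept whose token vector is most similar to the abstract."""
    return max(concept_vectors,
               key=lambda c: cosine_similarity(abstract_tokens, concept_vectors[c]))
```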
Abstract: Prediction of fault-prone modules provides one way to
support software quality engineering. Clustering is used to determine
the intrinsic grouping in a set of unlabeled data. Among various
clustering techniques available in the literature, the K-Means
clustering approach is the most widely used. This paper introduces a
K-Means based clustering approach for finding the fault proneness of
Object-Oriented software systems. The contribution of this paper is
that it uses metric values of the JEdit open-source software to
generate rules for the categorization of software modules into faulty
and non-faulty classes, and thereafter empirical validation is
performed. The results are measured in terms of accuracy of
prediction, probability of detection, and probability of false alarms.
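The evaluation measures named above follow from a binary confusion matrix; a small sketch, assuming the label convention 1 = faulty, 0 = non-faulty:

```python
def detection_rates(actual, predicted):
    """Accuracy, probability of detection (recall on faulty modules), and
    probability of false alarm from binary faulty/non-faulty labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    pd = tp / (tp + fn) if tp + fn else 0.0      # detected faulty modules
    pfa = fp / (fp + tn) if fp + tn else 0.0     # clean modules flagged faulty
    accuracy = (tp + tn) / len(actual)
    return accuracy, pd, pfa
```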
Abstract: This paper presents a sensing system for 3D sensing
and mapping by a tracked mobile robot with an arm-type sensor
movable unit and a laser range finder (LRF). The arm-type sensor
movable unit is mounted on the robot and the LRF is installed at the
end of the unit. This system enables the sensor to change position and
orientation so that it avoids occlusions according to terrain by this
mechanism. This sensing system is also able to change the height of
the LRF by keeping its orientation flat for efficient sensing. In this kind
of mapping, it may be difficult for a moving robot to apply mapping
algorithms such as the iterative closest point (ICP) because sets of the
2D data at each sensor height may be distant on a common surface.
For this kind of mapping, therefore, the authors applied
interpolation to generate plausible model data for ICP. The results of
several experiments demonstrate the validity of this kind of sensing
and mapping with the proposed system.
Abstract: This paper considers a scheduling problem in flexible
flow shops environment with the aim of minimizing two important
criteria including makespan and cumulative tardiness of jobs. Since
the proposed problem is known to be NP-hard in the literature,
we have to develop a meta-heuristic to solve it. We considered
general structure of Genetic Algorithm (GA) and developed a new
version of it based on Data Envelopment Analysis (DEA). The two
objective functions are treated as two different inputs for each Decision
Making Unit (DMU). In this paper, we focus on the efficiency score of
DMUs and the efficient frontier concept in the DEA technique. After
introducing the method, we defined two different scenarios
considering two types of mutation operator. We also provided an
experimental design with some computational results to show the
performance of the algorithm. The results show that the algorithm
runs in a reasonable time.
Abstract: Technology assessment is a vital part of the decision process in manufacturing, particularly for decisions on the selection of new sustainable manufacturing processes. To assess these processes, a matrix approach is introduced and sustainability assessment models are developed. Case studies show that the matrix-based approach provides a flexible and practical way for sustainability evaluation of new manufacturing technologies such as those used in surface coating. The technology assessment of coating processes reveals that, compared with powder coating, sol-gel coating can deliver better technical, economic and environmental sustainability with respect to the selected sustainability evaluation criteria for a decorative coating application of car wheels.
Abstract: Assembly line balancing is a very important issue in
mass production systems due to production cost. Although many
studies have been done on this topic, assembly line
balancing problems are so complex that they are categorized as NP-hard
problems, and researchers strongly recommend using heuristic
methods. This paper presents a new heuristic approach called the
critical task method (CTM) for solving U-shape assembly line
balancing problems. The performance of the proposed heuristic
method is tested by solving a number of test problems and comparing
them with 12 other heuristics available in the literature to confirm the
superior performance of the proposed heuristic. Furthermore, to
prove the efficiency of the proposed CTM, the objectives are
extended to minimizing the number of workstations (or, equivalently,
maximizing line efficiency) and minimizing the smoothness index.
Finally, it is shown that the proposed heuristic is more efficient than
the others at solving the U-shape assembly line balancing problem.
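The two objectives mentioned (line efficiency and smoothness index) follow standard definitions in line balancing, sketched below; the station-time inputs in the example are hypothetical:

```python
import math

def line_efficiency(station_times, cycle_time):
    """Line efficiency = total station time / (number of stations * cycle time).
    A value of 1.0 means every station is fully utilised."""
    return sum(station_times) / (len(station_times) * cycle_time)

def smoothness_index(station_times):
    """Smoothness index = sqrt of the summed squared deviations of each
    station time from the maximum station time; 0 means a perfectly
    balanced line."""
    st_max = max(station_times)
    return math.sqrt(sum((st_max - t) ** 2 for t in station_times))
```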
Abstract: Due to the coexistence of different Radio Access
Technologies (RATs), Next Generation Wireless Networks (NGWN)
are predicted to be heterogeneous in nature. The coexistence of
different RATs requires a need for Common Radio Resource
Management (CRRM) to support the provision of Quality of Service
(QoS) and the efficient utilization of radio resources. RAT selection
algorithms are part of the CRRM algorithms. Simply put, their role is to
verify whether an incoming call can be accommodated in a heterogeneous
wireless network, and to decide which of the available RATs is most
suitable to meet the needs of the incoming call and admit it.
Guaranteeing the QoS requirements of all accepted calls while at
the same time providing the most efficient utilization of
the available radio resources is the goal of a RAT selection algorithm.
Conventional call admission control algorithms are designed for
homogeneous wireless networks and do not provide a solution
suited to the heterogeneous wireless networks that characterize NGWN.
Therefore, there is a need to develop RAT selection algorithms for
heterogeneous wireless networks. In this paper, we propose an
approach for RAT selection which includes receiving different
criteria, assessing and making decisions, then selecting the most
suitable RAT for incoming calls. A comprehensive survey of
different RAT selection algorithms for heterogeneous wireless
networks is also presented.
Abstract: Linearization of graph embedding has emerged
as an effective dimensionality reduction technique in pattern
recognition. However, it may not be optimal for nonlinearly
distributed real-world data, such as faces, due to its linear nature. So, a
kernelization of graph embedding is proposed as a dimensionality
reduction technique in face recognition. In order to further boost the
recognition capability of the proposed technique, Fisher's
criterion is adopted in the objective function for better data
discrimination. The proposed technique is able to characterize the
underlying intra-class structure as well as the inter-class separability.
Experimental results on FRGC database validate the effectiveness of
the proposed technique as a feature descriptor.
Abstract: The storage of thermal energy as a latent heat of phase
change material (PCM) has created considerable interest among
researchers in recent times. Here, an attempt is made to carry out
numerical investigations to analyze the performance of latent heat
storage units (LHSU) employing phase change material. The
mathematical model developed is based on an enthalpy formulation.
Freezing time of PCM packed in three different shaped containers
viz. rectangular, cylindrical and cylindrical shell is compared. The
model is validated with the results available in the literature. Results
show that for the same mass of PCM and surface area of heat
transfer, cylindrical shell container takes the least time for freezing
the PCM and this geometric effect is more pronounced with an
increase in the thickness of the shell than that of length of the shell.
Abstract: Image compression is one of the most important
applications of Digital Image Processing. Advanced medical imaging
requires storage of large quantities of digitized clinical data. Due to
the constrained bandwidth and storage capacity, however, a medical
image must be compressed before transmission and storage. There
are two types of compression methods, lossless and lossy. In Lossless
compression method the original image is retrieved without any
distortion. In lossy compression method, the reconstructed images
contain some distortion. Discrete Cosine Transform (DCT) and Fractal
Image Compression (FIC) are types of lossy compression methods.
This work shows that lossy compression methods can be chosen for
medical image compression without significant degradation of the
image quality. In this work DCT and Fractal Compression using
Partitioned Iterated Function Systems (PIFS) are applied on different
modalities of images like CT Scan, Ultrasound, Angiogram, X-ray
and mammogram. Approximately 20 images are considered in each
modality and the average values of compression ratio and Peak
Signal to Noise Ratio (PSNR) are computed and studied. The quality
of the reconstructed image is assessed by the PSNR values. Based on
the results, it can be concluded that DCT gives higher PSNR values
and FIC gives a higher compression ratio. Hence, in medical image
compression, DCT can be used wherever picture quality is preferred,
and FIC can be used wherever compression of images for storage and
transmission is the priority, without losing diagnostically relevant
picture quality.
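For reference, the two quantities averaged in the study (PSNR and compression ratio) follow standard definitions, sketched below; an 8-bit peak value of 255 is assumed:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(peak^2 / MSE).
    Higher is better; identical images give infinity."""
    o = np.asarray(original, dtype=float)
    r = np.asarray(reconstructed, dtype=float)
    mse = np.mean((o - r) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of uncompressed to compressed size; higher means smaller files."""
    return original_bytes / compressed_bytes
```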