Abstract: In this paper, multiprocessor job shop scheduling problems are solved by a heuristic algorithm that hybridizes priority dispatching rules within an ant colony optimization algorithm. The objective is to minimize the makespan, i.e., the total completion time, while allowing the simultaneous presence of several kinds of pheromones. Using a suitable hybrid of priority dispatching rules improves the search for the best solution. The ant colony optimization algorithm not only strengthens the proposed algorithm but also decreases the total working time by reducing setup times and reorganizing the production line, so that similar jobs share the same production lines. Another advantage of this algorithm is that similar (but not identical) machines can be considered, so these machines can process a job with different processing and setup times. To evaluate the algorithm and this capability, a number of test problems are solved and the associated results are analyzed. The results show a significant decrease in throughput time and demonstrate that the algorithm is able to recognize the bottleneck machine and to schedule jobs efficiently.
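As an illustrative sketch only (not the authors' implementation), the hybrid can be pictured as ants choosing among classical dispatching rules in proportion to a rule-level pheromone, with good schedules reinforcing the rules they used. The job fields, the rule set, and the update constants below are assumptions made for the example:

```python
import random

# Hypothetical sketch: pheromone-weighted selection among dispatching rules.
RULES = {
    "SPT": lambda job: job["proc_time"],    # shortest processing time first
    "LPT": lambda job: -job["proc_time"],   # longest processing time first
    "EDD": lambda job: job["due_date"],     # earliest due date first
}
pheromone = {name: 1.0 for name in RULES}   # one pheromone value per rule

def pick_rule():
    """Roulette-wheel selection of a rule, proportional to pheromone."""
    total = sum(pheromone.values())
    r, acc = random.uniform(0, total), 0.0
    for name, tau in pheromone.items():
        acc += tau
        if r <= acc:
            return name
    return name

def build_schedule(jobs):
    """Greedily sequence jobs; each step uses a pheromone-selected rule."""
    remaining, order, used = list(jobs), [], []
    while remaining:
        rule = pick_rule()
        used.append(rule)
        job = min(remaining, key=RULES[rule])
        order.append(job)
        remaining.remove(job)
    return order, used

def reinforce(used_rules, makespan, rho=0.1):
    """Evaporate pheromone, then deposit an amount inversely
    proportional to the makespan on the rules the ant used."""
    for name in pheromone:
        pheromone[name] *= (1 - rho)
    for name in used_rules:
        pheromone[name] += 1.0 / makespan
```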
Abstract: A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. Classification accuracy can be improved only if both the feature extraction and the classifier selection are appropriate. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run-length feature extraction is employed along with Principal Components and Independent Components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the first forty bands, and from the GLRLMs the run-length features for individual pixels are computed. Principal Components are calculated for another forty bands, and Independent Components for the remaining forty bands. As Principal and Independent Components have the ability to represent the textural content of pixels, they are treated as features. The run-length features, Principal Components, and Independent Components together form the combined features used for classification. An SVM with a Binary Hierarchical Tree is used to classify the hyperspectral image. Results are validated against ground truth and accuracies are calculated.
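For readers unfamiliar with run-length texture features, a minimal sketch of computing a horizontal GLRLM and one classic derived feature (Short Run Emphasis) for a single quantized band might look as follows; the quantization scheme and the number of gray levels are illustrative choices, not the paper's settings:

```python
import numpy as np

def glrlm_horizontal(band, levels=8):
    """Gray Level Run Length Matrix for horizontal runs.
    band: 2-D array, quantized here to `levels` gray levels."""
    q = np.floor(band.astype(float) / (band.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    glrlm = np.zeros((levels, band.shape[1]), dtype=int)
    for row in q:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                glrlm[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        glrlm[run_val, run_len - 1] += 1   # close the last run of the row
    return glrlm

def short_run_emphasis(glrlm):
    """One classic run-length feature: Short Run Emphasis (SRE)."""
    j = np.arange(1, glrlm.shape[1] + 1)   # run lengths 1..max
    return (glrlm / j**2).sum() / max(glrlm.sum(), 1)
```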
Abstract: Economic Load Dispatch (ELD) is a method of determining the most efficient, low-cost and reliable operation of a power system by dispatching the available electricity generation resources to supply the load on the system. The primary objective of economic dispatch is to minimize the total cost of generation while honoring the operational constraints of the available generation resources. In this paper an intelligent water drop (IWD) algorithm is proposed to solve the ELD problem with the objective of minimizing the total cost of generation. The intelligent water drop algorithm is a swarm-based, nature-inspired optimization algorithm inspired by natural rivers. A natural river often finds good paths among many possible paths on its way from source to destination, finally settling on a nearly optimal path. These ideas are embedded into the proposed algorithm for solving the economic load dispatch problem. The main advantages of the proposed technique are that it is easy to implement and capable of finding feasible, near-global-optimal solutions with little computational effort. To illustrate the effectiveness of the proposed method, it has been tested on 6-unit and 20-unit test systems with incremental fuel cost functions that take into account valve-point loading effects. Numerical results show that the proposed method has good convergence properties and yields better solution quality than other algorithms reported in the recent literature.
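As context, a sketch of the objective any such metaheuristic would minimize is given below: the standard quadratic fuel cost with the valve-point term, plus a simple penalty for the power-balance constraint. The coefficient arrays and the penalty weight are placeholders, not the paper's test data, and transmission losses are omitted for brevity:

```python
import numpy as np

def fuel_cost(P, a, b, c, e, f, Pmin):
    """Total generation cost with valve-point loading:
    F_i = a_i P_i^2 + b_i P_i + c_i + |e_i sin(f_i (Pmin_i - P_i))|."""
    P = np.asarray(P, dtype=float)
    return np.sum(a * P**2 + b * P + c + np.abs(e * np.sin(f * (Pmin - P))))

def penalized_cost(P, demand, a, b, c, e, f, Pmin, w=1e4):
    """Quadratic penalty for violating the balance sum(P) = demand."""
    return fuel_cost(P, a, b, c, e, f, Pmin) + w * (np.sum(P) - demand) ** 2
```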
Abstract: We have developed an analytic model for the radial p-n junction in a nanowire (NW) core-shell structure, which serves as a new building block in various semiconductor devices. The potential distribution through the p-n junction is calculated, and analytical expressions are derived to compute the depletion region widths. We show that the widths of the space charge layers surrounding the core are functions of the core radius, which is a manifestation of the so-called classical size effect. In the limit of infinitely large core radius, the relationship between the depletion layer width and the built-in potential reduces to the square-root dependence characteristic of conventional planar p-n junctions. An explicit equation is derived to compute the capacitance of the radial p-n junction. The current-voltage behavior is also carefully determined, taking into account the "short base" effects.
Abstract: In this paper an ant colony optimization algorithm is developed to solve the permutation flow shop scheduling problem. In the permutation flow shop scheduling problem, which has been extensively studied in the literature, there are a set of m machines and a set of n jobs. All the jobs are processed on all the machines, and the sequence in which the jobs are processed is the same on every machine. Here this problem is optimized with respect to two criteria, makespan and total flow time, and the results are compared with those obtained by previously developed algorithms. The comparison shows that the proposed approach performs best among the algorithms in the literature.
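Evaluating the two criteria for a given permutation is the well-known dynamic recurrence C[i][k] = max(C[i-1][k], C[i][k-1]) + p[i][order[k]]; a minimal sketch (variable names are illustrative) is:

```python
import numpy as np

def makespan_and_flowtime(p, order):
    """Completion times in a permutation flow shop.
    p[i][j]: processing time of job j on machine i; order: job sequence."""
    m, n = p.shape[0], len(order)
    C = np.zeros((m, n))
    for i in range(m):
        for k in range(n):
            up = C[i - 1, k] if i > 0 else 0.0     # same job, previous machine
            left = C[i, k - 1] if k > 0 else 0.0   # previous job, same machine
            C[i, k] = max(up, left) + p[i, order[k]]
    return C[-1, -1], C[-1, :].sum()   # makespan, total flow time
```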
Abstract: There are many problems associated with the World Wide Web: getting lost in hyperspace, web content that is still accessible only to humans, and the difficulty of web administration. The solution to these problems is the Semantic Web, which is considered an extension of the current web that presents information in both human-readable and machine-processable form. The aim of this study is to arrive at a new generic foundation architecture for the Semantic Web, because no clear architecture exists for it: there are four versions, but up to now there is no agreement on any one of them, nor is there a clear picture of the relation between the different layers and technologies inside this architecture. This can be done by building on the ideas of the previous versions as well as Gerber's evaluation method, as a step toward agreement on one Semantic Web architecture.
Abstract: Since dealing with high dimensional data is computationally complex and sometimes even intractable, several feature reduction methods have recently been developed to reduce the dimensionality of the data and thereby simplify the analysis in applications such as text categorization, signal processing, image retrieval and gene expression analysis. Among feature reduction techniques, feature selection is one of the most popular methods because it preserves the original features. In this paper, we propose a new unsupervised feature selection method that removes redundant features from the original feature space using the probability density functions of the features. To show the effectiveness of the proposed method, popular feature selection methods have been implemented and compared. Experimental results on several datasets from the UCI repository illustrate the effectiveness of the proposed method, compared with the other methods, in terms of both classification accuracy and the number of selected features.
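A minimal sketch of PDF-based redundancy removal is given below. The paper's exact similarity measure is not reproduced here; a histogram estimate of each feature's PDF compared by Jensen-Shannon divergence is an illustrative stand-in, and the bin count and threshold are assumptions:

```python
import numpy as np

def feature_pdf(x, bins=32):
    """Smoothed histogram estimate of one feature's PDF (shape only)."""
    hist, _ = np.histogram(x, bins=bins)
    return (hist + 1e-12) / (hist.sum() + 1e-12 * bins)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete PDFs."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def select_features(X, threshold=0.05):
    """Keep a feature only if its PDF differs enough from all kept ones.
    Features should be comparably scaled before comparing PDF shapes."""
    pdfs, kept = [], []
    for j in range(X.shape[1]):
        p = feature_pdf(X[:, j])
        if all(js_divergence(p, q) > threshold for q in pdfs):
            kept.append(j)
            pdfs.append(p)
    return kept
```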
Abstract: Educational institutions are increasingly exploring the affordances of 3D virtual worlds for instruction and research, but few studies have documented current practices and uses of this emerging technology. This observational survey examines the virtual presences of 170 accredited educational institutions found in one such 3D virtual world called Second Life®, created by San Francisco-based Linden Lab®. The study focuses on what educational institutions look like in this virtual environment, the types of spaces educational institutions are creating or simulating, and the types of activities being conducted.
Abstract: This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. A possible way of reordering the words is to generate all permutations; the problem is that for a sentence of N words the number of permutations is N!. The novelty of this method is the use of an efficient confusion matrix technique for reordering the words, designed to reduce the search space of permuted sentences. The reduction of the search space is achieved using statistical inference over N-grams. The results of this technique are very promising and show that the number of permuted sentences can be reduced by 98.16%. For experimental purposes a test set of TOEFL sentences was used, and the results show that more than 95% of the sentences can be repaired using the proposed method.
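A sketch of the brute-force baseline the paper improves upon, scoring every permutation of a (short) sentence by its trigram hits, is shown below; the confusion-matrix pruning itself is not reproduced, and the toy trigram set is purely illustrative:

```python
from itertools import permutations

def trigram_hits(words, trigrams):
    """Count trigrams of `words` that appear in the language model set."""
    return sum(1 for i in range(len(words) - 2)
               if tuple(words[i:i + 3]) in trigrams)

def best_reordering(words, trigrams, max_len=8):
    """Exhaustive search over permutations; N! blows up, so guard the demo."""
    if len(words) > max_len:
        return words
    return list(max(permutations(words),
                    key=lambda w: trigram_hits(w, trigrams)))

# Usage with a toy trigram set:
lm = {("the", "cat", "sat"), ("cat", "sat", "down")}
print(best_reordering(["sat", "the", "cat", "down"], lm))
# -> ['the', 'cat', 'sat', 'down']
```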
Abstract: Customarily, the LMTD correction factor, FT, is used to screen alternative designs for a heat exchanger, and designs with unacceptably low FT values are discarded. In this paper, the authors propose a more fundamental criterion, based on the feasibility of a multipass exchanger as the only criterion, followed by economic optimization. This criterion, coupled with asymptotic energy targets, provides the complete optimization space for a heat exchanger network (HEN), where cost optimization of the HEN can be performed with only the Heat Recovery Approach Temperature (HRAT) and the number of shells as variables.
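For context, FT enters exchanger design through the standard rating relation (textbook form, not a result of this paper):

```latex
% Duty of a multipass exchanger with the LMTD correction factor F_T:
\begin{equation}
  Q \;=\; U A\, F_T\, \Delta T_{lm}, \qquad
  \Delta T_{lm} \;=\;
  \frac{\Delta T_1 - \Delta T_2}{\ln\!\left(\Delta T_1/\Delta T_2\right)}
\end{equation}
```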
Abstract: Surface sediment samples were collected from the Canon River mouth, Taiwan, and analyzed for polycyclic aromatic hydrocarbons (PAHs). Total PAH concentrations varied from 337 to 1,252 ng/g dry weight, with a mean concentration of 827 ng/g dry weight. The spatial distribution of PAHs reveals that the PAH concentration is relatively high in the river mouth region and gradually diminishes toward the harbor region. Diagnostic ratios showed that the possible source of PAHs in the Canon River mouth could be petroleum combustion. The toxic equivalent concentrations (TEQcarc) of PAHs varied from 47 to 112 ng TEQ/g dry weight, with higher total TEQcarc values found in the river mouth region. Compared with the US Sediment Quality Guidelines (SQGs), the observed levels of PAHs at the Canon River mouth were lower than the effects range low (ERL) and would probably not exert adverse biological effects.
Abstract: In this study, we used shape memory alloys (SMAs) as actuators to build a biomorphic robot that imitates the motion of an earthworm. The robot is intended to explore narrow spaces, which is why shape memory alloys were chosen as actuators. Because a wire-shaped shape memory alloy produces only a small deformation, spiral shape memory alloys are selected and installed on both the X axis and the Y axis (two shape memory alloys per axis) to enable the biomorphic robot to perform a reciprocating motion. With the mechanism we designed, the robot advances a net distance in each duty cycle. In addition, two shape memory alloys are added to the robot head to control right and left turns. Pulses sent through the I/O card from the controller are amplified by a driver to heat the shape memory alloys, making the SMAs contract and pull the mechanism forward.
Abstract: Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images and each one is tested separately by a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images simultaneously with the same number of faster neural networks. In contrast to using faster neural processors alone, the speed-up ratio grows with the size of the input image when faster neural processors are combined with image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain, and the overall speed-up ratio of the detection process increases further because the normalization of the weights is done offline.
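The core operation, cross-correlation computed in the frequency domain, together with a sequential stand-in for the sub-image decomposition (run in parallel in the paper), can be sketched as follows; the tile size is an illustrative choice:

```python
import numpy as np

def xcorr_fft(image, kernel):
    """Cross-correlate `kernel` (e.g., neural weights) with `image`:
    pointwise multiplication with a conjugate in the frequency domain."""
    s0 = image.shape[0] + kernel.shape[0] - 1
    s1 = image.shape[1] + kernel.shape[1] - 1
    F_img = np.fft.rfft2(image, s=(s0, s1))
    F_ker = np.fft.rfft2(kernel, s=(s0, s1))
    return np.fft.irfft2(F_img * np.conj(F_ker), s=(s0, s1))

def detect_by_decomposition(image, kernel, tile=64):
    """Divide and conquer: correlate each sub-image independently."""
    responses = []
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            sub = image[r:r + tile, c:c + tile]
            if (sub.shape[0] >= kernel.shape[0]
                    and sub.shape[1] >= kernel.shape[1]):
                responses.append(xcorr_fft(sub, kernel))
    return responses
```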
Abstract: Clusters of microcalcifications in mammograms are an important sign of breast cancer. This paper presents a complete Computer Aided Detection (CAD) scheme for the automatic detection of clustered microcalcifications in digital mammograms. The proposed system, MammoScan μCaD, consists of three main steps. First, all potential microcalcifications are detected using a feature extraction method, VarMet, and adaptive thresholding; this also yields a number of false detections. The goal of the second step, Classifier level 1, is to remove everything but microcalcifications. The last step, Classifier level 2, uses learned dictionaries and sparse representations as a texture classification technique to distinguish single, benign microcalcifications from clustered microcalcifications, and to remove some remaining false detections. The system is trained and tested on true digital data from Stavanger University Hospital, and the results are evaluated by radiologists. The overall results are promising, with a sensitivity > 90% and a low false detection rate (approximately 1 unwanted detection per image, or 0.3 false detections per image).
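VarMet itself is specific to the paper and not reproduced here, but a generic local adaptive threshold of the kind used in the first detection step, flagging pixels that exceed the local mean by k local standard deviations, looks like this (window size and k are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(img, window=31, k=3.0):
    """Candidate mask: pixel > local mean + k * local std."""
    mean = uniform_filter(img.astype(float), size=window)
    sq_mean = uniform_filter(img.astype(float) ** 2, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    return img > mean + k * std
```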
Abstract: The main aim of this study is to describe and introduce a method of numerical analysis for obtaining approximate solutions of the SIR-SI differential equations (susceptible-infective-recovered for the human population; susceptible-infective for the vector population) that model dengue disease transmission. First, we describe the ordinary differential equations of the SIR-SI disease transmission model. Then, we introduce the numerical analysis of solutions of this continuous time, discrete space SIR-SI model by simplifying the continuous time scale to a densely populated, discrete time scale. This is followed by the application of this numerical analysis to the estimation of relative risk using continuous time, discrete space dengue data for Kuala Lumpur, Malaysia. Finally, we present the results of the analysis, comparing and displaying them in graphs, a table and maps. The numerical analysis of solutions that we implemented offers a useful and potentially superior model for estimating relative risks based on continuous time, discrete space data for vector-borne infectious diseases, specifically dengue.
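A standard form of the host-vector SIR-SI system the abstract refers to is shown below; the parameter notation (birth/death rates μ, transmission rates β, biting rate b, recovery rate γ) is illustrative rather than the paper's exact formulation:

```latex
% Host population N_h split into S_h, I_h, R_h; vector population N_v into S_v, I_v.
\begin{align}
  \frac{dS_h}{dt} &= \mu_h N_h - \frac{\beta_h b}{N_h} S_h I_v - \mu_h S_h, &
  \frac{dS_v}{dt} &= \mu_v N_v - \frac{\beta_v b}{N_h} S_v I_h - \mu_v S_v,\\
  \frac{dI_h}{dt} &= \frac{\beta_h b}{N_h} S_h I_v - (\gamma_h + \mu_h) I_h, &
  \frac{dI_v}{dt} &= \frac{\beta_v b}{N_h} S_v I_h - \mu_v I_v,\\
  \frac{dR_h}{dt} &= \gamma_h I_h - \mu_h R_h.
\end{align}
```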
Abstract: The H.264/AVC video coding standard contains a number of advanced features. One of the new features introduced in this standard is multiple intra-mode prediction, which exploits directional spatial correlation with adjacent blocks for intra prediction. With this new feature, intra coding in H.264/AVC offers considerably higher coding efficiency than other compression standards, but the computational complexity increases significantly when the brute-force rate distortion optimization (RDO) algorithm is used. In this paper, we propose a new fast intra prediction mode decision method to reduce the complexity of H.264 video coding. For luma intra prediction, the proposed method consists of two steps. In the first step, we perform RDO for four modes of the intra 4x4 block; based on the distribution of the RDO costs of these modes and on the strong correlation between adjacent modes, we select the best mode of the intra 4x4 block. In the second step, based on the fact that the dominating direction of a smaller block is similar to that of the bigger block, the candidate modes of 8x8 blocks and 16x16 macroblocks are determined. For chroma intra prediction, since the variance of the chroma pixel values is much smaller than that of the luma ones, our proposed method uses only the DC mode. Experimental results show that the new fast intra mode decision algorithm increases the speed of intra coding significantly with negligible loss of PSNR.
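Schematically, RDO mode decision scores every candidate mode by the Lagrangian cost J = D + λR and keeps the cheapest; the fast method above shrinks the candidate set instead of testing all nine 4x4 modes. In this sketch, `predict`, `distortion`, and `rate` are placeholders for codec internals, not real H.264 library calls:

```python
def rd_cost(block, mode, lam, predict, distortion, rate):
    """Lagrangian rate-distortion cost J = D + lambda * R for one mode."""
    pred = predict(block, mode)          # intra prediction for this mode
    return distortion(block, pred) + lam * rate(block, pred, mode)

def best_mode(block, candidate_modes, lam, predict, distortion, rate):
    """Pick the candidate mode with the lowest RD cost."""
    return min(candidate_modes,
               key=lambda m: rd_cost(block, m, lam,
                                     predict, distortion, rate))
```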
Abstract: Our work falls within the field of heterogeneous data integration, with the definition of a structural and semantic mediation model. Our aim is to propose an architecture for mediating the metadata of heterogeneous sources, represented by XML, RDF and RuleML models, that provides metadata transparency to the user. This is achieved by accommodating data structures of fundamentally different natures, decomposing a query involving multiple sources into queries specific to those sources, and then recomposing the result.
Abstract: Understanding the system-level behavior of biological phenomena requires various elements such as gene sequences, protein structures, gene functions and metabolic pathways. Challenging problems include representing, learning and reasoning about these biochemical reactions, gene and protein structures, the relation between genotype and phenotype, and the expression system built on those interactions. The goal of our work is to understand the behavior of interaction networks and to model their evolution in time and in space. In this study we propose an ontological meta-model for the knowledge representation of genetic regulatory networks. In artificial intelligence, an ontology specifies the fundamental categories and relations that provide a framework for knowledge models, and domain ontologies are now commonly used to enable heterogeneous information resources, such as knowledge-based systems, to communicate with each other. The interest of our model is its ability to represent spatial, temporal and spatio-temporal knowledge. We validated our propositions on the genetic regulatory network of the Arabidopsis thaliana flower.
Abstract: With the explosive growth of data available on the Internet, personalization of this information space has become a necessity. With the rapidly increasing popularity of the WWW, websites play a crucial role in conveying knowledge and information to end users. Discovering hidden and meaningful information about Web users' usage patterns is critical for determining effective marketing strategies and for optimizing Web server usage to accommodate future growth. The task of mining useful information becomes more challenging when the Web traffic volume is enormous and keeps growing. In this paper, we propose an intelligent model to discover and analyze useful knowledge from the available Web log data.