Abstract: In this paper we compare the accuracy of data mining
methods for classifying students in order to predict a student's class
grade. Such predictions are useful for identifying weak students and
assisting management in taking remedial measures at an early stage, so
as to produce excellent graduates who finish with at least a second
class upper. We first examine the accuracy of single classifiers on
our data set, choose the best one, and then ensemble it with a weak
classifier to produce a simple voting method. The presented results
show that combining different classifiers outperforms single
classifiers for predicting student performance.
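The simple voting scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three classifiers are hypothetical rule stubs and the student features (`cgpa`, `attendance`) are invented, since the abstract does not name the algorithms or attributes actually used.

```python
# Minimal sketch of simple (majority) voting over several classifiers.
# The classifier rules and student features below are hypothetical
# stand-ins; the abstract does not specify the algorithms combined.
from collections import Counter

def strong_classifier(x):
    # stand-in for the best single classifier found on the data set
    return "pass" if x["cgpa"] >= 2.0 else "fail"

def weak_classifier(x):
    # stand-in for the weak classifier it is ensembled with
    return "pass" if x["attendance"] >= 0.5 else "fail"

def third_classifier(x):
    return "pass" if x["cgpa"] + x["attendance"] >= 2.4 else "fail"

def vote(classifiers, x):
    """Predict the majority label among the individual predictions."""
    labels = [clf(x) for clf in classifiers]
    return Counter(labels).most_common(1)[0][0]

student = {"cgpa": 2.1, "attendance": 0.4}
prediction = vote([strong_classifier, weak_classifier, third_classifier],
                  student)
print(prediction)  # majority of ["pass", "fail", "pass"] -> "pass"
```

The point of the combination is that the weak classifier can flip the ensemble only where the remaining classifiers disagree, which is where single-classifier accuracy is lowest.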
Abstract: In this paper, the effects of radiation, chemical
reaction and double dispersion on mixed convection heat and mass
transfer along a semi-infinite vertical plate are considered. The
plate is embedded in a Newtonian-fluid-saturated non-Darcy
(Forchheimer flow model) porous medium. The Forchheimer extension and
a first-order chemical reaction are considered in the flow equations.
The governing sets of partial differential equations are
nondimensionalized and reduced to a set of ordinary differential
equations, which are then solved numerically by the fourth-order
Runge–Kutta method. Detailed numerical results for the velocity,
temperature, and concentration profiles, as well as the heat transfer
rates (Nusselt number) and mass transfer rates (Sherwood number),
against various parameters are presented in graphs. The obtained
results are checked against previously published work for special
cases of the problem and are found to be in good agreement.
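The classical fourth-order Runge–Kutta scheme used for the reduced ODE system can be sketched for a single equation. The test problem dy/dx = y (exact solution e^x) is an illustrative stand-in, not one of the paper's similarity equations:

```python
# Minimal sketch of the classical fourth-order Runge-Kutta method for
# dy/dx = f(x, y). The test equation dy/dx = y with y(0) = 1 is a
# stand-in for the paper's reduced ordinary differential equations.
import math

def rk4_step(f, x, y, h):
    """Advance y by one step of size h using the four RK4 stages."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(f, x0, y0, x_end, n):
    """Integrate from x0 to x_end in n equal RK4 steps."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y = rk4_step(f, x, y, h)
        x += h
    return y

y_end = integrate(lambda x, y: y, 0.0, 1.0, 1.0, 100)
print(y_end)  # close to e = 2.71828..., global error is O(h^4)
```

In the actual problem the same stepping is applied componentwise to the coupled system of velocity, temperature, and concentration equations, typically combined with a shooting method for the boundary conditions.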
Abstract: An evolutionary method whose selection and recombination
operations are based on generalization error-bounds of
support vector machine (SVM) can select a subset of potentially
informative genes for the SVM classifier very efficiently [7]. In this
paper, we will use the derivative of error-bound (first-order criteria)
to select and recombine gene features in the evolutionary process,
and compare the performance of the derivative of error-bound with
the error-bound itself (zero-order) in the evolutionary process. We
also investigate several error-bounds and their derivatives to compare
the performance, and find the best criteria for gene selection
and classification. We use 7 cancer-related human gene expression
datasets to evaluate the performance of the zero-order and first-order
criteria of error-bounds. Though both criteria follow the same
strategy in theory, the experimental results identify the best
criterion for microarray gene expression data.
Abstract: Due to the availability of powerful image processing
software and the improvement of human computer knowledge, it has
become easy to tamper with images. Manipulation of digital images in
different fields, such as courts of law and medical imaging, creates a
serious problem nowadays. Copy-move forgery is one of the most common
types of forgery; it copies some part of an image and pastes it onto
another part of the same image to cover an important scene. In this
paper, a copy-move forgery detection method based on the Fourier
transform is proposed. Firstly, the image is divided into blocks of
the same size and the Fourier transform is performed on each block.
Similarity in the Fourier transform between different blocks provides
an indication of the copy-move operation. The experimental results
show that the proposed method runs in reasonable time and works well
for both grayscale and colour images. The computational complexity of
the method is reduced by using the Fourier transform.
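The block-matching idea can be sketched as follows. This is an illustrative reading of the abstract, not the paper's exact algorithm: the naive O(n^4) DFT, the block size, the tolerance, and the toy 8x8 image are all assumptions made for the sketch.

```python
# Minimal sketch of block-wise copy-move detection: divide a grayscale
# image into equal non-overlapping blocks, take the magnitude of each
# block's 2D DFT, and flag block pairs with (near-)identical spectra.
# The naive DFT and the toy image are illustrative stand-ins.
import cmath

def dft2_magnitude(block):
    """Magnitude of the 2D discrete Fourier transform of a square block."""
    n = len(block)
    mags = []
    for u in range(n):
        row = []
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += block[x][y] * cmath.exp(
                        -2j * cmath.pi * (u * x + v * y) / n)
            row.append(abs(s))
        mags.append(row)
    return mags

def blocks(image, size):
    """Yield (row, col, block) for non-overlapping size x size blocks."""
    for r in range(0, len(image) - size + 1, size):
        for c in range(0, len(image[0]) - size + 1, size):
            yield r, c, [row[c:c + size] for row in image[r:r + size]]

def detect_copy_move(image, size=4, tol=1e-6):
    """Return pairs of block positions whose DFT magnitudes match."""
    specs = [(r, c, dft2_magnitude(b)) for r, c, b in blocks(image, size)]
    matches = []
    for i in range(len(specs)):
        for j in range(i + 1, len(specs)):
            diff = sum(abs(a - b)
                       for ra, rb in zip(specs[i][2], specs[j][2])
                       for a, b in zip(ra, rb))
            if diff < tol:
                matches.append((specs[i][:2], specs[j][:2]))
    return matches

# Toy 8x8 image: the top-left 4x4 patch was copied to the top-right.
patch = [[10, 20, 30, 40], [50, 60, 70, 80],
         [90, 80, 70, 60], [50, 40, 30, 20]]
image = [patch[r] + patch[r] for r in range(4)] + [[0] * 8 for _ in range(4)]
# both the copied patch pair and the flat background pair are reported
print(detect_copy_move(image))
```

Note that uniform background blocks (the two zero blocks here) also produce identical spectra; practical detectors must filter out such flat-region false positives, for example with a variance threshold.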
Abstract: The primary objective of this paper was to construct a
“kinematic parameter-independent modeling of three-axis machine
tools for geometric error measurement" technique. Improving the
geometric error accuracy of three-axis machine tools is one of the
machine tools' core techniques. This paper first applied the
traditional homogeneous transformation matrix (HTM) method to deduce
the geometric error model for three-axis machine tools. This geometric
error model was related to the three-axis kinematic parameters, where
the overall errors were expressed relative to the machine reference
coordinate system. Given that the
measurement of the linear axis in this model should be on the ideal
motion axis, there were practical difficulties. Through a measurement
method consolidating translational errors and rotational errors in the
geometric error model, we simplified the three-axis geometric error
model to a kinematic parameter-independent model. Finally, based on
the new measurement method corresponding to this error model, we
established a truly practical and more accurate error measuring
technique for three-axis machine tools.
Abstract: The present investigation aimed to develop a methodology for the standardization of Marichyadi Vati and its raw materials. Standardization was carried out using systematic pharmacognostical and physicochemical parameters as per WHO guidelines. From the detailed standardization of Marichyadi Vati, it is concluded that no major differences prevailed in the quality of the marketed products and the laboratory samples of Marichyadi Vati. However, the market samples showed a slightly higher amount of piperine than the laboratory sample by both methods. This is the first attempt to generate a complete set of standards required for Marichyadi Vati.
Abstract: MRAM technology provides a combination of fast
access time, non-volatility, data retention and endurance. While
growing interest has been given to two-terminal Magnetic Tunnel
Junctions (MTJ) based on Spin-Transfer Torque (STT) switching as a
potential candidate for a universal memory, their reliability is
dramatically decreased because of the common writing/reading path.
Three-terminal MTJ based on Spin-Orbit Torque (SOT) approach
revitalizes the hope of an ideal MRAM. It can overcome the
reliability barrier encountered in current two-terminal MTJs by
separating the reading and the writing path. In this paper, we study
two possible writing schemes for the SOT-MTJ device based on
recently fabricated samples. While the first is based on precessional
switching, the second requires the presence of a permanent magnetic
field. Based on an accurate Verilog-A model, we simulate the two
writing techniques and highlight the advantages and drawbacks of each.
Using the second technique, pioneering logic circuits based
on the three-terminal architecture of the SOT-MTJ described in this
work are under development with preliminary attractive results.
Abstract: This is the first report from India on a beverage resulting from alcoholic fermentation of the juice of sea buckthorn (Hippophae rhamnoides L.) using a laboratory-isolated yeast strain. The health-promoting potential of the product was evaluated based on its total phenolic content. The most important finding was that, under the present fermentation conditions, the total phenolic content of the wine product was 689 mg GAE/L. Investigation of the influence of bottle ageing on the sea buckthorn wine showed a slight decrease in the phenolic content (534 mg GAE/L). This study also includes a comparative analysis of the phenolic content of wines from other selected fruit juices such as grape, apple and black currant. Keywords: Alcoholic fermentation, Hippophae, Total phenolic content, Wine
Abstract: The first and basic cause of the failure of concrete is repeated freezing (thawing) of moisture contained in the pores, microcracks, and cavities of the concrete. On transition to ice, water existing in the free state in cracks increases in volume, expanding the recess in which freezing occurs. A reduction in strength below the initial value is to be expected, and further cycles of freezing and thawing have a further marked effect. By using some experimental parameters, such as nuclear magnetic resonance (NMR) variation and enthalpy-temperature (or heat capacity) variation, we can resolve the various water states and their effect on concrete properties during cooling through the freezing transition temperature range. The main objective of this paper is to describe the principal type of water responsible for the reduction in strength and structural damage (frost damage) of concrete following repeated freeze–thaw cycles. Some experimental work was carried out at the Institute of Cryogenics to determine what happens to water in concrete during the freezing transition.
Abstract: A new target detection technique is presented in this
paper for the identification of small boats in coastal surveillance. The
proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene to separate any
objects present in the scene from the background. The preprocessing
step results in an image having only the foreground objects, such as
boats, trees and other cluttered regions, and hence reduces the search
region for the correlation step significantly. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform
correlator (SPFJTC) technique, which produces a single, delta-like
correlation peak for each potential target present in the input scene. A
post-processing step involves using a peak-to-clutter ratio (PCR) to determine whether the boat in the input scene is authorized or unauthorized. Simulation results are presented to show that the
proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat present in the given input scene.
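The PCR decision step can be sketched as follows. The ratio definition (peak over mean of the remaining correlation-plane values), the toy planes, and the threshold are assumptions for illustration; the abstract gives no numerical details.

```python
# Minimal sketch of a peak-to-clutter ratio (PCR) decision on a
# correlation plane: the correlation peak is divided by the mean of
# the remaining (clutter) values, and a threshold decides whether the
# boat matches an authorized reference. The exact PCR definition,
# planes, and threshold here are hypothetical.

def peak_to_clutter_ratio(plane):
    """Correlation peak divided by the mean of all other plane values."""
    values = sorted(v for row in plane for v in row)
    peak, clutter = values[-1], values[:-1]
    mean_clutter = sum(clutter) / len(clutter)
    return peak / mean_clutter if mean_clutter > 0 else float("inf")

def classify(plane, threshold=10.0):
    """A high PCR means a sharp match with the authorized reference."""
    if peak_to_clutter_ratio(plane) >= threshold:
        return "authorized"
    return "unauthorized"

# sharp, delta-like correlation peak: authorized boat present
match_plane = [[0.1, 0.2, 0.1], [0.2, 9.0, 0.2], [0.1, 0.2, 0.1]]
# flat correlation plane with no pronounced peak: no authorized match
flat_plane = [[0.2, 0.3, 0.2], [0.3, 0.4, 0.3], [0.2, 0.3, 0.2]]

print(classify(match_plane))  # authorized
print(classify(flat_plane))   # unauthorized
```

The delta-like peak produced by the SPFJTC is what makes such a ratio discriminative: an authorized boat concentrates correlation energy in one sample, while clutter spreads it across the plane.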
Abstract: Rupture of abdominal aortic aneurysms (AAAs) is one of the main causes of death in the world. It is a very complex phenomenon that usually occurs “without previous warning". Currently, the criteria used to assess aneurysm rupture risk (peak diameter and growth rate) cannot be considered reliable indicators. In a first approach, the main geometric parameters of aneurysms have been grouped into five biomechanical factors. These are combined to obtain a dimensionless rupture risk index, RI(t), which has been validated preliminarily with a clinical case and others from the literature. This quantitative indicator is easy to understand, allows estimating aneurysm rupture risk, and is expected to be able to identify at-risk aneurysms whose peak diameter is less than the threshold value. Based on these initial results, a broader study has begun with twelve patients from the Clinic Hospital of Valladolid, Spain, who undergo periodic follow-up examinations.
Abstract: In the present study, the effect of ferrous sulfate concentration and total solids on the bioleaching of heavy metals from sewage sludge has been examined using indigenous iron-oxidizing microorganisms. The experiments on the effect of ferrous sulfate concentration on bioleaching were carried out using ferrous sulfate of different concentrations (5-20 g L-1) to optimize the concentration for maximum bioleaching. A rapid change in the pH and ORP took place in the first 2 days, followed by a slow change till the 16th day in all the sludge samples. A ferrous sulfate concentration of 10 g L-1 was found to be sufficient for metal bioleaching, in the following order: Zn: 69% > Cu: 52% > Cr: 46% > Ni: 45%. Further, bioleaching using 10 g L-1 ferrous sulfate was found to be efficient up to a sludge solids concentration of 20 g L-1. The results of the present study strongly indicate that, using 10 g L-1 ferrous sulfate, indigenous iron-oxidizing microorganisms can bring the pH down to a value needed for significant metal solubilization.
Abstract: Fundamental sensor-motor couplings form the backbone
of most mobile robot control tasks, and often need to be implemented
fast, efficiently and nevertheless reliably. Machine learning
techniques are therefore often used to obtain the desired sensor-motor
competences.
In this paper we present an alternative to established machine
learning methods such as artificial neural networks which is very
fast, easy to implement, and has the distinct advantage that it
generates transparent, analysable sensor-motor couplings: system
identification through nonlinear polynomial mapping.
This work, which is part of the RobotMODIC project at the
universities of Essex and Sheffield, aims to develop a theoretical understanding
of the interaction between the robot and its environment.
One of the purposes of this research is to enable the principled design
of robot control programs.
As a first step towards this aim we model the behaviour of the
robot, as this emerges from its interaction with the environment, with
the NARMAX modelling method (Nonlinear, Auto-Regressive, Moving
Average models with eXogenous inputs). This method produces
explicit polynomial functions that can be subsequently analysed using
established mathematical methods.
In this paper we demonstrate the fidelity of the obtained NARMAX
models in the challenging task of robot route learning; we present a
set of experiments in which a Magellan Pro mobile robot was taught
to follow four different routes, always using the same mechanism to
obtain the required control law.
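The idea of identifying a transparent, linear-in-parameters model by least squares can be sketched on a deliberately tiny example. This is not the RobotMODIC NARMAX implementation: the regressor set below (a constant, one lagged output, one lagged input) and the synthetic data are assumptions; NARMAX extends the same estimation to higher-order polynomial and moving-average terms.

```python
# Minimal sketch of fitting a linear-in-parameters ARX-style model
# y(t) = a0 + a1*y(t-1) + a2*u(t-1) by least squares, the estimation
# idea that NARMAX extends with polynomial regressors. Data and the
# tiny regressor set are hypothetical.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def fit_arx(u, y):
    """Least-squares estimate of [a0, a1, a2] via the normal equations."""
    rows = [[1.0, y[t - 1], u[t - 1]] for t in range(1, len(y))]
    target = y[1:]
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    Xty = [sum(r[i] * t for r, t in zip(rows, target)) for i in range(3)]
    return solve(XtX, Xty)

# synthetic, noiseless data from y(t) = 1 + 0.5*y(t-1) + 2*u(t-1)
u = [((7 * k) % 5) / 5.0 for k in range(50)]  # deterministic pseudo-input
y = [0.0]
for t in range(1, 50):
    y.append(1 + 0.5 * y[t - 1] + 2 * u[t - 1])

a0, a1, a2 = fit_arx(u, y)
print(round(a0, 3), round(a1, 3), round(a2, 3))  # recovers 1.0 0.5 2.0
```

The recovered coefficients are an explicit, inspectable control law, which is exactly the transparency advantage the abstract claims over black-box neural networks.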
Abstract: The objective of this paper is to design a solar
thermal engine for the orbital control of space vehicles and for
electricity generation. A computational model is developed for the
prediction of the solar thermal engine performance for different
design parameters and conditions in order to enhance the engine
efficiency. The engine is divided into two main subsystems. The first
is the concentrator dish, which receives solar energy from the sun and
reflects it to the cavity receiver. The second is the cavity receiver,
which receives the heat flux reflected from the concentrator and
transfers heat to the fluid passing over it. Other subsystems depend
on the application required of the engine. For thrust applications, a
nozzle is introduced to the system for the fluid to expand and produce
thrust. Hydrogen is preferred as the working fluid in the thruster
application.
The model developed is used to determine the thrust for a
concentrator dish 4 meters in diameter (providing 10 kW of energy),
focusing solar energy onto a cavity receiver with a 10 cm aperture
diameter. The cavity receiver outer length is 50 cm and the internal
cavity is 47 cm in length. The suggested design material of the
internal cavity is tungsten, to withstand the high temperature. The
thermal model and analysis show that the hydrogen temperature at the
plenum reaches 2000 K after about 250 seconds of hot-start operation
at a flow rate of 0.1 g/s. Using the solar thermal engine as an
electricity generation
device on earth is also discussed. In this case a compressor and
turbine are used to convert the heat gained by the working fluid (air)
into mechanical power. This mechanical power can be converted into
electrical power by using a generator.
Abstract: Yeast cells live in a constantly changing environment that requires the continuous adaptation of their genomic program in order to sustain their homeostasis, survive and proliferate. Due to the advancement of high-throughput technologies, there is currently a large amount of data, such as gene expression, gene deletion and protein-protein interactions, for S. cerevisiae under various environmental conditions. Mining these datasets requires efficient computational methods capable of integrating different types of data, identifying inter-relations between different components and inferring functional groups or 'modules' that shape intracellular processes. This study uses computational methods to delineate some of the mechanisms used by yeast cells to respond to environmental changes. The GRAM algorithm is first used to integrate gene expression data and ChIP-chip data in order to find modules of co-expressed and co-regulated genes as well as the transcription factors (TFs) that regulate these modules. Since transcription factors are themselves transcriptionally regulated, a three-layer regulatory cascade consisting of the TF-regulators, the TFs and the regulated modules is subsequently considered. This three-layer cascade is then modeled quantitatively using artificial neural networks (ANNs), where the input layer corresponds to the expression of the upstream transcription factors (TF-regulators) and the output layer corresponds to the expression of genes within each module. This work shows that (a) the expression of at least 33 genes over time and for different stress conditions is well predicted by the expression of the top-layer transcription factors, including cases in which the effect of upstream regulators is shifted in time, and (b) at least 6 novel regulatory interactions are identified that were not previously associated with stress-induced changes in gene expression.
These findings suggest that the combination of gene expression and protein-DNA interaction data with artificial neural networks can successfully model biological pathways and capture quantitative dependencies between distant regulators and downstream genes.
Abstract: Many natural language expressions are ambiguous and need
to draw on other sources of information to be interpreted. Whether the
word تعاون is to be interpreted as a noun or a verb depends on the
presence of contextual cues. To interpret words we need to be able to
discriminate between different usages. This paper proposes a hybrid of
rule-based and machine learning methods for tagging Arabic words. A
particularity of Arabic is that a word may be composed of a stem plus
affixes and clitics, so a small number of rules dominates the
performance (affixes include inflexional markers for tense, gender and
number; clitics include some prepositions, conjunctions and others).
Tagging is closely related to the notion of word class used in syntax.
The method is based firstly on rules (that consider the post-position,
the ending of a word, and patterns); anomalies are then corrected by
adopting a memory-based learning (MBL) method. Memory-based learning
is an efficient method for integrating various sources of information
and for handling exceptional data in natural language processing
tasks. Secondly, the exceptional cases of the rules are checked, and
more information is made available to the learner for treating those
exceptional cases. To evaluate the proposed method, a number of
experiments were run to assess the contribution of the various sources
of information to learning.
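The rules-first, memory-second architecture can be sketched as follows. Everything concrete here is hypothetical: the transliterated words, the suffix rules, the tag names, and the 1-nearest-neighbour similarity stand in for the Arabic affix/clitic rules and the MBL component the abstract describes.

```python
# Minimal sketch of a hybrid tagger: a memory of stored exceptional
# words (looked up by shared-suffix similarity, a crude stand-in for
# memory-based learning) overrides simple suffix rules. All words,
# rules, and tags below are hypothetical illustrations.

RULES = [("on", "NOUN_PLURAL"),      # e.g. masculine sound plural
         ("at", "NOUN_PLURAL_FEM"),  # e.g. feminine sound plural
         ("a", "VERB_PAST")]         # e.g. perfective verb ending

MEMORY = {"maa": "PARTICLE"}  # stored exceptional cases

def suffix_overlap(a, b):
    """Length of the shared character suffix of two words."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def tag(word):
    # exception memory first: a stored word whose full form matches
    # the end of the input overrides the general rules
    best = max(MEMORY, key=lambda w: suffix_overlap(w, word))
    if suffix_overlap(best, word) >= len(best):
        return MEMORY[best]
    # otherwise fall back to the ordered suffix rules
    for suffix, t in RULES:
        if word.endswith(suffix):
            return t
    return "UNKNOWN"

print(tag("kataba"), tag("muallimon"), tag("maa"))
```

Real MBL systems (e.g. k-NN over feature vectors of context, affixes, and patterns) generalize this lookup; the sketch only shows how memory-stored exceptions correct the anomalies the rules produce.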
Abstract: The overall objective of this research is a strain
improvement technology for efficient pectinase production. A novel
cell cultivation technology based on the immobilization of fungal
cells has been studied in long-term continuous fermentations.
Immobilization was achieved using a new carrier material for
adsorption of the immobilized cultures, which was used for the first
time for the immobilization of microorganisms. The effects of various
conditions of nitrogen and carbon nutrition on the biosynthesis of
pectolytic enzymes in the Aspergillus awamori 1-8 strain were studied.
The proposed cultivation technology, along with optimization of media
components for pectinase overproduction, increased pectinase
productivity in Aspergillus awamori 1-8 by 7 to 8 times. The proposed
technology can be applied successfully to the production of major
industrial enzymes such as α-amylase, protease, collagenase, etc.
Abstract: In-place sorting algorithms play an important role in many fields such as very large database systems, data warehouses, data mining, etc. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the input unsorted array in place, resulting in segments that are ordered relative to each other but whose elements are yet to be sorted. This phase requires linear time, while, in the second phase, the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. For an array of size n, the algorithm performs, in the worst case, O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and the required number of moves are presented, along with the auxiliary storage requirements of the proposed algorithm.
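The two-phase structure can be illustrated as follows. This is only a sketch of the general idea, not the paper's algorithm (whose partitioning and segment-sorting details the abstract does not specify): phase 1 here is a single in-place partition around an assumed pivot, and phase 2 sorts each resulting segment with in-place insertion sort.

```python
# Illustrative sketch of a two-phase in-place sort: phase 1 rearranges
# the array into segments ordered relative to each other; phase 2
# sorts each segment in place with O(1) auxiliary storage. The pivot
# choice and segment count are assumptions for the sketch only.

def partition(a, lo, hi, pivot):
    """In-place partition of a[lo:hi] into (< pivot | >= pivot)."""
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    return i  # split point: every a[<i] is less than every a[>=i] pivot-wise

def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place using O(1) auxiliary storage."""
    for i in range(lo + 1, hi):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def two_phase_sort(a):
    if not a:
        return a
    # phase 1 (linear time): one pass yields two relatively ordered segments
    split = partition(a, 0, len(a), sum(a) / len(a))
    # phase 2: each segment is sorted independently, in place
    insertion_sort(a, 0, split)
    insertion_sort(a, split, len(a))
    return a

print(two_phase_sort([5, 1, 9, 3, 7, 2, 8]))  # [1, 2, 3, 5, 7, 8, 9]
```

Since phase 1 guarantees every element of a segment is no greater than every element of later segments, concatenating the independently sorted segments yields a globally sorted array without any merging step.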
Abstract: Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communications, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns – specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on an assembly of components available on a local area network or on the net. These components must be localized and identified in terms of available services and communication protocols before any request. The first part of the article introduces the base concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction, and the last section shows how different models can communicate among themselves.
Abstract: The purpose of this study is to investigate the effects
of the modality principle in instructional software on first grade
pupils' achievement in the learning of the Arabic Language. Two modes
of instructional software were systematically designed and
developed, audio with images (AI), and text with images (TI). The
quasi-experimental design was used in the study. The sample
consisted of 123 male and female pupils from IRBED Education
Directorate, Jordan. The pupils were randomly assigned to any one of
the two modes. The independent variable comprised the two modes
of the instructional software, the students' achievement levels in the
Arabic Language class, and gender. The dependent variable was the
achievement of the pupils in the Arabic Language test. The theoretical
framework of this study was based on Mayer's Cognitive Theory of
Multimedia Learning. Four hypotheses were postulated and tested.
Analysis of variance (ANOVA) showed that pupils using the (AI) mode
performed significantly better than those using the (TI) mode. This
study concluded that the audio-with-images mode was a more effective
aid to learning than the text-with-images mode.