Abstract: Today, the preferences and participation of the TD groups, such as the elderly and the disabled, are still lacking in the decision-making of transportation planning, and their reactions to certain types of policies are not well known. Thus, a clear methodology is needed. This study aimed to develop a method for extracting the preferences of the disabled for use in the policy-making stage, one that can also guide future estimations. The method combines cluster analysis and data filtering, using data from the city of Arao (Japan). The process is as follows: the TD group is defined with the cluster analysis tool; their travel preferences are extracted from the household surveys in tabular form by policy variable-impact pairs, by zones and by trip purposes; and the final outcome is the preference probabilities of the disabled. The preferences vary by trip purpose: for work trips, accessibility and transit system quality policies are preferred, with the accompanying impacts of modal shifts towards public transport use, decreasing travel costs and an increased trip rate; for social trips, the same accessibility and transit system policies lead to the same mode-shift impact, while the travel quality policy area leads to a trip rate increase. These results identify the policies to focus on and can be used for scenario generation in models, or for any other planning purpose, as a decision support tool.
Abstract: A topologically oriented neural network is very
efficient for real-time path planning for a mobile robot in changing
environments. When a recurrent neural network is used for this
purpose, combining the partial differential equation of heat transfer
with the distributed potential concept of the network, the
obstacle-avoidance problem of trajectory planning for a moving robot
can be efficiently solved. The corresponding network represents the
state variables and the topology of the robot's
working space. In this paper two approaches to the solution of the
problem are proposed. The first approach relies on the attractive
potential distributed around the moving target, which acts as a unique
local extremum in the net, with the gradient of the state variables
directing the current flow toward the source of the potential heat. The
second approach considers two potential sources, one attractive and
one repulsive, to decrease the time of potential distribution. Computer
simulations have been carried out to evaluate the performance of
the proposed approaches.
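The diffusion-based idea in this abstract can be illustrated numerically. The following is a minimal sketch, not the authors' network: the target is clamped as a heat source on a small occupancy grid, the potential spreads by a discrete heat-diffusion step, and the robot follows the steepest ascent of the resulting field. Grid size, iteration count and obstacle encoding are all assumptions of the example.

```python
# Minimal sketch of potential-diffusion path planning on a 2D grid.
# The target acts as a clamped heat source; obstacle cells never
# accumulate potential. The robot follows the steepest ascent.

def plan_path(grid, start, target, iters=500):
    """grid: 2D list, 1 = obstacle, 0 = free. Returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    u = [[0.0] * cols for _ in range(rows)]
    for _ in range(iters):
        nxt = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1:
                    continue  # obstacles stay at zero potential
                nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                vals = [u[i][j] for i, j in nbrs
                        if 0 <= i < rows and 0 <= j < cols and grid[i][j] == 0]
                nxt[r][c] = sum(vals) / 4.0  # discrete heat-diffusion step
        nxt[target[0]][target[1]] = 1.0      # clamp the source potential
        u = nxt
    # Gradient following: step to the neighbour with the highest potential.
    path, cur = [start], start
    while cur != target:
        r, c = cur
        nbrs = [(i, j) for i, j in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if 0 <= i < rows and 0 <= j < cols and grid[i][j] == 0]
        best = max(nbrs, key=lambda p: u[p[0]][p[1]])
        if u[best[0]][best[1]] <= u[r][c] and best != target:
            break  # flat region: diffusion has not converged here
        path.append(best)
        cur = best
    return path
```

On an empty grid the ascent recovers a shortest 4-connected path; an obstacle cell simply stays at zero potential, so the gradient bends the path around it.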
Abstract: The evolutionary design of electronic circuits, or
evolvable hardware, is a discipline that allows the user to
automatically obtain the desired circuit design. The circuit
configuration is under the control of evolutionary algorithms. Several
researchers have used evolvable hardware to design electrical
circuits. Every time a particular algorithm is selected to carry
out the evolution, all its parameters, such as mutation rate,
population size and selection mechanism, must be tuned in
order to achieve the best results during the evolution process. This
paper investigates the abilities of evolution strategy to evolve digital
logic circuits based on programmable logic array structures when
different mutation rates are used. Several mutation rates (fixed and
variable) are analyzed and compared with each other to outline the
most appropriate choice to be used during the evolution of
combinational logic circuits. The experimental results outlined in this
paper are important, as they could be used by any researcher who
needs to apply evolutionary algorithms to the design of digital logic
circuits.
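The mutation-rate question studied here can be illustrated with a toy (1+4) evolution strategy that evolves a bitstring toward a target truth table. This is a hedged stand-in for the paper's PLA chromosome, and the two rate schedules below (fixed, and a decaying variable rate) are example choices, not the rates analyzed in the paper.

```python
import random

# Toy (1+4) evolution strategy evolving a bitstring "configuration"
# toward a target truth table, a stand-in for a PLA chromosome.
# The mutation-rate schedule rate_fn is the parameter under study.

def evolve(target, rate_fn, generations=2000, offspring=4, seed=0):
    rng = random.Random(seed)
    n = len(target)
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    best = fitness(parent)
    for g in range(generations):
        if best == n:
            return g, parent              # perfect circuit found
        rate = rate_fn(g, generations)
        for _ in range(offspring):
            # flip each bit independently with the current mutation rate
            child = [b ^ (rng.random() < rate) for b in parent]
            f = fitness(child)
            if f >= best:                 # (1+lambda): elitist, ties allowed
                parent, best = child, f
    return generations, parent

fixed = lambda g, G: 0.05                          # fixed mutation rate
decaying = lambda g, G: 0.2 * (1 - g / G) + 0.01   # variable (decaying) rate
```

Comparing the generation counts returned for the two schedules mirrors, in miniature, the fixed-versus-variable comparison the paper performs.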
Abstract: To provide a high level of expertise, the computer-aided
design of mechanical systems involves specific activities focused on
processing two types of information: knowledge and data. Expert rule-based
knowledge generally processes qualitative information and
involves searching for proper solutions and combining them into a
synthetic variant. Data processing is based on computational models
and is supposed to be inter-related with the reasoning in the knowledge
processing. In this paper an Intelligent Integrated System is proposed
with the objective of choosing the adequate material. The software is
developed in the Prolog-based Flex software and takes into account various
constraints that appear in the accurate operation of gears.
Abstract: In this paper we compare the accuracy of data mining
methods for classifying students in order to predict students' class
grades. Such predictions are useful for identifying weak
students and assisting management to take remedial measures at early
stages, so as to produce excellent graduates who finish with at least a
second-class upper degree. We first examine the accuracy of single classifiers on
our data set, choose the best one, and then ensemble it with a
weak classifier in a simple voting method. The results presented
show that combining different classifiers outperformed single
classifiers for predicting student performance.
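The simple voting step described above can be sketched in a few lines. This assumes the classifiers are already trained and only their per-student predictions are combined; the grade labels and predictions below are invented for illustration.

```python
from collections import Counter

# Simple voting: per-student predictions from a strong classifier and
# one or more weak classifiers are combined by majority vote.

def majority_vote(*prediction_lists):
    """Each argument is one classifier's predicted grade class per student."""
    combined = []
    for preds in zip(*prediction_lists):
        votes = Counter(preds)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Illustrative predictions for five students (grade classes are made up):
strong = ["first", "second-upper", "fail", "second-upper", "first"]
weak_a = ["first", "second-upper", "second-upper", "fail", "first"]
weak_b = ["second-upper", "second-upper", "fail", "second-upper", "fail"]
print(majority_vote(strong, weak_a, weak_b))
```

With an odd number of voters a plain majority suffices; with two voters a tie-break rule (for example, trusting the strong classifier) would be needed.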
Abstract: An evolutionary method whose selection and recombination
operations are based on generalization error-bounds of
support vector machine (SVM) can select a subset of potentially
informative genes for SVM classifier very efficiently [7]. In this
paper, we will use the derivative of error-bound (first-order criteria)
to select and recombine gene features in the evolutionary process,
and compare the performance of the derivative of error-bound with
the error-bound itself (zero-order) in the evolutionary process. We
also investigate several error-bounds and their derivatives to compare
the performance, and find the best criteria for gene selection
and classification. We use 7 cancer-related human gene expression
datasets to evaluate the performance of the zero-order and first-order
criteria of error-bounds. Though both criteria have the same strategy
in theoretically, experimental results demonstrate the best criterion
for microarray gene expression data.
Abstract: The absorptive characteristics of polyaniline synthesized
in a mixture of water and acetonitrile in a 50/50 volume ratio were
studied. The synthesized polyaniline, in powder form, was used as an
adsorbent to remove toxic hexavalent chromium from aqueous
solutions. Experiments were conducted in batch mode with different
variables such as agitation time, solution pH and initial concentration
of hexavalent chromium. The removal mechanism is a combination of
surface adsorption and reduction. The equilibrium time for removal
of Cr(T) and Cr(VI) was about 2 and 10 minutes, respectively. The
optimum pH for total chromium removal was pH 7, and
maximum hexavalent chromium removal took place under acidic
conditions at pH 3. Investigation of the isothermal characteristics showed
that the equilibrium adsorption data fitted both the Freundlich and
Langmuir isotherms. The maximum adsorption of chromium was
calculated to be 36.1 mg/g for polyaniline.
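For reference, the Langmuir isotherm the data were fitted to has the standard form q = q_max * b * C / (1 + b * C). The sketch below uses the reported q_max of 36.1 mg/g, while the affinity constant b and the concentrations are illustrative assumptions, not values from the study.

```python
# Langmuir isotherm in its standard form. q_max = 36.1 mg/g is taken
# from the abstract; b and the concentrations below are illustrative.

def langmuir(c_eq, q_max=36.1, b=0.5):
    """Equilibrium adsorption (mg/g) at equilibrium concentration c_eq (mg/l)."""
    return q_max * b * c_eq / (1.0 + b * c_eq)

for c in (1.0, 10.0, 100.0):
    print(c, langmuir(c))  # adsorption approaches q_max at high concentration
```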
Abstract: The main goal of the study is to analyze all relevant properties of electro-hydraulic systems and, based on that, to make a proper choice of the neural network control strategy that may be used for the control of the mechatronic system. A combination of electronic and hydraulic systems is widely used since it combines the advantages of both. Hydraulic systems are widespread because of properties such as accuracy, flexibility, a high horsepower-to-weight ratio, fast starting, stopping and reversal with smoothness and precision, and simplicity of operation. On the other hand, the modern control of hydraulic systems is based on control of the current fed to the inductive solenoid that controls the position of the hydraulic valve. Since this current may be easily handled by a PWM (Pulse Width Modulation) signal with a proper frequency, the combination of electrical and hydraulic systems has become very fruitful and usable in specific areas such as the aircraft and military industries. The study shows and discusses the experimental results obtained with the neural network control strategy using MATLAB and SIMULINK [1]. Finally, special attention is paid to the possibility of neuro-controller design, its application to the control of electro-hydraulic systems, and comparison with other kinds of control.
Abstract: MRAM technology provides a combination of fast
access time, non-volatility, data retention and endurance. While
growing interest is devoted to two-terminal Magnetic Tunnel Junctions
(MTJs) based on Spin-Transfer Torque (STT) switching as a
potential candidate for a universal memory, their reliability is
dramatically decreased because of the shared writing/reading path.
Three-terminal MTJ based on Spin-Orbit Torque (SOT) approach
revitalizes the hope of an ideal MRAM. It can overcome the
reliability barrier encountered in current two-terminal MTJs by
separating the reading and the writing path. In this paper, we study
two possible writing schemes for the SOT-MTJ device based on
recently fabricated samples. While the first is based on precessional
switching, the second requires the presence of a permanent magnetic
field. Based on an accurate Verilog-A model, we simulate the two
writing techniques and we highlight advantages and drawbacks of
each one. Using the second technique, pioneering logic circuits based
on the three-terminal architecture of the SOT-MTJ described in this
work are under development with preliminary attractive results.
Abstract: Oxidative stress is considered to be the cause for onset
and the progression of type 2 diabetes mellitus (T2DM) and
complications including neuropathy. It is a deleterious process that
can be an important mediator of damage to cell structures: protein,
lipids and DNA. Data suggest that in patients with diabetes and
diabetic neuropathy DNA repair is impaired, which prevents effective
removal of lesions. Objective: The aim of our study was to evaluate
the association of the hOGG1 (326 Ser/Cys) and XRCC1 (194
Arg/Trp, 399 Arg/Gln) gene polymorphisms whose protein is
involved in the BER pathway with DNA repair efficiency in patients
with diabetes type 2 and diabetic neuropathy compared to the healthy
subjects. Genotypes were determined by PCR-RFLP analysis in 385
subjects, including 117 with type 2 diabetes, 56 with diabetic
neuropathy and 212 with normal glucose metabolism. The
polymorphisms studied include codon 326 of hOGG1 and 194, 399
of XRCC1 in the base excision repair (BER) genes. Comet assay was
carried out using peripheral blood lymphocytes from the patients and
controls. This test enabled the evaluation of DNA damage in cells
exposed to hydrogen peroxide alone and in the combination with the
endonuclease III (Nth). The results of the polymorphism analysis
were statistically examined by calculating the odds ratios (OR) and
their 95% confidence intervals (95% CI) using the χ2 test. Our data
indicate that patients with diabetes mellitus type 2 (including those
with neuropathy) had higher frequencies of the XRCC1 399Arg/Gln
polymorphism in homozygote (GG) (OR: 1.85 [95% CI: 1.07-3.22],
P=0.3) and also increased frequency of 399Gln (G) allele (OR: 1.38
[95% CI: 1.03-1.83], P=0.3). No relation was found between the other
polymorphisms and an increased risk of diabetes or diabetic neuropathy. In T2DM
patients complicated by neuropathy, there was less efficient repair of
oxidative DNA damage induced by hydrogen peroxide in both the
presence and absence of the Nth enzyme. The results of our study
suggest that the XRCC1 399 Arg/Gln polymorphism is a significant
risk factor of T2DM in the Polish population. The data obtained suggest
that the decreased efficiency of DNA repair in cells from patients with
diabetes and neuropathy may be associated with oxidative stress.
Additionally, patients with neuropathy are characterized by even
greater sensitivity to oxidative damage than patients with diabetes,
which suggests participation of free radicals in the pathogenesis of
neuropathy.
Abstract: Yeast cells live in a constantly changing environment that requires the continuous adaptation of their genomic program in order to sustain their homeostasis, survive and proliferate. Due to the advancement of high-throughput technologies, there is currently a large amount of data, such as gene expression, gene deletion and protein-protein interactions, for S. cerevisiae under various environmental conditions. Mining these datasets requires efficient computational methods capable of integrating different types of data, identifying inter-relations between different components and inferring functional groups or 'modules' that shape intracellular processes. This study uses computational methods to delineate some of the mechanisms used by yeast cells to respond to environmental changes. The GRAM algorithm is first used to integrate gene expression data and ChIP-chip data in order to find modules of co-expressed and co-regulated genes as well as the transcription factors (TFs) that regulate these modules. Since transcription factors are themselves transcriptionally regulated, a three-layer regulatory cascade consisting of the TF-regulators, the TFs and the regulated modules is subsequently considered. This three-layer cascade is then modeled quantitatively using artificial neural networks (ANNs), where the input layer corresponds to the expression of the upstream transcription factors (TF-regulators) and the output layer corresponds to the expression of genes within each module. This work shows that (a) the expression of at least 33 genes over time and for different stress conditions is well predicted by the expression of the top-layer transcription factors, including cases in which the effect of upstream regulators is shifted in time, and (b) it identifies at least 6 novel regulatory interactions that were not previously associated with stress-induced changes in gene expression.
These findings suggest that the combination of gene expression and protein-DNA interaction data with artificial neural networks can successfully model biological pathways and capture quantitative dependencies between distant regulators and downstream genes.
Abstract: In blended learning environments, the Internet can be combined with other technologies. The aim of this research was to design, introduce and validate a model to support synchronous and asynchronous activities by managing content domains in an Adaptive Hypermedia System (AHS). The application is based on information recovery techniques, clustering algorithms and adaptation rules to adjust the user's model to contents and objects of study. This system was applied to blended learning in higher education. The research strategy used was the case study method. Empirical studies were carried out on courses at two universities to validate the model. The results of this research show that the model had a positive effect on the learning process. The students indicated that the synchronous and asynchronous scenario is a good option, as it involves a combination of work with the lecturer and the AHS. In addition, they gave positive ratings to the system and stated that the contents were adapted to each user profile.
Abstract: In this paper the strength of adhesive joints under
tension and bending is discussed on the basis of the intensity of
singular stress, obtained by application of the FEM. A useful method is
presented, focusing on the stress at the edge of the interface
between the adhesive and the adherend obtained by FEM. After
analyzing the adhesive joint strength for all material
combinations, it is found that thin adhesive layers are desirable
for improving the interface strength, because the intensity of
singular stress decreases with decreasing adhesive thickness.
Abstract: This paper presents a study of the Taguchi design
application to optimize surface quality in damper inserted end milling
operation. Maintaining good surface quality usually involves
additional manufacturing cost or loss of productivity. The Taguchi
design is an efficient and effective experimental method in which a
response variable can be optimized, given various factors, using
fewer resources than a factorial design. This study included spindle
speed, feed rate, and depth of cut as control factors; different
tools of the same specification were used, which introduced tool-condition
and dimensional variability. An L9(3^4) orthogonal array was used;
ANOVA analyses were carried out to identify the significant factors
affecting surface roughness, and the optimal cutting combination was
determined by seeking the best surface roughness (response) and
signal-to-noise ratio. Finally, confirmation tests verified that the
Taguchi design was successful in optimizing milling parameters for
surface roughness.
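The smaller-is-better signal-to-noise ratio used in this kind of Taguchi analysis is S/N = -10 * log10((1/n) * sum(y_i^2)). A minimal computation follows, with illustrative roughness replicates rather than the study's measurements.

```python
import math

# Taguchi smaller-is-better S/N ratio: surface roughness should be
# minimized, so a larger S/N value indicates a better run.

def sn_smaller_is_better(values):
    return -10.0 * math.log10(sum(y * y for y in values) / len(values))

# Illustrative Ra replicates (um) for two factor combinations:
run_a = [0.80, 0.85, 0.82]
run_b = [1.40, 1.55, 1.48]
print(sn_smaller_is_better(run_a), sn_smaller_is_better(run_b))
```

Per factor level, the S/N values are averaged over the orthogonal-array runs containing that level, and the level with the highest mean S/N is selected.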
Abstract: We measured the major and trace element contents
and Rb-Sr isotopic compositions of 12 tektites from the Maoming
area, Guangdong province (south China). All the samples studied are
splash-form tektites which show pitted or grooved surfaces with
schlieren structures on some surfaces. The trace element ratios Ba/Rb
(avg. 4.33), Th/Sm (avg. 2.31), Sm/Sc (avg. 0.44), Th/Sc (avg. 1.01),
La/Sc (avg. 2.86), Th/U (avg. 7.47), Zr/Hf (avg. 46.01) and the rare
earth elements (REE) contents of tektites of this study are similar to the
average upper continental crust. From the chemical composition, it is
suggested that the tektites in this study are derived from a similar parental
terrestrial sedimentary deposit, which may be related to post-Archean
upper crustal rocks. The tektites from the Maoming area have high
positive εSr(0) values, ranging from 176.9 to 190.5, which indicates that
the parental material for these tektites has Sr isotopic
compositions similar to old terrestrial sedimentary rocks and was not
dominantly derived from recent young sediments (such as soil or
loess). The Sr isotopic data obtained in the present study support the
conclusion proposed by Blum et al. (1992)[1] that the depositional age
of sedimentary target materials is close to 170 Ma (Jurassic). Mixing
calculations based on the model proposed by Ho and Chen (1996)[2]
for various amounts and combinations of target rocks indicate that the
best fit for tektites from the Maoming area is a mixture of 40% shale,
30% greywacke and 30% quartzite.
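The mixing calculation can be sketched as a mass-weighted average of end-member compositions. The oxide values below are placeholders, not the actual end-member data of Ho and Chen (1996); only the 40/30/30 proportions come from this abstract.

```python
# Sketch of a target-rock mixing calculation: the modeled composition
# is a mass-weighted average of end-member compositions. End-member
# oxide values here are placeholders, not data from the cited model.

def mix(end_members, fractions):
    """end_members: {rock: {element: wt%}}; fractions: {rock: mass fraction}."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    elements = next(iter(end_members.values())).keys()
    return {el: sum(fractions[r] * end_members[r][el] for r in end_members)
            for el in elements}

rocks = {
    "shale":     {"SiO2": 62.0, "Al2O3": 18.0},
    "greywacke": {"SiO2": 70.0, "Al2O3": 13.0},
    "quartzite": {"SiO2": 95.0, "Al2O3": 2.0},
}
best_fit = mix(rocks, {"shale": 0.4, "greywacke": 0.3, "quartzite": 0.3})
print(best_fit)
```

In the actual model, candidate proportion sets are scanned and the mixture whose computed composition best matches the measured tektite composition is reported as the best fit.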
Abstract: A combination of photosynthetic bacteria along with
anaerobic acidogenic bacteria is an ideal option for efficient
hydrogen production. In the present study, the optimum
concentration of substrates for the growth of Rhodobacter
sphaeroides was found by response surface methodology. The
optimum combination of three individual fatty acids was determined
by a Box-Behnken design. Increasing the volatile fatty acid concentration
decreased growth. The combination of sodium acetate and sodium
propionate was most significant for the growth of the organism. The
results showed that a maximum biomass concentration of 0.916 g/l
was obtained when the concentrations of acetate, propionate and
butyrate were 0.73 g/l, 0.99 g/l and 0.799 g/l, respectively. The growth
was studied under an optimum concentration of volatile fatty acids
and at a light intensity of 3000 lux, an initial pH of 7 and a temperature
of 35 °C. The maximum biomass concentration of 0.92 g/l was
obtained, which verified the practicability of this optimization.
Abstract: The effects of coatings based on sodium alginate (SA) and carboxymethyl cellulose (CMC) on the color and moisture characteristics of potato round slices were investigated. This is the first time that this combination of polysaccharides has been used as an edible coating; on its own, it performed best as an inhibitor of potato color discoloration during storage for 15 days at 4 °C. When ascorbic acid (AA) and green tea (GT) were added to the above edible coating, its effects on potato round slices changed. The mixtures of sodium alginate and carboxymethyl cellulose with ascorbic acid or with green tea behave as a potential moisture barrier, extending the shelf life of the potato samples. These data suggest that GT and AA are not only natural antioxidants but also potential inhibitors of dehydration in potatoes.
Abstract: This paper may be considered a combination of both pervasive computing and Differential GPS (Global Positioning System), applied to controlling automatic traffic signals in such a
way as to pre-empt normal signal operation and permit lifesaving vehicles to pass. By knowing of the arrival of a lifesaving vehicle before it reaches
the signal, there is a chance to clear the traffic. The traffic signal
preemption system includes a vehicle equipped with an onboard computer system capable of capturing diagnostic information and
the estimated location of the lifesaving vehicle, using the information provided by a GPS receiver connected to the onboard computer system,
and transmitting this information with a wireless transmitter via a
wireless network. A fleet management system connected to a
wireless receiver is capable of receiving the information transmitted
by the lifesaving vehicle. A computer located at the
intersection uses corrected vehicle position, speed and direction
measurements, in conjunction with previously recorded data defining
approach routes to the intersection, to determine the optimum time to
switch a traffic light controller to preemption mode so that lifesaving
vehicles can pass safely. For the case in which an ambulance needs to
take a U-turn in a heavy traffic area, we suggest a solution: a
computerized median that uses removable linked blocks.
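The preemption-timing decision described above can be illustrated with a simple estimated-time-of-arrival rule. This is a hypothetical sketch, not the paper's algorithm: the clearance time needed to flush standing traffic is an assumed parameter.

```python
# Hypothetical preemption-timing rule: given the vehicle's corrected
# distance to the intersection and its speed, switch the controller
# early enough that the queue clears before arrival. clearance_s is an
# assumed parameter, not a value from the paper.

def seconds_until_preempt(distance_m, speed_mps, clearance_s=12.0):
    """Return how long the controller can safely wait before switching."""
    if speed_mps <= 0:
        return 0.0                      # vehicle stopped: preempt immediately
    eta = distance_m / speed_mps        # estimated time of arrival (s)
    return max(0.0, eta - clearance_s)  # never negative: preempt now if late

print(seconds_until_preempt(600.0, 15.0))  # ETA 40 s, so switch in 28 s
```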
Abstract: This paper presents a speed fuzzy sliding mode
controller for a vector-controlled induction machine (IM) fed by a
PWM voltage source inverter.
The sliding mode based fuzzy control method is developed to
achieve a fast response, good disturbance rejection and
good decoupling.
The problem with sliding mode control is that there is high
frequency switching around the sliding mode surface. The FSMC is
the combination of the robustness of Sliding Mode Control (SMC)
and the smoothness of Fuzzy Logic (FL). To reduce the torque
fluctuations (chattering), the sign function used in the conventional
SMC is substituted with a fuzzy logic algorithm.
The proposed algorithm was simulated by Matlab/Simulink
software and simulation results show that the performance of the
control scheme is robust and the chattering problem is solved.
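The chattering mechanism and its fuzzy remedy can be illustrated with a scalar control law. The saturation-style interpolation below is a simplified stand-in for the paper's fuzzy-logic replacement of the sign function; the gain and boundary-layer width are illustrative.

```python
# Why replacing sign() reduces chattering: near the sliding surface
# s = 0 the discontinuous sign term flips at every step, while a smooth
# interpolation keeps the control continuous. phi is an illustrative
# boundary-layer width, a simplified stand-in for the fuzzy algorithm.

def sign_control(s, gain=1.0):
    return -gain * (1 if s > 0 else -1 if s < 0 else 0)

def smooth_control(s, gain=1.0, phi=0.5):
    # Linear inside |s| < phi, saturating to +-gain outside.
    return -gain * max(-1.0, min(1.0, s / phi))

# Near the surface the two laws differ sharply:
for s in (-0.2, -0.05, 0.05, 0.2):
    print(s, sign_control(s), smooth_control(s))
```

With sign(), a tiny crossing of s = 0 swings the control from -gain to +gain, which appears as torque chattering; the smooth law scales the correction down near the surface.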
Abstract: Keystroke authentication is a new access control system
to identify legitimate users via their typing behavior. In this paper,
machine learning techniques are adapted for keystroke authentication.
Seven learning methods are used to build models to differentiate user
keystroke patterns. The selected classification methods are Decision
Tree, Naive Bayesian, Instance Based Learning, Decision Table, One
Rule, Random Tree and K-star. Among these methods, three
are studied in more detail. The results show that machine learning
is a feasible alternative for keystroke authentication. Compared to
the conventional Nearest Neighbour method used in recent research,
learning methods, especially Decision Tree, can be more accurate. In
addition, the experimental results reveal that 3-grams are more accurate
than 2-grams and 4-grams for feature extraction. Also, combinations
of attributes tend to result in higher accuracy.
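The n-gram features compared above can be sketched as follows: an n-gram is a window of n consecutive keystrokes, and one simple timing feature is the elapsed time across the window. The key sequence and timestamps below are invented for illustration.

```python
# Sketch of n-gram feature extraction from keystroke timings: each
# n-gram (window of n consecutive keys) is mapped to the elapsed time
# from its first to its last key press. Timestamps are illustrative.

def ngram_durations(keys, times, n=3):
    """Return {key n-gram: elapsed time (ms) across the window}."""
    feats = {}
    for i in range(len(keys) - n + 1):
        gram = "".join(keys[i:i + n])
        feats[gram] = times[i + n - 1] - times[i]
    return feats

keys  = ["p", "a", "s", "s"]
times = [0, 140, 310, 455]                 # key-press timestamps in ms
print(ngram_durations(keys, times, n=3))   # {'pas': 310, 'ass': 315}
```

Varying n reproduces the 2-gram versus 3-gram versus 4-gram comparison: larger windows give richer context but fewer training samples per user.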