Abstract: This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. One possible way to reorder the words is to generate all permutations; the problem is that for a sentence of N words the number of permutations is N!. The novelty of this method lies in the use of an efficient confusion matrix technique for reordering the words, designed to reduce the search space of permuted sentences. The reduction of the search space is achieved using the statistical inference of N-grams. The results of this technique are very promising and show that the number of permuted sentences can be reduced by 98.16%. For experimental purposes a test set of TOEFL sentences was used, and the results show that more than 95% of the sentences can be repaired using the proposed method.
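The exhaustive N! baseline that the confusion-matrix technique prunes can be sketched as follows; the toy trigram set and the example sentence are illustrative assumptions, not the paper's actual language model:

```python
from itertools import permutations

# Toy trigram "language model": the set of trigrams observed in a corpus.
# A real n-gram model with counts would be used in practice.
KNOWN_TRIGRAMS = {
    ("the", "cat", "sat"),
    ("cat", "sat", "on"),
    ("sat", "on", "the"),
    ("on", "the", "mat"),
}

def trigram_hits(words):
    """Count how many consecutive word triples appear in the model."""
    return sum(
        1 for i in range(len(words) - 2)
        if tuple(words[i:i + 3]) in KNOWN_TRIGRAMS
    )

def best_reordering(words):
    """Exhaustive baseline: pick the permutation with the most trigram hits.
    This is the N! search that the confusion-matrix technique reduces."""
    return max(permutations(words), key=trigram_hits)

print(best_reordering(["sat", "the", "mat", "on", "cat", "the"]))
# -> ('the', 'cat', 'sat', 'on', 'the', 'mat')
```

Even for this 6-word sentence the baseline scores 720 permutations, which is why pruning the search space is essential for realistic sentence lengths.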
Abstract: Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally required in many dynamic systems, such as automobiles and pick-and-place robot manipulators that handle fragile equipment. Nevertheless, many researchers consider either the minimum energy consumption or the minimum-jerk trajectory alone. This paper considers the indirect minimum-jerk method for higher-order differential equations in dynamic optimization and proposes a simple yet very interesting indirect jerk approach to designing time-dependent systems, yielding an alternative optimal solution. Extremal solutions for the indirect jerk cost functions are found using dynamic optimization methods together with numerical approximation. The case considered is the linear equation of a simple system consisting of mass, spring and damping, in which two masses are connected by springs. The boundary conditions are defined with a fixed end time and end point. The higher-order differential equation is solved by Galerkin's weighted-residual method. As a result, the sixth-order differential formulation shows a faster solving time.
Abstract: A novel physico-chemical route to produce few-layer graphene nanoribbons with atomically smooth edges is reported, via acid treatment (H2SO4:HNO3) followed by characteristic thermal shock processes involving extremely cold substances. Samples were studied by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Raman spectroscopy and X-ray photoelectron spectroscopy. This method demonstrates the importance of having open-ended nanotubes for efficient, uniform unzipping along the nanotube axis. The average width of these nanoribbons is approximately 210 nm, and they consist of a few layers, as observed by transmission electron microscopy. The produced nanoribbons exhibit different chiralities, as observed by high resolution transmission electron microscopy. This method is able to provide graphene nanoribbons with atomically smooth edges, which could be used in various applications including sensors, gas adsorption materials, and composite fillers, among others.
Abstract: Axial Flux Permanent Magnet (AFPM) machines require effective cooling due to their high power density. The detrimental effects of overheating, such as degradation of the insulation materials, magnet demagnetization, and increase of Joule losses, are well known. This paper describes the CFD simulations performed on a test rig model of an air-cooled AFPM generator built at Durham University to identify the temperatures and heat transfer coefficient on the stator. The Reynolds-Averaged Navier-Stokes and energy equations are solved, and the flow pattern and heat transfer developing inside the machine are described. The Nusselt number on the stator surfaces has been found. The dependency of the heat transfer on the flow field is described and the temperature field obtained. Tests on an experimental rig are under way in order to validate the CFD results.
Abstract: The aim of this study was to develop a dynamic cardiac phantom for quality control in myocardial scintigraphy. The dynamic heart phantom constructed contained only the left ventricle, made of elastic material (latex) and comprising two cavities: one internal and one external. The data showed a non-significant variation in the values of left ventricular ejection fraction (LVEF) obtained by varying the heart rate. It was also possible to evaluate the ejection fraction (LVEF) using different image acquisition matrices and to perform an intercomparison of LVEF on two different scintillation cameras. The results of the quality control tests were satisfactory, showing that they can be used as parameters in future assessments. The new dynamic heart phantom was demonstrated to be effective for use in LVEF measurements. Therefore, the new heart simulator is useful for the quality control of scintigraphic cameras.
Abstract: This paper presents and discusses a model that allows local segmentation by using statistical information of a given image. It is based on the Chan-Vese model, curve evolution, partial differential equations and the binary level set method. The proposed model uses the piecewise constant approximation of the Chan-Vese model to compute a Signed Pressure Force (SPF) function, which attracts the curve to the true object boundaries. The implemented model is used to extract weld defects from weld radiographic images with the aim of calculating the perimeter and surface of those weld defects; encouraging results are obtained on synthetic and real radiographic images.
Abstract: In this work, biohydrogen production via dark fermentation from alcohol wastewater using an upflow anaerobic sludge blanket (UASB) reactor with a working volume of 4 L was investigated to find the optimum conditions for a maximum hydrogen yield. The system was operated at different COD loading rates (23, 31, 46 and 62 kg/m3·d) at mesophilic temperature (37 °C) and pH 5.5. The seed sludge was pretreated before being fed to the UASB system by boiling at 95 °C for 15 min. When the system was operated at the optimum COD loading rate of 46 kg/m3·d, it provided a hydrogen content of 27%, a hydrogen yield of 125.1 ml H2/g COD removed and 95.1 ml H2/g COD applied, a hydrogen production rate of 18 l/d, a specific hydrogen production rate of 1080 ml H2/g MLVSS·d and 1430 ml H2/L·d, and a COD removal of 24%.
Abstract: To create a solution for a specific problem in machine learning, the solution is constructed from the data or by using a search method. Genetic algorithms are a model of machine learning that can be used to find a near-optimal solution. While the great advantage of genetic algorithms is the fact that they find a solution through evolution, this is also their biggest disadvantage: evolution is inductive, and in nature life does not evolve towards a good solution but evolves away from bad circumstances. This can cause a species to evolve into an evolutionary dead end. In order to reduce the effect of this disadvantage, we propose a new learning tool (criterion) that can be included in the generations of a genetic algorithm to compare the previous population with the current population and then decide whether it is effective to continue with the previous population or the current one. The proposed learning tool is called Keeping Efficient Population (KEP). We applied a GA based on KEP to the production line layout problem; as a result, KEP keeps the evaluation direction increasing and stops any deviation in the evaluation.
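The population-comparison idea can be sketched as follows; the OneMax fitness, the evolution operators, and the average-fitness criterion below are illustrative assumptions standing in for the paper's production-line layout objective and its actual KEP rule:

```python
import random

def fitness(individual):
    """Toy fitness: maximize the sum of bits (OneMax stand-in for the
    production-line layout objective used in the paper)."""
    return sum(individual)

def evolve(population, mutation_rate=0.1):
    """One GA generation: truncation selection, one-point crossover,
    and bit-flip mutation."""
    population = sorted(population, key=fitness, reverse=True)
    parents = population[: len(population) // 2]
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]
        # flip each bit with probability mutation_rate
        child = [bit ^ (random.random() < mutation_rate) for bit in child]
        children.append(child)
    return children

def keep_efficient_population(previous, current):
    """Hypothetical KEP criterion: continue with whichever population
    has the better average fitness, so evaluation never regresses."""
    avg = lambda pop: sum(map(fitness, pop)) / len(pop)
    return current if avg(current) >= avg(previous) else previous

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):
    pop = keep_efficient_population(pop, evolve(pop))
print(max(map(fitness, pop)))  # fitness of the best individual found
```

By construction, the kept population's average fitness is monotonically non-decreasing across generations, which is the behavior the abstract attributes to KEP.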
Abstract: The COM-Poisson distribution is capable of modeling count responses irrespective of their mean-variance relation, and the parameters of this distribution, when fitted to simple cross-sectional data, can be efficiently estimated using the maximum likelihood (ML) method. In the regression setup, however, ML estimation of the parameters of the COM-Poisson based generalized linear model is computationally intensive. In this paper, we propose to use the quasi-likelihood (QL) approach to estimate the effect of the covariates on the COM-Poisson counts and investigate the performance of this method with respect to the ML method. QL estimates are consistent and almost as efficient as ML estimates. The simulation studies show that the efficiency loss in the estimation of all the parameters using the QL approach as compared to the ML approach is quite negligible, whereas the QL approach is computationally less involved than the ML approach.
Abstract: Aggression is a multi-factorial concept and multilevel in nature. Young adolescents are influenced by family, school and community. This paper aims to determine the following: the aggression level among young adolescents, differences in aggression level across schools and year levels, and the correlates of aggression. There were 142 high school students from two different national high schools (Region 3 and the National Capital Region). Convenience sampling was used in this study. The following measures were used: the Aggression Scale, the Parental Support Fighting Scale, the Positive Behavior Scale and the Exposure to Violence and Trauma questionnaire. There was no significant difference in aggression level among the different year levels and schools. The findings of the study suggest that a high level of community violence and low parental support for non-aggressive behavior contribute to the prediction of aggression.
Abstract: Cluster analysis divides data into groups that are meaningful, useful, or both. Analysis of biological data is creating a new generation of epidemiologic, prognostic, diagnostic and treatment modalities. Clustering of protein sequences is one of the current research topics in the field of computer science. Linear relations are valuable in rule discovery for a given data set, such as "if value X goes up 1, value Y will go down 3", etc. Classical linear regression models the linear relation of two sequences perfectly. However, if we need to cluster a large repository of protein sequences into groups where sequences have a strong linear relationship with each other, it is prohibitively expensive to compare sequences one by one. In this paper, we propose a new technique named the General Regression Model Technique Clustering Algorithm (GRMTCA) to handle the problem of linear sequence clustering. GRMTCA gives a measure, GR*, to tell the degree of linearity of multiple sequences without having to compare each pair of them.
Abstract: This study aims to identify processes, current
situations, and issues of recycling systems for four home appliances,
namely, air conditioners, television receivers, refrigerators, and
washing machines, among e-wastes in China and Japan for
understanding and comparison of their characteristics. Based on the results of a literature search, a review of information disclosed online, and a questionnaire survey, the conclusions of the study are as follows:
(1) In Japan, most of the home appliances mentioned above have been collected through home appliance recycling tickets, resulting in the issue of "requiring some effort" in the treatment and recycling stages, and most plants have contracted out their e-waste recycling.
(2) The advantages of the recycling system in Japan include the ease of monitoring concrete data and the thorough environmental friendliness ensured, while its disadvantages include illegal dumping and export. The advantages of the recycling system in China include a high reuse rate, low treatment cost, and less illegal dumping, while its disadvantages include less safe reused products, environmental pollution caused by e-waste treatment, illegal import, and difficulty in obtaining data.
Abstract: In two studies we challenged the well-consolidated
position in regret literature according to which the necessary
condition for the emergence of regret is a bad outcome ensuing from
free decisions. Without free choice, and, consequently, personal
responsibility, other emotions, such as disappointment, but not regret,
are supposed to be elicited. In our opinion, a main source of regret is
being obliged by circumstances out of our control to choose an
undesired option. We tested the hypothesis that regret resulting from
a forced choice is more intense than regret derived from a free choice
and that the outcome affects the latter, not the former. Besides, we
investigated whether two other variables – the perception of the level
of freedom of the choice and the choice justifiability – mediated the
relationships between choice and regret, as well as the other four
emotions we examined: satisfaction, anger toward oneself,
disappointment, anger towards circumstances. The two studies were
based on the scenario methodology and used a 2 x 2 (choice x
outcome) between-subjects design. In the first study the foreseen short-term
effects of the choice were assessed; in the second study the
experienced long-term effects of the choice were assessed. In each
study 160 students of the Second University of Naples participated.
Results largely corroborated our hypotheses. They were discussed in
the light of the main theories on regret and decision making.
Abstract: In this paper, the detection of a fault in Global Positioning System (GPS) measurements is addressed. The class of faults considered is a bias in the GPS pseudorange measurements. This bias is modeled as an unknown constant. The fault could be the result of a receiver fault or a signal fault such as multipath error. A bias bank is constructed based on a set of possible fault hypotheses. Initially, there is an equal probability of occurrence for each of the biases in the bank. Subsequently, as the measurements are processed, the probability of occurrence for each of the biases is sequentially updated. The fault whose probability approaches unity is declared as the current fault in the GPS measurement. The residual formed from the GPS and Inertial Measurement Unit (IMU) measurements is used to update the probability of each fault. Results are presented to show the performance of the proposed algorithm.
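The sequential update of the hypothesis probabilities can be sketched as a simple Bayes step; the Gaussian residual model, the noise standard deviation, and the candidate biases below are illustrative assumptions, not the paper's actual filter:

```python
import math

def update_probabilities(priors, residual, sigma=1.0):
    """One sequential Bayes step over the bank of bias hypotheses.
    Under hypothesis b the residual is assumed Gaussian with mean b
    and standard deviation sigma."""
    likelihood = {
        b: math.exp(-0.5 * ((residual - b) / sigma) ** 2)
        for b in priors
    }
    posterior = {b: priors[b] * likelihood[b] for b in priors}
    total = sum(posterior.values())
    return {b: p / total for b, p in posterior.items()}

# Equal priors over a small bank; residuals generated by a true bias of 3.0.
bank = {0.0: 0.25, 1.0: 0.25, 3.0: 0.25, 5.0: 0.25}
for r in [2.9, 3.2, 3.0, 2.8, 3.1]:
    bank = update_probabilities(bank, r)
best = max(bank, key=bank.get)
print(best, round(bank[best], 3))  # probability of the 3.0 bias approaches 1
```

As more residuals are processed, the probability mass concentrates on the hypothesis closest to the true bias, which is then declared the current fault.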
Abstract: The purpose of this paper is to present the fuzzy contraction properties of the Hutchinson-Barnsley operator on the fuzzy hyperspace with respect to the Hausdorff fuzzy metrics. We also discuss the relationships between the Hausdorff fuzzy metrics on the fuzzy hyperspaces. Our theorems generalize and extend some recent results related to the Hutchinson-Barnsley operator in metric spaces.
Abstract: Along with the advances in medicine, providing medical information to individual patients is becoming more important. In Japan, such information is hardly provided via Braille to blind and partially sighted people. Thus we are researching and developing a Web-based automatic translation program, "eBraille", to translate Japanese text into Japanese Braille. First we analyzed the Japanese transcription rules in order to implement them in our program. We then added medical words to the dictionary of the program to improve its translation accuracy for medical text. Finally we examined the efficacy of statistical learning models (SLMs) for further increasing word segmentation accuracy in braille translation. As a result, eBraille achieved the highest translation accuracy in comparison with other translation programs, improved its accuracy for medical text, and is now used to make hospital brochures in braille for outpatients and inpatients.
Abstract: In this paper, two centrifugal model tests (case 1: raft
foundation, case 2: 2x2 piled raft foundation) were conducted in
order to evaluate the effect of ground subsidence on load sharing
among piles and raft and settlement of raft and piled raft
foundations. For each case, two conditions consisting of undrained
(without groundwater pumping) and drained (with groundwater
pumping) conditions were considered. Vertical loads were applied
to the models after the foundations were completely consolidated by
self-weight at 50g. The results show that load sharing by the piles in
the piled raft foundation (piled load share) for the drained condition
decreases faster than that for the undrained condition. Settlement of
both the raft and piled raft foundations for the drained condition
increases more quickly than that for the undrained condition. In
addition, the settlement of the raft foundation increases more than the
settlement of the piled raft foundation for the drained condition.
Abstract: Virtual environments induce a simulator sickness effect in some users. The purpose of this research is to compare the simulator sickness related to the parallax effect in one-screen and three-screen HoloStageTM systems, measured by the Simulator Sickness Questionnaire (SSQ). The results show that subjects tested with three screens experienced less sickness than with one screen, and that the effect from the Oculomotor (O) component was greater than that from Disorientation (D), which in turn was greater than that from Nausea (N), i.e. O > D > N.
Abstract: Throughout this paper, a relatively new technique, the Tabu search variable selection model, is elaborated, showing how it can be efficiently applied within the financial world whenever researchers come across the selection of a subset of variables from a whole set of descriptive variables under analysis. In the field of financial prediction, researchers often have to select a subset of variables from a larger set to solve different types of problems such as corporate bankruptcy prediction, personal bankruptcy prediction, mortgage and credit scoring, and the Arbitrage Pricing Model (APM). Consequently, to demonstrate how the method operates and to illustrate its usefulness as well as its superiority compared to other commonly used methods, the Tabu search algorithm for variable selection is compared to two main alternative search procedures, namely stepwise regression and the maximum R2 improvement method. The Tabu search is then implemented in finance, where it attempts to predict corporate bankruptcy by selecting the most appropriate financial ratios and thus creating its own prediction score equation. In comparison to other methods, most notably the Altman Z-Score model, the Tabu search model produces a higher success rate in correctly predicting the failure of firms or the continued running of existing entities.
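The variable-selection loop can be sketched as follows; the toy scoring function (standing in for the regression fit criterion on financial ratios), the tenure value, and the single-flip neighbourhood are illustrative assumptions:

```python
def score(subset):
    """Toy stand-in for the fit criterion (e.g. R2 of a regression on the
    selected ratios): variables 0-2 are genuinely informative, every other
    selected variable incurs a small complexity penalty."""
    informative = {0, 1, 2}
    return len(subset & informative) - 0.1 * len(subset - informative)

def tabu_search(n_vars, iters=100, tenure=5):
    """Subset selection by tabu search: flip one variable per move and
    forbid recently flipped variables for `tenure` iterations, so the
    search can escape local optima without cycling."""
    current = set()
    best, best_score = set(current), score(current)
    tabu = {}  # variable -> iteration until which it is tabu
    for it in range(iters):
        moves = [v for v in range(n_vars) if tabu.get(v, -1) < it]

        def flipped(v):
            return current ^ {v}  # add v if absent, drop it if present

        v = max(moves, key=lambda v: score(flipped(v)))
        current = flipped(v)
        tabu[v] = it + tenure
        if score(current) > best_score:
            best, best_score = set(current), score(current)
    return best

print(sorted(tabu_search(10)))  # -> [0, 1, 2]
```

Note that the best subset found so far is remembered separately from the current one, so the tabu mechanism is free to accept worsening moves while the returned answer never regresses.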
Abstract: Raisin Concentrate (RC) is one of the most important products obtained in the raisin processing industry. RC products are now used to make syrups, drinks and confectionery products and are introduced as a natural substitute for sugar in food applications. Iran is one of the biggest raisin exporters in the world, but unfortunately, despite good raw material, no serious effort to extract RC has been made in Iran. Therefore, in this paper, we determined and analyzed the parameters affecting the RC extraction process and then optimized these parameters to design the extraction process for two types of raisin (round and long) produced in the Khorasan region. Two levels of solvent (1:1 and 2:1), three levels of extraction temperature (60°C, 70°C and 80°C), and three levels of concentration temperature (50°C, 60°C and 70°C) were the treatments. Finally, physicochemical characteristics of the obtained concentrate, such as color, viscosity, percentage of reducing sugar and acidity, were measured, and microbial tests (mould and yeast) were performed. The analysis was performed on the basis of a factorial experiment in a completely randomized design (CRD), and Duncan's multiple range test (DMRT) was used for the comparison of the means. Statistical analysis of the results showed that the optimal conditions for concentrate production are obtained with round raisins when the solvent ratio is 2:1, with an extraction temperature of 60°C and a concentration temperature of 50°C. Round raisins are cheaper than long ones and more economical for concentrate production. Furthermore, round raisins have more aroma and undergo less color change with increasing concentration and extraction temperature. Finally, according to these factors, the concentrate of round raisins is recommended.