Abstract: Information on weed distribution within a field is necessary to implement spatially variable herbicide application. Since hand labor is costly, an automated weed control system is an attractive alternative. This paper deals with the development of an algorithm for a real-time weed recognition system, based on the histogram maxima of an image together with a threshold, that is used for weed classification. The algorithm is specifically developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed system has been tested on weeds in the lab, and the tests have shown the system to be very effective in weed identification. Further, the results show very reliable performance on images of weeds taken under varying field conditions. Analysis of the results shows over 95 percent classification accuracy over 140 sample images (broad and narrow), with 70 samples from each category of weeds.
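The broad/narrow decision described above can be sketched in a few lines (a minimal illustration with hypothetical names and thresholds, not the authors' implementation): project the binarized plant pixels onto the image columns and compare the resulting histogram against a fraction of its maximum; broad-leaf weeds yield wide runs of near-maximum columns, while narrow (grass-like) weeds yield only thin peaks.

```python
def classify_weed(binary_image, threshold_ratio=0.5, wide_fraction=0.3):
    # binary_image: list of rows, each a list of 0/1 pixels (1 = plant).
    # Column-wise histogram (vertical projection) of plant pixels.
    cols = len(binary_image[0])
    hist = [sum(row[c] for row in binary_image) for c in range(cols)]
    peak = max(hist)
    thr = threshold_ratio * peak
    # Count columns whose histogram value exceeds the threshold:
    # broad leaves produce wide runs of high columns, grasses thin peaks.
    wide = sum(1 for h in hist if h >= thr)
    return "broad" if wide / cols > wide_fraction else "narrow"
```

The two magic ratios here are illustrative placeholders; in the paper they would be calibrated on the training images.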
Abstract: In this paper, we propose a single-sample-path-based algorithm with state aggregation to optimize the average reward of singularly perturbed Markov reward processes (SPMRPs) with large-scale state spaces. It is assumed that such a reward process depends on a set of parameters. Unlike general Markov chains, SPMRPs have their own hierarchical structure, and our algorithm exploits this structure to reduce the computational load of performance optimization. Moreover, our method can be applied online because it evolves along with the simulated sample path. In contrast to the original algorithms applied to these problems for general MRPs, a new gradient formula for the average-reward performance metric in SPMRPs is introduced, which is proved in the Appendix. Based on these gradients, the schedule of the iteration algorithm, which relies on a single sample path, is presented. A special case in which the parameters only affect the disturbance matrices is then analyzed, and a precise comparison is made between our algorithm and the earlier ones aimed at solving these problems for general Markov reward processes; when applied to SPMRPs, our method converges faster in these cases. Furthermore, to illustrate the practical value of SPMRPs, a simple example of multiprogramming in computer systems is described and simulated, and the physical meaning of SPMRPs in networks of queues is clarified with respect to this practical model.
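For context, the standard average-reward gradient for a general parameterized MRP, which the SPMRP-specific formula proved in the paper's Appendix refines, can be written as follows (standard perturbation-analysis notation; this is the general-MRP baseline, not the paper's new formula):

```latex
% Average reward \eta(\theta) = \pi(\theta) f for transition matrix
% P(\theta), stationary distribution \pi(\theta), and reward vector f:
\frac{d\eta}{d\theta} = \pi(\theta)\,\frac{dP(\theta)}{d\theta}\,g,
\qquad (I - P(\theta) + e\,\pi(\theta))\,g = f
```

where g is the performance potential (bias) vector and e is the all-ones column vector. A single-sample-path method estimates π and g from the simulated trajectory instead of solving these equations directly.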
Abstract: In this study, spatial-temporal speckle correlation techniques have been applied for the first time to the quality evaluation of three different Indian fruits, namely apple, pear, and tomato. The method is based on the analysis of variations in laser light scattered from biological samples. The results showed that the cross-correlation coefficients of biospeckle patterns change according to the freshness of the samples and their storage conditions. The biospeckle activity was determined by means of the cross-correlation functions of the intensity fluctuations. Significant changes in biospeckle activity were observed during the fruits' shelf lives, and it is found that the biospeckle activity decreases with shelf-life storage time. Further, it has been shown that biospeckle activity changes according to the fruits' respiration rates.
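The cross-correlation coefficient between two speckle frames can be computed as a normalized (Pearson) correlation; a minimal sketch, assuming each frame is given as a flat list of pixel intensities (function name is ours): values near 1 indicate little change between frames (low biospeckle activity), while lower values indicate high activity.

```python
def correlation_coefficient(frame_a, frame_b):
    # Pearson correlation between two flattened speckle intensity frames.
    n = len(frame_a)
    ma = sum(frame_a) / n
    mb = sum(frame_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(frame_a, frame_b))
    va = sum((a - ma) ** 2 for a in frame_a)
    vb = sum((b - mb) ** 2 for b in frame_b)
    return cov / (va * vb) ** 0.5
```

Tracking this coefficient between frames separated by a fixed time lag, over the storage period, gives the activity curves described in the abstract.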
Abstract: One of the determinants of a firm's prosperity is the customers' perceived service quality and satisfaction. While service quality is wide in scope and consists of various dimensions, there may be differences in the relative importance of these dimensions in affecting customers' overall satisfaction with service quality. Identifying the relative rank of the different dimensions of service quality is very important in that it can help managers find out which service dimensions have a greater effect on customers' overall satisfaction. Such an insight will consequently lead to more effective resource allocation, which will finally result in higher levels of customer satisfaction. This issue, despite its criticality, has not received enough attention so far. Therefore, using a sample of 240 bank customers in Iran, an artificial neural network is developed to address this gap in the literature. As customers' evaluation of service quality is a subjective process, artificial neural networks, as a brain metaphor, may have the potential to model such a complicated process. Proposing a neural network that is able to predict customers' overall satisfaction with service quality with a promising level of accuracy is the first contribution of this study. In addition, prioritizing the service quality dimensions in affecting customers' overall satisfaction, by using sensitivity analysis of the neural network, is the second important finding of this paper.
Abstract: This study sought to determine whether relationships existed among leisure satisfaction, self-esteem, and spiritual wellness. Four hundred survey instruments were distributed, and 334 effective instruments were returned, for an effective rate of 83.5%. The participants were recruited through purposive sampling of subjects who were at least 60 years of age and retired in Tainan City, Taiwan. Three instruments were used in this research: the Leisure Satisfaction Scale (LSS), the Self-Esteem Scale (SES), and the Spirituality Assessment Scale (SAS). The collected data were analyzed statistically. The findings of this research were as follows: 1. Leisure satisfaction is significantly correlated with spiritual wellness. 2. Leisure satisfaction is significantly correlated with self-esteem. 3. Spiritual wellness is significantly correlated with self-esteem.
Abstract: This paper presents the design of an electronic system for the respiratory signal, including the phases of processing, followed by the transmission and reception of this signal, and finally its display. The processing of this signal is added to that of the ECG and temperature signals implemented last year. Under this scheme, it is proposed that the blood pressure signal also be conditioned in the future on the same final printed circuit.
Abstract: In projects such as waterpower, transportation, and mining, characterizing the rock-mass structure and hidden tectonics in order to estimate the activity of the geological body is very important. Integrating seismic results with drilling and trenching data, a CSAMT survey was carried out at a planned dam site in southwest China to evaluate the stability of a deformation body. 2D and pseudo-3D inversion resistivity results of the CSAMT method were analyzed. The results indicated that CSAMT is an effective method for outlining a deformation body to a depth of several hundred meters, and that the Lung Pan Deformation is stable under natural conditions but of uncertain stability after the future reservoir is impounded. This research presents a good case study of fine surveying and research on complex geological structures and hidden tectonics in an engineering project.
Abstract: A new digital transceiver circuit for asynchronous frame detection is proposed in which both the transmitter and receiver contain all-digital components, thereby avoiding the use of conventional devices like monostable multivibrators with unstable external components such as resistances and capacitances. The proposed receiver circuit, in particular, uses a combinational logic block whose output changes state as soon as the start bit of a new frame is detected; this, in turn, helps in generating an efficient receiver sampling clock. A data latching circuit is also used in the receiver to latch the recovered data bits in each new frame. The proposed receiver structure is also extended from 4-bit information to any general n data bits within a frame, with a common expression for the output of the combinational logic block. Performance of the proposed hardware design is evaluated in terms of time delay, reliability, and robustness in comparison with standard schemes using monostable multivibrators. It is observed from hardware implementation that the proposed circuit achieves almost 33 percent speed-up over a conventional circuit.
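A behavioral software model of the receiver's sampling scheme described above (a sketch under our own assumptions, not the proposed digital hardware): the idle line is high, the first low sample marks the start bit, and each of the n data bits is then sampled at its mid-point.

```python
def receive_frame(samples, n_bits, samples_per_bit):
    # Skip the idle (high) line; the first low sample marks the start bit.
    i = 0
    while i < len(samples) and samples[i] == 1:
        i += 1
    # Sample each data bit at its mid-point, skipping the start bit itself.
    bits = []
    for b in range(1, n_bits + 1):
        mid = i + b * samples_per_bit + samples_per_bit // 2
        bits.append(samples[mid])
    return bits
```

In the hardware design, the same idle-to-start transition drives the combinational logic block that generates the receiver sampling clock.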
Abstract: In this paper, a new technique for fast painting with different colors is presented. The idea of painting relies on applying masks with different colors to the background. Fast painting is achieved by applying these masks in the frequency domain instead of the spatial (time) domain. New colors can be generated automatically as a result of the cross-correlation operation. This idea has previously been applied successfully to faster detection of specific data (faces, objects, patterns, and codes) using neural algorithms. Here, instead of performing cross-correlation between the input data (e.g., an image or a stream of sequential data) and the weights of neural networks, the cross-correlation is performed between the colored masks and the background. Furthermore, this approach is developed to reduce the number of computation steps required by the painting operation. The principle of the divide-and-conquer strategy is applied through background decomposition: each background is divided into small sub-backgrounds, and then each sub-background is processed separately using a single faster painting algorithm. Moreover, the fastest painting is achieved by using parallel processing techniques to paint the resulting sub-backgrounds with the same number of faster painting algorithms. In contrast to using the faster painting algorithm alone, the speed-up ratio increases with the size of the background when the faster painting algorithm is combined with background decomposition. Simulation results show that painting in the frequency domain is faster than painting in the spatial domain.
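The core property exploited above, that applying a mask by cross-correlation in the frequency domain gives the same result as spatial cross-correlation, can be checked on a 1-D toy signal (our sketch; a plain O(n^2) DFT is used here for clarity, whereas the practical speed gain comes from using the FFT):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * j * k / n)
                for k in range(n)) / n for j in range(n)]

def xcorr_spatial(signal, mask):
    # Circular cross-correlation computed directly in the spatial domain.
    n = len(signal)
    return [sum(signal[(i + k) % n] * mask[k] for k in range(n))
            for i in range(n)]

def xcorr_freq(signal, mask):
    # Same correlation via the frequency domain:
    # IDFT(DFT(signal) * conj(DFT(mask))).
    S, M = dft(signal), dft(mask)
    return [v.real for v in idft([s * m.conjugate() for s, m in zip(S, M)])]
```

With an FFT, the frequency-domain route costs O(n log n) per mask against O(n^2) for the direct computation, which is the source of the reported speed-up.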
Abstract: The aim of this work was to investigate the potential of soil microorganisms, the burhead plant, and the combination of the two to remediate monoethylene glycol (MEG), diethylene glycol (DEG), and triethylene glycol (TEG) in synthetic wastewater. The results showed that a system containing both the burhead plant and soil microorganisms had the highest EG-removal efficiency: around 100% of MEG and DEG and 85% of TEG were removed within 15 days of the experiments. The burhead plant alone had higher removal efficiency than the soil microorganisms for MEG and DEG, and the same efficiency for TEG, in the study systems. The removal rate of EGs was related to the molecular weight of the compounds: MEG, the smallest glycol, was removed faster than DEG and TEG by both the burhead plant and the soil microorganisms.
Abstract: Artificial Neural Networks (ANNs) have been used extensively for the classification of heart sounds because of their discriminative training ability and easy implementation. However, an ANN suffers from over-parameterization if the number of nodes is not chosen properly. In such cases, when the dataset contains redundancy, the ANN is trained along with this redundant information, which results in poor validation. A larger network also means more computational expense, and hence higher hardware and time-related costs. Therefore, an optimum neural network design is needed for real-time detection of pathological patterns, if any, in the heart sound signal. The aims of this work are to (i) select a set of input features that are effective for the identification of heart sound signals and (ii) make an optimum selection of nodes in the hidden layer for a more effective ANN structure. Here, we present an optimization technique that involves Singular Value Decomposition (SVD) and QR factorization with column pivoting (QRcp) to optimize an empirically chosen, over-parameterized ANN structure. The input nodes of the ANN structure are optimized by SVD followed by QRcp, while SVD alone is sufficient to prune undesirable hidden nodes. Results are presented for classifying 12 common pathological cases and normal heart sounds.
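The pruning idea can be summarized as follows (our notation, a sketch of the general SVD/QRcp recipe rather than the paper's exact procedure). Let H collect the hidden-layer outputs over m training patterns, one column per hidden node:

```latex
H = U \Sigma V^{T},
\qquad
r = \#\{\, i : \sigma_i / \sigma_1 > \varepsilon \,\}
```

The effective rank r, the number of singular values above a relative tolerance ε, estimates how many hidden nodes carry independent information, and nodes beyond r are pruned. For input-feature selection, QR factorization with column pivoting, AΠ = QR, orders the candidate feature columns of A by decreasing linear independence, and only the leading columns are retained.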
Abstract: A wireless ad hoc network consists of wireless nodes communicating without the need for centralized administration, in which all nodes potentially contribute to the routing process. In this paper, we report the simulation results of four different scenarios for wireless ad hoc networks having thirty nodes. The performance of the proposed networks is evaluated in terms of number of hops per route, delay, and throughput with the help of the OPNET simulator. A channel speed of 1 Mbps and a simulation time of 600 sim-seconds were used for all scenarios, and the DSR routing protocol was used throughout. The throughputs obtained from this analysis of the four scenarios are compared in Figure 3, and the average media access delay at node_20 for two routes and for the four different scenarios is compared in Figures 4 and 5. It is observed that throughput degrades when different hops are followed between the same source and destination: it drops from 1.55 Mbps to 1.43 Mbps, which is around 9.7%, and then drops to 0.48 Mbps, which is around 35%.
Abstract: Cloud Computing (CC) has become one of the most talked-about emerging technologies; it provides powerful computing and large storage environments through the use of the Internet. Cloud computing provides various dynamically scalable computing resources as a service, and it brings economic benefits to individuals and businesses that adopt the technology. In theory, adoption of cloud computing reduces capital and operational expenditure on information technology. For this to become a reality, however, several challenges need to be solved while also addressing the concerns that consumers have about cloud computing. This paper looks at cloud computing in general, then highlights the challenges of cloud computing, and finally suggests solutions to some of these challenges.
Abstract: This paper deals with efficient quadrature formulas involving functions that are observed only at fixed sampling points. The approach that we develop is derived from efficient continuous quadrature formulas, such as Gauss-Legendre or Clenshaw-Curtis quadrature. We select nodes at the sampling positions that are as close as possible to those of the associated classical quadrature, and we update the quadrature weights accordingly. We supply the theoretical quadrature error formula for this new approach and demonstrate its potential gain on examples.
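One concrete way to realize the weight update described above, sketched under our own assumptions (function name is ours): given the n sampling nodes actually retained on [-1, 1], solve the moment (Vandermonde) system so that the rule integrates all monomials up to degree n-1 exactly.

```python
def quad_weights(nodes):
    # Solve sum_i w_i * x_i^k = integral of x^k over [-1, 1]
    # for k = 0 .. n-1 (the moment / Vandermonde system).
    n = len(nodes)
    A = [[x ** k for x in nodes] for k in range(n)]
    b = [2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(n)]
    # Plain Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w
```

With the nodes -1, 0, 1 this recovers the Simpson weights 1/3, 4/3, 1/3; with nodes near the Gauss-Legendre points it approximates the Gauss weights, which is the spirit of the update described in the abstract.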
Abstract: The capacity and efficiency of any refrigerating system diminish rapidly as the difference between the evaporating and condensing temperatures is increased by a reduction in the evaporator temperature. Single-stage vapour compression refrigeration systems using various refrigerants are limited to an evaporator temperature of -40 °C; below -40 °C, either a cascade refrigeration system or a multi-stage vapour compression system is employed. The present work describes the thermal design of the condenser (HTS), cascade condenser, and evaporator (LTS) of R404A-R508B and R410A-R23 cascade refrigeration systems. The heat-transfer areas of the condenser, cascade condenser, and evaporator for both systems are compared, and the effect of the condenser and evaporator temperatures on the heat-transfer area is studied for both systems under the same operating conditions. The results show that the required heat-transfer area of the condenser and cascade condenser for the R410A-R23 cascade system is lower than for the R404A-R508B cascade system, but the heat-transfer area of the evaporator is similar for both systems. The heat-transfer area of the condenser and cascade condenser decreases with an increase in condenser temperature (Tc), whereas the heat-transfer area of the cascade condenser and evaporator increases with an increase in evaporator temperature (Te).
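The heat-transfer areas compared above follow from the standard heat-exchanger rating equation (a generic sketch, not the paper's full design procedure):

```latex
Q = U A \,\Delta T_{lm}
\quad\Longrightarrow\quad
A = \frac{Q}{U\,\Delta T_{lm}},
\qquad
\Delta T_{lm} = \frac{\Delta T_1 - \Delta T_2}{\ln(\Delta T_1/\Delta T_2)}
```

where Q is the heat duty, U the overall heat-transfer coefficient, and ΔT1, ΔT2 the terminal temperature differences of the exchanger. Changing Tc or Te alters both Q and ΔT_lm, and hence the required area A, which is the trend reported for the two cascade systems.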
Abstract: This paper is part of a complex research project on the Romanian Grey Steppe, a breed unique in terms of its biological and cultural-historical importance, which is on the verge of extinction and has been included in a programme for the preservation of genetic resources in Romania. The study of the genetic polymorphism of protein fractions, especially kappa-casein, and of the relations between the genotypes of these lactoproteins and some quantitative and qualitative features of milk yield represents a current theme and a novelty for this breed. In the estimation of the genetic parameters we used the R.E.M.L. (Restricted Maximum Likelihood) method.
The main lactoprotein of milk, kappa-casein (K-cz), characterized in the specialized literature as a trait with a high degree of hereditary transmission, behaves as such in the nucleus under study, a value also confirmed by the heritability coefficient (h2 = 0.57). We note the medium values for milk and fat quantity (h2 = 0.26 and 0.29) and the high hereditary influence on the fat and protein percentages of milk (h2 = 0.71 and 0.63).
Correlations between kappa-casein and milk quantity are negative and strong. Between kappa-casein and the other qualitative features of milk (fat content, 0.58-0.67; protein content, 0.77-0.87), there are positive and very strong correlations. At the same time, the correlations between kappa-casein and β-casein (β-cz) and β-lactoglobulin (β-lg), respectively, are positive with high values (0.37-0.45), indicating the same causes and determining factors for the two groups of features.
Abstract: The existence of maximal durations drastically modifies performance evaluation in Discrete Event Systems (DES). The same particularity may be found in systems where the associated constraints do not concern time. For example, in the chemical industry, weight measures are used to control the quantity of consumed raw materials. This parameter also plays a fundamental part in product quality, as the correct transformation process is based upon a given percentage of each essence. Weight regulation therefore increases the global productivity of the system by decreasing the quantity of rejected products. In this paper we present an approach based on combining theories with different characteristics, fuzzy systems and Petri nets, to describe the behaviour. An industrial application in a tobacco manufacturing plant, where the critical parameter is the weight, is presented as an illustration.
Abstract: A fair-share objective has recently been included in the goal-oriented parallel computer job scheduling policy. However, previous work only presented the overall scheduling performance, so a per-user analysis of the policy is still lacking. In this work, the details of per-user fair-share performance under the Tradeoff-fs(Tx:avgX) policy are further evaluated. A basic fair-share priority backfill policy, namely RelShare(1d), is also studied. The performance of all policies is collected using an event-driven simulator with three real job traces as input. The experimental results show that high-demand users usually benefit under most policies, either because their jobs are large or because they have a lot of jobs. In the large-job case, one executed job may result in over-share during that period; in the other case, the jobs may be backfilled, which improves their performance. However, users with a mixture of jobs may suffer, because while their smaller jobs are executing, the priority of their remaining jobs is lowered. Further analysis does not show any significant impact from users with a lot of jobs or users with a large runtime approximation error.
Abstract: In this study a clustering technique has been implemented which is K-Means-like with a hierarchical initial set (HKM). The goal of this study is to show that clustering document sets enhances precision in information retrieval systems, as was shown by Bellot & El-Beze for the French language. A comparison is made between the traditional information retrieval system and the clustered one, and the effect of increasing the number of clusters on precision is also studied. The indexing technique is Term Frequency * Inverse Document Frequency (TF * IDF). It has been found that Hierarchical K-Means-Like clustering (HKM) with 3 clusters over 242 Arabic abstract documents from the Saudi Arabian National Computer Conference yields significantly better results than the traditional information retrieval system without clustering. Additionally, it has been found that increasing the number of clusters beyond this does not further improve precision.
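The TF * IDF weighting named above can be sketched as follows (a minimal illustration using the common log-IDF variant; the function name and the exact IDF formula are our assumptions, not necessarily the paper's):

```python
import math

def tf_idf(docs):
    # docs: list of documents, each a list of tokens.
    # Returns one {term: weight} sparse vector per document.
    n = len(docs)
    df = {}  # document frequency of each term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        tf = {}  # raw term frequency within this document
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        vectors.append({t: f * math.log(n / df[t]) for t, f in tf.items()})
    return vectors
```

These vectors are what a K-Means-like clusterer would then group (typically under cosine similarity), with the hierarchical step supplying the initial centroids.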