Abstract: The ever-increasing product diversity and competition in the market for goods and services has dictated the pace of growth in the number of advertisements. Despite their admittedly diminished effectiveness over recent years, advertisements remain the favored method of sales promotion. Consequently, the challenge for an advertiser is to explore every possible avenue for making an advertisement more noticeable, attractive and compelling for consumers. One way to achieve this is through celebrity endorsements. On the one hand, the use of a celebrity to endorse a product involves substantial costs; on the other hand, it does not immediately guarantee the success of an advertisement. The question of how celebrities can be used in advertising to the best advantage is therefore of utmost importance. Celebrity endorsements have become commonplace: empirical evidence indicates that approximately 20 to 25 per cent of advertisements feature some famous person as a product endorser. The popularity of celebrity endorsements demonstrates the relevance of the topic, especially in the context of the current global economic downturn, when companies are forced to save in order to survive, yet simultaneously to invest heavily in advertising and sales promotion. The issue of the effective use of celebrity endorsements also figures prominently in the academic discourse. The study presented below is thus aimed at exploring what qualities (characteristics) of a celebrity endorser have an impact on the effectiveness of the advertisement in which he/she appears, and how.
Abstract: This paper suggests a new Affine Projection (AP) algorithm with a variable data-reuse factor that uses the condition number as a decision factor. To reduce the computational burden, we adopt a recently reported technique that estimates the condition number of an input data matrix. Several simulations show that the new algorithm has better performance than the conventional AP algorithm.
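The decision rule described above can be sketched as follows; the threshold, the candidate reuse factors and the use of `numpy.linalg.cond` (rather than the paper's low-cost estimator) are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def choose_projection_order(X, k_min=1, k_max=8, threshold=100.0):
    """Pick the AP data-reuse (projection) order from the conditioning of
    the input-data matrix: a poorly conditioned, i.e. highly correlated,
    input favours a larger projection order.  The paper uses a low-cost
    condition-number estimator; np.linalg.cond and the threshold/orders
    here are illustrative stand-ins."""
    return k_max if np.linalg.cond(X) > threshold else k_min
```

A well-conditioned input (e.g. white noise) yields the cheap low-order update, while a nearly collinear input triggers the high-order update, which is the trade-off the variable data-reuse scheme exploits.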
Abstract: Feature selection has recently been the subject of intensive research in data mining, especially for datasets with a large number of attributes. Recent work has shown that feature selection can have a positive effect on the performance of machine learning algorithms. The success of many learning algorithms in their attempts to construct models of data hinges on the reliable identification of a small set of highly predictive attributes. The inclusion of irrelevant, redundant and noisy attributes in the model building process can result in poor predictive performance and increased computation. In this paper, a novel feature search procedure that utilizes Ant Colony Optimization (ACO) is presented. ACO is a metaheuristic inspired by the behavior of real ants in their search for the shortest paths to food sources. It looks for optimal solutions by considering both local heuristics and previous knowledge. When applied to two different classification problems, the proposed algorithm achieved very promising results.
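A minimal sketch of an ACO-style feature-subset search, assuming a user-supplied wrapper `score` function (e.g. backed by a classifier); the pheromone and selection rules and all parameter values here are illustrative, not the algorithm proposed in the paper:

```python
import random

def aco_feature_select(n_features, score, n_ants=10, n_iter=20,
                       rho=0.1, seed=0):
    """Illustrative ACO feature-subset search: each feature carries a
    pheromone level; ants sample subsets with probabilities proportional
    to pheromone (the 'previous knowledge'); pheromone evaporates and is
    reinforced along the best subset found so far."""
    rng = random.Random(seed)
    tau = [1.0] * n_features                  # pheromone per feature
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        total = sum(tau)
        for _ in range(n_ants):
            # include feature i with probability proportional to tau[i]
            subset = [i for i in range(n_features)
                      if rng.random() < tau[i] / total * n_features * 0.5]
            if not subset:
                continue
            s = score(subset)
            if s > best_score:
                best, best_score = subset, s
        # evaporate, then reinforce the features of the best subset
        tau = [(1 - rho) * t for t in tau]
        for i in best or []:
            tau[i] += rho
    return best, best_score
```

With a score that rewards two "true" features and penalizes subset size, the search concentrates pheromone on the predictive features over the iterations.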
Abstract: Metal matrix composites (MMC) are generating
extensive interest in diverse fields like defense, aerospace, electronics
and automotive industries. In this present investigation, material
removal rate (MRR) modeling has been carried out using an
axisymmetric model of Al-SiC composite during electrical discharge
machining (EDM). A FEA model of single spark EDM was
developed to calculate the temperature distribution. Further, the
single spark model was extended to simulate the second discharge. For
multi-discharge machining, the material removal was calculated by
counting the number of pulses. The model has been validated by
comparing the experimental results obtained under the same
process parameters with the analytical results. A good agreement was
found between the experimental results and the theoretical values.
Abstract: Segmentation, filtering out of measurement errors and
identification of breakpoints are integral parts of any analysis of
microarray data for the detection of copy number variation (CNV).
Existing algorithms designed for these tasks have had some successes
in the past, but they tend to be O(N²) in either computation time or
memory requirement, or both, and the rapid advance of microarray
resolution has practically rendered such algorithms useless. Here we
propose an algorithm, SAD, that is much faster and requires far less
memory – O(N) in both computation time and memory requirement –
and offers higher accuracy. The two key ingredients of SAD are the
fundamental assumption in statistics that measurement errors are
normally distributed and the mathematical relation that the product of
two Gaussians is another Gaussian (function). We have produced a
computer program for analyzing CNV based on SAD. In addition to
being fast and small, it offers two important features: quantitative
statistics for predictions and, with only two user-decided parameters,
ease of use. Its speed shows little dependence on the genomic profile.
Running on an average modern computer, it completes CNV analyses
for a 262 thousand-probe array in ~1 second and a 1.8 million-probe
array in 9 seconds.
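The Gaussian-product identity that SAD builds on can be stated concretely: the product of N(m1, v1) and N(m2, v2) is, up to a scale factor, another Gaussian whose mean and variance follow from the precisions. A small sketch (the notation is illustrative, not the paper's):

```python
def gaussian_product(m1, v1, m2, v2):
    """Product of two Gaussian densities N(m1, v1) * N(m2, v2):
    up to a normalizing scale factor, the result is a Gaussian whose
    precision is the sum of the precisions and whose mean is the
    precision-weighted average of the means."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)   # combined variance
    m = v * (m1 / v1 + m2 / v2)       # precision-weighted mean
    return m, v
```

Combining two equal-variance measurements halves the variance and averages the means, which is why repeatedly multiplying normally distributed measurement-error models stays closed-form and O(N).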
Abstract: A simple and easy algorithm is presented for the fast calculation of the kernel functions required in fluid simulations using the Smoothed Particle Hydrodynamics (SPH) method. The proposed algorithm improves the linked-list algorithm and adopts the pair-wise interaction technique, both of which are widely used for evaluating kernel functions in SPH fluid simulations. The algorithm is easy to implement without any programming complexities. Benchmark examples are used to show the simulation time saved by the proposed algorithm. Parametric studies on the number of sub-domain divisions, the smoothing length and the total number of particles are conducted to show the effectiveness of the present technique. A compact formulation is proposed for practical usage.
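A minimal 2-D sketch of the cell (linked-list) neighbour search such SPH codes rely on, with pair-wise symmetry so each interacting pair within the smoothing length is reported once; this illustrates the general technique, not the paper's implementation:

```python
import math
from collections import defaultdict

def cell_list_neighbours(points, h):
    """Bin particles into square cells of side h (the smoothing length),
    then test each particle only against particles in its own and
    adjacent cells.  The j <= i guard enforces pair-wise interaction:
    every pair (i, j) with distance < h is reported exactly once."""
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        cells[(int(x // h), int(y // h))].append(idx)
    pairs = []
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in cells.get((cx + dx, cy + dy), []):
                        if j <= i:            # count each pair once
                            continue
                        xi, yi = points[i]
                        xj, yj = points[j]
                        if math.hypot(xi - xj, yi - yj) < h:
                            pairs.append((i, j))
    return sorted(pairs)
```

Because only 9 cells (27 in 3-D) are examined per particle, the cost is near-linear in the particle count rather than quadratic, which is the saving the benchmark examples quantify.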
Abstract: The motion of a sphere moving along the axis of a
rotating viscous fluid is studied at high Reynolds numbers and
moderate values of Taylor number. The Higher Order Compact
Scheme is used to solve the governing Navier-Stokes equations. The
equations are written in terms of the stream function, vorticity
function and angular velocity, which are highly non-linear, coupled
and elliptic partial differential equations. The flow is governed by
two parameters: the Reynolds number (Re) and the Taylor number (T). For
very low values of Re and T, the results agree with the available
experimental and theoretical results in the literature. The results are
obtained at higher values of Re and moderate values of T and
compared with the experimental results. The results are fourth-order
accurate.
Abstract: In the field of Computer Science and Mathematics, a
sorting algorithm is an algorithm that puts the elements of a list
in a certain order, i.e. ascending or descending. Sorting is
perhaps the most widely studied problem in computer science and is
frequently used as a benchmark of a system's performance. This
paper presents a comparative performance study of four sorting
algorithms on different platforms. For each machine, it is found
that the best-performing algorithm depends on the number of
elements to be sorted. In addition, as expected, the results show
that the relative performance of the algorithms differed on the
various machines. Thus, algorithm performance depends on data
size, and hardware also has an impact.
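A small harness of the kind such comparisons use, timing an O(n²) insertion sort against Python's built-in O(n log n) `sorted` on random data; this is an illustrative sketch, not the paper's benchmark suite or platforms:

```python
import random
import timeit

def insertion_sort(items):
    """O(n^2) insertion sort, returning a new sorted list."""
    a = list(items)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]        # shift larger elements right
            j -= 1
        a[j + 1] = key
    return a

def bench(sort_fn, n, repeats=3, seed=0):
    """Best-of-`repeats` wall time for sorting n random floats."""
    rng = random.Random(seed)
    data = [rng.random() for _ in range(n)]
    return min(timeit.repeat(lambda: sort_fn(data),
                             number=1, repeat=repeats))
```

On most machines `bench(insertion_sort, n)` is competitive with `bench(sorted, n)` only for very small n, which is exactly the dependence on element count (and, across machines, on hardware) that the study reports.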
Abstract: Soursop (Anona muricata) is one of the underutilized tropical fruits containing nutrients, particularly dietary fibre, and antioxidant properties that are beneficial to human health. The objective of this study is to investigate the feasibility of substituting matured soursop pulp flour (SPF) for high-protein wheat flour in bread. The bread formulation was substituted with different levels of SPF (0%, 5%, 10% and 15%), and the effects on physicochemical properties and sensory attributes were evaluated. A higher substitution level of SPF resulted in significantly higher (p
Abstract: Precise frequency estimation methods for pulse-shaped echoes are a prerequisite for determining the relative velocity between sensor and reflector. Signal frequencies are analysed using three different methods: the Fourier Transform, the Chirp Z-Transform and the MUSIC algorithm. Simulations of echoes are performed varying both the noise level and the number of reflecting points. The superposition of echoes with a random initial phase is found to severely influence the precision of frequency estimation for FFT and MUSIC. The standard deviation of the frequency using FFT is larger than for MUSIC; however, MUSIC is more noise-sensitive. The distorting effect of superpositions is less pronounced in experimental data.
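The FFT-based estimator can be illustrated by locating the peak-magnitude DFT bin of a sampled echo; the naive O(N²) DFT below is for clarity only (a real implementation would use an FFT), and the function interface is an assumption:

```python
import cmath
import math

def dft_peak_frequency(signal, fs):
    """Estimate the dominant frequency of a sampled signal by finding
    the positive-frequency DFT bin with the largest magnitude and
    converting the bin index back to Hz (resolution fs / N)."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):         # skip DC; positive bins only
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * fs / n
```

The fs / N bin resolution is one reason superposed echoes with random initial phases degrade the FFT estimate: energy leaking between adjacent bins can move the peak.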
Abstract: High level and high velocity flood flows are
potentially harmful to bridge piers, as evidenced by many toppled
piers, among which single-column piers are considered the most
vulnerable. The flood flow characteristic parameters
including drag coefficient, scouring and vortex shedding are built into
a pier-flood interaction model to investigate structural safety against
flood hazards considering the effects of local scouring, hydrodynamic
forces, and vortex-induced resonance vibrations. By extracting the
pier-flood simulation results embedded in a neural-network code,
two cases of pier toppling that occurred during typhoons were
re-examined: (1) a bridge overcome by a flash flood near a
mountainside; (2) a bridge washed away by a flood across a wide
channel near the estuary. The modeling procedures and simulations
are capable of identifying the probable causes of the toppled
bridge piers during
heavy floods, which include the excessive pier bending moments and
resonance in structural vibrations.
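The vortex-induced resonance condition mentioned above can be screened with the standard Strouhal relation f_s = St·V/D; the Strouhal number, tolerance band and interface below are illustrative assumptions, not the paper's pier-flood interaction model:

```python
def vortex_resonance_risk(flow_velocity, pier_width, natural_freq,
                          strouhal=0.2, band=0.2):
    """Screening check for vortex-induced resonance: the shedding
    frequency f_s = St * V / D is compared with the pier's natural
    frequency; 'risk' means f_s lies within +/- `band` fraction of it.
    St = 0.2 is a typical bluff-body value; the band is a chosen
    illustrative tolerance."""
    f_shed = strouhal * flow_velocity / pier_width
    at_risk = abs(f_shed - natural_freq) <= band * natural_freq
    return at_risk, f_shed
```

High-velocity flood flow past a wide single-column pier can push the shedding frequency up into the structure's natural band, which is the resonance mechanism the simulations identify.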
Abstract: Applying knowledge discovery techniques to unstructured
text is termed knowledge discovery in text (KDT), text data mining
or text mining. The decision tree approach is most useful for
classification problems. With this technique, a tree is constructed
to model the classification process. There are two basic steps in
the technique: building the tree and applying the tree to the
database. This paper describes a proposed C5.0 classifier that
applies rulesets, cross-validation and boosting to the original
C5.0 in order to reduce the error rate. The feasibility and the
benefits of the proposed approach are demonstrated on a medical
data set, hypothyroid. It is shown that the performance of a
classifier on the training cases from which it was constructed
gives a poor estimate of its accuracy; by sampling or by using a
separate test file, the classifier is instead evaluated on cases
that were not used to build it. If the cases in hypothyroid.data
and hypothyroid.test were shuffled and divided into a new 2772-case
training set and a 1000-case test set, C5.0 might construct a
different classifier with a lower or higher error rate on the test
cases. An important feature of See5 is its ability to generate
classifiers called rulesets. The ruleset has an error rate of 0.5%
on the test cases. The standard errors of the means provide an
estimate of the variability of the results. One way to get a more
reliable estimate of predictive accuracy is f-fold cross-validation:
the error rate of a classifier produced from all the cases is
estimated as the ratio of the total number of errors on the
hold-out cases to the total number of cases. The Boost option with
x trials instructs See5 to construct up to x classifiers in this
manner. Trials over numerous datasets, large and small, show that
on average 10-classifier boosting reduces the error rate for test
cases by about 25%.
Abstract: In the recent past, there has been an increasing interest
in applying evolutionary methods to Knowledge Discovery in
Databases (KDD) and a number of successful applications of Genetic
Algorithms (GA) and Genetic Programming (GP) to KDD have been
demonstrated. The most predominant representation of the
discovered knowledge is the standard Production Rules (PRs) in the
form If P Then D. The PRs, however, are unable to handle
exceptions and do not exhibit variable precision. The Censored
Production Rules (CPRs), an extension of PRs proposed by
Michalski & Winston, exhibit variable precision and support an
efficient mechanism for handling exceptions. A CPR is an
augmented production rule of the form:
If P Then D Unless C, where C (Censor) is an exception to the rule.
Such rules are employed in situations in which the conditional
statement 'If P Then D' holds frequently and the assertion C holds
rarely. By using a rule of this type, we are free to ignore the
exception condition when the resources needed to establish its
presence are tight, or when there is simply no information available
as to whether it holds or not. Thus, the 'If P Then D' part of the CPR expresses
important information, while the Unless C part acts only as a switch
and changes the polarity of D to ~D.
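The truth-functional behaviour of a CPR can be sketched directly; the three-valued censor (True / False / unknown) mirrors the resource-bounded reading given above, and the function name and interface are illustrative:

```python
def censored_rule(p, d, c):
    """Evaluate a Censored Production Rule 'If P Then D Unless C'.
    When the premise P holds, an established censor C flips the
    decision from D to not-D; an unknown censor (None) is ignored,
    which is the resource-bounded behaviour described above.
    Returns None when the premise fails (rule does not fire)."""
    if not p:
        return None          # rule does not apply
    if c is True:
        return not d         # censor holds: polarity of D is flipped
    return d                 # censor false or unknown: conclude D
```

The 'If P Then D' part alone thus gives a usable, lower-precision answer, and establishing C only ever switches the polarity of that answer.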
This paper presents a classification algorithm based on evolutionary
approach that discovers comprehensible rules with exceptions in the
form of CPRs.
The proposed approach has flexible chromosome encoding, where
each chromosome corresponds to a CPR. Appropriate genetic
operators are suggested and a fitness function is proposed that
incorporates the basic constraints on CPRs. Experimental results are
presented to demonstrate the performance of the proposed algorithm.
Abstract: This paper and its companion (Part 2) deal with
modeling and optimization of two NP-hard problems in production
planning of a flexible manufacturing system (FMS): the part type
selection problem and the loading problem. The two problems are
strongly related and heavily influence the system's efficiency
and productivity. The problems become even harder when
flexibilities of operations, such as the possibility of an
operation being processed on alternative machines with
alternative tools, are considered. These problems have been modeled and solved
simultaneously by using real coded genetic algorithms (RCGA)
which uses an array of real numbers as chromosome representation.
These real numbers can be converted into part type sequence and
machines that are used to process the part types. This first part of the
paper focuses on the modeling of the problems and discusses how
the novel chromosome representation can be applied to solve the
problems. The second part will discuss the effectiveness of the
RCGA to solve various test bed problems.
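The chromosome decoding can be sketched with the common "random keys" device, in which sorting the real-valued genes yields a part-type sequence and the remaining genes select machines; the exact encoding below is an illustrative assumption, not necessarily the paper's:

```python
def decode_chromosome(genes, n_part_types, machines_per_part):
    """Decode a real-coded chromosome into (a) a part-type processing
    sequence and (b) a machine choice per part.  The first n_part_types
    genes act as random keys: sorting them gives the order.  Each
    remaining gene in [0, 1) is scaled to an index into that part's
    list of alternative machines."""
    keys = genes[:n_part_types]
    sequence = sorted(range(n_part_types), key=lambda i: keys[i])
    choices = [int(g * m) % m            # map [0,1) gene to machine index
               for g, m in zip(genes[n_part_types:], machines_per_part)]
    return sequence, choices
```

Because any vector of real numbers decodes to a valid sequence and valid machine choices, standard real-coded crossover and mutation operators never produce infeasible chromosomes, which is one attraction of this representation.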
Abstract: Radio propagation from point-to-point is affected by
the physical channel in many ways. A signal arriving at a destination
travels through a number of different paths which are referred to as
multi-paths. Research in this area of wireless communications has
progressed well over the years with the research taking different
angles of focus. By this is meant that some researchers focus on
ways of reducing or eluding Multipath effects whilst others focus on
ways of mitigating the effects of Multipath through compensation
schemes. Baseband processing is seen as one field of signal
processing that is cardinal to the advancement of software defined
radio technology. This has led to wide research into carrying out
certain algorithms at baseband. This paper considers compensating
for Multipath for Frequency Modulated signals. The compensation
process is carried out at Radio frequency (RF) and at Quadrature
baseband (QBB), and the results are compared. Simulations are
carried out using MATLAB to show the benefits of working at
lower QBB frequencies rather than at RF.
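A minimal baseband illustration of multipath and its compensation, modelling a single echo and inverting it with the exact recursive filter; this is a generic sketch, not the RF/QBB processing chain studied in the paper:

```python
def two_path_channel(x, delay, alpha):
    """Simple two-path channel: direct path plus one echo of relative
    amplitude `alpha` arriving `delay` samples later:
    y[n] = x[n] + alpha * x[n - delay]."""
    y = list(x)
    for n in range(delay, len(x)):
        y[n] += alpha * x[n - delay]
    return y

def compensate(y, delay, alpha):
    """Invert the channel recursively: x[n] = y[n] - alpha * x[n - delay],
    using the already-recovered earlier samples.  Stable for |alpha| < 1."""
    x = list(y)
    for n in range(delay, len(y)):
        x[n] -= alpha * x[n - delay]
    return x
```

In the noiseless case the recursion recovers the transmitted samples exactly; running such compensation on quadrature baseband samples rather than at RF is what lets it operate at much lower sample rates.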
Abstract: Physiological activity of the pineal gland with specific
responses in the reproductive territory may be interpreted by
monitoring the process parameters used in poultry practice in
different age batches of laying hens. As biological material, 105
clinically healthy laying hens of the ALBO SL-2000 hybrid, raised
on the ground, were used, from which blood samples were taken at
the ages of 12 and 28 weeks. The haematological examinations were
performed to obtain the total numbers of erythrocytes and
leukocytes and the main erythrocyte constants (RBC, PCV, MCV,
MCH, MCHC and WBC). The results allow the interpretation of the
reproductive status through the dynamics of the presented values.
Abstract: This article proposes the modeling, simulation and
kinematic and workspace analysis of a spatial cable-suspended
robot as an Incompletely Restrained Positioning Mechanism (IRPM).
These types of robots have six cables, equal to the number of
degrees of freedom. After modeling, the kinds of workspace are
defined; then a statically reachable combined workspace for
different geometric structures of the fixed and moving platforms
is obtained. This workspace is defined as the set of positions of
the reference point of the moving platform (its center of mass) at
which, under external forces such as weight and with inertial
effects ignored, the moving platform is in static equilibrium,
under the conditions that the length of each cable must not exceed
its maximum value and all cables must be in tension (they must
have non-negative tension forces). Then the effects of various
parameters, such as the size of the moving platform, the size of
the fixed platform, the geometric configuration of the robot, and
the magnitude of the forces and moments applied to the moving
platform, on the workspace of these robots with different
geometric configurations are investigated. The obtained results
should be useful for employing these robots under different
conditions of applied wrench to increase the workspace volume.
Abstract: The paper discusses the results obtained in predicting
the reinforcement in a singly reinforced beam using Neural
Networks (NN), Support Vector Machines (SVMs) and tree-based
models. A major advantage of SVMs over NN is that they minimize a
bound on the generalization error of the model rather than a bound
on the mean square error over the data set, as done in NN. The
tree-based approach divides the problem into a small number of
sub-problems to reach a conclusion. A number of data sets were
created for different beam parameters to calculate the
reinforcement using the limit state method, for model creation and
validation. The results from this study suggest remarkably good
performance of the tree-based and SVM models. Further, this study
found that these two techniques work well, and even better than
Neural Network methods. A comparison of predicted values with
actual values suggests a very good correlation coefficient for all
the techniques.
Abstract: In this paper, a design methodology to implement a low-power and high-speed 2nd-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement addition/subtraction operations in the design of the recursive section of the IIR filter, to reduce the propagation delay. Furthermore, high-level algorithms designed to optimize the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-adds architecture design, and it is analyzed using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area and throughput. It is observed from the experimental results that the proposed 6T based design method can find better IIR filter designs, in terms of power and delay, than those obtained using efficient general multipliers.
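The constant-multiplication substitution can be illustrated in software: a filter coefficient is decomposed into signed powers of two so that each product becomes shifts and additions/subtractions only (the list-of-pairs representation below is an illustrative assumption):

```python
def shift_add_multiply(x, coeff_shifts):
    """Multiply integer x by a constant expressed as signed powers of
    two, so no hardware multiplier is needed.  For example
    10*x = (x << 3) + (x << 1) and 15*x = (x << 4) - x.
    coeff_shifts is a list of (sign, shift) pairs."""
    return sum(sign * (x << shift) for sign, shift in coeff_shifts)
```

Each (sign, shift) term maps onto one adder/subtractor stage in hardware, which is why minimizing the number of such terms (and sharing CSA blocks between them) directly reduces the filter's area and delay.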
Abstract: This study was carried out to investigate the changes in
quality parameters of rye bread packaged in different polymer films
during a convection air-flow thermal treatment process. Whole
loaves of bread were placed in polymer pouches, which were sealed
under reduced air pressure; the bread was then thermally treated
at temperatures of +(130, 140 and 150) ± 5 ºC for 40 min, until
the core temperature of the samples reached +80 ± 1 ºC. For bread
packaging, anti-fog Mylar®OL12AF and a thermo-resistant combined
polymer material were used as pouches. The main quality parameters
were analysed using standard methods: temperature in the bread
core, bread crumb and crust firmness values, starch granule volume
and microflora. The current research showed that polymer films
significantly influence changes in rye bread quality parameters
during thermal treatment. Thermo-resistant combined polymer
material film can be recommended for the pasteurization of
packaged rye bread, to best preserve the bread quality parameters.