Abstract: Measuring the complexity of software has long been a
difficult problem in software engineering. Complexity measures can
be used to predict critical information about testability, reliability,
and maintainability of software systems from automatic analysis of
the source code. During the past few years, many complexity
measures have been invented based on the emerging Cognitive
Informatics discipline. These software complexity measures,
including cognitive functional size, are based on the total cognitive
weights of basic control structures such as loops and branches. This
paper shows that the existing calculation method can generate
different results that are algebraically equivalent. However, analysis
of the combinatorial meaning of this calculation method reveals a
significant flaw in the measure, which also explains why it does not
satisfy Weyuker's properties. Based on these findings, improvement
directions, such as measure fusion and a cumulative variable
counting scheme, are suggested to enhance the
effectiveness of cognitive complexity measures.
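As an illustration of the weight-based approach described above, the following sketch computes a total cognitive weight over a small structure tree. The weight values and the sum-siblings/multiply-nesting rule follow the commonly cited cognitive-weight scheme and are assumptions, not this paper's exact method.

```python
# Illustrative cognitive-weight table (assumed values from the commonly
# cited scheme; not necessarily the weights used in this paper).
WEIGHTS = {"sequence": 1, "branch": 2, "loop": 3, "call": 2}

def cognitive_weight(node):
    """node = (bcs_type, [children]); sibling weights add, nesting multiplies."""
    bcs, children = node
    w = WEIGHTS[bcs]
    if children:
        w *= sum(cognitive_weight(c) for c in children)
    return w

# A loop containing a branch and a sequence: 3 * (2 + 1) = 9
prog = ("loop", [("branch", []), ("sequence", [])])
print(cognitive_weight(prog))  # → 9
```

Note how the multiplicative nesting rule makes deeply nested control structures dominate the total, which is exactly the combinatorial behavior the paper analyzes.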
Abstract: Changes in consumers' lifestyles and food
consumption patterns provide a great opportunity for developing the
functional food sector in Malaysia. Little is known
about whether Malaysian consumers are aware of functional food and,
if so, what image consumers have of this product. The objective of
this research is to determine the extent to which selected socioeconomic
characteristics and attitudes influence consumers'
awareness of functional food. A survey was conducted in the Klang
Valley, Malaysia where 439 respondents were interviewed using a
structured questionnaire. The result shows that most respondents
have a positive attitude towards functional food. For the binary
logistic estimation, the results indicate that age, income and other
factors, such as concern about food safety, subscribing to cooking or
health magazines, being a vegetarian and having been involved in a
food production company, significantly influence Malaysian
consumers' awareness of functional food.
Abstract: The classification of the protein structure is commonly
not performed for the whole protein but for structural domains, i.e.,
compact functional units preserved during evolution. Hence, a first
step to a protein structure classification is the separation of the
protein into its domains. We approach the problem of protein domain
identification by proposing a novel graph theoretical algorithm. We
represent the protein structure as an undirected, unweighted and
unlabeled graph whose nodes correspond to the secondary structure
elements of the protein. This graph is called the protein graph. The
domains are then identified as partitions of the graph corresponding
to vertex sets obtained by the maximization of an objective function,
which mutually maximizes the cycle distributions found in the
partitions of the graph. Our algorithm does not utilize any other kind
of information besides the cycle distribution to find the partitions. If
a partition is found, the algorithm is iteratively applied to each of
the resulting subgraphs. As a stopping criterion, we numerically
calculate a significance level which indicates the stability of the
predicted partition against a random rewiring of the protein graph.
Hence, our algorithm automatically terminates its iterative
application. We present results for one- and two-domain proteins,
compare our results with the domains manually assigned in the
SCOP database, and discuss the differences.
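One simple way to quantify a cycle distribution over a candidate partition is the count of independent cycles, |E| - |V| + c (c = connected components); the sketch below computes it for an arbitrary undirected graph. This is an illustrative summary measure under our own assumption, not necessarily the objective function used in the paper.

```python
from collections import deque

def independent_cycles(nodes, edges):
    """Cyclomatic number |E| - |V| + c: the number of independent
    cycles in an undirected graph (c = number of connected components)."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), 0
    for start in nodes:
        if start in seen:
            continue
        components += 1          # new component found; flood-fill it via BFS
        queue = deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for w in adj[u] - seen:
                seen.add(w)
                queue.append(w)
    return len(edges) - len(nodes) + components

# Two triangles joined by a bridge: 2 independent cycles
nodes = list(range(6))
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
print(independent_cycles(nodes, edges))  # → 2
```

A partition that cuts the bridge keeps one cycle inside each part, which hints at why cycle counts can separate compact domains.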
Abstract: In this work, the condensate fraction and transition
temperature of a neutral many-boson system are studied within the
static fluctuation approximation (SFA). The effect of potential
parameters such as the strength and range on the condensate fraction
was investigated. A model potential consisting of a repulsive step
potential and an attractive potential well was used. As the potential
strength or the core radius of the repulsive part increases, the
condensate fraction is found to decrease at a given temperature.
Also, as the potential depth or the range of the attractive part
increases, the condensate fraction is found to increase. The
transition temperature decreases as the potential strength or the
core radius of the repulsive part increases, and it increases as the
potential depth or the range of the attractive part increases.
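The model interaction described above, a repulsive step plus an attractive well, can be written as a piecewise function. The sketch below is illustrative only; the parameter names (V0, rc, Vw, rw for the step height, core radius, well depth and well range) and the numeric values are assumptions, not the paper's notation.

```python
def model_potential(r, V0=10.0, rc=1.0, Vw=2.0, rw=2.5):
    """Repulsive step of height V0 for r < rc, attractive well of
    depth Vw for rc <= r < rw, and zero outside (illustrative units)."""
    if r < rc:
        return V0      # repulsive core
    if r < rw:
        return -Vw     # attractive well
    return 0.0         # no interaction beyond the well range

print([model_potential(r) for r in (0.5, 1.5, 3.0)])  # → [10.0, -2.0, 0.0]
```

Increasing V0 or rc strengthens the repulsive part, which per the abstract lowers the condensate fraction; increasing Vw or rw does the opposite.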
Abstract: The effects of seawater and slurry ice bleeding methods on the sensory, microbiological and chemical quality changes of cod fillets during chilled storage were examined in this study. The results of sensory evaluation showed that the slurry ice bleeding method prolonged the shelf life of cod fillets to 13-14 days, compared to 10-11 days for fish bled in seawater. The slurry ice bleeding method also led to slower microbial growth and biochemical development, resulting in a lower total plate count (TPC), H2S-producing bacteria count, total volatile basic nitrogen (TVB-N), trimethylamine (TMA) and free fatty acid (FFA) content, and a higher phospholipid (PL) content, compared to those of samples bled in seawater. The results of principal component analysis revealed that TPC, H2S-producing bacteria, TVB-N, TMA and FFA were significantly correlated. They were also negatively correlated with sensory evaluation (Torry score), PL and water holding capacity (WHC).
Abstract: Weblogs are a social resource for discovering and tracking the various types of information written by bloggers. In this paper, we propose a weblog mining technique for identifying the trends of influenza from the opinions bloggers have disseminated about the disease. In order to identify the trends, a web crawler is applied to perform a search and generate a list of visited links based on a set of influenza keywords. This information is used to implement an analytics report system for monitoring and analyzing the patterns and trends of influenza (H1N1). Statistical and graphical analysis reports are generated. Both types of report are satisfactory and reflect the awareness of Malaysians of the influenza outbreak as expressed through blogs.
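The keyword-matching step performed on crawled posts can be sketched roughly as follows; the keyword list and the scoring rule are assumptions for illustration, not the system's actual implementation.

```python
# Illustrative influenza keyword set (assumed; the paper's list may differ).
KEYWORDS = {"influenza", "h1n1", "flu", "fever", "outbreak"}

def match_score(text):
    """Count how many influenza keywords appear in a crawled post."""
    words = set(text.lower().split())
    return len(KEYWORDS & words)

posts = [
    "New H1N1 outbreak reported near Kuala Lumpur",
    "My favourite nasi lemak recipe",
]
hits = [p for p in posts if match_score(p) > 0]
print(len(hits))  # → 1
```

Counting matching posts per day over the crawl would give the kind of trend series that the statistical and graphical reports visualize.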
Abstract: One of the basic concepts in marketing is the concept
of meeting customers' needs. Since customer satisfaction is essential
for the lasting survival and development of a business, screening and
observing customer satisfaction and recognizing its underlying
factors must be one of the key activities of every business.
The purpose of this study is to recognize the drivers that affect
customer satisfaction in a business-to-business setting in order to
improve marketing activities. We conducted a survey in which 93
business customers of a diesel generator manufacturer in Iran
participated and discussed their ideas about, and satisfaction with,
the supplier's services related to its products. We first developed the
measures for the drivers of satisfaction by investigative research (by
means of feedback from executives and customers of the sponsoring
firm). Then, based on these measures, we created a mail survey and
asked the respondents to give their opinion about the sponsoring
firm, which was a supplier of diesel generators and similar products.
Furthermore, the survey required the participants to state their
functional areas and their company features.
In conclusion, we found that there are three drivers of customer
satisfaction: reliability, information about the product, and
commercial features. Buyers/users from different functional areas
attribute different degrees of importance to the last two drivers. For
instance, people from the buying and management areas believe that
commercial features are more important than information about
products, whereas people in the engineering, maintenance and
production areas believe that having information about products is
more important than commercial aspects. Marketing experts should
consider customers' attitudes regarding information about the
product and commercial features to improve market share.
Abstract: This research analyzes the role that knowledge
about foreign markets plays in increasing firms' exports in clustered
spaces. We consider two interrelated sources of knowledge: firms'
direct experience and indirect experience from other clustered firms,
i.e., export externalities. In particular, it is proposed that firms would
improve their export performance by accessing export externalities
if they have some previous direct experience that allows them to
identify, understand and exploit them. We also propose that this
positive influence of previous direct experience on export
externalities holds only up to a point, beyond which it becomes
negative, creating an inverted "U" shape. Empirical evidence gathered
among wine producers located in La Rioja tends to confirm that firms
with export experience accumulated over several years and countries
benefit from export externalities and increase their export
performance. While this relationship becomes less relevant as firms
develop greater experience, we could not confirm the existence of a
curvilinear relationship in the influence of export externalities on
export performance.
Abstract: Computer modeling has played a unique role in
understanding electrocardiography. Modeling and simulating cardiac
action potential propagation is suitable for studying normal and
pathological cardiac activation. This paper presents a 2-D Cellular
Automata model for simulating action potential propagation in
cardiac tissue. We demonstrate a novel algorithm that uses a
minimum number of neighbors: it sums the excitability attributes of
excited neighboring cells. We try to eliminate flat edges in the
resulting patterns by introducing probability into the model. We also
preserve the real shape of the action potential by using linear curve
fitting of a well-known electrophysiological model.
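A single update step of such a cellular automaton might look as follows; the state encoding, von Neumann neighborhood and excitation threshold are illustrative assumptions rather than the paper's exact rules.

```python
# States: 0 = resting, 1 = excited, 2 = refractory (assumed encoding).
RESTING, EXCITED, REFRACTORY = 0, 1, 2
THRESHOLD = 1  # assumed excitation threshold

def step(grid, excitability):
    """One CA update: a resting cell fires when the summed excitability
    of its excited von Neumann neighbors reaches THRESHOLD."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == EXCITED:
                new[i][j] = REFRACTORY      # excited cells become refractory
            elif grid[i][j] == REFRACTORY:
                new[i][j] = RESTING         # refractory cells recover
            else:
                # resting: sum excitability of excited neighbors
                s = sum(
                    excitability[x][y]
                    for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < rows and 0 <= y < cols and grid[x][y] == EXCITED
                )
                if s >= THRESHOLD:
                    new[i][j] = EXCITED
    return new

grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # one excited cell in the center
exc = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]    # uniform excitability
nxt = step(grid, exc)
print(nxt[0][1], nxt[1][1])  # → 1 2  (wave spreads; center turns refractory)
```

Making the firing decision probabilistic rather than a hard threshold is the kind of change the abstract describes to remove flat wavefront edges.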
Abstract: In recent years, computers have increased their computing capacity, and the networks interconnecting these machines have improved to the point of reaching today's high data-transfer rates. Programs that try to take advantage of these new technologies cannot be written using traditional programming techniques, since most algorithms were designed to be executed on a single processor in a non-concurrent form, instead of being executed concurrently on a set of processors working and communicating through a network. This paper presents the ongoing development of a new system for the reconfiguration of computer clusters, taking these new technologies into account.
Abstract: In this paper, we have compared the performance of Turbo and Trellis coded optical code division multiple access (OCDMA) systems. The comparison of the two codes has been accomplished by employing optical orthogonal codes (OOCs). The Bit Error Rate (BER) performances have been compared by varying the code weights of the address codes employed by the system. We have considered the effects of optical multiple access interference (OMAI), thermal noise and avalanche photodiode (APD) detector noise. Analysis has been carried out for the system with and without a double optical hard limiter (DHL). From the simulation results it is observed that a better and more distinct comparison can be drawn between the performance of the Trellis and Turbo coded systems at lower code weights of the optical orthogonal codes for a fixed number of users. The BER performance of the Turbo coded system is found to be better than that of the Trellis coded system for all code weights considered in the simulation. Nevertheless, the Trellis coded OCDMA system is found to be better than the uncoded OCDMA system. Trellis coded OCDMA can be used in systems where decoding time has to be kept low, bandwidth is limited and high reliability is not a crucial factor, as in local area networks. Also, the system hardware is less complex in comparison to the Turbo coded system, and the Trellis coded OCDMA system can be used without significant modification of existing chipsets. Turbo coded OCDMA can, however, be employed in systems where high reliability is needed and bandwidth is not a limiting factor.
Abstract: In this paper, a simulated annealing algorithm has been developed to optimize machining parameters in turning operations on cylindrical workpieces. The turning operation usually includes several rough machining passes and a final finishing pass. Seven different constraints are considered in a non-linear model where the goal is to minimize the total cost. The weighted total cost consists of machining cost, tool cost and tool replacement cost. The computational results clearly show that the proposed optimization procedure considerably reduces the total operation cost by optimally determining the machining parameters.
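A generic simulated annealing loop of the kind applied here can be sketched as follows; the one-dimensional toy cost function, bounds and cooling schedule are illustrative assumptions, not the paper's machining model or its seven constraints.

```python
import math
import random

def anneal(cost, lo, hi, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Minimize cost(x) over [lo, hi] with simulated annealing."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best_x, best_c = x, cost(x)
    t, c = t0, best_c
    for _ in range(steps):
        # Propose a clipped Gaussian move around the current point
        x_new = min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo))))
        c_new = cost(x_new)
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if c_new < c or rng.random() < math.exp((c - c_new) / t):
            x, c = x_new, c_new
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling  # geometric cooling schedule
    return best_x, best_c

# Toy machining-cost surrogate with its minimum near x = 2 (assumed model)
best_x, best_c = anneal(lambda v: (v - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(best_x, best_c)
```

In the paper's setting, x would be a vector of cutting parameters and cost(x) the weighted sum of machining, tool and tool-replacement costs, with infeasible points penalized or rejected.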
Abstract: An intelligent fuzzy input estimator is used to estimate
the input force of a rigid bar structural system in this study. The
fuzzy Kalman filter without the input term and the fuzzy weighting
recursive least square estimator are the two main components of this
method. The practicability and accuracy of the proposed method were
verified with numerical simulations in which the input forces of a
rigid bar structural system were estimated from the output responses.
In order to examine the accuracy of the proposed method, a rigid bar
structural system is subjected to periodic sinusoidal dynamic loading.
The excellent performance of this estimator is demonstrated by
comparing it with the use of a different weighting function and an
improper initial process noise covariance. The estimated results agree
well with the true values in all cases tested.
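The recursive least squares portion of such an estimator can be illustrated with a plain scalar RLS update with a forgetting factor. This is generic textbook RLS for a constant unknown input under an assumed measurement model y = f + noise, not the paper's fuzzy weighting scheme.

```python
import random

def rls_estimate(measurements, lam=0.95):
    """Scalar recursive least squares with forgetting factor lam,
    estimating a constant unknown input f from noisy measurements y = f + v."""
    f_hat, p = 0.0, 1000.0            # initial estimate and large covariance
    for y in measurements:
        k = p / (lam + p)             # gain for a unit regressor
        f_hat += k * (y - f_hat)      # innovation update
        p = (1.0 - k) * p / lam       # covariance update with forgetting
    return f_hat

rng = random.Random(1)
true_force = 5.0
ys = [true_force + rng.gauss(0, 0.1) for _ in range(200)]
print(rls_estimate(ys))  # estimate should be close to true_force
```

The forgetting factor lam < 1 keeps the gain from collapsing to zero, which is what lets such an estimator track a time-varying input force rather than only a constant one.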
Abstract: The aim of this contribution is to present a new
approach to modeling the electrical activity of the human heart. A
recurrent artificial neural network is used to reproduce a subset of
the dynamics of the electrical behavior of the human heart. The
proposed model can also be used, when integrated, as a diagnostic
tool for the human heart system.
What makes this approach unique is the fact that every model is
developed from physiological measurements of an individual. This
kind of approach is very difficult to apply successfully in many
modeling problems because of the complexity and entropy of the
free variables describing the complex system. Differences between
the modeled variables and the variables of an individual, measured at
specific moments, can be used for diagnostic purposes. The sensor
fusion used to optimize the utilization of biomedical sensors is
another point on which this paper focuses. Sensor fusion has been
known for its advantages in applications such as the control and
diagnostics of mechanical and chemical processes.
Abstract: Saffron (Crocus sativus) is cultivated as a spice and as
a medicinal and aromatic plant species. In autumn, heavy rainfall
can cause flooding stress and inhibit the growth of saffron. This
research was therefore conducted to study the effect of silver ions
(as an ethylene inhibitor) on the growth of saffron under flooding
conditions. Saffron corms were soaked in one concentration of nano
silver (0, 40, 80 or 120 ppm) and then planted under flooding stress
or non-flooding conditions. Results showed that the number of
roots, root length, root fresh and dry weight, and leaf fresh and dry
weight were reduced by 10 days of flooding stress. Soaking saffron
corms in a 40 or 80 ppm concentration of nano silver counteracted
the effect of flooding stress on root number, increasing it.
Furthermore, 40 ppm of nano silver increased root length under
stress, and 80 ppm of nano silver increased leaf dry weight under
flooding stress.
Abstract: The purpose of this paper was to study the motivation
factors affecting job performance effectiveness. The paper drew
upon data collected from internal audit staff of the Internal Audit
Line of the Head Office of Krung Thai Public Company Limited.
Statistics used included frequency, percentage, mean, standard
deviation, t-test, and one-way ANOVA. The findings revealed that
the majority of the respondents were female, aged 46 years and
over, married and living together, held a bachelor's degree, and had
an average monthly income over 70,001 Baht. The majority of
respondents had over 15 years of work experience. They generally
had high working motivation as well as high job performance
effectiveness.
The hypothesis testing disclosed that employees with different
working status had different levels of job performance effectiveness
at a 0.01 level of significance. Working motivation factors had an
effect on job performance in the same direction at a high level.
Individual working motivation factors included work completion,
recognition, work progression, work characteristics, opportunity,
responsibility, management policy, supervision, relationship with
superiors, relationship with co-workers, working position, working
stability, safety, privacy, working conditions, and payment. All of
these factors related to job performance effectiveness in the same
direction at a medium level.
Abstract: Proteomics is one of the largest areas of research for
bioinformatics and medical science. An ambitious goal of proteomics
is to elucidate the structure, interactions and functions of all proteins
within cells and organisms. Predicting Protein-Protein Interaction
(PPI) is one of the crucial and decisive problems in current research.
Genomic data offer a great opportunity and at the same time a lot of
challenges for the identification of these interactions. Many methods
have already been proposed in this regard. In the case of in-silico
identification, most methods require both positive and negative
examples of protein interaction, and the quality of these examples is
crucial for the final prediction accuracy. Positive examples are
relatively easy to obtain from well-known databases, but the
generation of negative examples is not a trivial task. Current PPI
identification methods generate negative examples based on some
assumptions, which are likely to affect their prediction accuracy.
Hence, if more reliable negative examples are used, the PPI prediction
methods may achieve even higher accuracy. Focusing on this issue, a
graph based negative example generation method is proposed, which
is simple and more accurate than the existing approaches. An
interaction graph of the protein sequences is created. The basic
assumption is that the longer the shortest path between two protein
sequences in the interaction graph, the lower the possibility of their
interaction. A well-established PPI detection algorithm is employed
with our negative examples, and in most cases it increases the
accuracy by more than 10% in comparison with the negative pair
selection method used in the original work.
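The shortest-path criterion for picking negative pairs can be sketched with a breadth-first search; the toy interaction graph and the distance cutoff min_dist are illustrative assumptions.

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from source in an unweighted interaction graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def negative_pairs(adj, min_dist=3):
    """Pairs of proteins whose shortest path is at least min_dist
    (assumed cutoff): the farther apart, the less likely to interact."""
    proteins = sorted(adj)
    pairs = []
    for i, p in enumerate(proteins):
        dist = bfs_distances(adj, p)
        for q in proteins[i + 1:]:
            # unreachable proteins count as infinitely far apart
            if dist.get(q, float("inf")) >= min_dist:
                pairs.append((p, q))
    return pairs

# Toy interaction graph: a path A - B - C - D
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(negative_pairs(adj))  # → [('A', 'D')]
```

Only the pair at distance 3 is selected; adjacent and two-hop pairs are excluded as too likely to interact, matching the abstract's assumption.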
Abstract: In this paper, optimum adaptive loading algorithms
are applied to a multicarrier system with a Space-Time Block Coding
(STBC) scheme associated with space-time processing based on
singular value decomposition (SVD) of the channel matrix over
Rayleigh fading channels. The SVD method has been employed in
the MIMO-OFDM system in order to overcome subchannel
interference. Chow's and Campello's algorithms have been
implemented to obtain a bit and power allocation for each subcarrier,
assuming instantaneous channel knowledge. The adaptive loaded
SVD-STBC scheme is capable of providing both full rate and full
diversity for any number of transmit antennas. The effectiveness of
these techniques has been demonstrated through the simulation of an
adaptive loaded SVD-STBC system, and the comparison shows that
the proposed algorithms ensure better performance in the MIMO case.
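Discrete bit loading in the Chow/Campello family can be illustrated with a greedy incremental allocation that always gives the next bit to the subcarrier where it costs the least extra power. The SNR values, the SNR gap gamma, and the incremental-power model are assumptions for illustration, not the paper's exact algorithm.

```python
import heapq

def greedy_bit_loading(snrs, total_bits, gamma=1.0):
    """Allocate total_bits one at a time, always to the subcarrier whose
    next bit costs the least extra power (Campello-style greedy loading).
    Assumed model: incremental power for bit b+1 on SNR g is gamma * 2**b / g."""
    bits = [0] * len(snrs)
    # heap of (incremental power for the next bit, subcarrier index)
    heap = [(gamma / g, i) for i, g in enumerate(snrs)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        _, i = heapq.heappop(heap)
        bits[i] += 1
        # cost of this subcarrier's next bit grows exponentially with bits
        heapq.heappush(heap, (gamma * (2 ** bits[i]) / snrs[i], i))
    return bits

# Strong subcarriers receive more bits than weak ones
print(greedy_bit_loading([10.0, 4.0, 1.0], 6))  # → [4, 2, 0]
```

In the SVD-STBC setting, the SNRs would be derived from the squared singular values of the channel matrix, so bits and power concentrate on the strongest spatial subchannels.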
Abstract: The development and advancement of communication
networks during the last two decades has been directed toward a
single goal: the gradual change from circuit-switched networks to
packet-switched ones. Today, many network operators are trying to
transform the public telephone networks into multipurpose packet
switches. This new achievement is generally called "next generation
networks". In fact, next generation networks enable operators to
carry every kind of service (voice, data and video) on a single
network. In this report, the definition, characteristics and services of
next generation networks are first presented, and then the role of
ad-hoc networks in next generation networks is studied.
Abstract: To provide a better understanding of the fair share policies supported by current production schedulers and their impact on scheduling performance, a relative fair share policy supported in four well-known production job schedulers is evaluated in this study. The experimental results show that fair share indeed prevents heavy-demand users from dominating the system resources. However, the detailed per-user performance analysis shows that some types of users may suffer unfairness under fair share, possibly due to the priority mechanisms used by the current production schedulers. These users are typically not heavy-demand users, but they have a mixture of jobs that do not spread out.