Abstract: Today, global warming, climate change and energy supply are of great concern, as it is widely realized that planet Earth does not provide an infinite capacity for absorbing human industrialization in the 21st century. The aim of this paper is to analyze upstream and downstream electricity production and consumption in selected case studies: a coal power plant, a pump system and a microwave oven, to explore the position of energy efficiency in engineering sustainability. Collectively, the analysis presents energy efficiency as a major pathway towards sustainability, one that requires an inclusive and holistic supply chain response in the engineering design process.
Abstract: Finding minimal logical functions has important applications in the design of logic circuits. This task is solved by many different methods but, frequently, they are not suitable for computer implementation. We briefly summarise the well-known Quine-McCluskey method, which gives a unique computational procedure and thus can be simply implemented but, even for simple examples, does not guarantee an optimal solution. Since the Petrick extension of the Quine-McCluskey method does not give a generally usable method for finding an optimum for logical functions with a high number of values, we focus on the interpretation of the result of the Quine-McCluskey method and show that it represents a set covering problem which, unfortunately, is an NP-hard combinatorial problem. Therefore, it must be solved by heuristic or approximation methods. We propose an approach based on genetic algorithms and show suitable parameter settings.
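The set covering formulation above lends itself to a simple genetic algorithm. The following is a minimal sketch on a hypothetical toy instance (the implicant table and all parameter values are illustrative, not taken from the paper): each chromosome is a bit vector selecting prime implicants, and the fitness heavily penalizes uncovered minterms before rewarding smaller covers.

```python
import random

# Toy instance: each prime implicant maps to the set of minterms it covers.
# (Hypothetical data, not from the paper.)
IMPLICANTS = {
    "A": {0, 1},
    "B": {1, 2, 3},
    "C": {2, 3, 4},
    "D": {4, 5},
    "E": {0, 5},
}
MINTERMS = {0, 1, 2, 3, 4, 5}
NAMES = sorted(IMPLICANTS)

def fitness(bits):
    """Penalize uncovered minterms heavily, then prefer fewer implicants (lower is better)."""
    chosen = [NAMES[i] for i, b in enumerate(bits) if b]
    covered = set().union(*(IMPLICANTS[n] for n in chosen)) if chosen else set()
    return 100 * len(MINTERMS - covered) + len(chosen)

def evolve(pop_size=30, generations=200, mutation=0.1, seed=0):
    """Elitist GA: keep the better half, refill with one-point crossover children."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in NAMES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(NAMES))
            child = a[:cut] + b[cut:]
            if rng.random() < mutation:      # occasional single-bit flip
                i = rng.randrange(len(NAMES))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return [NAMES[i] for i, b in enumerate(best) if b]

cover = evolve()
```

On this instance the minimum cover has three implicants (e.g. A, C, D); the penalty-first fitness ensures validity is always preferred over brevity.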
Abstract: The classic problem of recovering arbitrary values of
a band-limited signal from its samples has an added complication
in software radio applications; namely, the resampling calculations
inevitably fold aliases of the analog signal back into the original
bandwidth. The phenomenon is quantified by the spur-free dynamic
range (SFDR). We demonstrate how a novel application of the Remez (Parks-
McClellan) algorithm permits optimal signal recovery and SFDR, far
surpassing state-of-the-art resamplers.
Abstract: In molecular biology, microarray technology is widely and successfully utilized to efficiently measure gene activity. When working with less-studied organisms, methods to design custom-made microarray probes are available. One design criterion is to select probes with minimal melting temperature variance, thus ensuring similar hybridization properties. If the microarray application focuses on the investigation of metabolic pathways, it is not necessary to cover the whole genome; it is more efficient to cover each metabolic pathway with a limited number of genes. Firstly, an approach is presented which minimizes the overall melting temperature variance of the selected probes for all genes of interest. Secondly, the approach is extended to include the additional constraint of covering all pathways with a limited number of genes while minimizing the overall variance. The new optimization problem is solved by a bottom-up programming approach which reduces the complexity to make it computationally feasible. As an example, the new method is applied to select microarray probes covering all fungal secondary metabolite gene clusters of Aspergillus terreus.
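As an illustration of the first criterion, the sketch below picks one probe per gene so that the variance of the selected melting temperatures is minimal. The Tm values are hypothetical, and brute force is used instead of the paper's bottom-up approach, which is what makes genome-scale inputs feasible.

```python
from itertools import product
from statistics import pvariance

# Hypothetical candidate probe melting temperatures (°C), one list per gene.
CANDIDATES = {
    "geneA": [58.0, 61.5, 64.0],
    "geneB": [59.5, 63.0],
    "geneC": [60.0, 62.5, 65.0],
}

def min_variance_selection(candidates):
    """Pick one probe per gene so the variance of the chosen Tm values is minimal.

    Brute force over all combinations -- fine for a toy instance only."""
    genes = sorted(candidates)
    best = None
    for combo in product(*(candidates[g] for g in genes)):
        v = pvariance(combo)  # population variance of the chosen Tm values
        if best is None or v < best[0]:
            best = (v, dict(zip(genes, combo)))
    return best

variance, selection = min_variance_selection(CANDIDATES)
```

For these candidates the best choice groups the probes tightly around 62-63 °C, the intent behind the "similar hybridization properties" criterion.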
Abstract: The present paper develops and validates a numerical procedure for the calculation of turbulent combusting flow in converging and diverging ducts; through simulation of the heat transfer processes, the production and spread of the NOx pollutant are computed. A marching integration solution procedure employing the TDMA is used to solve the discretized equations. The turbulence model is the Prandtl mixing length method. The combustion process is modeled using the Arrhenius and eddy dissipation methods, and the thermal mechanism is utilized for modeling the formation of nitrogen oxides. The finite difference method and the Genmix numerical code are used for the numerical solution of the equations. Our results indicate the important influence of the limiting diverging angle of the diffuser on the pressure recovery coefficient. Moreover, because NOx formation depends strongly on the maximum temperature in the domain, the NOx level reaches its maximum there as well.
Abstract: Data Mining aims at discovering knowledge out of
data and presenting it in a form that is easily comprehensible to
humans. One of its useful applications in Egypt is cancer
management, especially the management of Acute Lymphoblastic
Leukemia (ALL), which is the most common type of cancer in
children.
This paper discusses the process of designing a prototype that can
help in the management of childhood ALL, which has a great
significance in the health care field. Besides, it has a social impact
on decreasing the incidence of the disease in children in Egypt. It also
provides valuable information about the distribution and
segmentation of ALL in Egypt, which may be linked to the possible
risk factors.
Undirected Knowledge Discovery is used since, in the case of this
research project, there is no target field as the data provided is
mainly subjective. This is done in order to quantify the subjective
variables. Therefore, the computer will be asked to identify
significant patterns in the provided medical data about ALL. This
may be achieved through collecting the data necessary for the
system, determining the data mining technique to be used for the
system, and choosing the most suitable implementation tool for the
domain.
The research makes use of a data mining tool, Clementine, to
apply the decision tree technique. We feed it with data extracted from
real-life cases taken from specialized Cancer Institutes. Relevant
medical cases details such as patient medical history and diagnosis
are analyzed, classified, and clustered in order to improve the disease
management.
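As background on the decision tree technique mentioned above, trees are grown by repeatedly choosing the attribute with the highest information gain. A minimal, self-contained sketch follows (the records are toy data invented for illustration, not real patient data):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, label):
    """Entropy reduction from splitting on `attr` -- the criterion a
    decision tree uses to pick its next test node."""
    base = entropy([r[label] for r in rows])
    n = len(rows)
    remainder = 0.0
    for value, count in Counter(r[attr] for r in rows).items():
        subset = [r[label] for r in rows if r[attr] == value]
        remainder += (count / n) * entropy(subset)
    return base - remainder

# Hypothetical toy records (not real patient data).
ROWS = [
    {"age_group": "child", "risk": "high"},
    {"age_group": "child", "risk": "high"},
    {"age_group": "adult", "risk": "low"},
    {"age_group": "adult", "risk": "high"},
]
```

Here splitting on `age_group` separates the two all-"high" child records, so its gain is positive; an uninformative attribute would score zero.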
Abstract: In this paper, a study of the synthetic transmit aperture
method applying Golay coded transmission to medical ultrasound
imaging is presented. Longer coded excitation makes it possible to
increase the total energy of the transmitted signal without increasing
the peak pressure. Moreover, the signal-to-noise ratio and penetration
depth are improved while maintaining high ultrasound image
resolution. In this work, a 128-element linear transducer array with
0.3 mm inter-element spacing, excited by a one-cycle burst and by 8-
and 16-bit Golay coded sequences at a nominal frequency of 4 MHz,
was used. To generate a spherical wave covering the full image region,
a single-element transmit aperture was used and all the elements received
the echo signals. The comparison of 2D ultrasound images of the
tissue mimicking phantom and in vitro measurements of the beef liver
is presented to illustrate the benefits of the coded transmission. The
results were obtained using the synthetic aperture algorithm with
correction of the transmit and receive signals based on the single-element
directivity function.
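The key property exploited by Golay coded excitation is that a complementary pair's aperiodic autocorrelations sum to a delta, so range sidelobes cancel after matched filtering of the two firings. A small sketch using the standard recursive construction (independent of the array hardware described above):

```python
def golay_pair(n_bits):
    """Recursively build a Golay complementary pair of length n_bits
    (a power of two) via a -> a||b, b -> a||(-b)."""
    a, b = [1], [1]
    while len(a) < n_bits:
        a, b = a + b, a + [-x for x in b]
    return a, b

def acf(seq, lag):
    """Aperiodic autocorrelation at a non-negative lag."""
    return sum(seq[i] * seq[i + lag] for i in range(len(seq) - lag))

a8, b8 = golay_pair(8)
# Complementarity: acf(a8, k) + acf(b8, k) == 0 for every lag k > 0,
# while at lag 0 the main lobes add up to 2 * 8.
```

This cancellation is why two transmissions per line are required, and it is what preserves axial resolution despite the longer excitation.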
Abstract: Protein 3D structure prediction has always been an
important research area in bioinformatics. In particular, the
prediction of secondary structure has been a well-studied research
topic. Despite the recent breakthrough of combining multiple
sequence alignment information and artificial intelligence algorithms
to predict protein secondary structure, the Q3 accuracy of various
computational prediction algorithms rarely has exceeded 75%. In a
previous paper [1], this research team presented a rule-based method
called RT-RICO (Relaxed Threshold Rule Induction from Coverings)
to predict protein secondary structure. The average Q3 accuracy on
the sample datasets using RT-RICO was 80.3%, an improvement
over comparable computational methods. Although this demonstrated
that RT-RICO might be a promising approach for predicting
secondary structure, the algorithm's computational complexity and
program running time limited its use. Herein a parallelized
implementation of a slightly modified RT-RICO approach is
presented. This new version of the algorithm facilitated the testing of
a much larger dataset of 396 protein domains [2]. Parallelized
RT-RICO achieved a Q3 score of 74.6%, which is higher than the
consensus prediction accuracy of 72.9% that was achieved for the
same test dataset by a combination of four secondary structure
prediction methods [2].
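For reference, the Q3 score reported above is simply per-residue three-state accuracy. A minimal sketch (the sequences shown are hypothetical):

```python
def q3(predicted, actual):
    """Q3: fraction of residues whose three-state secondary structure
    (H: helix, E: strand, C: coil) is predicted correctly."""
    if len(predicted) != len(actual):
        raise ValueError("sequences must be aligned and of equal length")
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Example: 3 of 4 residues match, so Q3 = 0.75.
score = q3("HHEC", "HHEE")
```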
Abstract: Due to the recovering global economy, enterprises are
increasingly focusing on logistics. Investing in logistic measures for
production generates great potential for achieving a good starting
point within a competitive field. Unlike during the global economic
crisis, enterprises are now challenged with investing available capital
to maximize profits. In order to be able to create an informed and
quantifiably comprehensible basis for a decision, enterprises need an
adequate model for logistically and monetarily evaluating measures
in production. The Collaborative Research Centre 489 (SFB 489) at the
Institute for Production Systems (IFA) developed a Logistic
Information System which provides support in making decisions and
is designed specifically for the forging industry. The aim of a
follow-up project, for which funding has been applied, is to transfer
this approach in order to develop a universal method for logistically
and monetarily evaluating measures in production.
Abstract: This paper presents a system for discovering
association rules from collections of unstructured documents called
EART (Extract Association Rules from Text). The EART system
processes only text, not images or figures. EART discovers association
rules amongst keywords labeling the collection of textual documents.
The main characteristic of EART is that the system integrates XML
technology (to transform unstructured documents into structured
documents) with Information Retrieval scheme (TF-IDF) and Data
Mining technique for association rules extraction. EART depends on
word feature to extract association rules. It consists of four phases:
structure phase, index phase, text mining phase and visualization
phase. Our work relies on analyzing the keywords in the extracted
association rules, distinguishing keywords that co-occur within one
sentence of the original text from keywords that appear in the text
without such sentence-level co-occurrence. Experiments were carried
out on a collection of scientific documents selected from MEDLINE
related to the outbreak of the H5N1 avian influenza virus.
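The weighting and co-occurrence analysis described above can be sketched as follows. The documents are toy examples, and the exact TF-IDF variant and tokenization used by EART are assumptions here:

```python
import math
import re
from collections import Counter

# Toy document collection (hypothetical; EART itself indexes MEDLINE abstracts).
DOCS = [
    "H5N1 spreads among birds. Vaccines against H5N1 are studied.",
    "Influenza vaccines reduce outbreaks. Birds carry influenza.",
    "The outbreak of avian influenza alarmed farmers.",
]

def tf_idf(docs):
    """Classic TF-IDF weighting over lowercase word tokens, one dict per document."""
    tokenized = [re.findall(r"[a-z0-9]+", d.lower()) for d in docs]
    n = len(docs)
    df = Counter(w for toks in tokenized for w in set(toks))
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({w: (tf[w] / len(toks)) * math.log(n / df[w]) for w in tf})
    return weights

def sentence_cooccurrence(docs, a, b):
    """Count the sentences in which keywords a and b co-occur."""
    count = 0
    for d in docs:
        for sent in re.split(r"[.!?]", d):
            toks = set(re.findall(r"[a-z0-9]+", sent.lower()))
            if a in toks and b in toks:
                count += 1
    return count

W = tf_idf(DOCS)
```

Rare keywords such as "h5n1" receive a higher weight than widespread ones, and the co-occurrence count supplies the sentence-level evidence behind a candidate rule.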
Abstract: Authentication of multimedia content has gained much attention in recent times. In this paper, we propose a secure semi-fragile watermarking scheme, with a choice of two watermarks to be embedded. The technique operates in the integer wavelet domain and makes use of semi-fragile watermarks to achieve better robustness. A self-recovering algorithm is employed that hides the image digest in selected wavelet subbands to detect possible malicious object manipulation of the image (object replacement and/or deletion). The semi-fragility makes the scheme tolerant of JPEG lossy compression down to a quality factor of 70%, while locating the tampered area accurately. In addition, the system ensures greater security because the embedded watermarks are protected with private keys. The computational complexity is reduced by using a parameterized integer wavelet transform. Experimental results show that the proposed scheme guarantees the safety of the watermark, image recovery, and accurate localization of the tampered area.
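For illustration, the integer wavelet transforms used in such schemes are built from lifting steps that stay in integer arithmetic and are exactly invertible, which is what allows lossless embedding and recovery. A one-step integer Haar (S-transform) sketch follows; the parameterized transform in the paper is more general:

```python
def iwt_pair(a, b):
    """One lifting step of the integer Haar (S) transform: returns the
    floor-average (low-pass) and difference (high-pass) of two samples,
    using only integer arithmetic."""
    h = a - b            # high-pass: difference
    l = b + (h >> 1)     # low-pass: floor((a + b) / 2)
    return l, h

def inverse_iwt_pair(l, h):
    """Exact integer inverse of iwt_pair: recovers the original samples."""
    b = l - (h >> 1)
    a = b + h
    return a, b

# Round trip on an arbitrary pixel pair: (7, 3) -> (5, 4) -> (7, 3).
pair = inverse_iwt_pair(*iwt_pair(7, 3))
```

Because every step is invertible in integers, watermark bits hidden in the transform coefficients can be removed without any rounding loss.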
Abstract: Over the past few years, XML (eXtensible Mark-up
Language) has emerged as the standard for information
representation and data exchange over the Internet. This paper
provides a kick-start for new researchers venturing into the field of
XML databases. We survey the storage representations for XML
documents and review XML query processing and optimization
techniques with respect to the particular storage instance. Various
optimization technologies have been developed to solve the query
retrieval and updating problems. In recent years, most researchers
have proposed hybrid optimization techniques. A hybrid system opens the
possibility of covering each technology's weaknesses with its strengths.
This paper reviews the advantages and limitations of optimization
techniques.
Abstract: Background: Widespread use of chemotherapeutic
drugs in the treatment of cancer has led to higher health hazards
among employees who handle and administer such drugs, so nurses
should know how to protect themselves, their patients and their work
environment against the toxic effects of chemotherapy. The aim of this
study was to examine the effect of a chemotherapy safety protocol
for oncology nurses on their protective measure practices. Design: A
quasi-experimental research design was utilized. Setting: The study
was carried out in the oncology department of Menoufia University
Hospital and the Tanta Oncology Treatment Center. Sample: A
convenience sample of forty-five nurses in the Tanta Oncology
Treatment Center and eighteen nurses in the Menoufia oncology department.
Tools: I: an interviewing questionnaire covering sociodemographic
data and assessing the unit and the nurses' knowledge about
chemotherapy. II: an observational checklist to assess nurses' actual
practices of handling and administration of chemotherapy. Baseline
data were assessed before implementing the chemotherapy safety
protocol; the protocol was then implemented, and after two months
the nurses were assessed again. Results: revealed that 88.9%
of study group I and 55.6% of study group II improved to good total
knowledge scores after education on the safety protocol; also, 95.6%
of study group I and 88.9% of study group II had good total practice
score after education on the safety protocol. Moreover, less than half
of group I (44.4%) reported that heavy workload was their main
barrier, while the majority of group II (94.4%) reported many barriers
to adhering to the safety protocol, such as not knowing the protocol,
heavy workload, and inadequate equipment.
Conclusions: The safety protocol for oncology nurses seemed to have
a positive effect on improving nurses' knowledge and practice.
Recommendation: a chemotherapy safety protocol should be instituted
for all oncology nurses working in any oncology unit and/or
center to enhance compliance, and this protocol should be repeated at
frequent intervals.
Abstract: In this paper, a cloud resource broker using goal-based
requests in a medical application is proposed. To handle the recent
huge production of digital images and data in medical informatics
applications, the cloud resource broker can be used by medical
practitioners to properly discover and select the correct
information and applications. This paper summarizes several
reviewed articles relating medical informatics applications to
current broker technology and presents research applying
goal-based requests in a cloud resource broker to optimize the use of
resources in a cloud environment. The objective of proposing a new
kind of resource broker is to enhance the current resource
scheduling, discovery, and selection procedures. We believe that
it could help to maximize resource allocation in medical
informatics applications.
Abstract: Gradual patterns have been studied for many years as
they contain precious information. They have been integrated in
many expert systems and rule-based systems, for instance to reason
on knowledge such as “the greater the number of turns, the greater
the number of car crashes”. In many cases, this knowledge has been
considered as a rule “the greater the number of turns → the greater
the number of car crashes”. Historically, works have thus been
focused on the representation of such rules, studying how implication
could be defined, especially fuzzy implication. These rules were
defined by experts who were in charge of describing the systems they
were working on in order to make them operate automatically. More
recently, approaches have been proposed in order to mine databases
for automatically discovering such knowledge. Several approaches
have been studied, the main scientific topics being: how to determine
what a relevant gradual pattern is, and how to discover them as
efficiently as possible (in terms of both memory and CPU usage).
However, in some cases, end-users are not interested in raw, low-level
knowledge, but rather in trends. Moreover, it may be
the case that no relevant pattern can be discovered at a low level of
granularity (e.g. city), whereas some can be discovered at a higher
level (e.g. county). In this paper, we thus extend gradual pattern
approaches in order to consider multiple level gradual patterns. For
this purpose, we consider two aggregation policies, namely
horizontal and vertical.
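One common way to quantify a gradual pattern such as "the greater the number of turns, the greater the number of car crashes" is the fraction of concordant object pairs, sketched below on hypothetical city-level data (the exact support definition varies between approaches):

```python
from itertools import combinations

def gradual_support(rows, attr_a, attr_b):
    """Support of 'the greater A, the greater B' as the fraction of object
    pairs that vary in the same direction on both attributes."""
    pairs = list(combinations(rows, 2))
    concordant = sum(
        1 for r, s in pairs
        if (r[attr_a] - s[attr_a]) * (r[attr_b] - s[attr_b]) > 0
    )
    return concordant / len(pairs)

# Hypothetical city-level data: turns in the road network vs. crash counts.
CITIES = [
    {"turns": 3, "crashes": 10},
    {"turns": 5, "crashes": 14},
    {"turns": 8, "crashes": 20},
    {"turns": 2, "crashes": 9},
]
```

Aggregating such rows to a coarser level (e.g. summing cities into counties) before computing the support is exactly where the horizontal and vertical aggregation policies of the paper come into play.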
Abstract: The present study was conducted primarily to address two major research gaps: firstly, the development of an empirical measure of life meaningfulness for substance users and, secondly, the determination of the psychosocial determinants of life meaningfulness among substance users. The study is divided into two phases: the first phase dealt with the development of the Life Meaningfulness Scale, and the second phase examined the relationship between life meaningfulness and social support, abstinence self-efficacy and depression. Both qualitative and quantitative approaches were used for framing items. A principal component analysis yielded three components: Overall Goal Directedness, Striving for a Healthy Lifestyle and Concern for Loved Ones, which collectively accounted for 42.06% of the total variance. The scale and its subscales were also found to be highly reliable. Multiple regression analyses in the second phase of the study revealed that social support and abstinence self-efficacy significantly predicted life meaningfulness among 48 recovering inmates of a de-addiction center, while level of depression failed to predict life meaningfulness.
Abstract: Pairwise testing, which requires that every
combination of valid values of each pair of system factors be covered
by at least one test case, plays an important role in software testing
since many faults are caused by unexpected 2-way interactions among
system factors. Although meta-heuristic strategies like simulated
annealing can generally discover smaller pairwise test suites, they
may require more search time than greedy algorithms.
We propose a new method, improved Extremal Optimization (EO)
based on the Bak-Sneppen (BS) model of biological evolution, for
constructing pairwise test suites, and define a fitness function
according to the requirements of improved EO. Experimental results
show that improved EO gives pairwise test suites of similar size and
yields an 85% reduction in solution time over SA.
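A natural fitness function for such search-based pairwise test generation is the number of 2-way interactions a candidate suite leaves uncovered. Below is a minimal sketch on a hypothetical 2x2x3 system; the paper's actual fitness definition may differ:

```python
from itertools import combinations, product

# Hypothetical system: three factors with their valid values.
FACTORS = [["a1", "a2"], ["b1", "b2"], ["c1", "c2", "c3"]]

def uncovered_pairs(suite):
    """Count factor-value pairs not covered by any test -- a fitness
    measure for pairwise test generation (lower is better, 0 is a
    complete pairwise suite)."""
    required = set()
    for (i, vi), (j, vj) in combinations(enumerate(FACTORS), 2):
        for a, b in product(vi, vj):
            required.add((i, a, j, b))
    covered = set()
    for test in suite:
        for i, j in combinations(range(len(FACTORS)), 2):
            covered.add((i, test[i], j, test[j]))
    return len(required - covered)

# A hand-built suite of 6 tests that covers all 16 required pairs.
SUITE = [
    ("a1", "b1", "c1"), ("a1", "b2", "c2"), ("a1", "b2", "c3"),
    ("a2", "b2", "c1"), ("a2", "b1", "c2"), ("a2", "b1", "c3"),
]
```

A search strategy (EO, simulated annealing, or greedy) then tries to drive this count to zero with as few tests as possible.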
Abstract: The MATCH project [1] entails the development of an
automatic diagnosis system that aims to support the treatment of
colon cancer by discovering mutations that occur in tumour
suppressor genes (TSGs) and contribute to the development of
cancerous tumours. The constitution of the system is based on a)
colon cancer clinical data and b) biological information that will be
derived by data mining techniques from genomic and proteomic
sources. The core mining module will consist of popular, well-
tested hybrid feature extraction methods and new combined
algorithms designed especially for the project. Elements of rough
sets, evolutionary computing, cluster analysis, self-organizing maps
and association rules will be used to discover associations
between genes and their influence on tumours [2]-[11].
The methods used to process the data have to address their high
complexity, potential inconsistency and problems of dealing with the
missing values. They must integrate all the useful information
necessary to answer the expert's question. For this purpose, the system
has to learn from data, or allow a domain specialist to interactively
specify, the part of the knowledge structure it needs to answer a
given query. The program should also take into account the
importance/rank of the particular parts of the data it analyses, and
adjust the algorithms used accordingly.
Abstract: Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network, to perform unsupervised clustering, and to support the entire data mining process up to results visualization. A graphical representation helps the user find a strategy to optimize classification by adding, moving or deleting a neuron in order to change the number of classes. The tool is also able to automatically suggest a strategy for optimizing the number of classes. The tool is used to classify macroeconomic data reporting the most developed countries' imports and exports. It is possible to classify the countries based on their economic behaviour and to use an ad hoc tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation.
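A minimal sketch of the competitive learning at the heart of such a Kohonen-style tool follows. The data and the simplified neighbourhood schedule are hypothetical; the actual simulator additionally supports adding, moving and deleting neurons interactively:

```python
import random

def train_som(data, n_units, epochs=60, lr=0.5, seed=0):
    """Minimal 1-D Kohonen network: the best-matching unit (BMU) moves
    toward each sample; during early epochs its immediate neighbours are
    dragged along at half strength, then the neighbourhood shrinks to 0."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)         # decaying learning rate
        radius = 1 if epoch < epochs // 2 else 0  # shrinking neighbourhood
        for x in data:
            bmu = min(range(n_units),
                      key=lambda u: sum((units[u][d] - x[d]) ** 2
                                        for d in range(dim)))
            for u in range(n_units):
                dist = abs(u - bmu)
                if dist <= radius:
                    h = 1.0 if dist == 0 else 0.5
                    for d in range(dim):
                        units[u][d] += alpha * h * (x[d] - units[u][d])
    return units

def classify(units, x):
    """Assign x to the class (unit index) of its nearest weight vector."""
    return min(range(len(units)),
               key=lambda u: sum((units[u][d] - x[d]) ** 2
                                 for d in range(len(x))))

# Hypothetical 2-D country profiles (e.g. normalized import/export shares)
# forming two clear clusters.
DATA = [(0.10, 0.20), (0.15, 0.10), (0.90, 0.80), (0.85, 0.95)]
UNITS = train_som(DATA, n_units=2)
```

Each trained unit ends up representing one class, so changing the number of units directly changes the number of classes, which is the knob the tool exposes graphically.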
Abstract: In the recent past Learning Classifier Systems have
been successfully used for data mining. A Learning Classifier System
(LCS) is basically a machine learning technique which combines
evolutionary computing, reinforcement learning, supervised or
unsupervised learning and heuristics to produce adaptive systems. A
LCS learns by interacting with an environment from which it
receives feedback in the form of numerical reward. Learning is
achieved by trying to maximize the amount of reward received. All
LCS models, more or less, comprise four main components: a finite
population of condition–action rules, called classifiers; the
performance component, which governs the interaction with the
environment; the credit assignment component, which distributes the
reward received from the environment to the classifiers accountable
for the rewards obtained; the discovery component, which is
responsible for discovering better rules and improving existing ones
through a genetic algorithm. The concatenation of the production rules
in the LCS forms the genotype, and therefore the GA should operate
on a population of classifier systems. This approach is known as the
'Pittsburgh' Classifier Systems. Other LCSs, which perform their GA at
the rule level within a population, are known as 'Michigan' Classifier
Systems. The most predominant representation of the discovered
knowledge is the standard production rules (PRs) in the form of IF P
THEN D. The PRs, however, are unable to handle exceptions and do
not exhibit variable precision. Censored Production Rules
(CPRs), an extension of PRs proposed by Michalski and
Winston, exhibit variable precision and support an efficient
mechanism for handling exceptions. A CPR is an augmented
production rule of the form IF P THEN D UNLESS C, where the
censor C is an exception to the rule. Such rules are employed in
situations in which the conditional statement IF P THEN D holds
frequently and the assertion C holds rarely. By using a rule of this
type we are free to ignore the exception conditions, when the
resources needed to establish its presence are tight or there is simply
no information available as to whether it holds or not. Thus, the IF P
THEN D part of CPR expresses important information, while the
UNLESS C part acts only as a switch and changes the polarity of D
to ~D. In this paper, the Pittsburgh-style LCS approach is used for
the automated discovery of CPRs. An appropriate encoding scheme is
suggested to represent a chromosome consisting of a fixed-size set of
CPRs. Suitable genetic operators are designed for the set of CPRs
and for individual CPRs, and an appropriate fitness function is
proposed that incorporates basic constraints on CPRs. Experimental results are
presented to demonstrate the performance of the proposed learning
classifier system.
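The semantics of a censored production rule IF P THEN D UNLESS C can be sketched directly. The bird/penguin rule below is an illustrative toy example; the representation details in the actual system differ:

```python
def cpr_decide(p, censor, check_censor=True):
    """Evaluate IF P THEN D UNLESS C; returns D's truth value, or None if P fails.

    The censor is a callable because establishing C may be expensive: when
    resources are tight (check_censor=False) it is simply skipped and the
    rule degrades gracefully to the plain production rule IF P THEN D."""
    if not p:
        return None          # premise fails: the rule says nothing
    if check_censor and censor():
        return False         # censor holds: polarity of D flips to ~D
    return True              # D asserted

# Toy rule: IF bird(x) THEN flies(x) UNLESS penguin(x).
is_penguin = lambda: True
is_sparrow = lambda: False
```

Calling `cpr_decide(True, is_sparrow)` asserts D, `cpr_decide(True, is_penguin)` flips it to ~D, and passing `check_censor=False` ignores the exception, exactly the behaviour described for tight resources.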