Abstract: The main aim of this research is to investigate a novel technique for implementing a more natural and intelligent conversation system. Conversation systems are designed to converse like a human as far as their intelligence allows; in this sense they can be seen as an embodiment of Turing's vision. Such systems usually return predetermined answers in a predetermined order, but real conversations abound with uncertainties of various kinds. This research focuses on an integrated natural language processing approach comprising a knowledge-base construction module, a conversation understanding and generation module, and a state manager module. We discuss the effectiveness of this approach based on an experiment.
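To make the state manager idea concrete, a minimal sketch in Python (the slot names and update rule are assumptions; the paper's module is more elaborate):

```python
# A minimal sketch of a conversation state manager of the kind the
# abstract describes; slot names and the update rule are assumptions.
class StateManager:
    """Tracks dialogue state so replies need not follow a fixed order."""
    def __init__(self):
        self.history = []                 # past user turns
        self.slots = {}                   # facts gathered so far

    def update(self, user_utterance, extracted_facts):
        """Record the turn and merge newly understood facts."""
        self.history.append(user_utterance)
        self.slots.update(extracted_facts)

    def missing(self, required):
        """Which facts are still uncertain and worth asking about."""
        return [slot for slot in required if slot not in self.slots]

sm = StateManager()
sm.update("I want to book a table tomorrow", {"date": "tomorrow"})
print(sm.missing(["date", "time", "party_size"]))  # ['time', 'party_size']
```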
Abstract: The aim of this paper is to present a three-step methodology to forecast supply chain demand. In the first step, various data mining techniques are applied in order to prepare data for entering into the forecasting models. In the second step, the modeling step, an artificial neural network and a support vector machine are presented after defining a Mean Absolute Percentage Error (MAPE) index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by using sensitivity analysis. The best forecast from classical forecasting methods (Moving Average, Exponential Smoothing, and Exponential Smoothing with Trend) is obtained on the prepared data, and this forecast is compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network forecasts more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using raw data, and the effectiveness of the clustering analysis is also measured.
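As a concrete illustration of two ingredients named above, a sketch of the MAPE index and simple exponential smoothing (the toy demand series and smoothing constant are assumptions):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: each value blends the latest
    observation with the previous smoothed value."""
    smoothed = [series[0]]                      # initialize with first value
    for y in series[1:]:
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

demand = [120, 135, 128, 150, 142, 160]         # toy demand history
fitted = exponential_smoothing(demand, alpha=0.4)
# one-step-ahead forecast for period t is the smoothed value at t-1
print(f"MAPE: {mape(demand[1:], fitted[:-1]):.2f}%")
```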
Abstract: The turbulent mixing of coolant streams of different temperature and density can cause severe temperature fluctuations in piping systems in nuclear reactors. In certain periodic contraction cycles these conditions lead to thermal fatigue. The resulting aging effect prompts investigation into how the mixing of flows over a sharp temperature/density interface evolves. To study the fundamental turbulent mixing phenomena in the presence of density gradients, isokinetic (shear-free) mixing experiments are performed in a square channel with Reynolds numbers ranging from 2,500 to 60,000. Sucrose is used to create the density difference. A Wire Mesh Sensor (WMS) is used to determine the concentration map of the flow in the cross section. The mean interface width as a function of velocity, density difference and distance from the mixing point is analyzed using traditional methods chosen for the purposes of atmospheric/oceanic stratification analyses. A definition of the mixing layer thickness more appropriate to thermal fatigue, based on mixedness, is devised. This definition shows that the thermal fatigue risk assessed using simple mixing layer growth can be misleading, and why an approach that separates the effects of large scale (turbulent) and small scale (molecular) mixing is necessary.
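As an illustration of a traditional interface-width measurement of the kind referred to above, a sketch that extracts a 10-90% width from a mean concentration profile (the profile shape and thresholds are assumptions, not the paper's exact method):

```python
import numpy as np

# Toy vertical profile of normalized scalar concentration across the
# density interface, as a Wire Mesh Sensor column might resolve it.
z = np.linspace(0.0, 1.0, 200)                  # channel height (normalized)
c = 0.5 * (1 + np.tanh((z - 0.5) / 0.05))       # assumed smooth interface

def interface_width(z, c, lo=0.1, hi=0.9):
    """Width of the mixing layer: distance between the heights where
    the mean concentration crosses the lo and hi thresholds."""
    z_lo = np.interp(lo, c, z)                   # c is monotonic here
    z_hi = np.interp(hi, c, z)
    return abs(z_hi - z_lo)

print(f"10-90% interface width: {interface_width(z, c):.3f}")
```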
Abstract: While the problem-based learning (PBL) approach promotes unsupervised self-directed learning (SDL), many students experience difficulty juggling the roles of information recipient and information seeker. Logbooks have been used to assess trainee doctors but not in other areas. This study aimed to determine the effectiveness of logbooks for assessing SDL during PBL sessions in first-year medical students. The logbook included a learning checklist and knowledge and skills components. Comparison of the baseline assessment of student performance in PBL with that at semester end, after the logbook intervention, showed significant improvements in student performance (31.5 ± 8 vs. 17.7 ± 4.4; p
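For readers who want to reproduce such a comparison, a sketch of a two-sample t-test from the reported summary statistics (the group size n=50 is a purely hypothetical assumption; the abstract does not report it):

```python
from scipy.stats import ttest_ind_from_stats

# Post-intervention vs. baseline PBL scores (mean ± SD from the abstract).
# Group sizes are NOT reported; n=50 per group is a hypothetical value
# used only to make the example run.
t, p = ttest_ind_from_stats(mean1=31.5, std1=8.0, nobs1=50,
                            mean2=17.7, std2=4.4, nobs2=50)
print(f"t = {t:.2f}, p = {p:.2e}")
```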
Abstract: The paper reviews the relationship between spatial
and transportation planning in the Southern African Development
Community (SADC) region of Sub-Saharan Africa. It argues that
urbanisation in the region has largely occurred subsequent to
the 1950s and, accordingly, urban development has been
profoundly and negatively affected by the (misguided) spatial and
institutional tenets of modernism. It demonstrates how a
considerable amount of the poor performance of these settlements
can be directly attributed to this. Two factors in particular about the
planning systems are emphasized: the way in which programmatic
land-use planning lies at the heart of both spatial and transportation
planning; and the way in which transportation and spatial planning
have been separated into independent processes. In the final
section, the paper identifies ways of improving the planning
system. Firstly, it identifies the performance qualities which
Southern African settlements should be seeking to achieve.
Secondly, it focuses on two necessary arenas of change: the need to
replace programmatic land-use planning practices with structural-spatial
approaches; and it makes a case for making urban corridors
a spatial focus of integrated planning, as a way of beginning the
restructuring and intensification of settlements which are currently
characterised by sprawl, fragmentation and separation.
Abstract: One of the criteria in production scheduling is makespan; minimizing this criterion leads to more efficient use of resources, especially machinery and manpower. By assigning budget to some of the operations, the operation times of these activities are reduced, which affects the total completion time of all operations (the makespan). In this paper this issue is studied in parallel flow shops. We first convert the parallel flow shop to a network model; then, using a linear programming approach, we identify which activities (operations) should absorb the predetermined, limited budget in order to minimize the makespan (the completion time of the network). Minimizing the total completion time of all activities in the network is equivalent to minimizing the makespan in production scheduling.
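A minimal sketch of such a linear programming formulation on a toy two-activity network (durations, crash limits, unit costs and the budget are illustrative assumptions):

```python
from scipy.optimize import linprog

# Tiny chain: start -> node 1 -> end. Activity A (5 days, crashable by
# up to 2 at cost 1/day) precedes activity B (4 days, crashable by up
# to 1 at cost 2/day); total crashing budget is 3.
# Variables: [t1, t2, xA, xB]  (event times; crash amounts per activity)
c = [0, 1, 0, 0]                    # minimize t2 = network completion time
A_ub = [[-1,  0, -1,  0],           # t1 >= 5 - xA
        [ 1, -1,  0, -1],           # t2 - t1 >= 4 - xB
        [ 0,  0,  1,  2]]           # budget: 1*xA + 2*xB <= 3
b_ub = [-5, -4, 3]
bounds = [(0, None), (0, None), (0, 2), (0, 1)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(f"minimum makespan: {res.fun:.1f}, crash amounts: {res.x[2:]}")
```

The LP chooses where the budget buys the largest reduction of the network completion time, which is exactly the makespan of the schedule.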
Abstract: NFκB activation plays a crucial role in the anti-apoptotic response to apoptotic signaling during tumor necrosis factor (TNFα) stimulation in Multiple Myeloma (MM). Although several drugs have been found effective for the treatment of MM, mainly by inhibiting the NFκB pathway, there are no quantitative or qualitative comparative assessments of the inhibition effects of different single drugs or drug combinations. Computational modeling is becoming increasingly indispensable for applied biological research, mainly because it provides strong quantitative predictive power. In this study, a novel computational pathway modeling approach is employed to comparatively assess the inhibition effects of specific single drugs and drug combinations on the NFκB pathway in MM, and especially to predict synergistic drug combinations.
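To make the modeling idea concrete, a toy ordinary-differential-equation sketch of drug inhibition of NFκB activation (the kinetics, rate constants and additive combination are illustrative assumptions, not the paper's pathway model):

```python
import numpy as np
from scipy.integrate import odeint

# Toy one-species model: a stimulus activates NFkB; a drug lowers the
# activation rate by a fractional inhibition term.
def nfkb(y, t, k_act=1.0, k_deg=0.5, inhibition=0.0):
    nfkb_active = y[0]
    act = k_act * (1.0 - inhibition)          # drug scales activation down
    return [act - k_deg * nfkb_active]

t = np.linspace(0, 10, 100)
for label, inh in [("no drug", 0.0), ("drug A", 0.5),
                   ("drugs A + B (assumed additive)", 0.8)]:
    y = odeint(nfkb, [0.0], t, args=(1.0, 0.5, inh))
    print(f"{label}: steady-state active NFkB ~ {y[-1, 0]:.2f}")
```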
Abstract: Information is increasing in volume; companies are so overloaded with information that they may lose track of the information they intend to retrieve. It is a time-consuming task to scan through each lengthy document. A shorter version of the document containing only the gist is more favourable for most information seekers. Therefore, in this paper, we implement a text summarization system to produce summaries that contain the gist of oil and gas news articles. The summarization is intended to provide important information that helps oil and gas companies monitor their competitors' behaviour and formulate business strategies. The system integrates a statistical approach with three underlying concepts: keyword occurrence, the title of the news article, and the location of the sentence. The generated summaries were compared with human-generated summaries from an oil and gas company. Precision and recall ratios are used to evaluate the accuracy of the generated summaries. Based on the experimental results, the system is able to produce an effective summary, with an average recall of 83% at a compression rate of 25%.
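A minimal sketch of the three-concept scoring scheme (the weights and the scoring formula are assumptions; the paper's exact statistical formulation may differ):

```python
import re
from collections import Counter

def summarize(text, title, ratio=0.25):
    """Score sentences by keyword frequency, overlap with the title and
    position in the article; keep the top fraction, in original order."""
    sents = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    title_words = set(re.findall(r'[a-z]+', title.lower()))

    def score(sent, idx):
        toks = re.findall(r'[a-z]+', sent.lower())
        kw = sum(freq[t] for t in toks) / max(len(toks), 1)   # keyword weight
        ti = len(title_words & set(toks))                     # title overlap
        loc = 1.0 / (idx + 1)                                 # earlier = better
        return kw + 2.0 * ti + loc                            # assumed weights

    ranked = sorted(enumerate(sents), key=lambda p: score(p[1], p[0]),
                    reverse=True)
    keep = max(1, int(len(sents) * ratio))                    # compression rate
    return ' '.join(s for _, s in sorted(ranked[:keep]))
```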
Abstract: Background: DIALIGN is a DNA/protein alignment tool for performing pairwise and multiple alignments through the comparison of gap-free segments (fragments) between sequence pairs. An alignment of two sequences is a chain of fragments, i.e. local gap-free pairwise alignments, with the highest total score. Method: A new approach is defined in this article which relies on using three-dimensional fragments, i.e. local three-way alignments, in the alignment process instead of two-dimensional ones. These three-dimensional fragments are gap-free alignments consisting of equal-length segments belonging to three distinct sequences. Results: The obtained results show good improvements over the performance of DIALIGN.
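A toy illustration of the three-way fragment idea (a minimal Python sketch; the identity-count scoring is an assumption, and DIALIGN's actual fragment weights are more elaborate):

```python
# Score a gap-free three-way fragment: equal-length windows taken from
# three sequences, scored by pairwise column matches.
def threeway_fragment_score(s1, s2, s3, i, j, k, length):
    """Score of the fragment s1[i:i+length] / s2[j:j+length] / s3[k:k+length]."""
    score = 0
    for a, b, c in zip(s1[i:i+length], s2[j:j+length], s3[k:k+length]):
        score += (a == b) + (a == c) + (b == c)   # matches per column
    return score

print(threeway_fragment_score("GATTACA", "GACTACA", "GATTTCA", 0, 0, 0, 7))
```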
Abstract: Developing countries face a problem of slums, and there appears to be no foolproof solution to eradicate them. Of the three approaches to slum development for improving the quality of life, the in-situ upgradation approach has been found to be the best, while the relocation approach has proved a failure. The basic aim of this paper is to assess the factors responsible for the failure of relocation projects: loss of livelihood, lack of security of tenure, and inefficiency of the government. These factors are traced and mapped from examples of Western and Indian cities. The National Habitat and Resettlement Policy emphasized the relationship between shelter and workplace. The SRA has identified 55 slums for relocation owing to reservation of land uses, security of tenure, and the non-notified status of slums. Policy guidelines are suggested for successful relocation projects. Keywords: Livelihood, Relocation, Slums, Urban poor.
Abstract: In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients prior to SPIHT encoding in order to reach a targeted bit rate with improved perceptual quality, given a fixation point which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS); this metric plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on (1) foveation masking, to remove or reduce high frequencies in peripheral regions; (2) luminance and contrast masking; and (3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder demonstrates very good performance in terms of quality measurement.
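A minimal sketch of per-subband perceptual weighting before embedded zerotree coding (the weights are invented placeholders; the paper derives them from foveation, luminance/contrast masking and the CSF):

```python
import numpy as np
import pywt

# A stand-in test image; the assumed weights below favor coarse subbands
# over fine detail, mimicking a CSF-style decomposition weighting.
image = np.random.rand(64, 64)
coeffs = pywt.wavedec2(image, 'db4', level=3)     # [cA3, d3, d2, d1]

weighted = [coeffs[0]]                            # approximation kept as-is
for w, (cH, cV, cD) in zip([0.9, 0.6, 0.3], coeffs[1:]):
    weighted.append((cH * w, cV * w, cD * w))     # coarse -> fine details

recon = pywt.waverec2(weighted, 'db4')
print(recon.shape)                                # weighted image, same size
```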
Abstract: By introducing the concept of an Oracle, we propose an approach for improving the performance of genetic algorithms on large-scale asymmetric Traveling Salesman Problems. The results show that the proposed approach overcomes some traditional problems in creating efficient genetic algorithms.
Abstract: Increasing growth of information volume in the
internet causes an increasing need to develop new (semi)automatic
methods for retrieval of documents and ranking them according to
their relevance to the user query. In this paper, after a brief review
on ranking models, a new ontology based approach for ranking
HTML documents is proposed and evaluated in various
circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves the precision of ranking without losing speed. Our approach exploits natural language processing techniques to extract phrases from documents and the query and to perform stemming on words. Then
an ontology based conceptual method will be used to annotate
documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done flexibly and in various respects. The annotated documents and the
expanded query will be processed to compute the relevance degree
exploiting statistical methods. The outstanding features of our
approach are (1) combining conceptual, statistical and linguistic
features of documents, (2) expanding the query with its related
concepts before comparing to documents, (3) extracting and using
both words and phrases to compute relevance degree, (4) improving
the spread activation algorithm to do the expansion based on
weighted combination of different conceptual relationships and (5)
allowing variable document vector dimensions. A ranking system
called ORank is developed to implement and test the proposed
model. The test results are included at the end of the paper.
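To illustrate the expansion step, a minimal sketch of weighted spread activation over a concept graph (the graph, relation weights, decay and threshold are illustrative assumptions, not ORank's ontology or parameters):

```python
# concept -> [(neighbor, relationship weight)]
graph = {
    "car":     [("vehicle", 0.9), ("engine", 0.7)],
    "vehicle": [("transport", 0.8)],
    "engine":  [("fuel", 0.6)],
}

def spread_activation(seeds, graph, decay=0.5, threshold=0.1):
    """Propagate activation from query concepts through weighted relations;
    energy fades with each hop and weak signals are cut off."""
    activation = dict(seeds)                       # concept -> energy
    frontier = list(seeds.items())
    while frontier:
        concept, energy = frontier.pop()
        for neighbor, weight in graph.get(concept, []):
            passed = energy * weight * decay
            if passed > threshold:
                activation[neighbor] = activation.get(neighbor, 0.0) + passed
                frontier.append((neighbor, passed))
    return activation

print(spread_activation({"car": 1.0}, graph))
# expands the query 'car' with 'vehicle', 'engine', 'transport', 'fuel'
```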
Abstract: In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient, yielding better recognition results, and outperforming the common DCT technique of face recognition. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results change with face image resolution and are optimal at a certain resolution level. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level which provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A tradeoff between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, including the ORL, Yale and EME color databases.
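A minimal sketch of the two-stage reduction, decimation followed by retention of low-to-mid frequency DCT coefficients (the decimation factor and coefficient count are illustrative assumptions):

```python
import numpy as np
from scipy.fft import dctn

def extract_features(face, decimation=2, n_coeffs=64):
    """Decimate the image, take the 2-D DCT, keep low-to-mid frequencies."""
    small = face[::decimation, ::decimation]         # simple image decimation
    coeffs = dctn(small, norm='ortho')               # 2-D DCT
    # Zig-zag-like selection: coefficients with small (row + col) index
    # are the low-to-mid frequencies carrying most facial information.
    rows, cols = np.indices(coeffs.shape)
    order = np.argsort((rows + cols).ravel(), kind='stable')
    return coeffs.ravel()[order][:n_coeffs]

face = np.random.rand(112, 92)                       # ORL images are 112x92
print(extract_features(face).shape)                  # (64,) feature vector
```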
Abstract: Reliability is one of the most important quality attributes of software. Based on the approaches of Reussner and Cheung, we propose a reliability prediction model for component-based software architectures. The value of the model is shown through an experimental evaluation on a web server system.
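A minimal sketch of the Cheung-style calculation that such a model builds on (toy transition probabilities and component reliabilities; the paper's actual model combines Reussner's and Cheung's approaches):

```python
import numpy as np

# Components form a Markov chain over the architecture's control flow;
# each visit to component i succeeds with reliability R[i].
P = np.array([[0.0, 0.7, 0.3],       # control flow between 3 components
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])      # component 3 is the exit
R = np.array([0.99, 0.98, 0.97])     # per-component reliabilities (toy)

Qhat = R[:, None] * P                # a transition happens only on success
S = np.linalg.inv(np.eye(3) - Qhat)  # expected successful visits
system_reliability = S[0, 2] * R[2]  # reach exit from start, then succeed
print(f"predicted system reliability: {system_reliability:.4f}")
```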
Abstract: The purposes of this paper are to (1) promote excellence in computer science by suggesting a cohesive, innovative approach to fill well-documented deficiencies in current computer science education, (2) justify (using the authors' and others' anecdotal evidence from both the classroom and the real world) why this approach holds great potential to successfully eliminate the deficiencies, and (3) invite other professionals to join the authors in proof-of-concept research. The authors' experiences, though anecdotal, strongly suggest that a new approach involving visual modeling technologies should allow computer science programs to retain a greater percentage of prospective and declared majors as students become more engaged learners, more successful problem-solvers, and better prepared as programmers. In addition, the graduates of such computer science programs will make greater contributions to the profession as skilled problem-solvers. Instead of wearily re-memorizing code as they move to the next course, students will have the problem-solving skills to think and work in more sophisticated and creative ways.
Abstract: Measurement of competitiveness between countries or regions is an important topic of many economic analyses and scientific papers. In the European Union (EU), there is no mainstream approach to evaluating and measuring competitiveness. There are many opinions on, and methods for, the measurement and evaluation of competitiveness between states or regions at the national and European levels. The methods differ in the structure of the competitiveness indicators they use and the ways these are processed. The aim of the paper is to analyze the main sources of competitive potential of the EU Member States with the help of Factor Analysis (FA) and to classify the EU Member States into homogeneous units (clusters) according to the similarity of selected indicators of competitiveness factors by Cluster Analysis (CA) in the reference years 2000 and 2011. The theoretical part of the paper is devoted to the fundamentals of competitiveness and the methodology of the FA and CA methods. The empirical part deals with the evaluation of competitiveness factors in the EU Member States and a cluster comparison of the evaluated countries.
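A minimal sketch of the FA-then-CA pipeline (random numbers stand in for the competitiveness indicators; the matrix dimensions, factor count and cluster count are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# 27 member states x 12 indicators (assumed shapes; real data would be
# the selected competitiveness indicators for 2000 or 2011).
rng = np.random.default_rng(0)
X = rng.normal(size=(27, 12))

X_std = StandardScaler().fit_transform(X)              # comparable scales
scores = FactorAnalysis(n_components=3).fit_transform(X_std)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print(clusters)                                        # homogeneous groups
```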
Abstract: This paper proposes a "soft systems" approach to
domain-driven design of computer-based information systems. We
propose a systemic framework combining techniques from Soft
Systems Methodology (SSM), the Unified Modelling Language
(UML), and an implementation pattern known as "Naked Objects".
We have used this framework in action research projects that have
involved the investigation and modelling of business processes using
object-oriented domain models and the implementation of software
systems based on those domain models. Within the proposed
framework, Soft Systems Methodology (SSM) is used as a guiding
methodology to explore the problem situation and to generate a
ubiquitous language (soft language) which can be used as the basis
for developing an object-oriented domain model. The domain model
is further developed using techniques based on the UML and is
implemented in software following the "Naked Objects"
implementation pattern. We argue that there are advantages from
combining and using techniques from different methodologies in this
way.
The proposed systemic framework is overviewed and justified as a multimethodology, using Mingers' multimethodology ideas.
This multimethodology approach is being evaluated through a
series of action research projects based on real-world case studies. A
Peer-Tutoring case study is presented here as a sample of the
framework evaluation process.
Abstract: With the development of the Internet, E-commerce is growing at an exponential rate, and many online stores have been built to sell goods online. A major factor influencing the successful adoption of E-commerce is consumers' trust. For new or unknown Internet businesses, consumers' lack of trust has been cited as a major barrier to proliferation. As web sites provide the key interface for consumer use of E-commerce, we investigate the design of web sites to build trust in E-commerce from a design science approach. A conceptual model is proposed in this paper to describe the ontology of online transactions and human-computer interaction. Based on this conceptual model, we provide a personalized webpage design approach using a Bayesian network learning method. Experimental evaluations are designed to show the effectiveness of web personalization in improving consumers' trust in new or unknown online stores.
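As a rough illustration of the learning step, a sketch that substitutes a naive Bayes classifier for the full Bayesian network (the features, data and this simplification are all assumptions):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Predict whether a visitor will trust the store from binary webpage
# design features (hypothetical):
# columns: [shows_security_seal, has_contact_page, personalized_greeting]
X = np.array([[1, 1, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0],
              [1, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 1]])
y = np.array([1, 1, 0, 0, 1, 0, 0, 1])   # 1 = consumer reported trust

model = BernoulliNB().fit(X, y)
print(model.predict_proba([[1, 1, 1]]))  # trust probability for a design
```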
Abstract: This paper unifies power optimization approaches in
various energy converters, such as: thermal, solar, chemical, and
electrochemical engines, in particular fuel cells. Thermodynamics
leads to converter efficiency and limiting power. Efficiency
equations serve to solve problems of upgrading and downgrading of
resources. While optimization of steady systems applies the
differential calculus and Lagrange multipliers, dynamic optimization
involves variational calculus and dynamic programming. In reacting
systems chemical affinity constitutes a prevailing component of an
overall efficiency, thus the power is analyzed in terms of an active
part of chemical affinity. The main novelty of the present paper in the
energy yield context consists in showing that the generalized heat
flux Q (involving the traditional heat flux q plus the product of
temperature and the sum of products of partial entropies and fluxes of
species) plays in complex cases (solar, chemical and electrochemical)
the same role as the traditional heat q in pure heat engines.
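In symbols (a reconstruction from the verbal definition above, with q the traditional heat flux and s_k, J_k the partial entropy and flux of species k):

```latex
Q = q + T \sum_{k} s_k J_k
```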
The presented methodology is also applied to power limits in fuel
cells, treated as electrochemical flow engines propelled
by chemical reactions. The performance of fuel cells is determined by
magnitudes and directions of participating streams and mechanism of
electric current generation. Voltage lowering below the reversible
voltage is a proper measure of cell imperfection. The voltage losses,
called polarization, include the contributions of three main sources:
activation, ohmic and concentration. Examples show power maxima
in fuel cells and prove the relevance of the extension of the thermal
machine theory to chemical and electrochemical systems. The main
novelty of the present paper in the FC context consists in introducing
an effective or reduced Gibbs free energy change between products p
and reactants s which takes into account the decrease of voltage and
power caused by the incomplete conversion of the overall reaction.
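As a compact summary of the voltage relations described above (standard electrochemical notation assumed: n_e electrons transferred, F the Faraday constant; the polarization decomposition follows the abstract):

```latex
V = V_{\mathrm{rev}} - \eta_{\mathrm{act}} - \eta_{\mathrm{ohm}} - \eta_{\mathrm{conc}},
\qquad
V_{\mathrm{rev}} = \frac{-\Delta G}{n_e F}
```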