Abstract: The ever-growing use of the aspect-oriented development methodology in software engineering requires tool support in both research environments and industry. So far, tool support has been proposed for many activities in aspect-oriented software development in order to automate and facilitate them. For instance, AJaTS provides a transformation system to support aspect-oriented development and refactoring. In particular, it is well established that the abstract interpretation of programs pursued in static analysis, in any paradigm, is best served by a high-level program representation such as the Control Flow Graph (CFG): such analyses can more easily locate common programmatic idioms for which helpful transformations are already known, and the association between the input program and the intermediate representation can be more closely maintained. However, although current research defines sound concepts and foundations, to some extent, for control flow analysis of aspect-oriented programs, it does not provide a concrete tool that can construct the CFG of these programs on its own. Furthermore, most of these works focus on other issues in Aspect-Oriented Software Development (AOSD), such as testing or data flow analysis, rather than on the CFG itself. Therefore, this study is dedicated to building an aspect-oriented control flow graph construction tool called AJcFgraph Builder. The tool can be applied in many software engineering tasks in the context of AOSD, such as software testing, software metrics, and so forth.
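The classical CFG construction that such a tool builds on can be sketched in a few lines: find the leaders of basic blocks, carve the blocks between consecutive leaders, and connect them with fall-through and jump edges. The toy three-address IR below is purely illustrative; the actual tool targets aspect-oriented programs, which this sketch does not attempt to model.

```python
# Minimal sketch of intraprocedural CFG construction for a toy
# three-address IR; illustrative only, not the AJcFgraph Builder.
# Instructions: ("label", name), ("op", text), ("goto", label),
# ("if", cond, label), ("ret",).

def build_cfg(instrs):
    # 1. Leaders: first instruction, every jump target, and every
    #    instruction following a jump.
    labels = {ins[1]: i for i, ins in enumerate(instrs) if ins[0] == "label"}
    leaders = {0}
    for i, ins in enumerate(instrs):
        if ins[0] in ("goto", "if"):
            leaders.add(labels[ins[-1]])
            if i + 1 < len(instrs):
                leaders.add(i + 1)
    starts = sorted(leaders)
    # 2. Basic blocks span from one leader up to the next.
    blocks = []
    for k, s in enumerate(starts):
        e = starts[k + 1] if k + 1 < len(starts) else len(instrs)
        blocks.append((s, e))
    block_of = {s: k for k, (s, e) in enumerate(blocks)}
    # 3. Edges: jump edges plus fall-through edges.
    edges = set()
    for k, (s, e) in enumerate(blocks):
        last = instrs[e - 1]
        if last[0] in ("goto", "if"):
            edges.add((k, block_of[labels[last[-1]]]))
        if last[0] not in ("goto", "ret") and k + 1 < len(blocks):
            edges.add((k, k + 1))
    return blocks, sorted(edges)

prog = [
    ("op", "i=0"),           # B0
    ("label", "loop"),       # B1
    ("if", "i>=n", "end"),   # B1 -> B2 (fall through), B1 -> B3 (jump)
    ("op", "i=i+1"),         # B2
    ("goto", "loop"),        # B2 -> B1
    ("label", "end"),        # B3
    ("ret",),
]
blocks, edges = build_cfg(prog)
```

Running this on the loop above yields four basic blocks with the expected back edge from the loop body to the loop header.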
Abstract: Consumer demand for products with low fat or sugar content and low levels of food additives, as well as cost factors, make exopolysaccharides (EPS) a viable alternative. EPS remain an interesting tool for modulating the sensory properties of yoghurt. This study was designed to evaluate the EPS production potential of commercial yoghurt starter cultures (Yo-Flex starters: Harmony 1.0, TWIST 1.0 and YF-L902, Chr. Hansen, Denmark) and their influence on the apparent viscosity of yoghurt samples. The production of intracellularly synthesized EPS by the different commercial yoghurt starters varied roughly from 144.08 to 440.81 mg/l. The EPS-producing starters showed large variations in EPS concentration and, presumably, composition. TWIST 1.0 produced the greatest amounts of EPS in MRS medium and in yoghurt samples, but no significant contribution to texture development or to the apparent viscosity of the final product was determined. The YF-L902 and Harmony 1.0 starters differed considerably in EPS yields, but not in the apparent viscosities (p>0.05) of the final yoghurts. No correlation between EPS concentration and the viscosity of yoghurt samples was established in the study.
Abstract: In this paper, we develop a spatio-temporal graph as a key component of our knowledge representation scheme. We design an integrated representation scheme to depict not only the present and past but also the future, in parallel with the spaces, in an effective and intuitive manner. The resulting multi-dimensional comprehensive knowledge structure accommodates a multi-layered virtual world that develops over time, maximizing the diversity of situations in the historical context. This knowledge representation scheme is to be used as the basis both for simulating the situations that compose the virtual world and for implementing the knowledge virtual agents use to judge and evaluate those situations. To provide natural contexts for situated learning or simulation games, the virtual stage set by this spatio-temporal graph is to be populated by agents and other objects, interrelated and changing, which are abstracted in the ontology.
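One minimal way to realize such a structure, offered only as an illustrative sketch and not as the paper's actual scheme, is a graph whose nodes are (place, time-layer) pairs, with spatial edges inside a layer and temporal edges linking a place to itself across consecutive layers:

```python
# Illustrative sketch (not the paper's actual scheme): nodes are
# (place, time-layer) pairs; temporal edges link a place to itself in
# the next layer, spatial edges connect places within one layer.

class SpatioTemporalGraph:
    def __init__(self, places, layers):
        self.nodes = {(p, t) for p in places for t in range(layers)}
        self.edges = set()
        for p in places:                    # past -> future links
            for t in range(layers - 1):
                self.edges.add(((p, t), (p, t + 1)))

    def add_spatial_edge(self, a, b, t):
        """Spatial adjacency inside one time layer."""
        self.edges.add(((a, t), (b, t)))

    def neighbours(self, node):
        return {v for u, v in self.edges if u == node}

# hypothetical places and layers, purely for illustration
g = SpatioTemporalGraph(["castle", "market"], layers=3)
g.add_spatial_edge("castle", "market", t=1)
```

A node such as ("castle", 1) then has both a temporal successor ("castle", 2) and a spatial neighbour ("market", 1), which is the "present, past and future in parallel with the spaces" idea in its simplest form.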
Abstract: This paper describes the optimization of a complex
dairy farm simulation model using two quite different methods of
optimization, the Genetic Algorithm (GA) and the Lipschitz
Branch-and-Bound (LBB) algorithm. These techniques have been
used to improve an agricultural system model developed by Dexcel
Limited, New Zealand, which describes a detailed representation of
pastoral dairying scenarios and contains an 8-dimensional parameter
space. The model incorporates the sub-models of pasture growth and
animal metabolism, which are themselves complex in many cases.
Each evaluation of the objective function, a composite 'Farm
Performance Index (FPI)', requires simulation of at least a one-year
period of farm operation with a daily time-step, and is therefore
computationally expensive. The problem of visualization of the
objective function (response surface) in high-dimensional spaces is
also considered in the context of the farm optimization problem.
Adaptations of the Sammon mapping and parallel coordinates visualization are described which help visualize some important properties of the model's output topography. From this study, it is found that the GA requires fewer function evaluations in optimization than the LBB algorithm.
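As a hedged illustration of the first method, here is a minimal real-valued genetic algorithm with tournament selection, blend crossover and Gaussian mutation on a one-dimensional toy objective; the study's actual objective is an expensive 8-dimensional farm simulation, which this sketch does not reproduce.

```python
# Toy real-valued GA maximizing a simple 1-D objective with a known
# optimum at x = 3. Illustrative only; not the Dexcel farm model.
import random

random.seed(42)

def fitness(x):
    # toy objective; the real FPI requires a year-long simulation
    return -(x - 3.0) ** 2

POP, GENS, LOW, HIGH = 30, 80, 0.0, 10.0
pop = [random.uniform(LOW, HIGH) for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    for _ in range(POP):
        a, b = random.sample(pop, 2)          # binary tournament, twice
        p1 = a if fitness(a) > fitness(b) else b
        a, b = random.sample(pop, 2)
        p2 = a if fitness(a) > fitness(b) else b
        w = random.random()
        child = w * p1 + (1 - w) * p2         # blend crossover
        if random.random() < 0.2:             # Gaussian mutation
            child = min(HIGH, max(LOW, child + random.gauss(0, 0.5)))
        nxt.append(child)
    pop = nxt
best = max(pop, key=fitness)
```

After a few dozen generations the population concentrates near the optimum; the point of the comparison in the paper is how many such fitness evaluations each optimizer consumes before this happens.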
Abstract: This paper presents the effects of migration at urban sites, using an integrated model under sustainable local development policies for the conservation and revitalization of site areas, with the Reyhan heritage site in Bursa as a case study. Bursa is known as the "City of Immigrants" because of its rich cultural plurality. The city has always regarded the dynamic impact of immigration as a positive contribution. As a result, the city created some of the earliest urbanization practices, being the first capital city of the Ottoman Empire; Bursa also produced the first modern-movement practices and established the first Organized Industrial Zone. The most important aim of the study is to offer a model for similar areas in the context of conservation and revitalization of historical areas, subject to the integrated sustainable development policies of local governments.
Abstract: A new algorithm called Character-Comparison to Character-Access (CCCA) is developed to test the effect of both 1) converting character comparisons and number comparisons into character accesses and 2) the starting point of checking on the performance of the checking operation in string searching. Experiments are performed using both English text and DNA text of different sizes. The results are compared with five algorithms, namely Naive, BM, Inf_Suf_Pref, Raita, and Cycle. With the CCCA algorithm, the results suggest that the average number of total comparisons is improved by up to 35%. Furthermore, the results suggest that the new CCCA algorithm improves on the clock time required by the other algorithms by between 22.13% and 42.33%.
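The abstract does not give the CCCA algorithm itself, but the evaluation metric it reports, the number of character comparisons, can be illustrated with the Naive baseline it compares against, instrumented to count comparisons:

```python
# Naive character-comparison search instrumented to count character
# comparisons, the metric the paper reports; this is the baseline the
# CCCA algorithm improves upon, not CCCA itself.

def naive_search(text, pattern):
    comparisons = 0
    hits = []
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m:
            comparisons += 1                 # one character comparison
            if text[i + j] != pattern[j]:
                break
            j += 1
        if j == m:
            hits.append(i)                   # full match at position i
    return hits, comparisons

hits, comps = naive_search("ACGTACGT", "ACG")   # DNA-style text
```

On this tiny input the pattern matches at positions 0 and 4 after 10 character comparisons; algorithms in the BM family, and by the paper's account CCCA, reduce such counts substantially on larger texts.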
Abstract: Major Depressive Disorder has become a burden on medical expenditure in Taiwan, as it has around the world. Major Depressive Disorder can be divided into different categories based on prior human annotation. Using machine learning, we can classify the emotion expressed in text in advance, which can help medical diagnosis to recognize variants of Major Depressive Disorder automatically. Association language features capture the characteristics and relationships of words co-occurring in a sentence. Classification, however, suffers from an overlapping-category problem. In this paper, we aim to improve classification performance under the principle of avoiding overlapping categories. We present an approach, called Association Language Features by Category (ALFC), that discovers words in sentences which occur with high frequency within a category yet do not overlap across categories. Experimental results show that ALFC distinguishes Major Depressive Disorder categories well and achieves better performance. We also compare the approach with a baseline and with mutual information, which use single words alone or a correlation measure.
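A minimal sketch of the stated idea, keeping words that are frequent inside one category and absent from every other category, might look as follows; the threshold, categories and documents are illustrative, not the paper's.

```python
# Sketch of the stated ALFC idea: keep words that are frequent inside
# one category and absent from every other category. Thresholds,
# categories and documents are illustrative.
from collections import Counter

def category_features(docs_by_cat, min_count=2):
    # per-category word frequencies
    counts = {c: Counter(w for d in docs for w in d.split())
              for c, docs in docs_by_cat.items()}
    feats = {}
    for c, cnt in counts.items():
        # vocabulary of all other categories
        others = set().union(*(counts[o] for o in counts if o != c))
        feats[c] = {w for w, k in cnt.items()
                    if k >= min_count and w not in others}
    return feats

docs_by_cat = {
    "sad":   ["feel down down", "so down today"],
    "angry": ["so mad mad", "mad again today"],
}
feats = category_features(docs_by_cat)
```

Here "down" survives only for "sad" and "mad" only for "angry", while shared words like "so" and "today" are discarded, which is the non-overlap property the abstract emphasizes.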
Abstract: A DEA model generally evaluates performance using multiple inputs and outputs for the same period. However, production lead time phenomena are sometimes hard to avoid, for example in long-term projects or marketing activities. A couple of models have been suggested to capture this time lag issue in the context of DEA. This paper develops a dual-MPO model to deal with the time lag effect in evaluating efficiency. A numerical example is also given to show that the proposed model can be used to obtain the efficiency and reference set of inefficient DMUs and to obtain projected target values of input attributes for inefficient DMUs to become efficient.
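The dual-MPO model itself is not specified in the abstract, but the single-period baseline that time-lag models extend, a standard input-oriented CCR envelopment LP, can be sketched as follows (the data are illustrative):

```python
# Standard input-oriented CCR envelopment LP, the single-period DEA
# baseline that time-lag models extend; the paper's dual-MPO model is
# not reproduced here. Data are illustrative.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU o. X: inputs (m x n), Y: outputs (s x n)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[np.zeros(n), 1.0]                 # variables: (lambda, theta)
    A_ub = np.vstack([np.c_[X, -X[:, o]],       # X @ lam <= theta * x_o
                      np.c_[-Y, np.zeros(s)]])  # Y @ lam >= y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[-1]

X = np.array([[2.0, 4.0, 3.0]])   # one input, three DMUs A, B, C
Y = np.array([[2.0, 2.0, 3.0]])   # one output
eff_B = ccr_efficiency(X, Y, 1)   # B produces A's output with twice the input
```

DMU B comes out with efficiency 0.5 against the frontier spanned by A and C; the dual solution of the same LP yields the reference set and projected input targets the abstract mentions.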
Abstract: Using a methodology grounded in business process
change theory, we investigate the critical success factors that affect
ERP implementation success in the United States and India.
Specifically, we examine the ERP implementation at two case study
companies, one in each country. Our findings suggest that certain
factors that affect the success of ERP implementations are not
culturally bound, whereas some critical success factors depend on the
national culture of the country in which the system is being
implemented. We believe that the understanding of these critical
success factors will deepen the understanding of ERP
implementations and will help avoid implementation mistakes,
thereby increasing the rate of success in culturally different contexts.
Implications of the findings and future research directions for both
academicians and practitioners are also discussed.
Abstract: With the proliferation of multi-channel retailing, developing a better understanding of the factors that affect customers' purchase behaviors within a multi-channel retail context has become an important topic for practitioners and academics. While many studies have investigated the various customer behaviors associated with brick-and-mortar retailing, online retailing, and brick-and-click retailing, little research has explored how customer shopping value perceptions influence online purchase behaviors within the TV-and-online retail environment. The main purpose of this study is to investigate the influence of TV and online shopping values on online patronage intention. Data collected from 116 respondents in Taiwan are tested against the research model using the partial least squares (PLS) approach. The results indicate that utilitarian and hedonic TV shopping values have indirect, positive influences on online patronage intention through their online counterparts in the TV-and-online retail context. The findings of this study provide several important theoretical and practical implications for multi-channel retailing.
Abstract: The state-of-the-art Bag of Words model in Content-Based Image Retrieval has been used for years, but the relevance feedback strategies for this model have not been fully investigated. Inspired by text retrieval, the Bag of Words model is able to draw on the wealth of knowledge and practice available in text retrieval. We study and experiment with the relevance feedback model from text retrieval, adapting it to image retrieval. The experiments show that the techniques from text retrieval give good results for image retrieval and that further improvement is possible.
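A representative example of the text-retrieval feedback machinery this line of work adapts is the classic Rocchio update; whether the paper uses exactly this formula is not stated in the abstract, so treat it as an illustrative baseline on bag-of-(visual-)words vectors.

```python
# Classic Rocchio relevance feedback on bag-of-words vectors; an
# illustrative text-retrieval baseline, not necessarily the paper's
# exact formulation.
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * query
    if len(relevant):
        q = q + beta * relevant.mean(axis=0)      # pull toward relevant docs
    if len(nonrelevant):
        q = q - gamma * nonrelevant.mean(axis=0)  # push away from non-relevant
    return np.clip(q, 0.0, None)  # negative weights are usually dropped

query = np.array([1.0, 0.0, 0.0])                # original visual-word query
updated = rocchio(query,
                  np.array([[0.0, 1.0, 0.0]]),   # user-marked relevant images
                  np.array([[0.0, 0.0, 1.0]]))   # user-marked non-relevant
```

The updated query gains weight on visual words found in relevant images and loses it on those from non-relevant ones, which is exactly the transfer from text retrieval the abstract describes.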
Abstract: This paper proposes the concept of aerocapture with an aerodynamic-environment-adaptive, variable-geometry flexible aeroshell that the vehicle deploys. The flexible membrane is composed of thin-layer film or textile serving as the aeroshell, in order to solve some of the problems obstructing the realization of the aerocapture technique. A multi-objective optimization study is conducted to investigate solutions and derive design guidelines. As a result, solutions which can avoid excessive aerodynamic heating and enlarge the corridor width by up to 10% are obtained successfully, demonstrating the effectiveness of this concept. The deformation-use optimum solution changes its drag coefficient from 1.6 to 1.1 along with the change in dynamic pressure. Moreover, the optimization results show that the deformation-use solution requires a membrane whose upper temperature limit and strain limit exceed 700 K and 120%, respectively, and whose elasticity (Young's modulus) is of the order of 10⁶ Pa.
Abstract: EGOTHOR is a search engine that indexes the Web and allows us to search Web documents. Its hit list contains the URL and title of each hit, together with a snippet that tries to briefly show a match. The snippet can almost always be assembled by an algorithm that has full knowledge of the original document (mostly an HTML page). This implies that the search engine is required to store the full text of the documents as part of the index.
Such a requirement leads us to pick an appropriate compression algorithm to reduce the space demand. One solution could be to use common compression methods, for instance gzip or bzip2, but it might be preferable to develop a new method that takes advantage of the document structure, or rather, the textual character of the documents.
There already exist special text compression algorithms, as well as methods for the compression of XML documents. The aim of this paper is an integration of the two approaches to achieve an optimal compression ratio.
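The generic baselines mentioned, gzip and bzip2, can be compared directly from the standard library; the structure-aware method proposed in the paper is, of course, not reproduced here, and the sample document is illustrative.

```python
# Comparing the generic baselines mentioned (gzip, bzip2) on a small
# HTML-like sample using the standard library; the paper's
# structure-aware method is not reproduced here.
import gzip, bz2

sample = (b"<html><body>"
          + b"<p>search engine snippet text</p>" * 200
          + b"</body></html>")
ratios = {
    "gzip":  len(gzip.compress(sample)) / len(sample),
    "bzip2": len(bz2.compress(sample)) / len(sample),
}
```

On such repetitive markup both baselines already compress heavily; the paper's argument is that exploiting the document structure explicitly can do better still.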
Abstract: Document clustering has become an essential technology with the popularity of the Internet, which also means that fast and high-quality document clustering techniques are a core topic. Text clustering, or simply clustering, is about discovering semantically related groups in an unstructured collection of documents. Clustering has been popular for a long time because it provides unique ways of digesting and generalizing large amounts of information. One of the issues in clustering is extracting proper features (concepts) of a problem domain. Existing clustering technology mainly focuses on term weight calculation; to achieve more accurate document clustering, more informative features, including concept weights, are important. Feature selection is important for the clustering process because irrelevant or redundant features may misguide the clustering results. To counteract this issue, the proposed system introduces concept weights into a text clustering system developed on the basis of the k-means algorithm, in accordance with the principles of ontology, so that the importance of the words of a cluster can be identified by the weight values. To a certain extent, this resolves the semantic problem in specific areas.
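The algorithmic core the proposed system builds on, k-means over term-weight vectors, can be sketched minimally as follows; the ontology-derived concept weights are not reproduced, and the vectors below are plain illustrative term weights.

```python
# Minimal k-means over term-weight vectors; the ontology-derived
# concept weights from the paper are not reproduced, the vectors are
# plain illustrative term weights.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every document to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels

# four documents, weights on two "concepts" (two obvious topics)
docs = np.array([[5.0, 0.0], [4.0, 1.0], [0.0, 5.0], [1.0, 4.0]])
labels = kmeans(docs, k=2)
```

The paper's contribution sits in how the coordinates of these vectors are computed (concept weights from an ontology rather than raw term weights), not in the clustering loop itself.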
Abstract: A catastrophic earthquake measuring 6.3 on the Richter scale struck the Christchurch, New Zealand Central Business District on February 22, 2011, abruptly disrupting the business of teaching and learning at Christchurch Polytechnic Institute of Technology. This paper presents the findings from a study undertaken about the complexity of delivering an educational programme in the face of this traumatic natural event. Nine interconnected themes emerged from this multiple-method study: communication, decision making, leader- and follower-ship, balancing personal and professional responsibilities, taking action, and preparedness and thinking ahead, all within a disruptive and uncertain context. Sustainable responses that maximise business continuity, and provide solutions to practical challenges, are among the study's recommendations.
Abstract: This paper is a continuation of our daily energy peak load forecasting approach using our modified network, which is part of the recurrent networks family and is called the feed forward and feed back multi context artificial neural network (FFFB-MCANN). The inputs to the network were exogenous variables, such as the previous and current change in the weather components and the previous and current status of the day, and endogenous variables, such as the past change in the loads. An endogenous variable, the current change in the loads, was used as the network output. Experiments show that using both endogenous and exogenous variables as inputs to the FFFB-MCANN, rather than either exogenous or endogenous variables alone, produces better results. Experiments also show that using the change in variables such as the weather components and the change in the past load as inputs to the FFFB-MCANN, rather than the absolute values of the weather components and past load, has a dramatic impact and produces better accuracy.
Abstract: The Short Message Service (SMS) has grown in popularity over the years and has become a common way of communication. It is a service provided through the Global System for Mobile Communications (GSM) that allows users to send text messages to others.
SMS is usually used to transport unclassified information, but with the rise of mobile commerce it has become a popular tool for transmitting sensitive information between a business and its clients. By default, SMS does not guarantee confidentiality and integrity of the message content.
In mobile communication systems, the security (encryption) offered by the network operator only applies on the wireless link; data delivered through the mobile core network may not be protected. Existing end-to-end security mechanisms are provided at the application level and are typically based on public-key cryptosystems.
The main concern in a public-key setting is the authenticity of the public key; this issue can be resolved by identity-based (ID-based) cryptography, where the public key of a user can be derived from public information that uniquely identifies the user.
This paper presents an encryption mechanism based on an ID-based scheme using elliptic curves to provide end-to-end security for SMS. The mechanism has been implemented over the standard SMS network architecture, and the encryption overhead has been estimated and compared with an RSA scheme. The study indicates that the ID-based mechanism has advantages over the RSA mechanism in key distribution and in the scalability of increasing the security level for the mobile service.
Abstract: Selection of the best possible set of suppliers has a significant impact on the overall profitability and success of any business. For this reason, it is usually necessary to optimize all business processes and to make use of cost-effective alternatives for additional savings. This paper proposes a new, efficient, context-aware supplier selection model that takes possible changes of the environment into account while significantly reducing selection costs. The proposed model is based on data clustering techniques and draws on principles of online algorithms for the optimal selection of suppliers. Unlike common selection models, which re-run the selection algorithm from scratch over the whole environment for every decision-making sub-period, our model considers only the changes and superimposes them on the previously determined best set of suppliers to obtain a new best set. Any re-computation of unchanged elements of the environment is thereby avoided, and selection costs are consequently reduced significantly. A numerical evaluation confirms the applicability of this model and shows that it outperforms common static selection models in this field.
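The incremental idea can be sketched with a deliberately simplified cost-only selection: keep the current best set and fold in only the changed supplier instead of re-running the full selection. Names and costs are illustrative, and the shortcut as written is only safe for cost decreases; the paper's clustering-based model is far more general.

```python
# Sketch of the incremental idea: keep the current best set and fold
# in only the changed supplier instead of re-running the selection.
# Cost-only ranking with illustrative names; the shortcut as written
# is only safe when the changed cost decreases.

def select(costs, k):
    """Full 'from scratch' selection: the k cheapest suppliers."""
    return set(sorted(costs, key=costs.get)[:k])

def update(best, costs, changed, k):
    """Incremental step: re-rank only the changed supplier against
    the current best set; unchanged suppliers keep their status."""
    candidates = best | {changed}
    return set(sorted(candidates, key=costs.get)[:k])

costs = {"A": 10, "B": 7, "C": 12, "D": 9}
best = select(costs, k=2)              # full run over the environment
costs["C"] = 5                         # the environment changes
best = update(best, costs, "C", k=2)   # only the change is processed
```

The incremental step inspects three suppliers instead of four here; with thousands of suppliers and small per-period changes, this gap is the source of the cost reduction the abstract claims.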
Abstract: The join dependency provides the basis for obtaining a lossless join decomposition in a classical relational schema. The existence of a join dependency shows that the tables always represent the correct data after being joined. Since classical relational databases cannot handle imprecise data, they were extended to fuzzy relational databases so that uncertain, ambiguous, imprecise and partially known information can also be stored in a formal way. However, like classical databases, fuzzy relational databases also undergo decomposition during normalization, and the issue of joining the decomposed fuzzy relations remains open. Our effort in the present paper is to address this issue. We define fuzzy join dependency in the framework of type-1 and type-2 fuzzy relational databases using the concept of fuzzy equality, which is defined via fuzzy functions, and we use the fuzzy equi-join operator for computing the fuzzy equality of two attribute values. We also discuss the dependency preservation property on execution of this fuzzy equi-join and derive the necessary condition for fuzzy functional dependencies to be preserved on joining the decomposed fuzzy relations. We further derive the conditions for fuzzy join dependency to exist in the context of both type-1 and type-2 fuzzy relational databases. We find that, unlike in classical relational databases, even the existence of a trivial join dependency does not ensure lossless join decomposition in type-2 fuzzy relational databases. Finally, we derive the conditions for the fuzzy equality to be non-zero and for the qualification of an attribute as a fuzzy key.
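A hedged sketch of the operator in question: a fuzzy equi-join accepts a tuple pair when a graded fuzzy equality of their join attributes reaches a threshold. The triangular membership function below is illustrative only, not the paper's fuzzy-function-based definition of fuzzy equality.

```python
# Sketch of a fuzzy equi-join: two tuples join when a graded fuzzy
# equality of their join attributes reaches a threshold. The
# triangular membership function is illustrative, not the paper's
# fuzzy-function-based definition.

def fuzzy_eq(a, b, spread=10.0):
    """Graded equality in [0, 1]; equals 1 iff the values coincide."""
    return max(0.0, 1.0 - abs(a - b) / spread)

def fuzzy_equi_join(r, s, threshold=0.8):
    return [(t1, t2, fuzzy_eq(t1[-1], t2[0]))
            for t1 in r for t2 in s
            if fuzzy_eq(t1[-1], t2[0]) >= threshold]

r = [("p1", 30.0), ("p2", 50.0)]        # (part, approximate weight)
s = [(31.0, "bin-A"), (80.0, "bin-B")]  # (approximate weight, location)
joined = fuzzy_equi_join(r, s)          # only p1/bin-A are fuzzily equal
```

With a crisp equality (spread approaching 0) this degenerates into the classical equi-join; the paper's results concern what such graded joins preserve, and fail to preserve, after decomposition.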
Abstract: This study demonstrates the use of Class F fly ash in
combination with lime or lime kiln dust in the full depth reclamation
(FDR) of asphalt pavements. FDR, in the context of this paper, is a
process of pulverizing a predetermined amount of flexible pavement
that is structurally deficient, blending it with chemical additives and
water, and compacting it in place to construct a new stabilized base
course. Test sections of two structurally deficient asphalt pavements
were reclaimed using Class F fly ash in combination with lime and
lime kiln dust. In addition, control sections were constructed using
cement, cement and emulsion, lime kiln dust and emulsion, and mill
and fill. The service performance and structural behavior of the FDR
pavement test sections were monitored to determine how the fly ash
sections compared to other more traditional pavement rehabilitation
techniques. Service performance and structural behavior were
determined with the use of sensors embedded in the road and Falling
Weight Deflectometer (FWD) tests. Monitoring results of the FWD
tests conducted up to 2 years after reclamation show that the cement,
fly ash+LKD, and fly ash+lime sections exhibited two year resilient
modulus values comparable to open graded cement stabilized
aggregates (more than 750 ksi). The cement treatment resulted in a significant increase in resilient modulus within 3 weeks of construction; beyond this curing time, the stiffness increase was slow. On the other hand, the fly ash+LKD and fly ash+lime test sections showed a slower short-term increase in stiffness. The average resilient modulus values of the fly ash+LKD and fly ash+lime sections two years after construction were in excess of 800 ksi. Additional
longer-term testing data will be available from ongoing pavement
performance and environmental condition data collection at the two
pavement sites.