Abstract: Model mapping and transformation are important processes in high-level system abstraction, and form the cornerstone of model-driven architecture (MDA) techniques. Considerable research in this field has focused on static system abstraction, despite the fact that most systems are dynamic, with frequent changes in behavior. In this paper we provide an overview of work on behavior model mapping and transformation, organized around: (1) the completeness of the platform independent model (PIM); (2) the semantics of behavioral models; (3) languages supporting behavior model transformation processes; and (4) an evaluation of model composition as an approach to describing large, highly complex systems.
Abstract: A novel method of individual-level adaptive mutation rate control, called the rank-scaled mutation rate, is introduced for genetic algorithms. The rank-scaled genetic algorithm varies the mutation parameters based on the rank of each individual within the population, so that the fitness distribution of the population is taken into account when forming the new mutation rates. The fittest individuals mutate at the lowest rate and the least fit mutate at the highest rate. The complexity of the algorithm is of the order of an individual adaptation scheme and lower than that of a self-adaptation scheme. The proposed algorithm is tested on two common problems: numerical optimization of a function and the traveling salesman problem. The results show that the proposed algorithm outperforms both fixed and deterministic mutation rate schemes. It is best suited to problems with several local optima that do not demand excessive mutation rates.
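To illustrate the core idea, the sketch below assigns per-individual mutation rates by fitness rank. The linear interpolation between the assumed bounds p_min and p_max is our own assumption; the abstract does not fix a particular scaling function.

```python
import random

# Minimal sketch of rank-scaled mutation: the best-ranked individual
# mutates at p_min, the worst-ranked at p_max (linear scaling assumed).
def rank_scaled_rates(population, fitness, p_min=0.001, p_max=0.1):
    """Assign each individual a mutation rate based on its fitness rank."""
    n = len(population)
    order = sorted(range(n), key=lambda i: fitness[i], reverse=True)
    rates = [0.0] * n
    for rank, idx in enumerate(order):
        rates[idx] = p_min + (p_max - p_min) * rank / max(n - 1, 1)
    return rates

def mutate(bitstring, rate):
    """Flip each bit independently with the individual's own rate."""
    return [b ^ (random.random() < rate) for b in bitstring]
```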
Abstract: Conventional approaches to implementing logic programming applications on embedded systems are purely software based. As a consequence, a compiler is needed to transform the initial declarative logic program into an equivalent procedural one that can be programmed onto the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. Conversely, hardware implementations that are only capable of supporting logic programs prevent their use in applications where logic programs must be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications that use both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor remains capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximate 1000% increase in performance), and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C language called C-AG.
Abstract: In order to meet the limits imposed on automotive emissions, engine control systems are required to constrain the air/fuel ratio (AFR) to a narrow band around the stoichiometric value, owing to the strong decay of catalyst efficiency for rich or lean mixtures. This paper presents a model of a sample spark ignition engine and demonstrates Simulink's capabilities to model an internal combustion engine from the throttle to the crankshaft output. We used well-defined physical principles supplemented, where appropriate, with empirical relationships that describe the system's dynamic behavior without introducing unnecessary complexity. We also present a PID tuning method that uses an adaptive fuzzy system to model the relationship between the controller gains and the target output response, with the response specification set by the desired percent overshoot and settling time. The adaptive fuzzy input-output model is then used to tune the PID gains on-line for different response specifications. Experimental results demonstrate that better performance can be achieved with adaptive fuzzy tuning relative to similar alternative control strategies; the actual response specifications with adaptive fuzzy tuning matched the desired response specifications.
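The control structure described can be sketched as follows. This is not the paper's fuzzy system: the discrete PID loop is standard, and tune_gains is a hypothetical placeholder fixing only the interface (a spec-to-gain mapping) that the adaptive fuzzy model would fill.

```python
# Minimal sketch of the described structure: a discrete PID loop whose
# gains come from a tuner that maps a response specification (percent
# overshoot, settling time) to gains. tune_gains is a hypothetical stub.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def tune_gains(overshoot_pct, settling_time_s):
    """Stand-in for the adaptive fuzzy tuner: in the paper this mapping is
    learned on-line; here fixed values are returned (assumption)."""
    return 1.2, 0.5, 0.05

kp, ki, kd = tune_gains(overshoot_pct=5.0, settling_time_s=2.0)
controller = PID(kp, ki, kd, dt=0.01)
```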
Abstract: QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while using network resources efficiently; the underlying routing algorithm must be able to find low-cost paths that satisfy the given QoS constraints. The problem of finding a least-cost route subject to such constraints is known to be NP-complete, and several algorithms have been proposed to find a near-optimal solution. These heuristics, however, either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in terms of execution time to be applicable to large networks. In this paper, we present an algorithm that finds a near-optimal solution quickly, named Optimized Delay Constrained Routing (ODCR), which uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution in much less time.
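For orientation, the sketch below shows a generic delay-constrained least-cost search, not ODCR itself: a Dijkstra-style heuristic over a combined weight w = cost + alpha * delay that prunes partial paths violating the delay bound. The fixed alpha stands in for the paper's adaptive path weight function (our assumption), and the combined-weight dominance check makes this a heuristic rather than an exact method.

```python
import heapq

def constrained_route(graph, src, dst, delay_bound, alpha=1.0):
    """graph[u] = list of (v, cost, delay) edges. Heuristic search for a
    low-cost path whose total delay stays within delay_bound."""
    pq = [(0.0, 0.0, 0.0, src, [src])]   # (combined weight, cost, delay, node, path)
    best = {}
    while pq:
        w, cost, delay, u, path = heapq.heappop(pq)
        if u == dst:
            return path, cost, delay
        if best.get(u, float("inf")) <= w:
            continue
        best[u] = w
        for v, c, d in graph.get(u, []):
            if delay + d <= delay_bound:  # prune infeasible extensions
                heapq.heappush(pq, (w + c + alpha * d, cost + c,
                                    delay + d, v, path + [v]))
    return None
```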
Abstract: This paper sets forth the possibility and importance of applying data mining to Web log mining and points out some problems in conventional search engines. It then offers an improved algorithm based on the original AprioriAll algorithm, which has been widely used in Web log mining. The new algorithm adds the user ID property at every step of producing the candidate set and every step of scanning the database, using it to decide whether an item in the candidate set should be put into the large set that will be used to produce the next candidate set. Meanwhile, in order to reduce the number of database scans, the new algorithm uses the Apriori property to limit the size of the candidate set as soon as it is produced. Test results show that the improved algorithm has lower time and space complexity, suppresses noise better, and fits within memory capacity.
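The Apriori property invoked above can be shown in a few lines. This is the textbook pruning step, not the paper's AprioriAll variant with user IDs: a candidate k-itemset survives only if all of its (k-1)-subsets were frequent, which shrinks the candidate set before the next database scan.

```python
from itertools import combinations

def prune_candidates(candidates, frequent_prev):
    """candidates: iterable of frozenset k-itemsets;
    frequent_prev: set of frozenset (k-1)-itemsets found frequent."""
    kept = []
    for cand in candidates:
        k = len(cand)
        # Apriori property: every (k-1)-subset of a frequent itemset
        # must itself be frequent.
        if all(frozenset(sub) in frequent_prev
               for sub in combinations(cand, k - 1)):
            kept.append(cand)
    return kept
```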
Abstract: In this paper, we evaluate the choice of suitable quantization characteristics for both the decoder messages and the received samples in Low-Density Parity-Check (LDPC) coded systems using M-QAM (Quadrature Amplitude Modulation) schemes. The analysis involves the demapper block that provides initial likelihood values for the decoder, relating its quantization strategy to that of the decoder. A mapping strategy refers to the grouping of bits within a codeword, where each m-bit group is used to select a 2^m-ary signal in accordance with the signal labels. We further evaluate the system with mapping strategies such as Consecutive-Bit (CB) and Bit-Reliability (BR). A new demapper version, based on approximate expressions, is also presented to yield a low-complexity hardware implementation.
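One standard family of approximate demapper expressions is the max-log approximation, sketched below as an assumption, since the paper's exact expressions are not quoted here: the LLR of each bit is formed from the nearest constellation point with that bit equal to 0 and the nearest with it equal to 1.

```python
import numpy as np

def maxlog_llrs(y, constellation, labels, num_bits, noise_var):
    """Max-log LLRs for one received sample y.
    constellation: complex points; labels: integer bit label per point.
    Convention: LLR > 0 favors bit = 0."""
    d2 = np.abs(y - constellation) ** 2
    llrs = np.empty(num_bits)
    for i in range(num_bits):
        bit = (labels >> i) & 1
        # max-log: min distance over points with bit=1 minus bit=0
        llrs[i] = (d2[bit == 1].min() - d2[bit == 0].min()) / noise_var
    return llrs
```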
Abstract: This research presents the development of a simulation model for WIP management in semiconductor fabrication. Manufacturing simulation modeling is needed for productivity optimization analysis because of the complex process flows involved: more than 35 percent of processing steps are re-entrant, revisiting the same equipment more than 15 times. Furthermore, semiconductor fabrication must produce a high product mix, with total processing steps varying from 300 to 800 and cycle times between 30 and 70 days. Besides this complexity, the expensive wafer cost, which can hurt the company's profit margin once a due date is missed, is another motivation to explore simulation modeling for such analyses. In this paper, the simulation model is developed on the existing commercial software platform AutoSched AP, with customized integration with Manufacturing Execution Systems (MES) and the Advanced Productivity Family (APF) for the data collection used to configure the model parameters and data sources. Model parameters such as processing step cycle time, equipment performance, handling time, and operator efficiency are collected through this customization. Once the parameters are validated, a few further customizations are made to ensure the model executes correctly. The accuracy of the simulation model is validated against the actual daily output of all equipment; the comparison shows that the simulation model achieved 95 percent accuracy over 30 days. The model was then used to perform various what-if analyses to understand impacts on cycle time and overall output. With this simulation model, a complex manufacturing environment like a semiconductor fabrication plant (fab) now has an alternative source of validation for impact analysis of any new requirement.
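The re-entrant flow that motivates fab simulation can be shown with a toy discrete-event sketch (this is illustrative Python/simpy, not an AutoSched AP model; visit counts and process times are assumptions): each lot revisits the same shared tool several times, so queueing at that tool dominates cycle time.

```python
import simpy

def lot(env, name, tool, revisits=3, process_time=2.0):
    """One wafer lot passing through a re-entrant flow: it must queue for
    the same tool on every revisit."""
    for visit in range(revisits):
        with tool.request() as req:
            yield req                        # queue for the shared tool
            yield env.timeout(process_time)  # processing step
    print(f"{name} finished at t={env.now}")

env = simpy.Environment()
tool = simpy.Resource(env, capacity=1)       # one shared piece of equipment
for i in range(4):
    env.process(lot(env, f"lot{i}", tool))
env.run()
```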
Abstract: Owing to the availability of powerful image processing software and the growth of computer literacy, it has become easy to tamper with images. Manipulation of digital images in fields such as courts of law and medical imaging creates a serious problem nowadays. Copy-move forgery is one of the most common types of forgery; it copies some part of an image and pastes it into another part of the same image to cover an important scene. In this paper, a copy-move forgery detection method based on the Fourier transform is proposed. First, the image is divided into equal-size blocks and the Fourier transform is computed for each block. Similarity between the Fourier transforms of different blocks provides an indication of a copy-move operation. The experimental results show that the proposed method runs in reasonable time and works well for both grayscale and colour images; its computational complexity is reduced by the use of the Fourier transform.
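The detection pipeline can be sketched as follows for a grayscale image. Block size, the FFT-magnitude feature, the pairwise comparison, and the threshold are all our assumptions; the paper's exact feature and matching strategy are not quoted.

```python
import numpy as np

def find_similar_blocks(image, block=16, threshold=1e-3):
    """Return pairs of block positions whose normalized FFT magnitudes
    nearly match (a crude copy-move indicator). O(B^2) comparison is
    used here purely for clarity."""
    h, w = image.shape
    feats, positions = [], []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            mag = np.abs(np.fft.fft2(image[r:r + block, c:c + block]))
            feats.append(mag.ravel() / (np.linalg.norm(mag) + 1e-12))
            positions.append((r, c))
    matches = []
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            if np.sum((feats[i] - feats[j]) ** 2) < threshold:
                matches.append((positions[i], positions[j]))
    return matches
```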
Abstract: This paper describes part of a project about Learning-by-Modeling (LbM). Studying complex systems is increasingly important in teaching and learning many science domains, yet many features of complex systems make it difficult for students to develop deep understanding. Previous research indicates that involvement with modeling scientific phenomena and complex systems can play a powerful role in science learning. Some researchers dispute this view, arguing that models and modeling do not contribute to understanding complexity concepts because they increase the cognitive load on students. This study investigates the effect of different modes of involvement in exploring scientific phenomena using computer simulation tools on students' mental models from the perspective of structure, behavior, and function. Quantitative and qualitative methods are used to report on 121 freshman students who engaged in participatory simulations of complex phenomena exhibiting emergent, self-organized, and decentralized patterns. Results show that LbM plays a major role in students' concept formation about complexity concepts.
Abstract: In this paper, a predator-prey model with time delay and habitat complexity is investigated. By analyzing the characteristic equations, the local stability of each feasible equilibrium of the system is discussed and the existence of a Hopf bifurcation at the coexistence equilibrium is established. Choosing the sum of the two delays as a bifurcation parameter, we show that Hopf bifurcations can occur as this parameter crosses certain critical values. By deriving the equation describing the flow on the center manifold, we determine the direction of the Hopf bifurcations and the stability of the bifurcating periodic solutions. Numerical simulations are carried out to illustrate the main theoretical results.
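A representative model of this general class can be written as below; this specific form (logistic prey growth, a Holling type II response attenuated by a habitat-complexity parameter c, and two delays) is an assumption for illustration, not necessarily the paper's exact system.

```latex
\begin{aligned}
\dot{x}(t) &= r\,x(t)\Bigl(1-\frac{x(t)}{K}\Bigr)
  - \frac{\alpha(1-c)\,x(t)\,y(t)}{1+\alpha h(1-c)\,x(t)},\\
\dot{y}(t) &= \frac{\beta\,\alpha(1-c)\,x(t-\tau_1)\,y(t-\tau_2)}
  {1+\alpha h(1-c)\,x(t-\tau_1)} - d\,y(t),
\end{aligned}
```

with the bifurcation parameter being the delay sum \tau = \tau_1 + \tau_2.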
Abstract: Decision feedback equalizers (DFEs) usually outperform linear equalizers for channels with intersymbol interference. However, DFE performance is highly dependent on the availability of reliable past decisions. Hence, in coded systems, where reliable decisions are only available after the full block is decoded, the performance of the DFE is affected. A symbol-based DFE is a DFE that only uses decisions after the block is decoded. In this paper we derive the optimal settings of both the feedforward and feedback taps of the symbol-based equalizer. We present a novel symbol-based DFE filterbank and derive the optimal settings of its taps. We also show that it outperforms the classic DFE in terms of complexity and/or performance.
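For reference, the classic hard-decision DFE that serves as the baseline can be sketched as below (the filterbank and the optimal tap derivation are the paper's contribution and are not reproduced; taps here would come from some design such as MMSE, and np.sign assumes BPSK symbols).

```python
import numpy as np

def dfe_equalize(received, ff_taps, fb_taps, decide=np.sign):
    """Classic DFE recursion: feedforward filtering of the received
    samples, feedback cancellation of post-cursor ISI from past
    hard decisions."""
    nf, nb = len(ff_taps), len(fb_taps)
    padded = np.concatenate([np.zeros(nf - 1), received])
    past = np.zeros(nb)                         # previous hard decisions
    decisions = np.empty(len(received))
    for k in range(len(received)):
        ff_out = ff_taps @ padded[k:k + nf][::-1]  # feedforward part
        z = ff_out - fb_taps @ past                # subtract post-cursor ISI
        decisions[k] = decide(z)                   # hard decision
        past = np.roll(past, 1)
        past[0] = decisions[k]
    return decisions
```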
Abstract: In this article we explore how computer assisted exercises may bridge the traditional gap between theory and practice in professional education. To educate officers able to master the complexity of the battlefield, the Norwegian Military Academy needs a learning environment that allows for creating viable connections between the educational environment and the field of practice. In response to this challenge, we explore the conditions necessary to make computer assisted training systems (CATS) a useful tool for creating structural similarities between an educational context and the field of military practice. Although CATS may facilitate work procedures close to real-life situations, this case demonstrates that professional competence must also build on viable learning theories and environments. This paper explores the conditions that allow simulators to be used to facilitate professional competence from within an educational setting. We develop a generic didactic model that ascribes learning to participation in iterative cycles of action and reflection. The development of this model is motivated by the need to develop an interdisciplinary professional education rooted in the pattern of military practice.
Abstract: Quantitative trait loci (QTL) experiments have yielded important biological and biochemical information necessary for understanding the relationship between genetic markers and quantitative traits. For many years, most QTL algorithms only allowed one observation per genotype. Recently, there has been an increasing demand for QTL algorithms that can accommodate more than one observation per genotypic distribution. The Bayesian hierarchical model is very flexible and can easily incorporate this information. Herein a methodology is presented that uses a Bayesian hierarchical model to capture the complexity of the data. Furthermore, the Markov chain Monte Carlo model composition (MC3) algorithm is used to search for and identify important markers. An extensive simulation study illustrates that the method captures the true QTL, even under nonnormal noise and with up to 6 QTL.
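The MC3 search over marker subsets can be sketched generically as a Metropolis add/drop walk on the model space (the paper's priors, likelihood, and proposal details may differ; log_score is an assumed user-supplied log marginal posterior).

```python
import random, math

def mc3_search(markers, log_score, n_iter=10_000, seed=0):
    """MC3-style model search: propose adding or dropping one marker,
    accept with the Metropolis rule, and count model visits as a
    posterior estimate."""
    rng = random.Random(seed)
    model = frozenset()
    visits = {}
    for _ in range(n_iter):
        m = rng.choice(markers)
        proposal = frozenset(model - {m} if m in model else model | {m})
        # Metropolis acceptance on the model space
        if math.log(rng.random()) < log_score(proposal) - log_score(model):
            model = proposal
        visits[model] = visits.get(model, 0) + 1
    return visits
```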
Abstract: In-place sorting algorithms play an important role in many fields, such as very large database systems, data warehouses, and data mining, since they maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the input unsorted array in place, resulting in segments that are ordered relative to each other but whose elements are yet to be sorted. The first phase requires linear time, while in the second phase the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. For an array of size n, the algorithm performs, in the worst case, O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and the required number of moves are presented, along with the auxiliary storage requirements of the proposed algorithm.
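The two-phase shape of such algorithms can be illustrated structurally. The sketch below is not the paper's algorithm and does not attain its bounds: the single partition pass and the per-segment insertion sort are placeholders for the paper's linear-time segmenting phase and its O(z log z), O(1)-storage per-segment sort.

```python
def partition(a, lo, hi, pivot):
    """In-place partition of a[lo:hi] around a pivot value; returns the
    split point. All elements < pivot end up left of it."""
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    return i

def two_phase_sort(a):
    # Phase 1: one partition pass yields two segments that are ordered
    # relative to each other but internally unsorted.
    mid = partition(a, 0, len(a), a[len(a) // 2])
    # Phase 2: sort each segment in place (insertion sort placeholder).
    for lo, hi in ((0, mid), (mid, len(a))):
        for j in range(lo + 1, hi):
            k, v = j, a[j]
            while k > lo and a[k - 1] > v:
                a[k] = a[k - 1]; k -= 1
            a[k] = v
```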
Abstract: Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM, and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate the reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communication, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns: specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on an assembly of components available on a local area network or on the net. These components must be localized and identified in terms of available services and communication protocols before any request. The first part of the article introduces the basic concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction, and the last section shows how different models can communicate among themselves.
Abstract: Text processing systems allow their users to search for a string pattern in a given text. String matching is fundamental to database and text processing applications: every text editor must contain a mechanism to search the current document for arbitrary strings, and spelling checkers scan an input text for words in the dictionary, rejecting any strings that do not match. We store our information in databases so that we can later retrieve it, and this retrieval can be done using various string matching algorithms. This paper describes a new string matching algorithm for various applications, designed with the help of the Rabin-Karp matcher to improve the string matching process.
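For reference, the classic Rabin-Karp matcher that the paper builds on is the textbook rolling-hash search below (the paper's modified algorithm is not reproduced here).

```python
def rabin_karp(text, pattern, base=256, mod=1_000_003):
    """Return the start indices of all occurrences of pattern in text."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)          # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for s in range(n - m + 1):
        # verify on a hash hit to rule out collisions
        if p_hash == t_hash and text[s:s + m] == pattern:
            hits.append(s)
        if s < n - m:                     # roll the hash one character
            t_hash = ((t_hash - ord(text[s]) * high) * base
                      + ord(text[s + m])) % mod
    return hits
```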
Abstract: We investigate efficient spreading codes for transmitter-based techniques in code division multiple access (CDMA) systems. The channel is considered to be known at the transmitter, as is usual in a time division duplex (TDD) system, where the channel is assumed to be the same on the uplink and downlink. For such a TDD/CDMA system, both bitwise and blockwise multiuser transmission schemes are taken up in which complexity is transferred to the transmitter side so that the receiver has minimum complexity. Different spreading codes are considered at the transmitter to spread the signal efficiently over the entire spectrum. The bit error rate (BER) curves portray the efficiency of the codes in the presence of multiple access interference (MAI) as well as intersymbol interference (ISI).
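As an illustration of one candidate code family (the paper compares several, not reproduced here), the sketch below generates orthogonal Walsh-Hadamard codes and performs the basic spreading operation.

```python
import numpy as np

def walsh_hadamard(n):
    """Return the n x n Hadamard matrix (n must be a power of two);
    its rows are mutually orthogonal +/-1 spreading codes."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def spread(bits, code):
    """Spread +/-1 data bits by a +/-1 code: one chip sequence per bit."""
    return np.outer(bits, code).ravel()

codes = walsh_hadamard(8)               # 8 orthogonal codes of length 8
tx = spread(np.array([1, -1, 1]), codes[3])
```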
Abstract: Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size, and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
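Two of the quantities involved can be computed directly from an adjacency matrix A, as sketched below: edge density, and the square of the adjacency matrix, whose (i, j) entry counts two-step walks (common neighbors), the structure the solution method exploits. The remaining measures (mobility, color class deviation, block diagonal zeros) depend on definitions not quoted in the abstract.

```python
import numpy as np

def density(A):
    """Fraction of possible edges present in an undirected simple graph."""
    n = A.shape[0]
    return A.sum() / (n * (n - 1))

def adjacency_squared(A):
    """(A @ A)[i, j] counts length-2 walks from i to j."""
    return A @ A

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
print(density(A), adjacency_squared(A))
```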
Abstract: A minimal-complexity version of component mode synthesis is presented that requires simplified computer programming but still provides adequate accuracy for modeling the lower eigenproperties of large structures and their transient responses. The novelty is that the structural separation into components is done along a plane or surface that exhibits rigid-like behavior, so that the normal modes of each component are sufficient, without computing any constraint, attachment, or residual-attachment modes. The approach requires only such input information as a few (lower) natural frequencies and the corresponding undamped normal modes of each component. A novel technique is shown for the formulation of the equations of motion, in which a double transformation to generalized coordinates is employed, and the formulation of a nonproportional damping matrix in generalized coordinates is shown.
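The first step underlying any such scheme is the standard modal (Ritz) reduction to generalized coordinates, sketched below; the paper's double transformation and nonproportional damping formulation are its contribution and are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def modal_reduce(M, K, n_modes):
    """Reduce physical mass/stiffness matrices (M, K) to generalized
    coordinates through the lowest n_modes undamped normal modes."""
    evals, evecs = eigh(K, M)        # generalized eigenproblem K v = w^2 M v
    Phi = evecs[:, :n_modes]         # retain the lowest normal modes
    M_gen = Phi.T @ M @ Phi          # generalized mass matrix
    K_gen = Phi.T @ K @ Phi          # generalized stiffness matrix
    return M_gen, K_gen, Phi
```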