Abstract: This paper studies the design of a simple constellation
precoding for a multiple-input multiple-output orthogonal frequency
division multiplexing (MIMO-OFDM) system over Rayleigh fading
channels where OFDM is used to keep the diversity replicas orthogonal
and reduce ISI effects. A multi-user environment with K synchronous
co-channel users is considered. The proposed scheme provides
a bandwidth efficient transmission for individual users by increasing
the system throughput. In comparison with the existing coded
MIMO-OFDM schemes, the precoding technique is designed under
the consideration of its low implementation complexity while providing
a comparable error performance to the existing schemes.
Analytic and simulation results have been presented to show the distinguished
error performance.
Abstract: This paper presents a formalisation of the existing code mutation techniques (polymorphism and metamorphism) by means of formal grammars. While very few theoretical results are known about the detection complexity of viral mutation techniques, we exhaustively address this critical issue by considering the Chomsky classification of formal grammars. This enables us to determine which families of code mutation techniques are likely to be detected or, on the contrary, are bound to remain undetected. As an illustration we then present, on a formal basis, a proof-of-concept metamorphic mutation engine denoted PB MOT, whose detection has been proven to be undecidable.
Abstract: A new generation of manufacturing machines,
the so-called MIMCA (modular and integrated machine control
architecture), capable of handling much increased complexity in
manufacturing control systems, is presented. The requirement for more
flexible and effective control systems for manufacturing machine
systems is investigated and dimensioned, which highlights a need for
improved means of coordinating and monitoring production
machinery and the equipment used to transport material. MIMCA,
which supports simulation based on machine modeling, was conceived by
the authors to address these issues. Essentially, MIMCA comprises an
organized unification of selected architectural frameworks and
modeling methods, which include NIST RCS, UMC and Colored
Timed Petri nets (CTPN). The unification has been achieved to
support the design and construction of hierarchical and distributed
machine control, which realizes the concurrent operation of reusable
and distributed machine control components, the ability to handle
growing complexity, and support for real-time control
system requirements. Thus MIMCA enables mapping between 'what a machine
should do' and 'how the machine does it' in a well-defined but
flexible way designed to facilitate reconfiguration of machine
systems.
Abstract: Most real-world systems express themselves formally
as a set of nonlinear algebraic equations. As applications grow, the
size and complexity of these equations also increase. In this work, we
highlight the key concepts in using the homotopy analysis method
as a methodology for constructing efficient iteration formulas for
solving nonlinear equations. The proposed method is experimentally
characterized according to a set of determined parameters which
affect the systems. The experimental results show the potential and
limitations of the new method and suggest directions for future work.
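As a minimal illustration of the kind of iteration formula involved, the sketch below implements a Newton-like iteration with a convergence-control parameter h, where h = -1 recovers the classical Newton step. This one-parameter family is an illustrative assumption standing in for the paper's actual homotopy-derived formulas, which are not reproduced here.

```python
def ham_iterate(f, df, x0, h=-1.0, tol=1e-10, max_iter=100):
    """Iterate x <- x + h * f(x) / f'(x); h = -1.0 recovers Newton's method.

    The convergence-control parameter h is the hallmark of homotopy-style
    iteration families (illustrative stand-in, not the paper's formula).
    """
    x = x0
    for _ in range(max_iter):
        step = h * f(x) / df(x)
        x += step
        if abs(step) < tol:
            break
    return x
```

For example, solving x^2 - 2 = 0 from x0 = 1.0 with the default h converges to the square root of 2.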
Abstract: Nowadays, manufacturers face great challenges
in producing green products due to the emerging issue of hazardous substance management (HSM). In particular,
environmental legislation pressures have led to increased risk,
manufacturing complexity and demand for green components. Green principles have been extended to many departments within
the organization, including the supply chain. Green supply chain
management (GSCM) has emerged in the last few years. This idea
covers every stage of manufacturing, from the first to the last stage of
the life cycle. From the product life cycle perspective, the cycle starts at the design of a product. QFD is a customer-driven product development
tool, considered a structured management approach for efficiently
translating customer needs into design requirements and parts deployment, as well as manufacturing plans and controls, in order to
achieve higher customer satisfaction. This paper develops an Eco-
QFD to provide a framework for designing an Eco-mobile phone by integrating life cycle analysis (LCA) into QFD throughout the entire product development process.
Abstract: A Finite Volume method based on Characteristic Fluxes for compressible fluids is developed. An explicit cell-centered resolution is adopted, where second- and third-order accuracy is provided by using two different MUSCL schemes with Minmod, Sweby or Superbee limiters for the hyperbolic part. Several different time integrators are used and are described in this paper. Resolution is performed on a generic unstructured Cartesian grid, where solid boundaries are handled by a Cut-Cell method. Interfaces are explicitly advected in a non-diffusive way, ensuring local mass conservation. An improved cell cutting has been developed to handle boundaries of arbitrary geometrical complexity. Instead of using a polygon clipping algorithm, we use the voxel traversal algorithm coupled with a local flood-fill scanline to intersect 2D or 3D boundary surface meshes with the fixed Cartesian grid. The small-cell stability problem near the boundaries is solved using a fully conservative merging method. Inflow and outflow conditions are also implemented in the model. The solver is validated on 2D academic test cases, such as the flow past a cylinder. The latter test cases are performed both in the frame of the body and in a fixed frame where the body moves across the mesh. Adaptive Cartesian gridding is provided by Paramesh, for the moment without complex geometries.
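For reference, the Minmod and Superbee limiters named above have standard closed forms; the sketch below shows them in a scalar 1-D MUSCL extrapolation of the left interface state. The function names and the 1-D scalar setting are illustrative assumptions, not the paper's solver.

```python
def minmod(r):
    """Minmod limiter: phi(r) = max(0, min(1, r))."""
    return max(0.0, min(1.0, r))

def superbee(r):
    """Superbee limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def muscl_left_state(u_m1, u_0, u_p1, limiter=minmod, eps=1e-12):
    """Second-order MUSCL extrapolation of the left state at face i+1/2.

    r is the ratio of consecutive slopes; eps guards against division
    by zero in flat regions.
    """
    r = (u_0 - u_m1) / (u_p1 - u_0 + eps)
    return u_0 + 0.5 * limiter(r) * (u_p1 - u_0)
```

On a smooth, linearly varying profile the limiter evaluates to 1 and the extrapolation recovers the full second-order slope.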
Abstract: Several optimization algorithms specifically applied to
the problem of Operation Planning of Hydrothermal Power Systems
have been developed and are used. Although providing solutions to
various problems encountered, these algorithms have some
weaknesses: difficulties in convergence, simplification of the original
formulation of the problem, or the complexity of the
objective function. Thus, this paper presents the development of a
computational tool for solving the identified optimization problem while
offering the user easy handling. Genetic Algorithms were adopted as the
intelligent optimization technique, and Java as the programming
language. First the chromosomes were modeled, then
the fitness function of the problem and the
operators involved were implemented, and finally the graphical interfaces
for user access were drafted. The program achieved coherent
performance in problem resolution without the need to
simplify the calculations, together with ease of
manipulating the simulation parameters and visualizing the output
results.
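The workflow the abstract outlines (chromosome modeling, fitness evaluation, genetic operators) can be sketched as follows. The bit-string encoding, one-point crossover, truncation selection and the one-max-style fitness used in the usage example are illustrative stand-ins for the hydrothermal dispatch problem, whose actual chromosome and fitness design the abstract does not specify.

```python
import random

def evolve(fitness, n_bits=16, pop_size=20, generations=60, p_mut=0.05, seed=1):
    """Minimal GA: truncation selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

For instance, `evolve(sum)` maximizes the number of 1-bits in the chromosome.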
Abstract: Cognitive science appeared about 40 years ago,
in response to the challenge of artificial intelligence, as common
territory for several scientific disciplines such as IT, mathematics,
psychology, neurology, philosophy, sociology, and linguistics. The
newborn science was justified by the complexity of the problems
related to human knowledge on the one hand, and on the other by the
fact that none of the above-mentioned sciences could explain
the mental phenomena alone. Based on the data supplied by
experimental sciences such as psychology or neurology, models of
the operation of the human mind are built in cognitive science. These
models are implemented in computer programs and/or electronic
circuits (specific to artificial intelligence) – cognitive systems –
whose competences and performances are compared to human
ones, leading to the reinterpretation of psychology and neurology data
and, respectively, to the construction of new models. In these
processes, psychology provides the experimental basis, while philosophy
and mathematics provide the level of abstraction utterly necessary for
the interplay of the mentioned sciences.
The general problematic of the cognitive approach
admits two important types of approach: the computational one,
starting from the idea that the mental phenomenon can be reduced to
calculus operations on 1s and 0s, and the connectionist one, which
considers the products of thinking to be a result of the interaction
between all the composing (included) systems. In the field of
psychology, measurements in the computational register use classical
inquiries and psychometric tests, generally based on calculus
methods. Viewing things from both sides that make up
cognitive science, we can notice a gap in the possibilities of measuring
psychological products from the connectionist
perspective, which requires a unitary understanding of the quality–
quantity whole. In such an approach, measurement by calculus proves to
be inefficient. Our research, carried out for more than 20 years,
leads to the conclusion that measuring by forms properly fits the
laws and principles of connectionism.
Abstract: Unified Speech and Audio Coding (USAC), the latest MPEG standard for unified speech and audio coding, uses a speech/audio classification algorithm to distinguish speech and audio segments of the input signal. The quality of the recovered audio can be further increased by a well-designed orchestra/percussion classification and modified subsequent processing. This paper proposes an orchestra/percussion classification algorithm for the USAC system which extracts only 3 scales of Mel-Frequency Cepstral Coefficients (MFCCs) rather than the traditional 13 scales, and uses an Iterative Dichotomiser 3 (ID3) decision tree rather than other, more complex learning methods; thus the proposed algorithm has lower computational complexity than most existing algorithms. Considering that frequent switching of the classification attribute may degrade the recovered audio signal, this paper also designs a modified subsequent process that helps the whole classification system reach an accuracy as high as 97%, comparable to the classical 99%.
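The core of the ID3 approach mentioned above is choosing, at each tree node, the attribute with the highest information gain. The sketch below shows that split criterion on a toy feature table; the binary features and class labels are illustrative assumptions standing in for the 3 MFCC-derived features and the USAC training data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction obtained by splitting on attribute index `attr`."""
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(sub) / len(labels) * entropy(sub)
                    for sub in by_value.values())
    return entropy(labels) - remainder

def best_split(rows, labels):
    """ID3 split rule: pick the attribute with maximal information gain."""
    return max(range(len(rows[0])), key=lambda a: information_gain(rows, labels, a))
```

On a table where attribute 0 perfectly separates "percussion" from "orchestra" rows and attribute 1 is uninformative, `best_split` returns 0.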
Abstract: This paper proposes a novel frequency offset (FO) estimator for orthogonal frequency division multiplexing (OFDM). Simplicity is the most significant feature of this algorithm, which can be repeated to achieve acceptable accuracy. The fractional and integer parts of the FO are also estimated jointly using the same algorithm. To do so, instead of conventional algorithms that usually use a correlation function, we use the DFT of the received signal. Therefore, complexity is reduced and the synchronization procedure can be carried out by the same hardware used to demodulate the OFDM symbol. Finally, computer simulations show that the accuracy of this method is better than that of other conventional methods.
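As an illustration of the DFT-based idea, the sketch below estimates the *integer* part of a frequency offset by finding the cyclic bin shift that best aligns the DFT of the received signal with a known pilot spectrum. The pilot pattern, the alignment metric and the small symbol size are illustrative assumptions; the paper's actual estimator (including the fractional part) is not reproduced here.

```python
import cmath

def dft(x):
    """Direct DFT (O(N^2)); fine for a small illustrative symbol."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def integer_cfo(rx, pilot_spectrum):
    """Return the cyclic bin shift d maximizing |<shifted RX spectrum, pilot>|.

    A carrier offset of d bins rotates rx[n] by exp(2j*pi*d*n/N), which
    cyclically shifts its spectrum by d bins.
    """
    N = len(rx)
    R = dft(rx)
    def metric(d):
        return abs(sum(R[(k + d) % N] * pilot_spectrum[k].conjugate()
                       for k in range(N)))
    return max(range(-N // 2, N // 2), key=metric)
```

With an 8-sample single-tone pilot shifted by 3 bins, the estimator recovers the offset of 3.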
Abstract: This paper presents the decoder design for the single error correcting and double error detecting code proposed by the authors in an earlier paper. The speed of error detection and correction of a code is largely dependent upon the associated encoder and decoder circuits. The complexity and the speed of such circuits are determined by the number of 1's in the parity check matrix (PCM). The number of 1's in the parity check matrix for the code proposed by the authors is fewer than in any currently known single error correcting/double error detecting code. This results in simplified encoding and decoding circuitry for error detection and correction.
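The decode logic for any SEC-DED code follows the same syndrome pattern: correct when the overall parity check fires, flag a double error when the Hamming syndrome is nonzero but the parity check does not. The sketch below uses the classical extended Hamming(8,4) parity-check matrix as a stand-in; the authors' low-weight PCM is not reproduced here.

```python
# Stand-in PCM: columns 1..7 are the binary representations of 1..7,
# plus an overall-parity row covering all 8 bits (extended Hamming(8,4)).
H = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],  # overall parity row
]

def decode(word):
    """Return (word, status) with status in {'ok', 'corrected', 'double'}."""
    syndrome = [sum(h * b for h, b in zip(row, word)) % 2 for row in H]
    s = syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2]  # error position, 1-based
    parity = syndrome[3]                                 # overall parity check
    if s == 0 and parity == 0:
        return word, "ok"
    if parity == 1:                    # odd number of errors: assume single, correct
        fixed = list(word)
        pos = s - 1 if s else 7        # s == 0 means the parity bit itself flipped
        fixed[pos] ^= 1
        return fixed, "corrected"
    return word, "double"              # even, nonzero syndrome: double error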
Abstract: Clustering large populations is an important problem
when the data contain noise and different shapes. A good clustering
algorithm or approach should be efficient enough to detect clusters
sensitively. Besides space complexity, time complexity also gains
importance as the size grows. Using hierarchies, we developed a new
algorithm that splits attributes according to the values they take,
choosing the splitting dimension so as to divide the database
into parts as nearly equal as possible. At each node we
calculate certain descriptive statistical features of the data
residing there, and by pruning we generate the natural clusters with a
complexity of O(n).
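The splitting step described above can be sketched as follows: pick the dimension whose median split divides the points most evenly, and keep simple descriptive statistics at the node. The even-split criterion and the particular statistics (count and per-dimension mean) are illustrative assumptions, not the paper's exact choices.

```python
import statistics

def split_dimension(points):
    """Pick the dimension whose median splits the points most evenly."""
    dims = len(points[0])
    def imbalance(d):
        med = statistics.median(p[d] for p in points)
        left = sum(1 for p in points if p[d] < med)
        return abs(len(points) - 2 * left)   # 0 means a perfectly even split
    return min(range(dims), key=imbalance)

def node_stats(points):
    """Descriptive statistics kept at each node (illustrative choice)."""
    dims = len(points[0])
    return {"n": len(points),
            "mean": [statistics.fmean(p[d] for p in points) for d in range(dims)]}
```

On points spread along dimension 0 but constant in dimension 1, the even-split rule selects dimension 0.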
Abstract: Clustering ensembles combine multiple partitions
generated by different clustering algorithms into a single clustering
solution. Clustering ensembles have emerged as a prominent method
for improving the robustness, stability and accuracy of unsupervised
classification solutions. So far, many contributions have been made to
finding consensus clusterings. One of the major problems in clustering
ensembles is the consensus function. In this paper, firstly, we
introduce clustering ensembles, the representation of multiple partitions,
and its challenges, and present a taxonomy of combination algorithms.
Secondly, we describe consensus functions in clustering ensembles
including Hypergraph partitioning, Voting approach, Mutual
information, Co-association based functions and Finite mixture
model, and next explain their advantages, disadvantages and
computational complexity. Finally, we compare the characteristics of
clustering ensemble algorithms, such as computational complexity,
robustness, simplicity and accuracy, across previous techniques on
different datasets.
Abstract: Vision-based intelligent vehicle applications often require large amounts of memory to handle video streaming and image processing, which in turn increases hardware and software complexity. This paper presents an FPGA implementation of a vision-based blind spot warning system. From the video frames, the information in the blind spot area is reduced to one-dimensional information. Analysis of the estimated entropy of the image allows an object to be detected in time. This idea has been implemented on the XtremeDSP video starter kit. The blind spot warning system uses only 13% of the kit's logic resources and 95k bits of block memory, and its frame rate is over 30 frames per second (fps).
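The entropy cue described above can be sketched as follows: collapse the blind-spot region to a 1-D gray-level histogram, compute its Shannon entropy, and treat a jump in entropy between frames as evidence of an object entering the region. The detection threshold and the frame-difference decision rule are illustrative assumptions, not the paper's FPGA design.

```python
import math
from collections import Counter

def region_entropy(pixels):
    """Shannon entropy (bits) of the gray-level distribution of a region."""
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in Counter(pixels).values())

def object_detected(prev_pixels, cur_pixels, threshold=0.5):
    """Flag an object when entropy jumps by more than `threshold` bits."""
    return region_entropy(cur_pixels) - region_entropy(prev_pixels) > threshold
```

A uniform (empty-road) region has zero entropy, so an object introducing a second gray level raises the entropy and trips the detector.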
Abstract: In order to optimize annual IT spending and to reduce
the complexity of an entire system architecture, SOA trials have been
started. It is common knowledge that to design an SOA system we
have to adopt a top-down approach, but in reality silo systems are
being built, so these companies cannot reuse newly designed services
and cannot enjoy SOA's economic benefits. To prevent this situation,
we designed a generic SOA development process referred to as the
architecture of "mass customization."
To define the generic detailed development processes, we did a case
study on an imaginary company. Through the case study, we could
define the practical development processes and found that they could
vastly reduce update and development costs.
Abstract: This article explains societal security, continuity scenarios and a methodological cycling approach. The organizational challenges of societal security call for the implementation of the international standard BS 25999-2 and the global ISO 22300 family of standards for business continuity management systems. An efficient global organizational system is distinguished by high entity complexity, connectivity and interoperability, and in fact does not have only cooperative relations. Competing businesses have numerous participating 'enemies' in apparent or hidden opponent and antagonistic roles with respect to a prosperous organizational system, leading to a crisis scene or even a battle theatre. Organizational business continuity scenarios are necessary for preparedness, planning, management and mastery of such 'a play' in real environments.
Abstract: This paper presents a heuristic to solve the large-size 0-1 multi-constrained knapsack problem (01MKP), which is NP-hard. Many researchers have used heuristic operators to identify the redundant constraints of a linear programming problem before applying the regular procedure to solve it. We use the intercept matrix to identify the zero-valued variables of the 01MKP, known as redundant variables. In this heuristic, first the dominance property of the intercept matrix of constraints is exploited to reduce the search space for finding optimal or near-optimal solutions of the 01MKP; second, we improve the solution by using the pseudo-utility ratio based on the surrogate constraint of the 01MKP. The heuristic is tested on benchmark problems of sizes up to 2500 taken from the literature, and the results are compared with optimum solutions. The space and computational complexity of solving the 01MKP using this approach are also presented. The encouraging results, especially for relatively large test problems, indicate that this heuristic can successfully be used to find good solutions for highly constrained NP-hard problems.
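The pseudo-utility step mentioned above can be sketched as ranking variables by profit divided by a surrogate-weighted resource consumption and greedily building a feasible 0-1 solution. Unit surrogate multipliers are an illustrative assumption (the paper derives its multipliers from the surrogate constraint), and the intercept-matrix reduction is not reproduced here.

```python
def greedy_01mkp(profits, weights, capacities, multipliers=None):
    """Greedy 01MKP heuristic using a surrogate pseudo-utility ratio.

    profits[j]   : profit of variable j
    weights[i][j]: consumption of resource i by variable j
    capacities[i]: capacity of constraint i
    """
    m, n = len(capacities), len(profits)
    mult = multipliers or [1.0] * m            # illustrative unit multipliers
    def ratio(j):
        surrogate = sum(mult[i] * weights[i][j] for i in range(m))
        return profits[j] / surrogate if surrogate else float("inf")
    order = sorted(range(n), key=ratio, reverse=True)
    x, used = [0] * n, [0] * m
    for j in order:                            # take items while all constraints hold
        if all(used[i] + weights[i][j] <= capacities[i] for i in range(m)):
            x[j] = 1
            used = [used[i] + weights[i][j] for i in range(m)]
    return x, sum(p * xi for p, xi in zip(profits, x))
```

On a small two-constraint instance the greedy pass picks the high-ratio item, skips the one that would violate a capacity, and fills in the cheap remainder.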
Abstract: An efficient architecture for low jitter All Digital
Phase Locked Loop (ADPLL) suitable for high speed SoC
applications is presented in this paper. The ADPLL is designed using
standard cells and described by Hardware Description Language
(HDL). The ADPLL implemented in a 90 nm CMOS process can
operate from 10 to 200 MHz and achieve worst case frequency
acquisition in 14 reference clock cycles. The simulation results show
that the PLL has a cycle-to-cycle jitter of 164 ps and a period jitter of
100 ps at 100 MHz. Since the digitally controlled oscillator (DCO) can
achieve both high resolution and wide frequency range, it can meet
the demands of system-level integration. The proposed ADPLL can
easily be ported to different processes in a short time. Thus, it can
reduce the design time and design complexity of the ADPLL, making
it very suitable for System-on-Chip (SoC) applications.
Abstract: In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that introduces perceptual weighting of the wavelet transform coefficients prior to the SPIHT encoding algorithm, in order to reach a targeted bit rate with a perceptual quality improvement over the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), used to achieve the perceptual decomposition weighting, and 3) the wavelet error sensitivity (WES), used to reduce the perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Abstract: Recently, a great amount of interest has been shown
in the field of modeling and controlling hybrid systems. One of the
efficient and common methods in this area utilizes mixed logical-dynamical
(MLD) systems in the modeling. In this method, the
system constraints are transformed into mixed-integer inequalities by
defining some logic statements. In this paper, a system containing
three tanks is modeled as a nonlinear switched system by using the
MLD framework. Comparing the model size of the three-tank system
with that of a two-tank system, it is deduced that the number of
binary variables, the size of the system, and its complexity
increase tremendously with the number of tanks, which makes the
control of the system more difficult. Therefore, methods should be
found which result in fewer mixed-integer inequalities.
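For reference, the standard MLD translation of a logic statement into mixed-integer inequalities (the general recipe, not necessarily the paper's exact construction for the tank system) uses big-M bounds. For a binary variable \(\delta\) and a function \(f\) bounded by \(m \le f(x) \le M\) over the domain of interest, the equivalence \([f(x) \le 0] \leftrightarrow [\delta = 1]\) is encoded as:

```latex
f(x) \le M\,(1 - \delta), \qquad
f(x) \ge \epsilon + (m - \epsilon)\,\delta,
```

where \(\epsilon\) is a small positive tolerance. Repeated application of such rules to the switching logic of the tanks yields the mixed-integer inequality description, and each rule contributes binary variables, which is why the model size grows so quickly with the number of tanks.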