Abstract: In image processing, image compression can improve
the performance of digital systems by reducing the cost and
time of image storage and transmission without significant reduction
in image quality. This paper describes a low-complexity hardware
architecture of the Discrete Cosine Transform (DCT) for
image compression [6]. In this DCT architecture, common computations
are identified and shared to remove redundant computations
in the DCT matrix operation. Vector processing is the method used for
the implementation of the DCT. This reduction in the computational
complexity of the 2-D DCT reduces power consumption. The 2-D DCT is
performed on an 8x8 matrix using two 1-D discrete cosine transform
blocks and a transposition memory [7]. The inverse discrete cosine
transform (IDCT) is performed to recover the image matrix and
reconstruct the original image. The proposed image compression
algorithm is implemented in MATLAB. The VLSI design
of the architecture is implemented in Verilog HDL. The proposed
hardware architecture for image compression employing the DCT was
synthesized using RTL Compiler and mapped to 180 nm
standard cells. Simulation is done using ModelSim, and the
simulation results from MATLAB and Verilog HDL are compared.
Detailed power and area analysis was done using RTL Compiler
from Cadence. The power consumption of the DCT core is reduced to
1.027 mW with minimal area [1].
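The row-column decomposition described above (two 1-D DCT blocks
linked by a transposition memory) can be sketched in software. The
following Python is an illustrative reference model using the
standard orthonormal DCT-II definition, not the paper's hardware:

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II of a length-N sequence."""
    N = len(x)
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        out.append(c * s)
    return out

def idct_1d(X):
    """Orthonormal 1-D inverse DCT (DCT-III)."""
    N = len(X)
    out = []
    for n in range(N):
        s = 0.0
        for k in range(N):
            c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            s += c * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
        out.append(s)
    return out

def transpose(m):
    """Models the transposition memory between the two 1-D passes."""
    return [list(row) for row in zip(*m)]

def dct_2d(block):
    """2-D DCT of an 8x8 block: 1-D DCT on rows, transpose, 1-D DCT
    on rows again (i.e. on the original columns), transpose back."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(r) for r in transpose(rows)]
    return transpose(cols)

def idct_2d(coeffs):
    """Inverse 2-D DCT, undoing the two passes in reverse order."""
    cols = [idct_1d(r) for r in transpose(coeffs)]
    rows = [idct_1d(r) for r in transpose(cols)]
    return rows
```

Because both passes apply the same 1-D transform, a hardware
implementation can reuse one arithmetic datapath for rows and
columns, which is where shared common computations cut redundancy.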
Abstract: The design of technological procedures for
manufacturing certain products demands the definition and
optimization of technological process parameters. Their
determination depends on the model of the process itself and on its
complexity. Certain processes do not have an adequate mathematical
model, so they are modeled using heuristic methods. The first part of
this paper presents the state of the art in the use of soft computing
techniques in manufacturing processes from the perspective of their
applicability in modern CAx systems. Methods of artificial
intelligence that can be used for this purpose are analyzed. The
second part of the paper shows some of the developed models of
certain processes, as well as their applicability to the actual
calculation of parameters of some technological processes within the
design system, from the viewpoint of productivity.
Abstract: The paper outlines the drivers behind the movement
from products to solutions in the Hi-Tech Business-to-Business
markets. The paper lists the challenges in enabling the
transformation from products to solutions and also attempts to explore
strategic and operational recommendations based on the authors'
first-hand experiences with Japanese Hi-Tech manufacturing
organizations. Organizations in the Hi-Tech Business-to-Business
markets are increasingly being compelled to move to a solutions model
from the conventional products model. Despite the added complexity
of solutions, successful technology commercialization can be achieved
by making prudent choices in defining a relevant solutions model, by
backing the solution model through appropriate organizational design,
and by overhauling the new product development process and
supporting infrastructure.
Abstract: The complexity of lignocellulosic biomass requires
a pretreatment step to improve the yield of fermentable sugars. The
efficient pretreatment of corn cobs using microwave irradiation and
potassium hydroxide, followed by enzymatic hydrolysis, was
investigated. The objective of this work was to characterize the
optimal conditions for pretreatment of corn cobs using microwave
and potassium hydroxide to enhance enzymatic hydrolysis. Corn cobs
were submerged in different potassium hydroxide concentrations at
various temperatures and residence times. The pretreated corn cobs
were hydrolyzed to produce reducing sugar for analysis. The
morphology and microstructure of the samples were investigated by
thermogravimetric analysis (TGA), scanning electron microscopy
(SEM), and X-ray diffraction (XRD). The results showed that lignin
and hemicellulose were removed by the microwave/potassium
hydroxide pretreatment. The crystallinity of the pretreated corn
cobs was higher than that of the untreated cobs. This method was
compared with autoclave and conventional heating methods. The
results indicated that microwave-alkali treatment is an efficient
way to improve the enzymatic hydrolysis rate by increasing the
accessibility of the substrate to hydrolytic enzymes.
Abstract: In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. In 1967, M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing, and investigated their pattern recognition abilities [7]. Since then, many researchers in this field have investigated the properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from both theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19], [21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape on which these finite automata work independently (in parallel). Finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of cooperating systems of eight- or seven-way four-dimensional finite automata. A seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, or down, or in the future, but not in the past, on a four-dimensional input tape.
Abstract: Cryptography, image watermarking, and e-banking are
filled with apparent oxymora and paradoxes. Random sequences are
used as keys to encrypt the information to be used as a watermark
during watermark embedding, and also to extract the watermark
during detection. The keys are also heavily utilized for 24x7x365
banking operations. Therefore, a deterministic random sequence is
very useful for online applications. In order to obtain the same
random sequence, we need to supply the same seed to the generator.
Many researchers have used Deterministic Random Number
Generators (DRNGs) for cryptographic applications and Pseudo-Noise
(PN) random sequences for watermarking. Even though
PN sequences have some weaknesses under attack, the research
community has used them mostly in digital watermarking. On the other
hand, DRNGs have not been widely used in online watermarking because
of their computational complexity and lack of robustness. Therefore,
we have devised a new design for generating a DRNG from a Pi series
to make it useful for online cryptographic, digital watermarking,
and banking applications.
Abstract: Genetic Algorithms (GAs) are direct search
methods that require little information about the design space. This
characteristic, together with the robustness of these algorithms,
has made them very popular in recent decades. On the other hand, when
this method is employed, there is no guarantee of achieving optimal
results. This obliges the designer to run such algorithms more than
once to obtain more reliable results. There have been many attempts
to modify the algorithms to make them more efficient. In this paper,
by applying the fractal dimension (specifically, the box counting
method), the complexity of the design space is quantified in order
to determine the mutation and crossover probabilities (Pm and Pc).
The methodology is followed by a numerical example for further
clarification. It is concluded that this modification improves the
efficiency of GAs and makes them produce more reliable results,
especially for design spaces with higher fractal dimensions.
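The core idea can be illustrated in a short sketch: estimate the
box-counting dimension of a 2-D sample of the design space, then map
it to Pm and Pc. The mapping in `adapt_probabilities` and its ranges
are hypothetical illustrations, not the paper's calibration:

```python
import math

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a 2-D point set in the
    unit square: count occupied grid boxes N(s) at several grid
    resolutions s, then least-squares fit the slope of
    log N(s) versus log s."""
    xs, ys = [], []
    for s in scales:
        boxes = {(int(x * s), int(y * s)) for (x, y) in points}
        xs.append(math.log(s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope

def adapt_probabilities(dim, pm_range=(0.01, 0.2), pc_range=(0.6, 0.95)):
    """Hypothetical rule: a more complex design space (higher fractal
    dimension) gets a larger mutation probability Pm and a smaller
    crossover probability Pc."""
    t = max(0.0, min(1.0, dim - 1.0))  # normalize dim in [1, 2] to [0, 1]
    pm = pm_range[0] + t * (pm_range[1] - pm_range[0])
    pc = pc_range[1] - t * (pc_range[1] - pc_range[0])
    return pm, pc
```

A straight-line point set, for example, yields a dimension near 1 and
hence the mildest settings, while a space-filling sample pushes the
estimate, and thus Pm, upward.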
Abstract: On-line (near-infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and to monitor the operation of the process. These techniques are based on looking for similar spectra using nearest-neighbor algorithms and distance-based search methods. Searching for nearest neighbors in the spectral space is an NP-hard problem whose computational cost increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some form of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional index that can also be used to visualize the structure of the spectral database. This 2-D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means that prediction need not work in the high-dimensional space but can be based on the mapped space instead. The results illustrate that the proposed method is able to segment (cluster) spectral databases and to detect outliers that are not suitable for instance-based learning algorithms.
Abstract: In this article, a formal specification and verification of the Rabin public-key scheme in a formal proof system is presented. The idea is to combine the two views of cryptographic verification: the computational approach, which relies on the vocabulary of probability theory and complexity theory, and the formal approach, which is based on ideas and techniques from logic and programming languages. A major objective of this article is the presentation of the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Moreover, we explicate a (computer-proven) formalization of correctness as well as a computer verification of security properties using a straightforward computation model in Isabelle/HOL. The analysis uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness, as well as efficient computer proofs of security properties, is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. Consequently, we obtain reliable proofs with a minimal error rate that augment the used database, which provides a formal basis for further computer proof constructions in this area.
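For reference, the Rabin public-key scheme that the verification
targets can be sketched as follows. This is a toy Python model with
tiny illustrative primes p, q ≡ 3 (mod 4), not the Isabelle/HOL
formalization itself:

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def rabin_encrypt(m, n):
    """Rabin encryption is just squaring modulo n = p*q."""
    return pow(m, 2, n)

def rabin_decrypt(c, p, q):
    """For p, q = 3 (mod 4), a square root of c mod p is
    c^((p+1)/4) mod p; combining roots mod p and mod q via the CRT
    yields the four candidate plaintexts modulo n = p*q."""
    n = p * q
    mp = pow(c, (p + 1) // 4, p)   # square root of c modulo p
    mq = pow(c, (q + 1) // 4, q)   # square root of c modulo q
    _, yp, yq = egcd(p, q)         # p*yp + q*yq = 1
    r1 = (yp * p * mq + yq * q * mp) % n
    r2 = n - r1
    r3 = (yp * p * mq - yq * q * mp) % n
    r4 = n - r3
    return {r1, r2, r3, r4}
```

Decryption is ambiguous among four roots, which is exactly the kind
of correctness subtlety a formalization has to capture: the proof
must state that the plaintext is one of the four roots, typically
disambiguated by redundancy in the message.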
Abstract: Online learning with an Intelligent Tutoring System (ITS) is becoming very popular; such a system models the student's learning behavior and presents the learning material (content, questions and answers, assignments) to the student accordingly. In today's distributed computing environment, the tutoring system can take advantage of networking to reuse the model built for one student for students from other similar groups. In this paper we present a methodology in which, using Case-Based Reasoning (CBR), the ITS provides student modeling for online learning in a distributed environment with the help of agents. The paper describes the approach, the architecture, and the agent characteristics of such a system. This concept can be deployed to develop an ITS in which the tutor can author and the students can learn locally, while the ITS models the students' learning globally in a distributed environment. The advantage of such an approach is that both the learning material (domain knowledge) and the student model can be globally distributed, enhancing the efficiency of the ITS while reducing the bandwidth requirement and the complexity of the system.
Abstract: In this paper, we introduce a novel platform
encryption method that modifies its keys and random number
generators step by step during the encryption algorithm. Owing to
the complexity of the proposed algorithm, it is safer than existing
methods.
Abstract: We present a novel scheme to recognize isolated speech
signals using certain statistical parameters derived from those
signals. The determination of the statistical estimates is based on
extracted signal information rather than the original signal
information, in order to reduce the computational complexity. After
the speech signal is extracted from the ambient noise, subtle
details of these estimates are first exploited to segregate the
polysyllabic words from the monosyllabic ones. Precise recognition
of each distinct word is then carried out by analyzing the
histograms obtained from this information.
Abstract: This paper invites dialogue and reflection on
innovation and entrepreneurship by presenting concepts of innovation
that lead to the introduction of a complex theoretical framework:
Cooperative Innovation (CO-IN). CO-IN is a didactic model, drawing
on a Scandinavian tradition, that enhances and scaffolds processes
of cooperation to create innovation.
CO-IN is based on a cross-sectorial and multidisciplinary
approach. We introduce the concept of complementarity to help
capture the value of diversity, and we suggest the concept of "the
space in between" to understand the creation of identity as a
collective mind. We see dialogue and the use of multimodal
techniques as essential tools for conceptualization, allowing
clarification of the complexity and diversity that lead to decision
making based on knowledge as commons.
We introduce the didactic design and present our empirical
findings from an innovation workshop in Argentina. In a final
section we reflect on the design as a support for the development of
common ground, collective mind, and collective action, and for the
creation of knowledge as commons to facilitate innovation and
entrepreneurship.
Abstract: The objective of this paper is twofold: first, to
develop a formal framework for planning for mobile agents. A logical
language based on a temporal logic is proposed that can express a
type of task that often arises in network management. Second, to
design a planning algorithm for such tasks. The aim of this paper is
to study the importance of finding plans for mobile agents. Although
there has been a great deal of research on mobile agents, not much
work has been done to incorporate planning ideas into such agents.
This paper makes an attempt in this direction. A theoretical study
of finding plans for mobile agents is undertaken. A planning
algorithm (based on the paradigm of mobile computing) is proposed,
and its space, time, and communication complexities are analyzed.
The algorithm is illustrated by working out an example in detail.
Abstract: In this paper, with the purpose of further reducing the
complexity of the system while keeping its temporal and spatial
focusing performance, we investigate the possibility of using an
optimal one-bit time reversal (TR) system for impulse radio
ultra-wideband multi-user wireless communications. The results show
that, by optimally selecting the number of taps used in the
pre-filter, the optimal one-bit TR system can outperform the full
one-bit TR system. In some cases, the temporal and spatial focusing
performance of the optimal one-bit TR system appears to be
comparable with that of the original TR system. This is a
significant result, as the overhead cost is much lower than that
required by the original TR system.
Abstract: Enterprise-Wide Information Systems (EWIS)
implementation involves the entire business and requires changes
throughout the firm. Because of the scope, complexity, and
continuous nature of ERP, the project-based approach to managing
the implementation process has resulted in failure rates of between
60% and 80%. In recent years ERP systems have received much
attention. The organizational relevance and risk of ERP projects
make it important for organizations to focus on ways to make ERP
implementation successful. Once these systems are in place,
however, their performance depends on the identified macro
variables, viz. 'Business Process', 'Decision Making', and
'Individual/Group Working'. A questionnaire was designed and
administered, and the responses from 92 organizations were
compiled. The relationship of these variables with EWIS performance
is analyzed using inferential statistical measurements. The study
helps in understanding the performance of the presented model. The
study suggests ways of avoiding such calamities and thereby gaining
the necessary competitive edge. Whenever a discrepancy is
identified during performance appraisal, care has to be taken to
draft the necessary preventive measures. If all these measures are
taken care of, then EWIS performance will deliver the expected
results.
Abstract: This paper presents a VLSI design approach for
high-speed, real-time 2-D Discrete Wavelet Transform computation.
The proposed architecture, based on a new and fast convolution
approach, reduces the hardware complexity and shortens the critical
path to the multiplier delay. Furthermore, an advanced
two-dimensional (2-D) discrete wavelet transform (DWT)
implementation, with an efficient memory area, is designed to
produce one output in every clock cycle. As a result, a very high
speed is attained. The system is verified, using the JPEG2000
coefficient filters, on a Xilinx Virtex-II Field Programmable Gate
Array (FPGA) device without accessing any external memory. The
resulting computing rate is up to 270 Msamples/s, and the (9,7) 2-D
wavelet filter uses only 18 kb of memory (16 kb of first-in-first-out
memory) for a 256×256 image. In this way, the developed design
requires less memory and provides very high-speed processing as
well as high PSNR quality.
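The separable row-column structure underlying such 2-D DWT
architectures can be illustrated in software. The sketch below uses
a one-level Haar transform in place of the (9,7) filter pair, purely
to show the row pass / transpose / column pass data flow, not the
paper's convolution architecture:

```python
def haar_1d(x):
    """One level of a 1-D Haar DWT: pairwise averages (low-pass)
    followed by pairwise differences (high-pass), each effectively
    downsampled by two."""
    lo = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return lo + hi

def transpose(m):
    return [list(r) for r in zip(*m)]

def haar_2d(img):
    """One 2-D DWT level by separable filtering: filter every row,
    transpose, filter again, transpose back. The four quadrants of
    the result are the LL, LH, HL, and HH subbands."""
    rows = [haar_1d(r) for r in img]
    return transpose([haar_1d(c) for c in transpose(rows)])
```

On a constant image, all detail subbands vanish and only the LL
quadrant carries energy, which is the property compression exploits.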
Abstract: There are two paradigms proposed to provide QoS for Internet applications: Integrated Services (IntServ) and Differentiated Services (DiffServ). IntServ is not appropriate for a large network like the Internet, because it is very complex. Therefore, to reduce the complexity of QoS management, DiffServ was introduced to provide QoS within a domain using flow aggregation and per-class service. In these networks the QoS between classes is constant, which allows low-priority traffic to be affected by high-priority traffic; this is not suitable. In this paper, we propose a fuzzy controller that reduces the effect of the low-priority class on higher-priority ones. Our simulations show that our approach reduces the latency dependency of the low-priority class on higher-priority ones in an effective manner.
Abstract: With the increasing density of chips, designers are
trying to put as many computational and storage facilities as
possible on a single chip. Along with the complexity of
computational and storage circuits, design, testing, and debugging
become more complex and expensive. Therefore, hardware designs are
built using very high speed hardware description languages, which
are more efficient and cost-effective. This paper focuses on the
implementation of a 32-bit ALU design in the Verilog hardware
description language. The adder and subtractor operate correctly on
both unsigned and positive numbers. In an ALU, addition takes most
of the time if it uses a ripple-carry adder. The general strategy
for designing fast adders is to reduce the time required to form
carry signals. Adders that use this principle are called
carry-lookahead adders. The carry-lookahead adder is designed as a
combination of 4-bit adders. The syntax of Verilog HDL is similar
to that of the C programming language. This paper proposes a
unified approach to ALU design in which both simulation and formal
verification can co-exist.
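The carry-lookahead principle mentioned above can be sketched as a
bit-level model (illustrative Python, not the paper's Verilog): each
carry is computed directly as a two-level logic function of the
generate and propagate signals and the carry-in, so no carry ripples
through the stages.

```python
def cla_4bit(a, b, cin=0):
    """4-bit carry-lookahead adder: generate g_i = a_i AND b_i,
    propagate p_i = a_i XOR b_i; every carry is expanded from the
    recurrence c[i+1] = g[i] OR (p[i] AND c[i]) into a flat
    sum-of-products, so all carries are available in parallel."""
    g = [(a >> i) & (b >> i) & 1 for i in range(4)]         # generate
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(4)]       # propagate
    c0 = cin
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = (g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
          | (p[2] & p[1] & p[0] & c0))
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0])
          | (p[3] & p[2] & p[1] & p[0] & c0))
    s = [p[0] ^ c0, p[1] ^ c1, p[2] ^ c2, p[3] ^ c3]        # sum bits
    return sum(s[i] << i for i in range(4)), c4             # sum, cout
```

A 32-bit ALU would chain eight such 4-bit blocks, feeding each
block's carry-out into the next block's carry-in, exactly the
"combination of 4-bit adders" structure described above.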
Abstract: Real-time embedded systems should benefit from
component-based software engineering to handle complexity and to
address dependability. In these systems, applications must not only
be logically correct but must also behave within time windows.
However, among current component-based software engineering
approaches, few component models handle time properties in a manner
that allows efficient analysis and checking at the architectural
level. In this paper, we present a meta-model for component-based
software description that integrates timing issues. To achieve a
complete functional model of software components, our meta-model
focuses on four functional aspects: interface, static behavior,
dynamic behavior, and interaction protocol. With each aspect we
explicitly associate a time model. Such a time model can be used to
check a component's design against certain properties and to
compute the timing properties of component assemblies.