Abstract: Conventional approaches to implementing logic programming applications on embedded systems are solely of a software nature. As a consequence, a compiler is needed that transforms the initial declarative logic program into its equivalent procedural one, to be programmed into the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. Conversely, hardware implementations that support only logic programs cannot be used in applications where logic programs need to be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications using both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximate 1000% increase in performance) and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extension of the C language called C-AG.
Abstract: Since 1992, when Hugo de Garis published the
first paper on Evolvable Hardware (EHW), a period of intense
creativity has followed. EHW has been actively researched, developed
and applied to various problems. Different approaches have been
proposed, creating three main classifications: extrinsic, mixtrinsic
and intrinsic EHW. Each of these solutions has real merit.
Nevertheless, although extrinsic evolution generates some
excellent results, intrinsic systems are not as advanced. This
paper suggests three possible solutions for implementing a run-time
configuration intrinsic EHW system: an FPGA-based Run-Time
Configuration system, a JBits-based Run-Time Configuration system
and a Multi-board functional-level Run-Time Configuration system.
The main characteristic of the proposed architectures is that they are
implemented on Field Programmable Gate Arrays. A comparison of
the proposed solutions demonstrates that multi-board functional-level
run-time configuration is superior in terms of scalability, flexibility
and ease of implementation.
Abstract: Over the past few years, a number of efforts have
been exerted to build parallel processing systems that utilize the idle
power of LANs and PCs available in many homes and corporations.
The main advantage of these approaches is that they provide cheap
parallel processing environments for those who cannot afford the
expense of supercomputers and parallel processing hardware.
However, most of the solutions provided are not very flexible in their
use of available resources and are very difficult to install and set up.
In this paper, a multi-level web-based parallel processing system
(MWPS) is designed (appendix). MWPS is based on the idea of
volunteer computing; it is very flexible, easy to set up and easy to use.
MWPS allows three types of subscribers: simple volunteers (single
computers), super volunteers (full networks) and end users. All of
these entities are coordinated transparently through a secure web site.
Volunteer nodes provide the required processing power needed by
the system end users. There is no limit on the number of volunteer
nodes, and accordingly the system can grow indefinitely. Both
volunteer and system users must register and subscribe. Once they
subscribe, each entity is provided with the appropriate MWPS
components. These components are very easy to install.
Super volunteer nodes are provided with special components that
make it possible to delegate some of the load to their inner nodes.
These inner nodes may also delegate some of the load to some other
lower level inner nodes .... and so on. It is the responsibility of the
parent super nodes to coordinate the delegation process and deliver
the results back to the user.
MWPS uses a simple behavior-based scheduler that takes into
consideration the current load and previous behavior of processing
nodes. Nodes that fulfill their contracts within the expected time get a
high degree of trust. Nodes that fail to satisfy their contract get a
lower degree of trust.
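A minimal sketch of such a behavior-based trust update follows; the scoring rule, function name and constants are illustrative assumptions, not MWPS's actual scheduler:

```python
def update_trust(trust, met_deadline, reward=0.1, penalty=0.2):
    """Raise a node's trust when it fulfills its contract within the
    expected time, lower it when it does not; trust is clamped to [0, 1].
    Rule and constants are illustrative assumptions, not MWPS's own."""
    trust += reward if met_deadline else -penalty
    return max(0.0, min(1.0, trust))
```

A scheduler built on such a score could simply prefer dispatching work to nodes with higher trust, so unreliable volunteers gradually receive less load.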
MWPS is based on the .NET framework and provides the minimal
level of security expected in distributed processing environments.
Users and processing nodes are fully authenticated. Communications
and messages between nodes are very secure. The system has been
implemented using C#.
MWPS may be used by any group of people or companies to
establish a parallel processing or grid environment.
Abstract: In this paper, we evaluate the choice of suitable
quantization characteristics for both the decoder messages and the
received samples in Low Density Parity Check (LDPC) coded
systems using M-QAM (Quadrature Amplitude Modulation)
schemes. The analysis involves the demapper block that provides
initial likelihood values for the decoder, by relating its quantization
strategy of the decoder. A mapping strategy refers to the grouping of
bits within a codeword, where each m-bit group is used to select a
2m-ary signal in accordance with the signal labels. Further we
evaluate the system with mapping strategies like Consecutive-Bit
(CB) and Bit-Reliability (BR). A new demapper version, based on
approximate expressions, is also presented to yield a low complexity
hardware implementation.
Abstract: The evolutionary design of electronic circuits, or
evolvable hardware, is a discipline that allows the user to
automatically obtain the desired circuit design. The circuit
configuration is under the control of evolutionary algorithms. Several
researchers have used evolvable hardware to design electrical
circuits. Each time a particular algorithm is selected to carry
out the evolution, all its parameters, such as
mutation rate, population size, selection mechanisms, etc., must be tuned in
order to achieve the best results during the evolution process. This
paper investigates the abilities of evolution strategy to evolve digital
logic circuits based on programmable logic array structures when
different mutation rates are used. Several mutation rates (fixed and
variable) are analyzed and compared with each other to outline the
most appropriate choice to be used during the evolution of
combinational logic circuits. The experimental results outlined in this
paper are important as they could be used by every researcher who
might need to use the evolutionary algorithm to design digital logic
circuits.
Abstract: The truncated multiplier is a good candidate for digital
signal processing (DSP) applications, including finite impulse
response (FIR) filtering and the discrete cosine transform (DCT). Through
truncated multipliers, a significant reduction in Field Programmable
Gate Array (FPGA) resources can be achieved. This paper presents
for the first time a comparison of resource utilization of Spartan-3AN
and Virtex-5 implementation of standard and truncated multipliers
using Very High Speed Integrated Circuit Hardware Description
Language (VHDL). The Virtex-5 FPGA shows significant
improvement compared with the Spartan-3AN FPGA device. On the
Virtex-5 device, the percentage ratio of occupied slices for standard
to truncated multipliers increases from 40% to 73.86%, whereas on
the Spartan-3AN it decreases from 68.75% to 58.78%. Results also
show that the anomalies in average connection delay and maximum
pin delay observed on the Spartan-3AN FPGA device are efficiently
reduced on the Virtex-5 FPGA device.
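The truncation idea itself can be sketched in software: drop the partial-product bits in the k least-significant columns and sum the rest. The function name and parameters below are illustrative assumptions, and practical designs usually add a correction constant for the dropped columns, which this sketch omits:

```python
def truncated_multiply(a, b, n, k):
    """Multiply two n-bit unsigned integers, discarding partial-product
    bits in the k least-significant columns (a common truncation scheme;
    parameters are illustrative, not those of the paper's designs)."""
    total = 0
    for i in range(n):              # bits of operand a
        if (a >> i) & 1:
            for j in range(n):      # bits of operand b
                if (b >> j) & 1 and i + j >= k:
                    total += 1 << (i + j)  # keep only columns >= k
    return total
```

With k = 0 the result equals the exact product; larger k trades a bounded error in the low-order bits for fewer partial products, which is the source of the FPGA resource savings discussed above.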
Abstract: This paper presents the design, fabrication and
evaluation of a magneto-rheological damper. Semi-active control
devices have received significant attention in recent years because
they offer the adaptability of active control devices without requiring
the associated large power sources. Magneto-Rheological (MR)
dampers are semi- active control devices that use MR fluids to
produce controllable dampers. They potentially offer highly reliable
operation and can be viewed as fail-safe in that they become passive
dampers if the control hardware malfunctions. The advantage of MR
dampers over conventional dampers is that they are simple in
construction, offer a compromise between high-frequency isolation and
natural-frequency isolation, provide semi-active control, use very
little power, have a very quick response, have few moving parts, have
relaxed tolerances and interface directly with electronics. Magneto-
rheological (MR) fluids are controllable fluids belonging to the
class of active materials that have the unique ability to change their
dynamic yield stress when acted upon by an electric or magnetic
field, while keeping their viscosity relatively constant. This property
can be utilized in an MR damper, where the damping force is changed by
magnetically changing the rheological properties of the fluid. MR
fluids have a higher dynamic yield stress than electro-rheological (ER)
fluids and a broader operational temperature range. The objective of
this paper was to study the application of an MR damper to vibration
control, design a vibration damper using MR fluids, and test and
evaluate its performance. In this paper the rheology of MR fluids, the
theory behind them and their use in vibration control were studied.
Then an MR vibration damper suitable for vehicle suspension was
designed and fabricated using the MR fluid. The MR damper was
tested using a dynamic test rig, and the results were obtained in the
form of force vs. velocity and force vs. displacement plots. The
results were encouraging and motivate further research on the
topic.
Abstract: Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate the reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communications, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns: specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on the assembly of components available on a local area network or on the net. These components must be localized and identified in terms of available services and communication protocols before any request. The first part of the article introduces the base concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction, and the last section shows how different models can communicate among themselves.
Abstract: This paper presents a novel genetic algorithm, termed
the Optimum Individual Monogenetic Algorithm (OIMGA) and
describes its hardware implementation. As the monogenetic strategy
retains only the optimum individual, the memory requirement is
dramatically reduced and no crossover circuitry is needed, thereby
ensuring the requisite silicon area is kept to a minimum.
Consequently, depending on application requirements, OIMGA
allows the investigation of solutions that warrant either larger GA
populations or individuals of greater length. The results given in this
paper demonstrate that both the performance of OIMGA and its
convergence time are superior to those of existing hardware GA
implementations. Local convergence is achieved in OIMGA by
retaining elite individuals, while population diversity is ensured by
continually searching for the best individuals in fresh regions of the
search space.
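The monogenetic strategy described above (retain only the optimum individual, mutate rather than cross over, and probe fresh regions of the search space) can be sketched in software. The loop structure, names and parameters below are assumptions for illustration, not the paper's hardware design:

```python
import random

def oimga_sketch(fitness, length, t_outer, t_inner, rate):
    """Software sketch of an optimum-individual monogenetic search:
    keep only the single best bit-string found so far, restart from fresh
    random regions in an outer loop, and hill-climb by mutation (no
    crossover) in an inner loop. Illustrative assumptions throughout."""
    best = [random.randint(0, 1) for _ in range(length)]
    for _ in range(t_outer):                      # fresh region of the space
        cand = [random.randint(0, 1) for _ in range(length)]
        for _ in range(t_inner):                  # local mutation search
            trial = [b ^ (random.random() < rate) for b in cand]
            if fitness(trial) > fitness(cand):
                cand = trial                      # elite retained
        if fitness(cand) > fitness(best):
            best = cand
    return best
```

Because only one individual is stored, the memory footprint is a single chromosome plus a trial buffer, which mirrors the abstract's point that no crossover circuitry or population RAM is needed in hardware.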
Abstract: In this paper a new fast simplification method is presented. The method realizes Karnaugh maps with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain. The search operation relies on performing cross-correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross-correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion. This is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural nets (MNNs) that divide the input space into several homogeneous regions. This approach is applied to implement the XOR function, 16 logic functions on one bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
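The frequency-domain search can be sketched as follows: by the convolution theorem, cross-correlating the map's bit vector with a pattern of ones via FFTs locates candidate groups wherever the correlation equals the pattern weight. This is a minimal sketch of the idea only, not the paper's optimized MATLAB formulation:

```python
import numpy as np

def find_groups(bits, pattern):
    """Locate start positions where `pattern` (a group of ones) occurs in
    the binary vector `bits`, by cross-correlating in the frequency domain
    (convolution theorem). Sketch of the idea; normalization and the fast
    variant proved in the paper are not reproduced here."""
    n = len(bits) + len(pattern) - 1        # zero-padded length, no wrap-around
    corr = np.fft.ifft(np.fft.fft(bits, n) *
                       np.conj(np.fft.fft(pattern, n))).real
    # A full match scores exactly sum(pattern) at its alignment offset.
    return [i for i, c in enumerate(np.round(corr))
            if c == sum(pattern) and i <= len(bits) - len(pattern)]
```

The two FFTs and one inverse FFT cost O(n log n), versus O(n * m) for sliding the pattern directly, which is the kind of saving the abstract's step-count comparison addresses.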
Abstract: Multiphasing of dc-dc converters has been known to give technical and economical benefits to low-voltage high-power buck regulator modules. A major advantage of multiphasing dc-dc converters is the improvement of input and output performance in the buck converter. From this perspective, a potential use would be in renewable energy, where power quality is an important factor. This paper presents the design of a 2-phase 200W boost converter for a battery charging application. Analysis of results from hardware measurement of the boost converter demonstrates the benefits of using multiphase. Results from the hardware prototype of the 2-phase boost converter further show the potential extension of multiphase beyond its commonly used low-voltage high-current domains.
Abstract: This paper presents a new hardware interface using a
microcontroller which processes audio music signals to standard
MIDI data. A technique for processing music signals by extracting
note parameters from music signals is described. An algorithm to
convert the voice samples for real-time processing without complex
calculations is proposed. A high frequency microcontroller as the
main processor is deployed to execute the outlined algorithm. The
MIDI data generated is transmitted using the EIA-232 protocol.
Analysis of the generated data shows the feasibility of using
microcontrollers for a real-time MIDI-generation hardware interface.
Abstract: Wireless Sensor Networks (WSNs) are emerging
because of developments in wireless communication technology and the miniaturization of hardware. A WSN consists of a large number of low-cost, low-power, multifunctional sensor nodes that monitor physical conditions, such as temperature, sound, vibration, pressure,
motion, etc. The MAC protocol used in a sensor network must be energy efficient and should aim at conserving energy during its operation. In this paper, with the focus of analyzing how the
MAC protocols used in wireless ad hoc networks apply to WSNs, simulation
experiments were conducted in the Global Mobile Simulator
(GloMoSim) software. The number of packets sent by regular nodes and received by the sink node under different deployment strategies, the total energy
spent, and the network lifetime were chosen as the metrics for comparison. The simulation results show that the IEEE 802.11 protocol performs better than the CSMA and MACA protocols.
Abstract: In this paper, a pipelined version of genetic algorithm,
called PLGA, and a corresponding hardware platform are described.
The basic operations of conventional GA (CGA) are made pipelined
using an appropriate selection scheme. The selection operator, used
here, is stochastic in nature and is called SA-selection. This helps
maintain the basic generational nature of the proposed pipelined
GA (PLGA). A number of benchmark problems are used to compare
the performances of conventional roulette-wheel selection and the
SA-selection. These include unimodal and multimodal functions with
dimensionality varying from very small to very large. It is seen that
the SA-selection scheme gives performance comparable to
the classical roulette-wheel selection scheme for all the
instances, when the quality of solutions and the rate of convergence are considered.
The speedups obtained by PLGA for different benchmarks
are found to be significant. It is shown that a complete hardware
pipeline can be developed using the proposed scheme, if parallel
evaluation of the fitness expression is possible. In this connection
a low-cost but very fast hardware evaluation unit is described.
Results of simulation experiments show that in a pipelined hardware
environment, PLGA will be much faster than CGA. In terms of
efficiency, PLGA is found to outperform parallel GA (PGA) also.
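A well-known stochastic alternative to roulette-wheel sampling is selection by stochastic acceptance; the paper's SA-selection may differ in detail, so the sketch below illustrates only the general idea of a fitness-proportionate selector that needs no cumulative wheel and is therefore easy to pipeline:

```python
import random

def stochastic_acceptance_select(fitnesses):
    """Pick an index with probability proportional to fitness: draw a
    random individual and accept it with probability fitness / max_fitness.
    Illustrative only; the paper's SA-selection may differ in detail."""
    f_max = max(fitnesses)
    while True:
        i = random.randrange(len(fitnesses))
        if random.random() < fitnesses[i] / f_max:
            return i
```

Unlike roulette-wheel selection, no prefix sums over the whole population are required, so each selection touches only one individual at a time, a property that suits a hardware pipeline.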
Abstract: Accurate timing alignment and stability are important
to maximize the true counts and minimize the random counts in
positron emission tomography. Signals output from the detectors must
therefore be centered for the two isotopes before operation and fed
into four pulse-processing units, each of which can accept up to
eight inputs. The dual-source computed tomography system consists of two units
on the left for the 15 detector signals of the Cs-137 isotope and two units on
the right for the 15 detector signals of the Co-60 isotope. The gamma
spectrum consists of either a single photo peak or multiple photo peaks. This
allows the use of energy-discrimination electronic hardware
associated with the data acquisition system to acquire photon-count
data at a specific energy, even if poor-energy-resolution detectors
are used. This also helps to avoid counting Compton-scatter
counts, especially if a single discrete gamma photo peak is emitted by
the source, as in the case of Cs-137. In this study the polyenergetic
version of the alternating minimization algorithm is applied to the
dual-energy gamma computed tomography problem.
Abstract: This paper will present the initial findings of a
research into distributed computer rendering. The goal of the
research is to create a distributed computer system capable of
rendering a 3D model into an MPEG-4 stream. This paper outlines
the initial design, software architecture and hardware setup for the
system.
Distributed computing means designing and implementing
programs that run on two or more interconnected computing systems.
Distributed computing is often used to speed up the rendering of
graphical imaging. Distributed computing systems are used to
generate images for movies, games and simulations.
A topic of interest is the application of distributed computing to
the MPEG-4 standard. During the course of the research, a
distributed system will be created that can render a 3D model into an
MPEG-4 stream. It is expected that applying distributed computing
principles will speed up rendering, thus improving the usefulness and
efficiency of the MPEG-4 standard.
Abstract: A new and cost-effective robotic device was designed
for remote telesurgery using dual-tone multi-frequency (DTMF)
technology. A telesystem based on dual-tone multi-frequency signaling has a large
capability for sending and receiving data in hardware and software.
The robot consists of DC motors for arm movements and is
controlled manually through a mobile phone using DTMF
technology. The system enables the surgeon at the base station to
send commands through a mobile phone to the patient's robotic system,
which includes two robotic arms that translate the input into actual
instrument manipulation. A mobile phone is attached to an 8051
microcontroller, which can activate the robot through relays.
Remote robot-assisted telesurgery eliminates geographic constraints
on obtaining surgical expertise where it is needed and allows an expert
surgeon to teach or proctor the performance of surgical techniques by
real-time intervention.
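The robot above is driven by decoded DTMF keypresses; a standard software way to decode such tones is the Goertzel algorithm, sketched below. The sampling rate, block size and the use of Goertzel here are illustrative assumptions, not the paper's hardware decoder:

```python
import math

DTMF_LOW = [697, 770, 852, 941]          # row frequencies (Hz)
DTMF_HIGH = [1209, 1336, 1477, 1633]     # column frequencies (Hz)
KEYS = [["1", "2", "3", "A"],
        ["4", "5", "6", "B"],
        ["7", "8", "9", "C"],
        ["*", "0", "#", "D"]]

def goertzel_power(samples, freq, rate):
    """Signal power of `samples` at `freq` via the Goertzel recurrence."""
    coeff = 2 * math.cos(2 * math.pi * freq / rate)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def decode_dtmf(samples, rate=8000):
    """Return the key whose row/column tone pair is strongest in the block."""
    row = max(range(4), key=lambda i: goertzel_power(samples, DTMF_LOW[i], rate))
    col = max(range(4), key=lambda i: goertzel_power(samples, DTMF_HIGH[i], rate))
    return KEYS[row][col]
```

A decoded key would then be mapped to a relay action (e.g. a particular arm motor), closing the loop from the surgeon's keypad to instrument movement.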
Abstract: Cloud computing has become more and more mature over the last few years, and consequently the demand for better cloud services is increasing rapidly. One of the research topics for improving cloud services is desktop computing in a virtualized environment. This paper aims at the development of an adaptive virtual desktop service on a cloud computing platform, based on our previous research on virtualization technology. We implement a cloud virtual desktop and application software streaming technology that make it possible to provide Virtual Desktop as a Service (VDaaS). Given the development of remote desktop virtualization, the user's desktop can be shifted from the traditional PC environment to a cloud-enabled environment, stored on a remote virtual machine rather than locally. This proposed effort has the potential to provide an efficient, resilient and elastic environment for online cloud services. Users no longer need to bear the burden of platform maintenance, and the overall cost of hardware and software licenses is drastically reduced. Moreover, this flexible remote desktop service represents the next significant step toward the mobile workplace, letting users access their desktop environments from virtually anywhere.
Abstract: In this document, we propose a robust
conceptual strategy to improve robustness against manufacturing
defects and thus the reliability of logic CMOS circuits, in order to
enable the use of future CMOS technology nodes. This strategy
combines various types of design techniques: DFR (Design for
Reliability), hardware-redundancy tolerance techniques such as TMR
(Triple Modular Redundancy) for hard-error tolerance, and DFT
(Design for Testability). Results on the largest ISCAS and ITC
benchmark circuits show that our approach considerably improves
reliability by reducing the key factors: area costs and fault probability.
Abstract: Annotation of a protein sequence is pivotal for understanding its function. The accuracy of manual annotation provided by curators is still questionable, as it carries lesser evidence strength, and it remains a hard and time-consuming task. A number of computational methods, including tools, have been developed to tackle this challenging task. However, they require high-cost hardware, are difficult for bioscientists to set up, or depend on time-intensive and blind sequence-similarity searches like the Basic Local Alignment Search Tool. This paper introduces a new method of assigning highly correlated Gene Ontology terms of annotated protein sequences to partially annotated or newly discovered protein sequences. This method is fully based on Gene Ontology data and annotations. Two problems had to be solved to achieve this. The first problem relates to splitting the single monolithic Gene Ontology RDF/XML file into a set of smaller files that are easy to access and process. These files can then be enriched with protein sequences and Inferred from Electronic Annotation evidence associations. The second problem involves searching for a set of Gene Ontology terms semantically similar to a given query. The details of the macro and micro problems involved and their solutions, including the objective of this study, are described. This paper also describes protein sequence annotation and the Gene Ontology. The methodology of this study and a Gene Ontology based protein sequence annotation tool, namely extended UTMGO, are presented. Furthermore, its basic version, a Gene Ontology browser based on semantic similarity search, is also introduced.