Abstract: Block replacement algorithms have been used extensively
in cache memory management to increase the hit ratio. Among the
basic replacement schemes, LRU and FIFO have been shown to be
effective in terms of hit rates. In this paper, we introduce a
flexible stack-based circuit that can be employed in hardware
implementations of both the LRU and FIFO policies. We propose a
simple and efficient architecture in which stack-based replacement
algorithms can be implemented without the drawbacks of traditional
architectures. The stack is modular; hence, a set of stack rows can
be cascaded depending on the number of blocks in each cache set. Our
circuit can be implemented in conjunction with the cache controller
and static/dynamic memories to form a cache system. Experimental
results show that our proposed circuit provides an average
improvement of 26% in storage bits, and its maximum operating
frequency is increased by a factor of two.
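The stack discipline the abstract describes can be illustrated in software. The following sketch is a hypothetical, purely illustrative model of stack-based LRU (not the paper's circuit): a hit moves the referenced block to the top of the stack, and a miss evicts the block at the bottom.

```python
class LRUStack:
    """Software model of a stack-based LRU replacement policy
    (illustrative only; the paper implements this in hardware)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = []  # index 0 = most recently used

    def access(self, block):
        """Return True on a cache hit, False on a miss."""
        if block in self.stack:           # hit: promote block to the top
            self.stack.remove(block)
            self.stack.insert(0, block)
            return True
        if len(self.stack) == self.capacity:
            self.stack.pop()              # miss: evict the LRU block (bottom)
        self.stack.insert(0, block)
        return False
```

For FIFO, the only change is to skip the promotion step on a hit, which is why a single flexible stack circuit can serve both policies.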
Abstract: Heterogeneity has to be taken into account when
integrating a set of existing information sources into a distributed
information system, which is nowadays often based on Service-
Oriented Architectures (SOA). This applies in particular to
distributed services such as event monitoring, which are useful in
the context of Event-Driven Architectures (EDA) and Complex Event
Processing (CEP). Web services deal with this heterogeneity at a
technical level but provide little support for event processing. Our
central thesis is that such a fully generic solution cannot provide
complete support for event monitoring; instead, source-specific
semantics, such as certain event types or support for certain event
monitoring techniques, have to be taken into account. Our core
result is the design of a configurable event monitoring (Web)
service that allows us to trade genericity for the exploitation of
source-specific characteristics. It thus delivers results for the
areas of SOA, Web services, CEP and EDA.
Abstract: Parallel programming models exist as an abstraction
of hardware and memory architectures. Several parallel programming
models are in common use: the shared memory model, the thread model,
the message passing model, the data parallel model, the hybrid
model, Flynn's models, the embarrassingly parallel computations
model and the pipelined computations model. These models are not
specific to a particular type of machine or memory architecture.
This paper presents a model program for the concurrent approach to
the data parallel model using Java programming.
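The essence of the data parallel model is that the same operation is applied to every element of a data set, with elements processed concurrently. The paper's program is in Java; the following is merely an illustrative sketch of the same idea in Python, where the worker function `square` and the pool size are arbitrary choices for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # example element-wise operation; any pure function works here
    return x * x

def data_parallel_map(func, data, workers=4):
    """Data parallel model: apply the same operation to every element
    of the data set, processing elements concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves the order of the input data
        return list(pool.map(func, data))
```

The concurrent approach differs from a sequential loop only in how the work is scheduled; the per-element operation and the result are identical.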
Abstract: This paper studies a segmented split-capacitor
Digital-to-Analog Converter (DAC) implemented in a differential-type
12-bit Successive Approximation Analog-to-Digital Converter
(SA-ADC). The series-capacitance split-array method is employed
because it reduces the total area of the capacitors required for
high-resolution DACs. A 12-bit regular binary array structure
requires 2049 unit capacitors (Cs), while the split array needs only
127 unit Cs. This results in a reduction of the total capacitance
and power consumption of the series split-array architecture
compared to regular binary-weighted structures. The paper presents
the 12-bit series split-capacitor DAC with a 4-bit thermometer-coded
DAC architecture, together with simulation and measured results.
Abstract: Recent satellite projects/programs make extensive
use of real-time embedded systems. 16-bit processors that meet the
Mil-Std-1750 standard architecture have been used in on-board
systems, and most space applications have been written in Ada. From
a futuristic point of view, 32-bit/64-bit processors are needed in
the area of spacecraft computing, and an effort in the study and
survey of 64-bit architectures for space applications is therefore
desirable. This will also result in significant technology
development in terms of VLSI and software tools for Ada (as the
legacy code is in Ada).
There are several basic requirements for a special-purpose processor
of this kind. They include radiation-hardened (RadHard) devices,
very low power dissipation, compatibility with existing operational
systems, scalable architectures for higher computational needs,
reliability, higher memory and I/O bandwidth, predictability, a
real-time operating system and manufacturability. Further
considerations include the selection of FPGA devices, the selection
of EDA tool chains, design flow, partitioning of the design, pin
count, performance evaluation, timing analysis, etc.
This project comprises a brief study of the 32-bit and 64-bit
processors readily available in the market and the design and
fabrication of a 64-bit RISC processor, named RISC MicroProcessor,
with the added functionality of an extended double-precision
floating-point unit and a 32-bit signal processing unit acting as
co-processors. In this paper, we emphasize the ease and importance
of using an open core (the OpenSparc T1 Verilog RTL) and open-source
EDA tools such as Icarus to develop FPGA-based prototypes quickly.
Commercial tools such as Xilinx ISE are also used for synthesis
where appropriate.
Abstract: With the advent of DSL services, high data rates are now available over phone lines, yet even higher rates are in demand. In this paper, we optimize the transmit filters that can be used over wireline channels. We present bit error rate results obtained when the optimized filters are used together with a decision feedback equalizer (DFE) in the receiver. We then show that significantly higher throughput can be achieved by modeling the channel as a multiple input multiple output (MIMO) channel. A receiver that employs a MIMO-DFE to deal jointly with several users is proposed and shown to provide significant improvement over the conventional DFE.
Abstract: Distributed computing systems are usually considered the most suitable model for practical solutions of many parallel algorithms. In this paper an enhanced distributed system is presented to improve the time complexity of Binary Indexed Trees (BIT). The proposed system uses multiple uniform processors with identical architectures and a specially designed distributed memory system. The analysis of this system has shown that it reduces the time complexity of the read query to O(Log(Log(N))) and the update query to constant complexity, while the naive solution has a time complexity of O(Log(N)) for both queries. The system was implemented and simulated using the VHDL and Verilog hardware description languages, with Xilinx ISE 10.1 as the development environment and ModelSim 6.1c as the simulation tool. The simulation has shown that the overhead resulting from the wiring and communication between the system fragments can be safely neglected, which makes it possible in practice to reach the maximum speed-up offered by the proposed model.
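For reference, the naive O(Log(N)) baseline the abstract mentions is the classic sequential Fenwick tree. The sketch below shows that baseline only; the distributed design that improves on it is the paper's contribution and is not reproduced here.

```python
class BinaryIndexedTree:
    """Sequential Fenwick tree (BIT): O(log N) prefix-sum reads and
    O(log N) point updates, the baseline the distributed system improves."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)   # 1-indexed internal array

    def update(self, i, delta):
        """Add delta to element i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i             # jump to the next node covering i

    def query(self, i):
        """Return the prefix sum of elements 1..i."""
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & -i             # strip the lowest set bit
        return total
```

Both loops iterate once per set bit manipulated, giving the O(log N) bounds that the proposed distributed memory system reduces to O(log log N) reads and constant-time updates.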
Abstract: The purpose of this research was to design costumes inspired by the configurations, colors and decorations of the Thai Royal Barges. The researcher investigated the bibliography and the importance of the Thai Royal Water-Course Procession, together with the history, configurations and decoration techniques of four Royal Barges. Furthermore, the researcher incorporated contemporary architecture, which became part of the four costumes with four patterns created in this research. The four costumes were designed by applying the physical configuration of the Royal Barges with folding techniques that create the geometric patterns found in the Royal Barges' decoration and in contemporary architecture. The researcher then united the identity color of each barge with each costume, composing the original patterns in a new, resized layout, so that new attractive patterns appeared. Nevertheless, the beauty of Thai tradition still remains through the use of Thai painting figures in black and white, the prevalent colors of contemporary architecture.
Abstract: Fully customized hardware provides high performance and low power consumption by specializing tasks in hardware, but it lacks design flexibility, since any change requires re-design and re-fabrication. Software-based solutions operate with software instructions, which makes them highly flexible thanks to the easy development and maintenance of software code; however, the execution of instructions introduces a high overhead in performance and area consumption. In the past few decades the reconfigurable computing domain has emerged, which overcomes the traditional trade-off between flexibility and performance and is able to achieve high performance while maintaining good flexibility. The dramatic gains in chip performance and design flexibility achieved by reconfigurable computing systems depend greatly on the design of their computational units and on how those units are integrated with reconfigurable logic resources. The computational unit of any reconfigurable system plays a vital role in defining its strength. In this paper an RFU-based computational unit design is presented, using tightly coupled, multi-threaded reconfigurable cores. The proposed design has been simulated for VLIW-based architectures, and a high gain in performance has been observed compared to conventional computing systems.
Abstract: In this paper we analyze the core issues affecting
software architecture in enterprise projects, where a large number
of people from different backgrounds are involved and complex
business, management and technical problems exist. We first describe
the general features of typical enterprise projects and then present
the foundations of software architectures. A detailed analysis of
the core issues affecting software architecture in each software
development phase is given. We focus on three main areas in each
phase: people, process and management issues; structural (product)
issues; and technology issues. After pointing out the core issues
and problems in these areas, we give recommendations for designing
good architectures. We observed these core issues, and the
importance of following the best software development practices, in
many large enterprise commercial and military projects over about
10 years of experience, and we also developed some novel practices.
Abstract: Reliability is one of the most important quality attributes of software. Based on the approaches of Reussner and Cheung, we propose a reliability prediction model for component-based software architectures. The value of the model is demonstrated through an experimental evaluation on a web server system.
Abstract: Sorting has received much attention among computational tasks over the past years because sorted data is at the heart of many computations. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. Many parallel sorting algorithms have been investigated for a variety of parallel computer architectures. In this paper, three parallel sorting algorithms have been implemented and compared in terms of their overall execution time: odd-even transposition sort, parallel merge sort and parallel rank sort. A cluster of workstations (Windows Compute Cluster) was used to compare the implemented algorithms. The C# programming language was used to develop the sorting algorithms, and the MPI (Message Passing Interface) library was selected to establish communication and synchronization between processors. The time complexity of each parallel sorting algorithm is also stated and analyzed.
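Of the three algorithms named, odd-even transposition sort maps most directly onto neighbouring processors. The sketch below is a sequential Python simulation of the algorithm (the paper's implementations are in C# with MPI): in the parallel version, all compare-exchanges within a phase run concurrently, and n phases suffice for n elements.

```python
def odd_even_transposition_sort(a):
    """Sequential simulation of odd-even transposition sort.

    Phases alternate between even pairs (0,1), (2,3), ... and odd
    pairs (1,2), (3,4), ...; in the parallel version each phase's
    compare-exchanges run concurrently on neighbouring processors."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2                 # 0 = even phase, 1 = odd phase
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:           # compare-exchange a pair
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

With one element per processor, each phase takes constant parallel time, giving the O(n) parallel time the algorithm is known for.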
Abstract: As chip manufacturing technology stands on the
threshold of a major evolution, which shrinks chips in size while
improving performance, an LFSR (Linear Feedback Shift Register) is
implemented at the layout level to develop a low-power chip, using
recent sub-micrometer CMOS layout tools. The LFSR counter can thus
become a new trend-setter in cryptography and is also beneficial
compared to Gray and binary counters in a variety of other
applications.
This paper compares the three architectures in terms of hardware
implementation, CMOS layout and power consumption, using the
Microwind CMOS layout tool, and thus provides a low-power
architecture implementation of an LFSR in CMOS VLSI.
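The counting behavior that lets an LFSR replace a Gray or binary counter can be shown in a few lines. The following is a behavioral Python sketch of a Fibonacci-style LFSR (not the paper's CMOS layout); the tap positions correspond to the primitive polynomial x^4 + x^3 + 1, an assumption made for the example.

```python
def lfsr_cycle(seed, taps, width):
    """Enumerate the state sequence of a Fibonacci LFSR until it
    returns to the seed. With taps from a primitive polynomial, the
    LFSR visits all 2**width - 1 non-zero states, so it can act as a
    compact counter built only from flip-flops and XOR gates."""
    state, states = seed, []
    while True:
        states.append(state)
        feedback = 0
        for t in taps:                    # XOR the tap bits together
            feedback ^= (state >> t) & 1
        # shift right, inserting the feedback bit at the MSB
        state = ((state >> 1) | (feedback << (width - 1))) & ((1 << width) - 1)
        if state == seed:
            return states
```

Unlike a binary ripple counter, the next state needs no carry chain, which is one reason an LFSR counter can be attractive for low-power layouts.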
Abstract: This project describes the modeling of various
mechatronic architectures, specifically the morphologies of robots,
in an educational environment. Each structure, developed by
pre-school, primary and secondary students, was created using the
concept of reverse engineering in a constructivist environment, to
be later integrated into educational software that promotes the
teaching of educational robotics in a virtual and economical
environment.
Abstract: Network-Centric Air Defense Missile Systems
(NCADMS) represent a superior development of air defense missile
systems and have been regarded as one of the major research issues
in the military domain at present. Due to the lack of knowledge of
and experience with NCADMS, modeling and simulation becomes an
effective approach to operational analysis, compared with
equation-based approaches. However, the complex dynamic interactions
among entities and the flexible architectures of NCADMS put forward
new requirements and challenges for the simulation framework and
models. Agent-Based Simulation (ABS) explicitly addresses the
modeling of the behaviors of heterogeneous individuals. Agents have
the capability to sense and understand things, make decisions, and
act on the environment. They can also cooperate with others
dynamically to perform the tasks assigned to them. ABS is thus an
effective approach to exploring the new operational characteristics
emerging in NCADMS. In this paper, based on an analysis of the
network-centric architecture and the new cooperative engagement
strategies for NCADMS, an agent-based simulation framework was
designed by extending the framework of the so-called System
Effectiveness Analysis Simulation (SEAS). The simulation framework
specifies the components, the relationships and interactions between
them, and the structure and behavior rules of an agent in NCADMS.
Based on scenario simulations, the information and decision
superiority and the operational advantages of NCADMS were analyzed,
and some suggestions were provided for its future development.
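The sense-decide-act cycle that defines an agent in ABS can be sketched in a few lines. The following toy model is purely illustrative and entirely hypothetical (the agents, the engagement rule and the range threshold are invented for the example; the actual NCADMS agents are far richer).

```python
class Agent:
    """Minimal sense-decide-act agent illustrating the ABS cycle."""

    def __init__(self, name, position):
        self.name = name
        self.position = position

    def sense(self, environment):
        # perceive the nearest threat in the shared environment
        return min(environment['threats'],
                   key=lambda t: abs(t - self.position),
                   default=None)

    def decide(self, threat):
        # toy rule: engage only threats within an arbitrary range of 10
        if threat is not None and abs(threat - self.position) <= 10:
            return 'engage'
        return 'hold'

    def act(self, action, environment, threat):
        if action == 'engage':
            environment['threats'].remove(threat)

def step(agents, environment):
    """One simulation tick: every agent senses, decides and acts."""
    for agent in agents:
        threat = agent.sense(environment)
        agent.act(agent.decide(threat), environment, threat)
```

Scenario simulations then amount to running many such ticks and observing the emergent system-level behavior, which is the strength of ABS the abstract highlights.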
Abstract: We present a novel scheme to evaluate sinusoidal functions with low complexity and high precision using cubic spline interpolation. To this end, two different approaches are proposed to find the interpolating polynomial of sin(x) within the range [-π, π]. The first deals with only a single data point, while the other uses two, to keep the realization cost as low as possible. An approximation error optimization technique for cubic spline interpolation is introduced next and is shown to increase the interpolator's accuracy without increasing the complexity of the associated hardware. Architectures for the proposed approaches are also developed, and they exhibit flexibility of implementation with low power requirements.
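To give a feel for piecewise cubic approximation of sin(x) on [-π, π], the sketch below uses simple cubic Hermite segments between uniformly spaced knots. This is a generic stand-in, not the paper's optimized spline; the number of segments is an arbitrary choice for the example.

```python
import math

def sin_cubic(x, segments=16):
    """Approximate sin(x) on [-pi, pi] with piecewise cubic Hermite
    interpolation between uniform knots (illustrative stand-in for
    the paper's optimized cubic spline interpolator)."""
    h = 2 * math.pi / segments
    k = min(int((x + math.pi) / h), segments - 1)   # segment index
    x0 = -math.pi + k * h
    t = (x - x0) / h                                # local coordinate in [0, 1]
    y0, y1 = math.sin(x0), math.sin(x0 + h)         # endpoint values
    d0, d1 = math.cos(x0) * h, math.cos(x0 + h) * h # scaled endpoint slopes
    h00 = 2*t**3 - 3*t**2 + 1                       # Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*y0 + h10*d0 + h01*y1 + h11*d1
```

With 16 segments the worst-case error is on the order of 1e-4, illustrating why a cubic interpolator can reach high precision from a small table of knot values, precisely the trade-off the paper's hardware architectures exploit.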
Abstract: Embedded systems need to respect stringent real-time
constraints. Various hardware components included in such systems,
such as cache memories, exhibit variability and therefore affect
execution time. Indeed, a cache memory access from an embedded
microprocessor may result either in a cache hit, where the data is
available, or in a cache miss, where the data needs to be fetched
from an external memory with an additional delay. It is therefore
highly desirable to predict future memory accesses during execution
in order to prefetch data appropriately without incurring delays. In
this paper, we evaluate the potential of several artificial neural
networks for the prediction of instruction memory addresses. Neural
networks have the potential to tackle the non-linear behavior
observed in memory accesses during program execution, and their
numerous demonstrated hardware implementations favor this choice
over traditional forecasting techniques for inclusion in embedded
systems. However, embedded applications execute millions of
instructions, and therefore millions of addresses need to be
predicted. This very challenging problem of neural-network-based
prediction of large time series is approached in this paper by
evaluating various neural network architectures based on the
recurrent neural network paradigm, with pre-processing based on the
Self-Organizing Map (SOM) classification technique.
Abstract: This paper discusses the applicability of the Data
Distribution Service (DDS) to the development of automated and
modular manufacturing systems, which require a flexible and robust
communication infrastructure. DDS is an emerging standard for
data-centric publish/subscribe middleware that provides an
infrastructure for platform-independent, many-to-many communication.
It particularly addresses the needs of real-time systems that
require deterministic data transfer and have low memory footprints
and high robustness requirements. After an overview of the standard,
several aspects of DDS are related to current challenges in the
development of modern manufacturing systems with distributed
architectures. Finally, an example application based on a modular
active fixturing system is presented to illustrate the described
aspects.
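The many-to-many, data-centric pattern DDS standardizes can be reduced to a topic-based publish/subscribe core. The sketch below is a minimal generic illustration in Python, not the DDS API (real DDS adds typed topics, discovery and rich QoS policies on top of this pattern); the topic name is an invented example.

```python
class Broker:
    """Minimal topic-based publish/subscribe core, illustrating the
    many-to-many pattern DDS standardizes (not the DDS API itself)."""

    def __init__(self):
        self.subscribers = {}   # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, sample):
        # every subscriber of the topic receives the data sample;
        # publishers and subscribers never reference each other directly
        for callback in self.subscribers.get(topic, []):
            callback(sample)
```

The decoupling shown here, where components share only topic names and data types rather than addresses, is what makes the pattern attractive for modular manufacturing systems whose components are added and removed over time.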
Abstract: We demonstrate 40 Gbps downstream PON transmission
based on PM-QPSK modulation using commercial DFB lasers without an
optical amplifier in the ODN, obtaining a 40 dB power budget. We
discuss this solution within NG-PON2 architectures.
Abstract: European Union candidate status provides a
strong motivation for decision-making in candidate countries
when shaping regional development policy, where a transfer of
power from the center to the periphery is envisioned. The
process of Europeanization anticipates that candidate countries
will configure their regional institutional templates in line
with the requirements of European Union policies, and it
introduces new instruments of the enlargement incentive
framework to be employed in regional development schemes. It is
observed that the contribution of local actors to decision-making
in the design of allocation architectures enhances the
efficiency of the funds and increases the positive effects of
the projects funded under regional development objectives. This
study aims at exploring the performance of three regional
development grant schemes in Turkey, established and allocated
under the pre-accession process, with special emphasis on the
roles of national and local actors in decision-making for
regional development. Efficiency analyses have been conducted
using the DEA methodology, which has proved to be a superior
method for comparative efficiency and benchmarking measurements.
The findings of this study, in parallel with similar
international studies, show that the participation of local
actors in decision-making contributes to both the quality and
the efficiency of the projects funded under the EU schemes.