A Study on the Factors Affecting Student Behavior Intention to Attend Robotics Courses at the Primary and Secondary School Levels

To explore the key factors affecting primary and secondary school students' intention to learn in robotics courses, this study takes the technology acceptance model as its theoretical basis and surveys 167 students from Jiading District, Shanghai, as research subjects. A model of students' learning-behavior intention in robotics courses is constructed. By verifying the causal path relationships between variables, the study concludes that teachers can enhance students' perceived usefulness of robotics courses by strengthening subjective norms and perceived entertainment and by reducing technology anxiety, for example by pacing the programming content gradually and analyzing learner characteristics. Students can improve perceived ease of use by enhancing self-efficacy. At the same time, robot hardware designers can optimize for entertainment and interactivity, which directly or indirectly increases the intention to learn in robotics courses. By acting on these factors, the learning behavior of primary and secondary school students can be made more sustainable.

A Low-Area Fully-Reconfigurable Hardware Design of Fast Fourier Transform System for 3GPP-LTE Standard

This paper presents a low-area, fully-reconfigurable Fast Fourier Transform (FFT) hardware design for the 3GPP-LTE communication standard. It fully supports 32 different FFT sizes, up to 2048 FFT points. In addition, a dedicated processing element is developed to enable reconfigurable computing, and a first-in first-out (FIFO) scheduling scheme is proposed for hardware-friendly FIFO resource arrangement. Synthesized in TSMC 40 nm CMOS technology, the circuit occupies a core area of only 0.2325 mm² and dissipates 233.5 mW at a maximum operating frequency of 250 MHz.
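
As a purely functional point of reference for the FFT computation described above (not the paper's processing-element or FIFO-scheduling design), the following Python sketch implements an iterative radix-2 FFT that can be sized to any power of two up to 2048 points; the names and the impulse test are illustrative.

```python
import cmath

def fft_radix2(x):
    """Iterative radix-2 decimation-in-time FFT.

    Functional reference model only: assumes len(x) is a power of two
    (e.g., 128 ... 2048 for LTE OFDM symbol sizes).
    """
    n = len(x)
    assert n and (n & (n - 1)) == 0, "size must be a power of two"

    # Bit-reversal permutation of the input samples.
    a = list(x)
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]

    # Butterfly stages: log2(n) passes over the data.
    m = 2
    while m <= n:
        w_m = cmath.exp(-2j * cmath.pi / m)
        for k in range(0, n, m):
            w = 1.0 + 0j
            for t in range(m // 2):
                u = a[k + t]
                v = a[k + t + m // 2] * w
                a[k + t] = u + v
                a[k + t + m // 2] = u - v
                w *= w_m
        m <<= 1
    return a

# Example: a 2048-point transform of an impulse gives an all-ones spectrum.
if __name__ == "__main__":
    spectrum = fft_radix2([1.0] + [0.0] * 2047)
    print(abs(spectrum[0]), abs(spectrum[1024]))  # both close to 1.0
```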

Design and Implementation of DC-DC Converter with Inc-Cond Algorithm

The component that most affects the efficiency of photovoltaic power systems is the solar panel. In other words, the efficiency of these systems is significantly limited by the low efficiency of the solar panels themselves. Thus, solar panels should be operated at the maximum power point through a power converter. In this study, a boost converter is designed with the incremental conductance (Inc-Cond) maximum power point tracking (MPPT) algorithm. Using this algorithm, the importance of the power converter in MPPT hardware design and the impact of MPPT operation are shown. It is worth noting that the initial operating point is the main criterion determining MPPT performance. In addition, it is shown that if the load resistance is lower than a critical value, the tracker fails to operate correctly. For these analyses, direct duty-cycle control is used to simplify the controller.
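
To make the decision rule concrete, the sketch below is a minimal Python model of one incremental conductance update under direct duty-cycle control, assuming a boost converter in which raising the duty cycle lowers the panel voltage and assuming the panel voltage stays positive; the step size, limits, and sign conventions are illustrative assumptions, not values from the study.

```python
def inc_cond_step(v, i, v_prev, i_prev, duty, duty_step=0.005):
    """One update of the incremental conductance (Inc-Cond) MPPT rule.

    At the maximum power point dP/dV = 0, which is equivalent to
    dI/dV = -I/V.  The duty cycle of the boost converter is adjusted
    directly (direct duty control): decreasing the duty cycle raises
    the panel voltage, increasing it lowers the panel voltage.
    All step sizes and limits here are illustrative.
    """
    dv = v - v_prev
    di = i - i_prev

    if dv == 0:
        if di == 0:
            pass                      # at the MPP, no change
        elif di > 0:
            duty -= duty_step         # move the operating voltage up
        else:
            duty += duty_step         # move the operating voltage down
    else:
        if di / dv == -i / v:
            pass                      # dP/dV == 0: already at the MPP
        elif di / dv > -i / v:
            duty -= duty_step         # left of MPP: raise panel voltage
        else:
            duty += duty_step         # right of MPP: lower panel voltage

    return min(max(duty, 0.05), 0.95)  # keep the duty cycle in a safe range
```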

Unsupervised Feature Learning by Pre-Route Simulation of Auto-Encoder Behavior Model

This paper describes cycle-accurate simulation results for the weight values learned by an auto-encoder behavior model in pre-route simulation. From these results, we visualize the first-layer representations on natural images. Much deep learning research has focused on learning high-level abstractions of unlabeled raw data through unsupervised feature learning. However, the computational complexity and training time of these methods on large amounts of data have limited further research, largely because the algorithms were executed on single-core CPUs. Parallel hardware such as FPGAs is therefore seen as a possible way to overcome these limitations. We adopt an existing auto-encoder and model its behavior in Verilog HDL before designing the hardware. Through pre-route simulation of this behavior model in ModelSim, we obtain cycle-accurate results for the parameters of each hidden layer. Cycle-accurate results are an important factor in designing parallel digital hardware. Finally, this paper demonstrates correct operation of the behavior-model-based pre-route simulation and visualizes the latent representations learned by the first hidden layer on the Kyoto natural image dataset.
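
For readers unfamiliar with the software counterpart of the simulated behavior model, the following NumPy sketch trains a single-hidden-layer, tied-weight auto-encoder; the layer sizes, learning rate, and random training patches are illustrative placeholders and do not reflect the paper's fixed-point Verilog model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 8x8 patches and 64 hidden units; the paper's actual
# layer sizes and number formats are not reproduced here.
n_visible, n_hidden, lr = 64, 64, 0.1
W = rng.normal(0.0, 0.01, (n_hidden, n_visible))   # tied encoder/decoder weights
b_h = np.zeros(n_hidden)
b_v = np.zeros(n_visible)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x):
    """One gradient step of a tied-weight auto-encoder on a batch x."""
    global W, b_h, b_v
    h = sigmoid(x @ W.T + b_h)                     # encode
    x_hat = sigmoid(h @ W + b_v)                   # decode with tied weights
    err = x_hat - x                                # reconstruction error
    delta_out = err * x_hat * (1.0 - x_hat)        # output-layer delta
    delta_h = (delta_out @ W.T) * h * (1.0 - h)    # hidden-layer delta
    W -= lr * (delta_h.T @ x + h.T @ delta_out) / len(x)
    b_v -= lr * delta_out.mean(axis=0)
    b_h -= lr * delta_h.mean(axis=0)
    return float((err ** 2).mean())

# Train on random "patches" as a stand-in for natural-image patches.
for step in range(200):
    loss = train_step(rng.random((32, n_visible)))
print("final reconstruction MSE:", loss)
```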

A New Floating Point Implementation of Base 2 Logarithm

Logarithms reduce products to sums and powers to products; they play an important role in signal processing, communication, and information theory. In hardware, they are used to handle multiplications, divisions, powers, and roots efficiently. There are three commonly used bases: the common logarithm (base 10), the natural logarithm (base e), and the binary logarithm (base 2). This paper reviews different methods of computing log2, shows the complexity of each, identifies the most accurate and efficient ones, and gives insight into their hardware design. We present a new method called Floor Shift for fast calculation of log2, and then combine this algorithm with a Taylor series to improve the accuracy of the output, illustrating the approach with two examples. We finally compare the algorithms and conclude with our remarks.
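
The Floor Shift method itself is not reproduced here, but the general idea of combining a shift-derived integer part with a Taylor-series refinement of the fractional part can be sketched as follows; the number of series terms is an illustrative choice.

```python
import math

def log2_floor_shift_taylor(x, terms=8):
    """Approximate log2(x) for x > 0.

    Step 1 (floor/shift idea): find the exponent e with 2**e <= x < 2**(e+1)
    by repeated halving/doubling, which in hardware corresponds to shifts.
    Step 2: refine the fractional part with a Taylor series of ln(1 + f),
    where the mantissa m = x / 2**e lies in [1, 2) and f = m - 1.
    The number of terms is illustrative; accuracy degrades as f approaches 1.
    """
    if x <= 0:
        raise ValueError("log2 is defined only for positive x")

    e = 0
    while x >= 2.0:       # shift right until the mantissa is below 2
        x *= 0.5
        e += 1
    while x < 1.0:        # shift left for inputs below 1
        x *= 2.0
        e -= 1

    f = x - 1.0           # f in [0, 1)
    ln1pf, term = 0.0, f
    for k in range(1, terms + 1):
        ln1pf += term / k if k % 2 == 1 else -term / k
        term *= f
    return e + ln1pf / math.log(2.0)

# Example: compare against the library value.
print(log2_floor_shift_taylor(10.0), math.log2(10.0))
```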

FPGA Hardware Implementation and Evaluation of a Micro-Network Architecture for Multi-Core Systems

This paper presents the design, implementation, and evaluation of a micro-network, or Network-on-Chip (NoC), based on a generic pipelined router architecture. The router is designed to efficiently support traffic generated by multimedia applications on embedded multi-core systems. It employs a simple routing mechanism and implements a round-robin scheduling strategy to resolve output-port contention and minimize latency. Virtual-channel flow control is applied to avoid head-of-line blocking and enhance NoC performance. The hardware design of the router architecture has been implemented at the register transfer level; its functionality is evaluated for two-dimensional Mesh/Torus topologies, and performance results are derived from the ModelSim simulator and the Xilinx ISE 9.2i synthesis tool. An example of a multi-core image processing system utilizing the NoC structure has been implemented and validated to demonstrate the capability of the proposed micro-network architecture. To reduce the complexity of the image compression and decompression architecture, the system uses an image processing algorithm based on the classical discrete cosine transform with an efficient zonal processing approach. The experimental results confirm that the proposed image compression scheme and NoC architecture achieve reasonable image quality with low processing time.
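
As a behavioral illustration of the round-robin output-port arbitration mentioned above (not the router's RTL), a minimal Python model might look like this; the port count and grant interface are assumptions.

```python
class RoundRobinArbiter:
    """Behavioral model of a round-robin arbiter for one output port.

    Requesters are granted in circular order starting just after the
    last winner, so no input port can starve the others.  This is a
    software reference only, not the router's hardware implementation.
    """

    def __init__(self, n_ports):
        self.n = n_ports
        self.last = self.n - 1   # pointer to the most recent winner

    def grant(self, requests):
        """requests: list of booleans, one per input port; returns the winner."""
        for offset in range(1, self.n + 1):
            port = (self.last + offset) % self.n
            if requests[port]:
                self.last = port
                return port
        return None              # no requests this cycle

# Example: four input ports contending for one output port.
arb = RoundRobinArbiter(4)
print(arb.grant([True, False, True, True]))   # -> 0
print(arb.grant([True, False, True, True]))   # -> 2
print(arb.grant([True, False, True, True]))   # -> 3
```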

Game-Tree Simplification by Pattern Matching and Its Acceleration Approach using an FPGA

In this paper, we propose a Connect6 solver that adopts a hybrid approach based on a tree-search algorithm and image processing techniques. The solver must handle complex computation and provide high performance in order to make real-time decisions. The proposed approach enables the solver to be implemented on a single Xilinx Spartan-6 XC6SLX45 FPGA without using any external devices. This compact implementation is achieved by using image processing techniques to optimize the tree-search algorithm of the Connect6 game. Tree search is widely used in computer games, and an optimal search yields the best move at every turn. Thus, many tree-search algorithms, such as the Minimax algorithm, and artificial intelligence approaches have been proposed in this field. However, there is one fundamental problem: the computation time increases rapidly with the growth of the game tree. In hardware, a larger game tree also implies a larger circuit, because of the highly parallel computation involved. This paper therefore aims to reduce the size of the Connect6 game tree using image processing techniques and the positional symmetry of the game. The proposed solver is composed of four computational modules: a two-dimensional checkmate-strategy checker, a template matching module, a skilful-line predictor, and a next-move selector. These modules cooperate in selecting the next move from a set of candidates, and their total circuit area is small. The details of the hardware design for an FPGA implementation are described, and the performance of the design is also reported.

Efficient Hardware Architecture of the Direct 2-D Transform for the HEVC Standard

This paper presents the hardware design of a unified architecture to compute the efficient 4x4, 8x8, and 16x16 two-dimensional (2-D) transforms for the HEVC standard. The architecture is based on fast integer transform algorithms. It is designed using only adders and shifts in order to significantly reduce hardware cost. The goal is to maximize circuit reuse during computation while reducing the number of operations by 40%. The architecture uses FIFOs to compute the second dimension. The proposed hardware was implemented in VHDL. The VHDL RTL code runs at 240 MHz on an Altera Stratix III FPGA. The number of cycles varies from 33 for the 4-point 2-D DCT to 172 for the 16-point 2-D DCT. Results show frequency improvements of up to 96% compared to an architecture obtained by direct transcription of the algorithm.
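
For illustration, the HEVC 4-point core transform uses the integer coefficients 64, 83, and 36, which can be realized with shifts and adds only; the sketch below shows one such multiplierless decomposition as a reference model, though the exact decomposition and the rounding/scaling stages in the proposed architecture may differ.

```python
def mul64(x):  # 64*x with a single shift
    return x << 6

def mul83(x):  # 83 = 64 + 16 + 2 + 1
    return (x << 6) + (x << 4) + (x << 1) + x

def mul36(x):  # 36 = 32 + 4
    return (x << 5) + (x << 2)

def dct4_1d(x):
    """One-dimensional 4-point HEVC core transform (no rounding/scaling).

    Uses the standard partial-butterfly decomposition with the HEVC
    integer coefficients 64, 83, and 36 expressed as shift-and-add,
    i.e. without any multipliers.  The rounding stages of the full
    2-D transform are omitted for brevity.
    """
    e0, e1 = x[0] + x[3], x[1] + x[2]   # even part
    o0, o1 = x[0] - x[3], x[1] - x[2]   # odd part
    return [
        mul64(e0) + mul64(e1),          # y0
        mul83(o0) + mul36(o1),          # y1
        mul64(e0) - mul64(e1),          # y2
        mul36(o0) - mul83(o1),          # y3
    ]

# Sanity check against plain matrix multiplication with the HEVC 4x4 matrix.
M = [[64, 64, 64, 64], [83, 36, -36, -83], [64, -64, -64, 64], [36, -83, 83, -36]]
x = [10, -3, 7, 2]
ref = [sum(M[r][c] * x[c] for c in range(4)) for r in range(4)]
assert dct4_1d(x) == ref
```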

An Embedded System for Artificial Intelligence Applications

Conventional approaches to implementing logic programming applications on embedded systems are purely software-based. As a consequence, a compiler is needed to transform the initial declarative logic program into an equivalent procedural one to be programmed into the microprocessor. This approach increases the complexity of the final implementation and reduces overall system performance. On the other hand, hardware implementations that only support logic programs cannot be used in applications where logic programs must be intertwined with traditional procedural ones. We exploit HW/SW co-design methods to present a microprocessor capable of supporting hybrid applications that use both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximately 1000% increase in performance), and reduces the complexity of the final implemented code. The proposed hardware design is supported by an extended C language called C-AG.

The Simulation and Realization of Input-Buffer Scheduling Algorithm in Satellite Switching System

The scheduling algorithm is a key technology in satellite switching systems with input buffering. In this paper, a new scheduling algorithm and its realization are proposed. Based on a crossbar switching fabric, the algorithm adopts a serial scheduling strategy and adjusts the output-port arbitration strategy to improve fairness across ports. Consequently, it increases the matching probability. The algorithm can greatly reduce scheduling delay and cell loss rate. Analysis and OPNET simulation results show that the proposed algorithm outperforms existing algorithms in average delay and cell loss rate with equivalent complexity. On the basis of these results, an FPGA-based hardware realization and simulation are completed, which validate the feasibility of the new scheduling algorithm.

Mutation Rate for Evolvable Hardware

Evolvable hardware (EHW) refers to a self-reconfigurable hardware design in which the configuration is under the control of an evolutionary algorithm (EA). A great deal of research has been done in this area, and several different EAs have been introduced. Whenever a specific EA is chosen for a particular problem, all of its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, must be selected in order to achieve the best results. Over the last three decades, much research has been carried out to identify the best parameters for these EA components on different test problems; however, different researchers propose different settings. In this paper, the behaviour of the mutation rate in a (1+λ) evolution strategy (ES) for designing logic circuits, which has not been analyzed before, is studied in depth. In an EHW system, mutation modifies the values of the logic cell inputs, the cell type (for example, from AND to NOR), and the circuit output. The behaviour of the mutation rate is analyzed with respect to the number of generations, genotype redundancy, and the number of logic gates used in the evolved circuits. The experimental results indicate which mutation rates to use during evolution for the design and optimization of logic circuits. Research on the best mutation rate over the last 40 years is also summarized.
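
To illustrate how a per-gene mutation rate acts in a (1+λ) ES over a logic-circuit genotype, the sketch below uses a simple feed-forward cell encoding; the gate set, genotype layout, rate, and placeholder fitness are illustrative assumptions, not the encoding analyzed in the paper.

```python
import random

GATE_TYPES = ["AND", "OR", "NAND", "NOR", "XOR"]   # illustrative cell types

def random_genotype(n_cells, n_inputs):
    """Each cell: [input_a, input_b, gate_type]; a separate gene selects the output."""
    geno = []
    for c in range(n_cells):
        max_src = n_inputs + c          # feed-forward: earlier cells or primary inputs
        geno.append([random.randrange(max_src),
                     random.randrange(max_src),
                     random.randrange(len(GATE_TYPES))])
    output_src = random.randrange(n_inputs + n_cells)
    return geno, output_src

def mutate(genotype, mutation_rate, n_inputs):
    """Point mutation: each gene (cell input, cell type, circuit output)
    is redrawn with probability mutation_rate."""
    geno, output_src = genotype
    new_geno = []
    for c, (a, b, g) in enumerate(geno):
        max_src = n_inputs + c
        if random.random() < mutation_rate:
            a = random.randrange(max_src)
        if random.random() < mutation_rate:
            b = random.randrange(max_src)
        if random.random() < mutation_rate:
            g = random.randrange(len(GATE_TYPES))
        new_geno.append([a, b, g])
    if random.random() < mutation_rate:
        output_src = random.randrange(n_inputs + len(geno))
    return new_geno, output_src

def evolve(fitness, n_cells=20, n_inputs=4, lam=4, rate=0.05, generations=100):
    """(1+λ) ES skeleton: one parent plus λ mutated offspring per generation."""
    parent = random_genotype(n_cells, n_inputs)
    best = fitness(parent)
    for _ in range(generations):
        for _ in range(lam):
            child = mutate(parent, rate, n_inputs)
            f = fitness(child)
            if f >= best:               # ties favor the offspring (neutral drift)
                parent, best = child, f
    return parent, best

# Toy demo: a placeholder fitness that rewards genotypes selecting XOR cells.
demo_fit = lambda g: sum(1 for _, _, t in g[0] if GATE_TYPES[t] == "XOR")
print(evolve(demo_fit)[1])
```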

Implementation of Adder-Subtracter Design with VerilogHDL

As chip density increases, designers are trying to put as many computational and storage facilities as possible on a single chip. As computational and storage circuits grow more complex, designing, testing, and debugging them become increasingly complex and expensive. Hardware is therefore designed using a hardware description language, which is more efficient and cost-effective. This paper focuses on the implementation of a 32-bit ALU design in the Verilog hardware description language. The adder and subtracter operate correctly on both unsigned and positive numbers. In an ALU, addition takes most of the time if it uses a ripple-carry adder. The general strategy for designing fast adders is to reduce the time required to form carry signals. Adders that use this principle are called carry look-ahead adders. The carry look-ahead adder is designed as a combination of 4-bit adders. The syntax of Verilog HDL is similar to the C programming language. This paper proposes a unified approach to ALU design in which both simulation and formal verification can coexist.
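
The carry look-ahead principle described above can be summarized with a behavioral reference model: each 4-bit block computes generate and propagate signals and derives all carries directly from them, and eight such blocks are cascaded into a 32-bit adder. The Python sketch below is a functional check of that principle, not the paper's Verilog.

```python
def cla_4bit(a, b, cin):
    """4-bit carry look-ahead block: sum and carry-out from generate/propagate."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(4)]   # generate
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(4)]   # propagate
    c = [cin, 0, 0, 0, 0]
    # Each carry depends only on g, p, and cin -- no ripple chain.
    c[1] = g[0] | (p[0] & c[0])
    c[2] = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c[0])
    c[3] = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c[0])
    c[4] = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) | (p[3] & p[2] & p[1] & g[0])
            | (p[3] & p[2] & p[1] & p[0] & c[0]))
    s = sum((p[i] ^ c[i]) << i for i in range(4))             # sum bit = p ^ carry-in
    return s, c[4]

def cla_32bit(a, b, cin=0):
    """32-bit adder built from eight cascaded 4-bit CLA blocks."""
    total, carry = 0, cin
    for blk in range(8):
        nib_a = (a >> (4 * blk)) & 0xF
        nib_b = (b >> (4 * blk)) & 0xF
        s, carry = cla_4bit(nib_a, nib_b, carry)
        total |= s << (4 * blk)
    return total & 0xFFFFFFFF, carry

# Quick check against Python's own addition.
assert cla_32bit(0x1234ABCD, 0x0FFF4321) == ((0x1234ABCD + 0x0FFF4321) & 0xFFFFFFFF, 0)
```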

Context Generation with Image Based Sensors: An Interdisciplinary Enquiry on Technical and Social Issues and their Implications for System Design

Image data holds a large amount of diverse context information. However, as of today, these resources remain largely untapped. The aim of this paper is therefore to present a basic technical framework that allows quick and easy exploitation of context information from image data, especially by non-expert users. Furthermore, the proposed framework is discussed in detail with respect to important social and ethical issues that impose special requirements on system design. Finally, a first sensor prototype is presented that meets the identified requirements. In addition, the necessary implications for the software and hardware design of the system are discussed, resulting in a sensor system that can be regarded as a good, acceptable, and justifiable technical solution, thereby enabling the extraction of context information from image data.

A Parallel Implementation of the Reverse Converter for the Moduli Set {2^n, 2^n−1, 2^(n−1)−1}

In this paper, a new reverse converter for the moduli set {2^n, 2^n−1, 2^(n−1)−1} is presented. We improve a previously introduced conversion algorithm to derive an efficient hardware design for the reverse converter. The hardware architecture of the proposed converter is based on carry-save adders and regular binary adders, with no modular adders required. The presented design is faster than the latest reverse converter reported for the moduli set {2^n, 2^n−1, 2^(n−1)−1}, and it also outperforms the reverse converters for the recently introduced moduli set {2^(n+1)−1, 2^n, 2^n−1}.
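
As a functional cross-check of what a reverse (residue-to-binary) converter must compute for this moduli set, the sketch below uses the standard Chinese Remainder Theorem; it says nothing about the carry-save adder structure of the proposed hardware, and the value of n in the example is arbitrary.

```python
from math import gcd

def crt_reverse(residues, moduli):
    """Residue-to-binary conversion by the Chinese Remainder Theorem.

    Functional reference only: the hardware in the paper uses carry-save
    and regular binary adders, but the resulting integer must match this
    CRT computation.  Requires Python 3.8+ for pow(x, -1, m).
    """
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse
    return x % M

def to_residues(x, moduli):
    return [x % m for m in moduli]

# Example with n = 5: moduli {2^n, 2^n - 1, 2^(n-1) - 1} = {32, 31, 15}.
n = 5
moduli = [2**n, 2**n - 1, 2**(n - 1) - 1]
assert all(gcd(a, b) == 1 for a in moduli for b in moduli if a != b)

x = 12345
assert x < moduli[0] * moduli[1] * moduli[2]      # within the dynamic range
assert crt_reverse(to_residues(x, moduli), moduli) == x
```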

Real-Time Control of a Two-Wheeled Inverted Pendulum Mobile Robot

Research on two-wheeled inverted pendulum (TWIP) mobile robots, commonly known as balancing robots, has gained momentum over the last decade in a number of robotics laboratories around the world. This paper describes the hardware design of such a robot. The objective is to develop a TWIP mobile robot, together with a MATLAB interfacing configuration, to be used as a flexible platform comprising an embedded unstable linear plant intended for research and teaching purposes. Issues such as the selection of actuators and sensors, signal processing units, MATLAB Real-Time Workshop coding, modeling, and the control scheme are addressed and discussed. The system is then tested using a well-known state feedback controller to verify its functionality.

Development of A Meta Description Language for Software/Hardware Cooperative Design and Verification for Model-Checking Systems

Model-checking tools such as the Symbolic Model Verifier (SMV) and NuSMV are available for checking hardware designs. These tools can automatically check the formal correctness of a design. However, NuSMV is too low-level for describing a complete hardware design, so the system definition, written in a language such as Verilog or VHDL, must be translated into a language such as NuSMV for validation. In this paper, we present a meta hardware description language, Melasy, with a code generator targeting both existing hardware description languages (HDLs) and model-checking languages, which solves this problem.

Evaluating Sinusoidal Functions by a Low Complexity Cubic Spline Interpolator with Error Optimization

We present a novel scheme to evaluate sinusoidal functions with low complexity and high precision using cubic spline interpolation. To this end, two different approaches are proposed to find the interpolating polynomial of sin(x) within the range [−π, π]. The first uses only a single data point while the other uses two, to keep the realization cost as low as possible. An approximation error optimization technique for cubic spline interpolation is then introduced and is shown to increase the interpolator's accuracy without increasing the complexity of the associated hardware. Architectures for the proposed approaches are also developed, which offer implementation flexibility with low power requirements.
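
As a rough numerical illustration of why cubic splines are attractive here, the snippet below fits SciPy's CubicSpline to sin(x) over [−π, π] and reports the interpolation error; the knot count is an arbitrary choice, and the paper's single- and two-data-point formulations and error optimization are not replicated.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Knots over [-pi, pi]; the knot count is illustrative, not the paper's.
knots = np.linspace(-np.pi, np.pi, 9)
spline = CubicSpline(knots, np.sin(knots))

# Evaluate the interpolation error on a dense grid.
xs = np.linspace(-np.pi, np.pi, 10001)
err = np.abs(spline(xs) - np.sin(xs))
print("max abs error:", err.max())   # small (roughly 1e-3 to 1e-2 for 9 knots)
```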

Wireless Sensor Networks for Swiftlet Farms Monitoring

This paper provides an in-depth study of a Wireless Sensor Network (WSN) application to monitor and control the swiftlet habitat. A complete system is designed and developed, including the hardware design of the nodes, Graphical User Interface (GUI) software, the sensor network, and interconnectivity for remote data access and management. A system architecture is proposed to address the requirements of habitat monitoring. Such application-driven design identifies important areas for further work in data sampling, communications, and networking. For this monitoring system, a sensor node (MTS400), IRIS and Micaz radio transceivers, and a USB-interfaced gateway base station from Crossbow (Xbow) Technology are employed. The GUI of the monitoring system is written in Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) together with Xbow Technology drivers provided by National Instruments. As a result, the monitoring system is capable of collecting data and presenting it in both tables and waveform charts for further analysis. The system can also send notification messages by email, provided Internet connectivity is available, whenever the habitat at remote sites (swiftlet farms) changes. Other implemented functions include a database for record-keeping and management and remote access through the Internet using LogMeIn software. Finally, this research concludes that a WSN for monitoring swiftlet habitat can be used effectively to monitor and manage the swiftlet farming industry in Sarawak.

Effect of Temperature on the Performance of Multi-Stage Distillation

The tray/multi-tray distillation process is a topic that has been investigated in great detail over the last decade by many teams, including Jubran et al. [1], Adhikari et al. [2], Mowla et al. [3], Shatat et al. [4] and Fath [5], to name a few. A significant amount of work and effort has focused on modeling and/or simulation of specific distillation hardware designs. In this work, we have focused our efforts on investigating and gathering experimental data on several engineering and design variables to quantify their influence on the yield of the multi-tray distillation process. Our goal is to generate experimental performance data that bridges existing gaps in the design, engineering, optimization, and theoretical modeling of the multi-tray distillation process.

Adaptive Distributed Genetic Algorithms and Its VLSI Design

This paper presents a dynamic adaptation scheme for the frequency of inter-deme migration in distributed genetic algorithms (GAs), and its VLSI hardware design. A distributed GA, or multi-deme GA, uses multiple populations that evolve concurrently. The purpose of the dynamic adaptation is to improve convergence so as to obtain better solutions. Through simulation experiments, we show that our scheme achieves better performance than fixed-frequency migration schemes.
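
A software sketch of the idea, under illustrative assumptions, is given below: several demes evolve independently, elites migrate on a ring, and the migration interval is shortened when the best fitness stalls and lengthened otherwise; the adaptation rule, operators, and the OneMax placeholder problem are not taken from the paper.

```python
import random

def one_max(bits):
    """Toy fitness: count of ones (placeholder problem)."""
    return sum(bits)

def evolve_deme(pop, mut_rate=0.02):
    """One generation of a simple GA inside a single deme."""
    pop = sorted(pop, key=one_max, reverse=True)
    survivors = pop[: len(pop) // 2]
    children = []
    while len(survivors) + len(children) < len(pop):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]                              # one-point crossover
        child = [bit ^ (random.random() < mut_rate) for bit in child]
        children.append(child)
    return survivors + children

def distributed_ga(n_demes=4, deme_size=20, length=40, generations=200):
    demes = [[[random.randint(0, 1) for _ in range(length)]
              for _ in range(deme_size)] for _ in range(n_demes)]
    interval, last_best = 10, 0          # illustrative adaptation state
    for gen in range(1, generations + 1):
        demes = [evolve_deme(d) for d in demes]
        best = max(one_max(ind) for d in demes for ind in d)
        if gen % interval == 0:
            # Migrate each deme's best individual to its neighbour (ring topology).
            elites = [max(d, key=one_max) for d in demes]
            for i, d in enumerate(demes):
                d[d.index(min(d, key=one_max))] = elites[(i - 1) % n_demes]
            # Adapt migration frequency: migrate more often when progress stalls.
            interval = max(2, interval - 2) if best <= last_best else min(20, interval + 2)
            last_best = best
    return best

print(distributed_ga())
```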