Energy Recovery Soft Switching Improved Efficiency Half Bridge Inverter for Electronic Ballast Applications

An improved topology of a voltage-fed, quasi-resonant, soft-switching LCrCdc series-parallel half-bridge inverter with constant frequency for electronic ballast applications is proposed in this paper. The new topology offers a low-cost solution that reduces switching losses and circuit rating to achieve a high-efficiency ballast. The effect of switching losses on ballast efficiency is discussed from an experimental point of view. Building on this discussion, an improved topology that accomplishes soft-switching operation over a wide power-regulation range is proposed. The proposed structure uses a reverse-recovery diode to provide better operation of the ballast system. A symmetrical pulse-width modulation (PWM) control scheme is implemented to regulate the output power over a wide range. Simulation results are verified against experimental measurements obtained with a ballast-lamp laboratory prototype. Different load conditions are considered in order to clarify the performance of the proposed converter.

Dynamic Data Partition Algorithm for a Parallel H.264 Encoder

The H.264/AVC standard is a highly efficient video codec that provides high-quality video at low bit rates. Because it employs advanced coding techniques, its computational complexity has increased, and this complexity is the major obstacle to implementing a real-time encoder and decoder. Parallelism, which can be exploited on multi-core systems, is one approach to this problem. We analyze macroblock-level parallelism, which preserves the bit rate while achieving high processor concurrency. To reduce the encoding time, a dynamic data partition based on macroblock regions is proposed. This partitioning offers advantages in load balancing and data-communication overhead. Using it, the encoder achieves a speed-up of more than 3.59x on a four-processor system. This work can also be applied to other multimedia processing applications.
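
A minimal sketch of the macroblock-region idea, under the assumption that contiguous macroblock rows are grouped per processor and sized using the previous frame's per-row encoding cost; the cost values and the greedy grouping heuristic below are illustrative, not the paper's exact scheme:

```python
# Hypothetical dynamic partition of macroblock (MB) rows into per-processor regions.
def partition_mb_rows(row_costs, num_procs):
    """Greedy contiguous split of MB rows into num_procs regions of similar cost."""
    total = sum(row_costs)
    target = total / num_procs
    regions, current, acc = [], [], 0.0
    for row, cost in enumerate(row_costs):
        current.append(row)
        acc += cost
        # close the region once its load reaches the target (rows stay contiguous)
        if acc >= target and len(regions) < num_procs - 1:
            regions.append(current)
            current, acc = [], 0.0
    regions.append(current)          # last processor takes the remaining rows
    return regions

# previous-frame encoding cost per MB row (hypothetical values, e.g. in ms)
costs = [4.1, 5.0, 7.2, 6.8, 3.9, 4.4, 8.1, 7.5, 4.0]
for p, rows in enumerate(partition_mb_rows(costs, 4)):
    print(f"processor {p}: MB rows {rows}")
```

Because each region is a contiguous band of macroblock rows, data exchanged between processors in this sketch is limited to the rows at region borders.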

A Novel Recursive Multiplierless Algorithm for 2-D DCT

In this paper, a recursive algorithm for the computation of the 2-D DCT using Ramanujan numbers is proposed. With this algorithm, floating-point multiplication is completely eliminated, so the multiplierless algorithm can be implemented using shifts and additions only. The orthogonality of the recursive kernel is maintained through matrix factorization, which reduces the computational complexity. The inherent parallel structure yields simpler programming and hardware implementation and requires (3N/2) log₂ N − N + 1 additions and (N/2) log₂ N shifts, which is far less complex than other recent multiplierless algorithms.
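
As a hedged illustration of the shifts-and-additions principle only (not the paper's Ramanujan-number construction), the sketch below approximates a DCT coefficient by a short sum of signed powers of two, so that multiplying a sample by it needs no multiplier:

```python
import math

def sp2_terms(coeff, num_terms=3):
    """Greedily approximate coeff by a sum of signed powers of two."""
    terms, residual = [], coeff
    for _ in range(num_terms):
        if residual == 0.0:
            break
        e = round(math.log2(abs(residual)))
        s = 1 if residual > 0 else -1
        terms.append((s, e))
        residual -= s * 2.0 ** e
    return terms

def shift_add_multiply(x, terms):
    """Compute x * coeff for an integer sample x using only shifts and additions."""
    acc = 0
    for s, e in terms:
        acc += s * (x << e if e >= 0 else x >> -e)
    return acc

coeff = math.cos(math.pi / 4)        # a typical DCT butterfly coefficient
terms = sp2_terms(coeff)             # a few signed power-of-two terms
sample = 100
print(terms, shift_add_multiply(sample, terms), sample * coeff)
```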

A Real-Time Tracking System Developed for an Interactive Stage Performance

A real-time tracking system was built to track performers on an interactive stage. Using an ordinary, up-to-date desktop workstation, the performers' silhouette was segmented from the background and parameterized by calculating normalized central image moments. In the stage system, the silhouette moments were then sent to a parallel workstation, which used them to generate the corresponding 3D virtual geometry and projected the resulting graphics back onto the stage.
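
A minimal sketch of the silhouette parameterization step, assuming the segmentation has already produced a binary mask (the rectangular mask below is a stand-in, not stage data):

```python
import numpy as np

def normalized_central_moments(mask, orders=((2, 0), (1, 1), (0, 2), (3, 0), (0, 3))):
    """Normalized central moments eta_pq of a binary silhouette image."""
    ys, xs = np.nonzero(mask)                 # pixel coordinates of the silhouette
    m00 = len(xs)                             # zeroth moment = silhouette area
    xc, yc = xs.mean(), ys.mean()             # centroid from first-order moments
    etas = {}
    for p, q in orders:
        mu_pq = np.sum((xs - xc) ** p * (ys - yc) ** q)   # central moment
        etas[(p, q)] = mu_pq / m00 ** (1 + (p + q) / 2)   # scale normalization
    return etas

mask = np.zeros((120, 160), dtype=bool)
mask[30:90, 60:100] = True                    # hypothetical rectangular "silhouette"
print(normalized_central_moments(mask))
```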

Local Linear Model Tree (LOLIMOT) Reconfigurable Parallel Hardware

Local linear neuro-fuzzy models (LLNFM), like other neuro-fuzzy systems, are adaptive networks that provide robust learning capabilities and are widely utilized in applications such as pattern recognition, system identification, image processing and prediction. The local linear model tree (LOLIMOT) is a Takagi-Sugeno-Kang neuro-fuzzy algorithm that has proven its efficiency compared with other neuro-fuzzy networks in learning nonlinear systems and in pattern recognition. In this paper, dedicated reconfigurable, parallel processing hardware for the LOLIMOT algorithm and its applications is presented. This hardware realizes on-chip learning, which gives it the capability to work as a standalone device in a system. Synthesis results on FPGA platforms show its potential to run at least 250 times faster than software implementations of the algorithm.
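
For illustration, a minimal software sketch of how a trained local linear neuro-fuzzy model of the LOLIMOT type produces its output, with each local linear model weighted by a normalized Gaussian validity function; the centres, widths and linear parameters are hypothetical, and the tree construction (splitting the worst local model) and the hardware mapping are not shown:

```python
import numpy as np

centers = np.array([[0.25], [0.75]])          # one 1-D partition per local model (assumed)
sigmas  = np.array([[0.15], [0.15]])
weights = np.array([[0.1,  2.0],              # local linear models y_i = w_i0 + w_i1 * u
                    [1.2, -0.5]])

def lolimot_output(u):
    u = np.atleast_1d(u)
    # unnormalized Gaussian validity of each local model
    mu = np.exp(-0.5 * np.sum(((u - centers) / sigmas) ** 2, axis=1))
    phi = mu / mu.sum()                        # normalized validity functions
    local = weights[:, 0] + weights[:, 1] * u[0]
    return float(np.dot(phi, local))           # blended network output

for u in (0.1, 0.5, 0.9):
    print(u, lolimot_output(u))
```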

A Parallel Architecture for the Real Time Correction of Stereoscopic Images

In this paper, we present an architecture for the implementation of a real-time stereoscopic image correction approach. The architecture is parallel and makes use of several memory blocks that store precalculated data relating to the cameras used for image acquisition. The use of reduced images proves essential in the proposed approach; the suggested architecture must therefore also be able to carry out real-time reduction of the original images.
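
A minimal sketch of the precalculated-memory idea, assuming the per-pixel source coordinates that correct the images are computed offline for the given cameras and stored as look-up tables, so the real-time stage reduces to memory reads; the radial-distortion model and parameters below are hypothetical, not the paper's calibration data:

```python
import numpy as np

H, W = 240, 320
cy, cx, k1 = H / 2, W / 2, 1.5e-6              # assumed principal point and distortion

# ---- offline: build the LUT (one source coordinate per destination pixel) ----
v, u = np.mgrid[0:H, 0:W].astype(np.float32)
r2 = (u - cx) ** 2 + (v - cy) ** 2
map_u = np.clip((u - cx) * (1 + k1 * r2) + cx, 0, W - 1).astype(np.int32)
map_v = np.clip((v - cy) * (1 + k1 * r2) + cy, 0, H - 1).astype(np.int32)

# ---- online: real-time correction is only indexed reads from the stored tables ----
def correct(image):
    return image[map_v, map_u]

frame = np.random.randint(0, 256, (H, W), dtype=np.uint8)   # stand-in camera frame
corrected = correct(frame)
```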

AES and ECC Mixed for ZigBee Wireless Sensor Security

In this paper, we discuss the security protocols of the ZigBee wireless sensor network at the MAC layer. The 128-bit AES encryption algorithm in CCM* mode secures the transferred data; however, the AES secret key may be broken in the near future. ECC, an efficient public-key algorithm, has therefore been combined with AES to protect the ZigBee wireless sensor network from ciphertext and replay attacks. In addition, the proposed protocol can parallelize the integrity function to increase system performance.
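
A minimal sketch of the AES/ECC mix using a general-purpose crypto library, assuming an ECDH key agreement derives the 128-bit key that AES-CCM then uses for confidentiality and integrity; the curve choice, key derivation and nonce handling are assumptions, and this is not the paper's ZigBee/802.15.4 implementation:

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

# each node holds an EC key pair; the shared secret becomes the AES-CCM key
node_a = ec.generate_private_key(ec.SECP256R1())
node_b = ec.generate_private_key(ec.SECP256R1())
shared = node_a.exchange(ec.ECDH(), node_b.public_key())
key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
           info=b"link-key").derive(shared)                 # 128-bit AES key

aesccm = AESCCM(key)                                        # AES-128 in CCM mode
nonce = os.urandom(13)                                      # 13-byte nonce
frame = b"sensor reading: 23.5C"
header = b"frame-counter:42"                                # authenticated, not encrypted
ciphertext = aesccm.encrypt(nonce, frame, header)
assert aesccm.decrypt(nonce, ciphertext, header) == frame
```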

Preparation of Computer Model of the Aircraft for Numerical Aeroelasticity Tests – Flutter

This article presents the geometry and structure reconstruction procedure for an aircraft model intended for flutter research (based on the I22-IRYDA aircraft). Reverse engineering techniques and advanced surface-modeling CAD tools are used for the reconstruction. The authors discuss all stages of the data acquisition process and the computation and analysis of the measured data. A three-dimensional structured-light scanner was used for acquisition. In the subsequent sections, details of the reconstruction process are presented. The geometry reconstruction procedure transforms the measured input data (a point cloud) into a three-dimensional parametric computer model (a NURBS solid model) compatible with CAD systems. In parallel with the aircraft geometry, the internal structure (structural model) is extracted and modeled. In the last section, the evaluation of the obtained models is discussed.

Radio and Television Supreme Council as a Regulatory Board

Broadcasting has changed rapidly in parallel with the changes taking place in the world. It has also been influenced and reshaped by the emergence of new communication technologies. These developments have had many economic and social consequences, the most important of which concern the power of governments to control the means of communication and the control mechanisms defined for newly emerging issues. For this purpose, autonomous and independent regulatory bodies have been established by the state. One of these regulatory bodies is the Radio and Television Supreme Council, which was established in 1994 by Code No. 3984. Today's Radio and Television Supreme Council, which is responsible for regulating radio and television broadcasts all across Turkey, holds an important and effective position as an autonomous and independent regulatory body. On the one hand, the Radio and Television Supreme Council acts as a notable regulator of the sensitive area of radio and television broadcasting; on the other hand, as one of the central organs of media policy, it sets principles for the functioning of broadcasting control while keeping the concepts of democracy, liberalism and the public interest in mind. In this study, the role of the Radio and Television Supreme Council in controlling communication and its control mechanisms is examined in accordance with Code No. 3984, together with the changes in its duties introduced by Code No. 6112, dated 2011.

From Experiments to Numerical Modeling: A Tool for Teaching Heat Transfer in Mechanical Engineering

In this work, the numerical simulation of transient heat transfer in a cylindrical probe is carried out. An experiment was conducted by placing a steel cylinder in a heating chamber and recording its surface temperature over one hour. In parallel, a mathematical model was solved for one-dimensional transient heat transfer in cylindrical coordinates, considering the boundary conditions of the test. The model was solved using the finite-difference method because the thermal conductivity of the cylindrical steel bar and the convection heat-transfer coefficient used in the model are treated as temperature-dependent functions, and both conditions prevent the use of an analytical solution. The comparison between theoretical and experimental results showed an average deviation below 2%. It was concluded that numerical methods are useful for solving complex engineering problems. For constant k and h, the experimental methodology used here can serve as a tool for teaching heat transfer in mechanical engineering, using simplified mathematical models with analytical solutions.
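
A minimal explicit finite-difference sketch of the one-dimensional transient model in cylindrical coordinates with temperature-dependent k(T) and h(T); the material properties, correlations and chamber temperature below are assumed placeholders, not the paper's experimental values:

```python
import numpy as np

R = 0.025                 # cylinder radius [m] (assumed)
M = 25                    # number of radial intervals
dr = R / M
rho, cp = 7850.0, 490.0   # steel density [kg/m^3] and specific heat [J/(kg K)] (assumed)
T_inf = 600.0             # heating-chamber temperature [deg C] (assumed)

def k(T):                 # thermal conductivity of steel [W/(m K)], assumed linear fit
    return 54.0 - 0.03 * T

def h(T):                 # convection coefficient [W/(m^2 K)], assumed correlation
    return 15.0 + 0.02 * (T_inf - T)

r = np.linspace(0.0, R, M + 1)
T = np.full(M + 1, 25.0)  # initial temperature [deg C]
dt = 0.01                 # explicit time step [s], chosen below the stability limit
steps = int(3600.0 / dt)  # one hour, as in the experiment

for _ in range(steps):
    Tn = T.copy()
    # interior nodes: rho*cp*dT/dt = (1/r) d/dr (r k dT/dr)
    kp = k(0.5 * (T[1:M] + T[2:]))
    km = k(0.5 * (T[1:M] + T[:M-1]))
    rp = 0.5 * (r[1:M] + r[2:])
    rm = 0.5 * (r[1:M] + r[:M-1])
    flux = (rp * kp * (T[2:] - T[1:M]) - rm * km * (T[1:M] - T[:M-1])) / dr**2
    Tn[1:M] = T[1:M] + dt / (rho * cp) * flux / r[1:M]
    # centreline node (r = 0): symmetry condition
    Tn[0] = T[0] + dt / (rho * cp) * 4.0 * k(T[0]) * (T[1] - T[0]) / dr**2
    # surface node: conduction in plus convection from the chamber on a half control volume
    q_cond = k(0.5 * (T[M] + T[M-1])) * (T[M-1] - T[M]) / dr
    q_conv = h(T[M]) * (T_inf - T[M])
    Tn[M] = T[M] + dt * (q_cond + q_conv) / (rho * cp * 0.5 * dr)
    T = Tn

print(f"surface temperature after 1 h: {T[-1]:.1f} deg C")
```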

Performance Analysis of the Subgroup Method for Collective I/O

As many scientific applications require large-scale data processing, the importance of parallel I/O has been increasingly recognized. Collective I/O is one of the key features of parallel I/O and enables application programmers to handle large data volumes easily. In this paper we measure and analyze the performance of the original collective I/O and of the subgroup method, a way of using MPI collective I/O effectively. From the experimental results, we found that the subgroup method shows good performance for small data sizes.
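
A minimal mpi4py sketch of the subgroup idea as interpreted here (an assumption, not the paper's code): instead of one collective write over MPI.COMM_WORLD, processes are split into smaller subcommunicators and each subgroup issues its own collective write. The file name and group size are hypothetical:

```python
from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

GROUP_SIZE = 4                                  # processes per subgroup (assumed)
color = rank // GROUP_SIZE
sub = world.Split(color=color, key=rank)        # subcommunicator for this group

# each rank contributes one contiguous block of doubles
block = np.full(1024, rank, dtype='d')
offset = rank * block.nbytes                    # global offset keeps the file layout identical

fh = MPI.File.Open(sub, "output.dat", MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(offset, block)                  # collective only within the subgroup
fh.Close()
sub.Free()
```

Run with, for example, `mpiexec -n 8 python subgroup_io.py`; setting GROUP_SIZE equal to the total process count reduces this sketch to the original single-group collective I/O.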

Modeling and Simulations of Complex Low-Dimensional Systems: Testing the Efficiency of Parallelization

The deterministic quantum transfer-matrix (QTM) technique and its mathematical background are presented. This important tool in computational physics can be applied to a class of real physical low-dimensional magnetic systems described by the Heisenberg Hamiltonian, which includes macroscopic molecular-based spin chains, small magnetic clusters embedded in supramolecules and other interesting compounds. Using QTM, the spin degrees of freedom are accurately taken into account, yielding the thermodynamic functions at finite temperatures. In order to test the susceptibility-calculation application in a parallel environment, the speed-up and efficiency of parallelization are analyzed on our platform, an SGI Origin 3800 with p = 128 processor units. Using Message Passing Interface (MPI) system libraries, we find a code efficiency of 94% for p = 128, which makes our application highly scalable.
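
For reference, a tiny sketch of how such figures follow from the standard definitions S(p) = T(1)/T(p) and E(p) = S(p)/p; the timing values below are hypothetical placeholders, not measurements from the paper:

```python
def speedup(t_serial, t_parallel):
    """S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """E(p) = S(p) / p."""
    return speedup(t_serial, t_parallel) / p

t1 = 12800.0    # assumed single-processor wall time [s]
tp = 106.4      # assumed wall time on p processors [s]
p = 128
print(f"S = {speedup(t1, tp):.1f}, E = {efficiency(t1, tp, p):.2%}")
```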

Finite Element Analysis and Feasibility of Simple Stochastic Modeling in the Analysis of Fissuring in Grains during Soaking

A finite element analysis was conducted to determine the effect of moisture diffusion and hygroscopic swelling in rice. In parallel, a simple stochastic model was used to predict the number of grains cracked as a result of moisture absorption and hygroscopic swelling. Rice grains were soaked in thermally controlled water (25 °C) and then tested for compressive stress. The destructive compressive tests revealed, through compressive stress calculation, that the peak force required to cause cracking in grains soaked in water decreased as the soaking duration was extended. The results showed that several grains had a predicted compressive stress below the von Mises stress; these were interpreted as grains that cracked and/or broke during soaking. The technique developed in this experiment will facilitate approximating the number of grains that will crack during soaking.
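
A minimal stochastic sketch of the crack-count idea as read from the abstract: sample a compressive strength for each grain and count how many fall below the von Mises stress from the FE model. The distribution, its parameters and the stress value are hypothetical, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_grains = 500
strength = rng.normal(loc=18.0, scale=3.0, size=n_grains)   # MPa, assumed distribution
von_mises_stress = 15.0                                      # MPa, assumed FE result

cracked = np.sum(strength < von_mises_stress)                # grains predicted to crack
print(f"predicted cracked grains: {cracked} of {n_grains} ({cracked / n_grains:.1%})")
```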

Enabling Automated Deployment for Cluster Computing in Distributed PC Classrooms

The rapid improvement of microprocessors and networks has made it possible for PC clusters to compete with conventional supercomputers. Many high-throughput applications can be served by current desktop PCs, especially those in PC classrooms, leaving the supercomputers for large-scale, high-performance parallel computations. This paper presents our work on an automated deployment mechanism for cluster computing that utilizes the computing power of PCs, such as those residing in PC classrooms. Once deployed, these PCs can be transformed immediately into a pre-configured cluster computing resource without touching the existing education/training environment installed on them. Thus, training activities are not affected by this additional effort to harvest idle computing cycles. The time and manpower required to build and manage a computing platform in geographically distributed PC classrooms can also be reduced by this development.

Limitation Imposed by Polarization-Dependent Loss on a Fiber Optic Communication System

The effect of polarization-dependent loss (PDL) on a high-speed fiber-optic communication link has been investigated analytically. PDL and the signal's incoming state of polarization (SOP) are significantly correlated, and their various combinations produce different effects on system behavior, which are inspected here. Pauli's spin operator and the PDL parameters are combined to observe the attenuation induced by PDL in a link containing multiple PDL elements. It is found that, in the presence of PDL, the Q-factor and BER at the receiver fluctuate, making the system unstable, and the results show that these fluctuations are mainly due to the optical signal-to-parallel-noise ratio (OSNRpar). Generally, the Q-factor and BER deteriorate as the average PDL in the link increases, except for depolarized light, for which the system parameters improve as PDL increases.

High Level Synthesis of Kahn Process Networks (KPN) for Streaming Applications

Streaming applications usually consist of stages, running in parallel or in series, that incrementally transform a stream of input data. Breaking such an application into distinguishable blocks and mapping them onto independent hardware processing elements poses a design challenge. This requires a generic controller that automatically maps the data stream onto independent processing elements without dependencies or manual intervention. In this paper, a Kahn Process Network (KPN) for such streaming applications is designed and developed and mapped onto an MPSoC. It is designed such that a generic C-based compiler takes the mapping specification as input from the user, automates these design constraints, and automatically generates synthesized, optimized RTL code for the specified application.
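
A minimal software sketch of Kahn Process Network semantics: each process reads from its input FIFOs with blocking reads, computes, and writes to its output FIFOs, which gives the deterministic behavior that makes KPNs attractive for mapping onto processing elements. The three-stage pipeline below (produce, scale, consume) is a hypothetical example, not the paper's application:

```python
import threading, queue

def producer(out_q, n=10):
    for i in range(n):
        out_q.put(i)            # emit a token onto the FIFO channel
    out_q.put(None)             # end-of-stream marker

def scaler(in_q, out_q, factor=3):
    while True:
        tok = in_q.get()        # blocking read: the only way a KPN process waits
        if tok is None:
            out_q.put(None)
            break
        out_q.put(tok * factor)

def consumer(in_q):
    while True:
        tok = in_q.get()
        if tok is None:
            break
        print("received", tok)

q1, q2 = queue.Queue(), queue.Queue()   # unbounded FIFO channels between processes
procs = [threading.Thread(target=producer, args=(q1,)),
         threading.Thread(target=scaler,   args=(q1, q2)),
         threading.Thread(target=consumer, args=(q2,))]
for p in procs: p.start()
for p in procs: p.join()
```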

Grid Coordination with Marketmaker Agents

Market-based models are frequently used for resource allocation on the computational grid. However, as the size of the grid grows, it becomes difficult for the customer to negotiate directly with all the providers. Middle agents are introduced to mediate between providers and customers and to facilitate the resource allocation process. The most frequently deployed middle agents are matchmakers and brokers. The matchmaking agent finds candidate providers who can satisfy the requirements of the consumer, after which the customer negotiates directly with the candidates. Broker agents mediate the negotiation with the providers in real time. In this paper we present a new type of middle agent, the marketmaker. Its operation is based on two parallel processes: through the investment process the marketmaker acquires resources and resource reservations in large quantities, while through the resale process it sells them to the customers. The marketmaker's operation rests on the fact that, through its global view of the grid, it can perform a more efficient resource allocation than is possible in one-to-one negotiations between customers and providers. We present the operation and the algorithms governing the marketmaker agent, contrasting it with the matchmaker and broker agents. Through a series of simulations in the task-oriented domain we compare the operation of the three agent types. We find that the use of the marketmaker agent leads to better performance in the allocation of large tasks and a significant reduction of the messaging overhead.

A Numerical Study of Low-Level Jet Formation in Southeastern Iran

The presence of cold air combined with the convergent topography of the Lut Valley over the valley's sloping terrain can generate low-level jets (LLJs). Moreover, valley-parallel pressure gradients and a northerly LLJ are produced as a result of large-scale processes. In this numerical study, the regional MM5 model was run to obtain an appropriate dynamical analysis of the flows in the region for summer and winter. The results show that summer synoptic systems cause the formation of north-south pressure gradients in the valley, which can lead to winds with velocities of more than 14 m/s and to dust and wind storms lasting more than 120 days. In winter, by contrast, the presence of cold air masses in the region causes the average speed of the LLJs to decrease; during this period, downslope flows play a noticeable role in creating the nocturnal LLJs.

Towards Self-ware via Swarm-Array Computing

The work reported in this paper proposes swarm-array computing, a novel technique inspired by swarm robotics and built on the foundations of autonomic and parallel computing. The approach aims to apply autonomic computing constructs to parallel computing systems and, in effect, achieve the self-ware objectives that describe self-managing systems. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm and the landscape, is considered, and approaches that bind these constituents together are proposed. Space applications employing FPGAs are identified as a potential area for applying swarm-array computing to build reliable systems. The feasibility of the proposed approach is validated on the SeSAm multi-agent simulator, with landscapes generated using the MATLAB toolkit.