CFD Modeling of Mixing Enhancement in a Pitted Micromixer by High Frequency Ultrasound Waves

The use of ultrasound waves is one technique for increasing mixing and mass transfer in microdevices. Ultrasound propagating into a liquid medium agitates the fluid, creates turbulence, and thereby increases mixing performance. In this study, CFD modeling of two-phase flow in a pitted micromixer equipped with a piezoelectric transducer operating at 1.7 MHz has been carried out. The micromixer was modeled at different fluid flow velocities, both in the absence of ultrasound and with ultrasound applied. The hydrodynamics of the fluid flow and the mixing efficiency with ultrasound were compared with the case without ultrasound. The CFD results show good agreement with the experimental results. The flow pattern inside the micromixer is parallel in the absence of ultrasound, whereas it is no longer parallel when ultrasound is applied. In fact, propagation of ultrasound energy into the fluid flow in the studied micromixer changed the hydrodynamics and the form of the flow pattern and thereby enhanced mixing. In general, the CFD results lead to the conclusion that applying ultrasound energy to the liquid medium increases turbulence and mixing and, consequently, improves the mass transfer rate within the micromixer.
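
The abstract does not define its mixing-efficiency metric. A common choice in micromixing studies is an index based on the standard deviation of the concentration over an outlet cross-section; the sketch below is one such definition (NumPy, illustrative sample values), not necessarily the one used in the paper.

```python
import numpy as np

def mixing_index(c, c_mean=0.5):
    """Mixing index in [0, 1]: 1 = perfectly mixed, 0 = fully segregated.

    c: concentrations sampled over an outlet cross-section (0..1),
    c_mean: concentration of a perfect mixture (0.5 for equal inlet flows).
    """
    sigma = np.sqrt(np.mean((np.asarray(c) - c_mean) ** 2))
    sigma_max = np.sqrt(c_mean * (1.0 - c_mean))  # fully segregated case
    return 1.0 - sigma / sigma_max

# e.g. parallel (unmixed) flow vs. ultrasound-agitated flow:
print(mixing_index([0, 0, 0, 1, 1, 1]))           # ~0.0, segregated
print(mixing_index([0.45, 0.5, 0.55, 0.5, 0.5]))  # ~0.94, well mixed
```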

Simulation of Dynamic Behavior of Seismic Isolators Using a Parallel Elasto-Plastic Model

In this paper, a one-dimensional (1D) Parallel Elasto-Plastic Model (PEPM), able to simulate the uniaxial dynamic behavior of seismic isolators having a continuously decreasing tangent stiffness with increasing displacement, is presented. The parallel modeling concept is applied to discretize the continuously decreasing tangent stiffness function, making it possible to simulate the dynamic behavior of seismic isolation bearings by placing linear elastic and nonlinear elastic-perfectly plastic elements in parallel. The mathematical model has been validated by comparing the experimental force-displacement hysteresis loops, obtained by testing a helical wire rope isolator and a recycled rubber-fiber reinforced bearing, with those predicted numerically. Good agreement between the simulated and experimental results shows that the proposed model can be an effective numerical tool to predict the force-displacement relationship of seismic isolators over relatively large displacements. Compared to the widely used Bouc-Wen model, the proposed model avoids the numerical solution of a first-order nonlinear ordinary differential equation at each time step of a nonlinear time history analysis, thus reducing the computational effort, and requires the evaluation of only three model parameters from experimental tests, namely the initial tangent stiffness, the asymptotic tangent stiffness, and a parameter defining the transition from the initial to the asymptotic tangent stiffness.
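
The abstract describes the model structure without equations. The sketch below is a minimal Python reading of the parallel concept: one linear spring (the asymptotic tangent stiffness) in parallel with elastic-perfectly-plastic elements whose stiffnesses and yield displacements discretize the decreasing tangent-stiffness curve. The element values and their number are illustrative, not the paper's calibration.

```python
def pepm_force(x_history, k_asym, epp):
    """Parallel Elasto-Plastic Model: one linear spring of stiffness
    k_asym in parallel with elastic-perfectly-plastic (EPP) elements.

    epp: list of (k_i, xy_i) pairs -- elastic stiffness and yield
    displacement of each EPP element (illustrative discretization).
    """
    xp = [0.0] * len(epp)           # plastic displacement of each element
    forces = []
    for x in x_history:
        f = k_asym * x              # asymptotic linear elastic element
        for i, (k, xy) in enumerate(epp):
            trial = k * (x - xp[i])
            fy = k * xy
            if abs(trial) > fy:     # element yields: cap force, update state
                trial = fy if trial > 0 else -fy
                xp[i] = x - trial / k
            f += trial
        forces.append(f)
    return forces
```

In this assembly the initial tangent stiffness is k_asym plus the sum of the element stiffnesses k_i, and the response tends to the asymptotic stiffness k_asym once all EPP elements have yielded, matching the three-parameter description above.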

Effects of Incident Angle and Distance on Visible Light Communication

Visible Light Communication (VLC) adds wireless communication capability to illumination systems. One of its key applications is recognizing the user's location using indoor illuminators such as light-emitting diodes. For localization of individual receivers in these systems, it is usually assumed that receivers and transmitters are placed in parallel. In practice, however, this assumption is hard to satisfy because receivers move randomly. It is therefore necessary to analyze the case in which the transmitter is not perfectly parallel to the receiver, and to identify how the optical gain changes with the receiver's tilt angle and its distance from the illuminators. In this paper, we simulate the optical gain for various combinations of receiver tilt and distance, and identify the resulting patterns in the optical gain. These results can help VLC applications estimate the extent of location errors arising from receiver optical gain and identify their root cause.
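
For reference, a widely used model of how the received optical gain depends on distance and angles is the Lambertian line-of-sight channel DC gain; the paper's exact model may differ, and the parameter values below are illustrative.

```python
import math

def los_gain(d, phi, psi, m=1.0, area=1e-4, psi_c=math.radians(60)):
    """Lambertian line-of-sight channel DC gain H(0).

    d: transmitter-receiver distance (m), phi: irradiance angle at the LED,
    psi: incidence angle at the (possibly tilted) photodiode, m: Lambertian
    order, area: photodiode area (m^2), psi_c: receiver field of view.
    Optical filter and concentrator gains are omitted for brevity.
    """
    if abs(psi) > psi_c:
        return 0.0                      # outside the field of view
    return ((m + 1) * area / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * math.cos(psi))

# Tilting the receiver changes psi and hence the gain:
print(los_gain(d=2.0, phi=math.radians(20), psi=math.radians(0)))
print(los_gain(d=2.0, phi=math.radians(20), psi=math.radians(45)))
```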

Proxisch: An Optimization Approach of Large-Scale Unstable Proxy Servers Scheduling

Nowadays, big companies such as Google and Microsoft, which have abundant proxy servers, can crawl a given website in parallel very effectively. For researchers who lack such expensive proxy servers, however, crawling large amounts of information from a single website in parallel remains a puzzle. In this case, a good choice for researchers is to use free public proxy servers crawled from the Internet. To improve the efficiency of a web crawler, two issues must be considered first: (1) tasks may fail owing to the instability of free proxy servers; (2) a proxy server will be blocked if it visits a single website too frequently. In this paper, we propose Proxisch, an optimization approach for scheduling large-scale unstable proxy servers, which allows anyone to run a web crawler efficiently at extremely low cost. Proxisch works efficiently by making maximum use of reliable proxy servers. To solve the second problem, it establishes a frequency control mechanism that keeps the visiting frequency of any chosen proxy server below the website's limit. The results show that our approach outperforms other scheduling algorithms.
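
The frequency control mechanism is not detailed in the abstract; a minimal sliding-window limiter along the lines described might look as follows (Python; the limit and window values are hypothetical).

```python
import time
from collections import deque

class ProxyRateLimiter:
    """Keep each proxy's visit frequency to a site below a limit
    (a sketch of the frequency-control idea; the paper's exact
    mechanism may differ)."""

    def __init__(self, max_visits=10, window_s=60.0):
        self.max_visits = max_visits    # e.g. site tolerates 10 visits...
        self.window_s = window_s        # ...per 60-second window
        self.history = {}               # proxy -> deque of visit timestamps

    def acquire(self, proxy):
        now = time.monotonic()
        q = self.history.setdefault(proxy, deque())
        while q and now - q[0] > self.window_s:
            q.popleft()                 # drop visits outside the window
        if len(q) < self.max_visits:
            q.append(now)
            return True                 # safe to send a request via proxy
        return False                    # would exceed the site's limit
```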

Detecting the Edge of Multiple Images in Parallel

An edge is a variation of brightness in an image. Edge detection is useful in many application areas, such as locating forests and rivers in satellite images or detecting broken bones in medical images. This paper discusses detecting the edges of multiple aerial images in parallel. The proposed approach was tested on 38 images: 37 color and one monochrome. The time taken to process N images in parallel is comparable to the time taken to process one image sequentially. Message Passing Interface (MPI) and Open Computing Language (OpenCL) are used to achieve task-level and pixel-level parallelism, respectively.
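
As a rough illustration of the task-level split, the sketch below distributes image files round-robin over MPI ranks with mpi4py; a pure-NumPy Sobel filter stands in for the paper's OpenCL pixel-level kernel, and the file names are hypothetical.

```python
# Task-level parallelism with MPI: each rank edge-detects its own images.
import numpy as np
from mpi4py import MPI

def sobel(gray):
    """Sobel gradient magnitude (stand-in for the OpenCL per-pixel kernel)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    gx = np.zeros(gray.shape, float)
    gy = np.zeros(gray.shape, float)
    for i in range(1, gray.shape[0] - 1):
        for j in range(1, gray.shape[1] - 1):
            win = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

comm = MPI.COMM_WORLD
files = [f"aerial_{i}.npy" for i in range(38)]   # hypothetical image files
for name in files[comm.rank::comm.size]:         # round-robin task split
    np.save("edges_" + name, sobel(np.load(name)))
```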

Centralized Peak Consumption Smoothing Revisited for Habitat Energy Scheduling

Currently, electricity suppliers must predict their customers' consumption in order to deduce the power they need to produce. A first step is therefore to optimize household consumption, flattening the load curves by limiting peaks in energy consumption. Here, centralized real-time scheduling is proposed to manage appliances starting in parallel. The aim is to avoid exceeding a given power ceiling while optimizing consumption across a habitat. A Raspberry Pi serves as the control box; the scheduler interacts with the various sensors over 6LoWPAN. At the scale of a single dwelling, household consumption decreases, particularly at the times corresponding to the peaks. However, it would be wiser to consider a residential complex, where the effect would be more significant; the ceiling would then no longer be fixed, and scheduling would operate on two scales: per dwelling and at the level of the residential complex.
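
The abstract does not give the scheduling algorithm; one simple reading of "start appliances in parallel without exceeding a limit" is a greedy earliest-feasible-start scheduler like the sketch below (illustrative powers and time-slot granularity).

```python
def schedule(requests, ceiling):
    """Delay appliance starts so total power never exceeds the ceiling
    (a greedy sketch of the idea; the paper's algorithm may differ).

    requests: list of (power_W, duration_slots), one per appliance.
    Returns the start slot chosen for each appliance.
    """
    load = []                                   # power in use per time slot
    starts = []
    for power, duration in requests:
        t = 0
        while True:                             # earliest feasible start
            while len(load) < t + duration:
                load.append(0.0)
            if all(load[s] + power <= ceiling for s in range(t, t + duration)):
                break
            t += 1
        for s in range(t, t + duration):
            load[s] += power
        starts.append(t)
    return starts

# 3 kW ceiling: heater (2 kW, 3 slots), oven (2 kW, 2), washer (1 kW, 2)
print(schedule([(2000, 3), (2000, 2), (1000, 2)], ceiling=3000))  # [0, 3, 0]
```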

Parallel Priority Region Approach to Detect Background

Background detection is essential in video analysis, and optimization is often needed to achieve real-time computation. Information gathered by dual cameras placed at the front and rear of an Autonomous Vehicle (AV) is integrated for background detection. In this paper, real-time computation is achieved by combining Priority Regions (PR) with parallel processing: each frame is divided into regions, and each region is processed in parallel. The PR division depends on the limitations of the driver's view. A background detection system is built on Temporal Difference (TD) and Gaussian Filtering (GF), with the thresholds and sigma (weight) values chosen per region according to PR characteristics. Experiments are conducted on real scenes. The speed and accuracy are compared with traditional background detection techniques, and the effectiveness of PR and parallel processing is also discussed.
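
As a sketch of the per-region idea, the code below applies a temporal difference with a region-specific threshold; later (higher-priority) regions override earlier ones, the Gaussian filtering stage is omitted for brevity, and all thresholds and region bounds are illustrative.

```python
import numpy as np

def detect_foreground(prev, curr, regions):
    """Temporal difference with per-region thresholds.

    regions: list of (row_slice, col_slice, threshold); later entries
    (higher priority) override earlier ones. In the paper each region
    could be handled by its own parallel worker.
    """
    mask = np.zeros(curr.shape, bool)
    for rows, cols, thr in regions:
        diff = np.abs(curr[rows, cols].astype(int) - prev[rows, cols].astype(int))
        mask[rows, cols] = diff > thr
    return mask

# Loose threshold for the periphery, tight one for the driver-focus center:
h, w = 240, 320
regions = [(slice(0, h), slice(0, w), 25),         # low-priority periphery
           (slice(60, 180), slice(80, 240), 10)]   # high-priority center
```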

A Rigid Point Set Registration of Remote Sensing Images Based on Genetic Algorithms and Hausdorff Distance

Image registration is the process of establishing a point-by-point correspondence between images of the same scene. This process is very useful in remote sensing, medicine, cartography, computer vision, etc. The task of registration is to place the data into a common reference frame by estimating the transformations between the data sets. In this work, we develop a rigid point registration method based on genetic algorithms and the Hausdorff distance. First, we extract feature points from both images using a global and local curvature corner detection algorithm. After refining the feature points, we use the Hausdorff distance as the similarity measure between the two data sets, and we search the transformation space with genetic algorithms, which achieve high computation speed owing to their inherent parallelism. The results show the efficiency of this method for the registration of satellite images.
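
For concreteness, the similarity measure and the transformation being searched can be sketched as follows: a genetic algorithm would evolve the (theta, tx, ty) chromosome to minimize hausdorff() between the transformed and reference feature points (2-D case, NumPy; the GA loop itself is omitted).

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (n x 2 arrays)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def apply_rigid(points, theta, tx, ty):
    """Rigid 2-D transform (rotation theta, translation tx, ty) -- the
    chromosome a genetic algorithm would evolve to minimize hausdorff()."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([tx, ty])
```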

High Level Synthesis of Digital Filters Based On Sub-Token Forwarding

High-level synthesis (HLS) is a process that generates a register-transfer-level design for a digital system from a behavioral description. There are many HLS algorithms and commercial tools. However, most of these algorithms consider a behavioral description of the system when a single token is presented to it. This approach does not exploit extra hardware efficiently, especially in the design of digital filters, where common operations may exist between successive tokens. In this paper, we modify the behavioral description to process multiple tokens in parallel. Unlike full parallel processing, which requires full hardware replication, this approach exploits the presence of common operations between successive tokens. The performance of the proposed approach is better than sequential processing and approaches that of full parallel processing as the hardware resources are increased.
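
The kind of cross-token sharing meant here can be illustrated on a toy moving-sum filter: computing two successive outputs together lets one addition be shared between the tokens, so each pair costs three additions instead of four. This illustrates the principle only; it is not the paper's filter or scheduling algorithm.

```python
def moving_sum_pairs(x):
    """Compute two successive outputs of y[n] = x[n] + x[n-1] + x[n-2]
    per step, sharing the partial sum common to both tokens (the kind of
    cross-token common operation that sub-token forwarding exploits)."""
    y = []
    for n in range(2, len(x) - 1, 2):
        shared = x[n] + x[n - 1]       # common to y[n] and y[n+1]
        y.append(shared + x[n - 2])    # y[n]
        y.append(shared + x[n + 1])    # y[n+1]
    return y

print(moving_sum_pairs([1, 2, 3, 4, 5, 6]))  # [6, 9, 12, 15]
```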

A New Efficient Scalable BIST Full Adder using Polymorphic Gates

Among the various testing methodologies, Built-In Self-Test (BIST) is recognized as a low-cost, effective paradigm. Full adders are among the basic building blocks of most arithmetic circuits in all processing units. In this paper, an optimized testable 2-bit full adder is proposed as a test building block. A BIST procedure is then introduced to scale up the building block and generate self-testable n-bit full adders. The target design achieves 100% fault coverage using an insignificant amount of hardware redundancy. Moreover, overall test time is reduced by utilizing polymorphic gates and by testing the full adder building blocks in parallel.
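
As a software analogy of the self-test idea, the sketch below exhaustively walks all eight input patterns of a 1-bit full adder and compares against a golden model; the paper's polymorphic-gate BIST realizes this kind of check in hardware, applying it to the building blocks in parallel.

```python
def full_adder(a, b, cin):
    """Gate-level 1-bit full adder."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def exhaustive_self_test(dut=full_adder):
    """Apply all 8 input patterns and compare against the golden model --
    the kind of check a BIST pattern generator and response analyzer
    perform (a software sketch, not the paper's polymorphic-gate design)."""
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                expected = ((a + b + cin) & 1, (a + b + cin) >> 1)
                if dut(a, b, cin) != expected:
                    return False        # fault detected
    return True

print(exhaustive_self_test())           # True for a fault-free adder
```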

Low Power and Less Area Architecture for Integer Motion Estimation

The full-search block matching algorithm is widely used for hardware implementation of motion estimators in video compression. In this paper we propose a new architecture consisting of a 2D parallel processing unit and a 1D unit, both working in parallel. The proposed architecture reduces both the data access power and the computational power, which are the main sources of power consumption in integer motion estimation. It also completes the operations in nearly the same number of clock cycles as a 2D systolic array architecture. In this work the sum of absolute differences (SAD), the most repeated operation in block matching, is calculated in two steps. The first step calculates the SAD over alternate rows using the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which skips avoidable computations, is achieved through the alternate-rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse is applied to the reference blocks in the same search area, significantly reducing memory accesses.
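
The two-step SAD with early termination can be sketched as follows: the partial SAD over the even (alternate) rows is computed first, and the remaining rows are added only if the candidate can still beat the current minimum (NumPy; blocks are 2-D arrays, and the hardware split between the 2D and 1D units is not modeled).

```python
import numpy as np

def two_step_sad(cur, ref, best_so_far):
    """Two-step SAD with early termination (sketch of the alternate-row
    idea). Step 1 computes SAD over the even rows only; the odd rows are
    added only if the partial SAD still beats the current minimum.
    Returns the full SAD if it improves on best_so_far, else None."""
    partial = np.abs(cur[::2].astype(int) - ref[::2].astype(int)).sum()
    if partial >= best_so_far:
        return None                     # terminate early: cannot win
    full = partial + np.abs(cur[1::2].astype(int) - ref[1::2].astype(int)).sum()
    return int(full) if full < best_so_far else None
```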

Embedded Systems Energy Consumption Analysis Through Co-modelling and Simulation

This paper presents a new methodology to study power and energy consumption in mechatronic systems early in the development process. The approach uses two modeling languages to represent and simulate embedded control software and electromechanical subsystems in the discrete-event and continuous-time domains, respectively, within a single co-model. This co-model enables an accurate representation of power and energy consumption and facilitates the analysis and development of both the software and the electromechanical subsystems in parallel. It makes engineers aware of the energy implications of different design alternatives and enables early trade-off analysis from the beginning of the analysis and design activities.

An Integrated Framework for the Realtime Investigation of State Space Exploration

The objective of this paper is to introduce a unified optimization framework for research and education. The OPTILIB framework implements different general-purpose algorithms for combinatorial optimization and for minimum search on standard continuous test functions. The strengths of this library are the straightforward integration of new optimization algorithms and problems, as well as the visualization of the optimization process, either for a single method exploring the search space or for several methods in parallel in real time. Furthermore, the usage of several implemented methods is presented through two use cases, with a particular focus on algorithm visualization. First, it is demonstrated how different methods can be compared conveniently using OPTILIB, using the example of different iterative improvement schemes for the TRAVELING SALESMAN PROBLEM. A second study emphasizes how the framework can be used to find global minima in the continuous domain.
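
As an example of one iterative improvement scheme of the kind compared in the first use case, a plain 2-opt local search for the TSP is sketched below (OPTILIB's own API is not shown; dist is a symmetric distance matrix).

```python
def tour_length(tour, dist):
    """Total length of a closed tour (helper for checking improvements)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Iterative improvement for the TSP: repeatedly reverse a segment
    whenever the reversal shortens the tour, until no move improves it."""
    improved = True
    n = len(tour)
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):   # skip adjacent edges
                old = dist[tour[i]][tour[i + 1]] + dist[tour[j]][tour[(j + 1) % n]]
                new = dist[tour[i]][tour[j]] + dist[tour[i + 1]][tour[(j + 1) % n]]
                if new < old:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```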

Automatic Lip Contour Tracking and Visual Character Recognition for Computerized Lip Reading

Computerized lip reading has been one of the most actively researched areas of computer vision in the recent past because of its crime-fighting potential and its invariance to the acoustic environment. However, several factors, such as fast speech, poor pronunciation, poor illumination, face movement, moustaches, and beards, make lip reading difficult. In the present work, we propose a solution for automatically tracking the lip contour and recognizing spoken letters of the English language using only the information available from lip movements. A level set method with a contour velocity model is used to track the lip contour, from which a feature vector of lip movements is obtained. Character recognition is performed using a modified k-nearest-neighbor algorithm that assigns more weight to nearer neighbors. The proposed system achieves an accuracy of 73.3% for character recognition with the speaker's lip movements as the only input and without any speech recognition system running in parallel. The approach is found to serve the purpose of lip reading well when the database is small.
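
The abstract specifies only that nearer neighbors get more weight; inverse-distance weighting is one common realization, sketched below (NumPy; the paper's exact weighting scheme is not given).

```python
import numpy as np

def weighted_knn(train_X, train_y, x, k=5, eps=1e-9):
    """k-NN classification where nearer neighbors get larger weights
    (inverse-distance weighting, one common choice).

    train_X: (n, d) feature vectors, train_y: n labels, x: query vector.
    """
    d = np.linalg.norm(train_X - x, axis=1)
    idx = np.argsort(d)[:k]                 # the k nearest neighbors
    votes = {}
    for i in idx:
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + 1.0 / (d[i] + eps)
    return max(votes, key=votes.get)        # label with the largest weight
```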

Hybrid Coding for Animated Polygonal Meshes

A new hybrid coding method for compressing animated polygonal meshes is presented. The paper assumes a simple representation of the geometric data: a temporal sequence of polygonal meshes, one for each discrete frame of the animated sequence. The method combines delta coding with an octree-based method. In this hybrid method, both the octree approach and the delta coding approach are applied to each frame of the animation sequence in parallel, and the approach that produces the smaller encoded file is chosen to encode that frame. Given the same quality requirement, the hybrid coding method achieves a much higher compression ratio than either the octree-only or the delta-only method. The hybrid approach can represent 3D animated sequences with higher compression factors while maintaining reasonable quality. It is easy to implement and has a low-cost encoding process and a fast decoding process, which make it a good choice for real-time applications.
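
The per-frame selection logic reduces to running both coders and keeping the smaller output, as sketched below. The two coders here are trivial zlib-based stand-ins, not the paper's delta or octree coders; the one-byte tag tells the decoder which coder was used for each frame.

```python
import zlib

def delta_encode(frame, prev_frame):
    # Stand-in delta coder: compress per-component differences (illustrative).
    deltas = bytes((a - b) & 0xFF for a, b in zip(frame, prev_frame))
    return zlib.compress(deltas)

def octree_encode(frame):
    # Stand-in for the octree coder: compress the frame directly (illustrative).
    return zlib.compress(bytes(frame))

def encode_frame(frame, prev_frame):
    """Per-frame hybrid choice: run both coders, keep the smaller output;
    a one-byte tag tells the decoder which coder was chosen."""
    delta = delta_encode(frame, prev_frame)
    octree = octree_encode(frame)
    return (b"D" + delta) if len(delta) <= len(octree) else (b"O" + octree)
```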

An Improvement of PDLZW implementation with a Modified WSC Updating Technique on FPGA

In this paper, an improved PDLZW implementation with a new dictionary updating technique is proposed. A single dictionary is partitioned into hierarchical variable-word-width dictionaries, which allows us to search the dictionaries in parallel. Moreover, a barrel shifter is adopted to load a new input string into the shift register in order to achieve a higher speed. The original PDLZW uses a simple FIFO update strategy, which is not efficient. Therefore, a new window-based updating technique is implemented to better distinguish how often each address in the window is referenced. A freezing policy is applied to the most frequently referenced address, which is not updated until all the other addresses in the window reach the same priority. This guarantees that more frequently referenced addresses are not overwritten prematurely. This updating policy improves the compression efficiency of the proposed algorithm while keeping the architecture low in complexity and easy to implement.
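
One possible reading of the window-based policy with freezing is sketched below: within the update window, the most-referenced address is frozen and never chosen as the replacement victim until every address in the window ties with it. This is an interpretation of the abstract's description, not the paper's exact implementation.

```python
def choose_victim(window_refs):
    """Pick the dictionary address to overwrite within the update window.

    window_refs: {address: reference_count} for the current window.
    The most-referenced address is frozen (never chosen) until all
    addresses in the window reach the same count (freeze then expires).
    """
    most = max(window_refs.values())
    candidates = [a for a, c in window_refs.items() if c < most]
    if not candidates:                  # all counts equal: freeze expires
        candidates = list(window_refs)
    return min(candidates, key=lambda a: window_refs[a])  # least referenced

print(choose_victim({0x10: 5, 0x11: 1, 0x12: 3}))  # 0x11; 0x10 is frozen
```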

Managing Iterations in Product Design and Development

The inherently iterative nature of product design and development (PD) poses a significant challenge to reducing PD time. In order to shorten the time to market, organizations have adopted concurrent development, in which multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable completion times and significant rework. Many products have missed their time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce considerable ambiguity into design and development, where traditional project management methods and tools provide little value. In this context, identifying critical metrics for understanding iteration probability is an open research area where significant contributions can be made, given that iteration is a key driver of cost and schedule risk in PD projects. The proposed study attempts to address two important questions: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that provide managerial insight into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and to provide more clarity.

Symbolic Model Checking of Interactions in Sequence Diagrams with Combined Fragments by SMV

In this paper, we propose a method for detecting consistency violations between state machine diagrams and a sequence diagram defined in UML 2.0, using SMV. We extend a method that expresses these diagrams, as defined in UML 1.0, with Boolean formulas, so that it can express a sequence diagram with the combined fragments introduced in UML 2.0. This extension makes it possible to represent three types of combined fragment: alternative, option, and parallel. Experiments confirmed that the proposed method detects consistency violations correctly with SMV.

A Heuristic Statistical Model for Lifetime Distribution Analysis of Complicated Systems in the Reliability Centered Maintenance

A heuristic conceptual model for developing Reliability Centered Maintenance (RCM), especially for preventive strategies, is explored in this paper. For most real cases, in which the complexity of the system demands a high degree of reliability, the model selects the more appropriate of two reliability functions: one based on the lifetime distribution and another based on the relevant Extreme Value (EV) distribution. A statistical and mathematical approach is used to estimate and verify these two distribution functions, and the more reliable of the two is then chosen. A numerical industrial case study is reviewed to illustrate the concepts of this paper more clearly.
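
The abstract names neither distribution; as an illustration, with a Weibull lifetime model and a Gumbel (EV Type I, minimum) model, the two candidate reliability functions would be

\[
R_{\mathrm{life}}(t) = \exp\!\left[-\left(\frac{t}{\eta}\right)^{\beta}\right],
\qquad
R_{\mathrm{EV}}(t) = \exp\!\left[-\exp\!\left(\frac{t-\mu}{\sigma}\right)\right],
\]

where eta and beta are the usual Weibull scale and shape parameters and mu and sigma are the Gumbel location and scale parameters; the heuristic would then keep whichever function proves more reliable for the system at the mission time of interest.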