Tree-on-DAG for Data Aggregation in Sensor Networks

Computing and maintaining network structures for efficient data aggregation incurs high overhead for dynamic events where the set of nodes sensing an event changes with time. Moreover, structured approaches are sensitive to the waiting time that nodes use to wait for packets from their children before forwarding a packet toward the sink. In this paper, we propose Tree on DAG (ToD), a semistructured routing and data aggregation scheme for wireless sensor networks that uses dynamic forwarding on an implicitly constructed structure composed of multiple shortest-path trees to support network scalability. The key principle behind ToD is that adjacent nodes in the graph will have low stretch in at least one of these trees, which results in early aggregation of packets. Based on simulations of a 2,000-node Mica2-based network, we conclude that efficient aggregation in large-scale networks can be achieved by our semistructured approach.
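
As a rough illustration of this tree-selection principle, the sketch below (hypothetical, built on networkx; not the authors' implementation) picks, among several precomputed shortest-path trees, the one on which the currently reporting nodes are mutually closest, so that their packets meet and aggregate early on the way to the sink.

```python
# Hypothetical sketch of the ToD idea: among several precomputed
# shortest-path trees, forward on the one in which the nodes sensing
# the event are closest, so their packets merge (aggregate) early.
import networkx as nx

def tree_spread(tree, nodes):
    """Largest pairwise hop distance among the reporting nodes
    (assumes at least two reporting nodes)."""
    return max(nx.shortest_path_length(tree, a, b)
               for a in nodes for b in nodes if a != b)

def pick_tree(trees, reporting_nodes):
    """Choose the tree with the lowest spread, i.e. the lowest stretch."""
    return min(trees, key=lambda t: tree_spread(t, reporting_nodes))
```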

Algebraic Quantum Error Correction Codes

A systematic and exhaustive method based on the group structure of a unitary Lie algebra is proposed to generate a large number of quantum codes. With respect to this algebraic structure, the orthogonality condition, which is the central rule for generating quantum codes, is proved to be fully equivalent to the distinguishability of the elements in the structure. In addition, four types of quantum codes are classified according to the relation between the codeword operators and some initial quantum state. By linking the unitary Lie algebra with the additive group, the classical correspondences of some of these quantum codes can be derived.
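
For reference, the orthogonality condition in question is, in standard quantum error correction, of the Knill-Laflamme form: a code with basis codewords |c_i> corrects a set of errors {E_a} if and only if

```latex
\langle c_i \vert E_a^{\dagger} E_b \vert c_j \rangle = C_{ab}\,\delta_{ij},
```

where C_ab is a Hermitian matrix independent of i and j; how this condition is re-expressed through the Lie-algebraic structure is the subject of the paper.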

Preparation of Polylactic Acid Graft Polyvinyl Acetate Compatibilizers for 50/50 Starch/PLLA Blending

Polylactic acid-g-polyvinyl acetate (PLLA-g-PVAc) was used as a compatibilizer for a 50/50 starch/PLLA blend. PLLA-g-PVAc copolymers with different PVAc contents (mol%) were prepared by grafting PVAc onto the PLLA backbone via free radical polymerization in solution. The effects of conditions such as the type and amount of initiator, monomer concentration, and polymerization time and temperature were studied. Results showed that the highest PVAc grafting (16 mol%) was achieved by conducting the graft copolymerization in toluene at 110°C for 10 h using DCP as the initiator. The chemical structure of the PVAc-grafted PLLA was confirmed by 1H NMR. Blends of modified starch and PLLA, with varying amounts of compatibilizer and varying PVAc mol%, were prepared in an internal mixer at 160°C for 15 min. The effects of PVAc content and compatibilizer loading on the mechanical properties of the blend were studied. Results revealed that the tensile strength and tensile modulus of blends compatibilized with higher PVAc grafting content were better than those of blends compatibilized with lower PVAc grafting content. The optimal amount of compatibilizer was found to lie in the range of 0.5-1.0 wt%, depending on the PVAc mol%.

A Discretizing Method for Reliability Computation in Complex Stress-strength Models

This paper proposes, implements, and evaluates an original discretization method for continuous random variables, in order to estimate the reliability of systems for which stress and strength are defined as complex functions and whose reliability cannot be derived through analytic techniques. The method is compared with two other discretization approaches from the literature in a comparative study involving four engineering applications. The results show that the proposal is very efficient in terms of the closeness of its estimates to the true (simulated) reliability. The study analyzes both normal and non-normal distributions for the random variables; the method is, in principle, suitable for any parametric family.
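
To make the setting concrete, here is a minimal sketch of the general approach (our own illustration, not the paper's specific scheme): each continuous variable is replaced by equiprobable point masses, and the stress-strength reliability R = P(strength > stress) is obtained as a discrete double sum.

```python
# Minimal sketch (not the paper's specific method): discretize two
# continuous random variables into equiprobable point masses and
# estimate the stress-strength reliability R = P(strength > stress).
import numpy as np
from scipy import stats

def discretize(dist, n=20):
    """Replace `dist` by n equiprobable point masses placed at the
    conditional medians of the n equal-probability intervals."""
    q = (np.arange(n) + 0.5) / n
    return dist.ppf(q), np.full(n, 1.0 / n)

def reliability(strength, stress, n=20):
    ys, py = discretize(strength, n)
    xs, px = discretize(stress, n)
    # Sum P(strength = y) * P(stress = x) over all pairs with y > x.
    return sum(py[i] * px[j]
               for i in range(n) for j in range(n) if ys[i] > xs[j])

# Example: normally distributed strength vs. normally distributed stress.
print(reliability(stats.norm(12, 2), stats.norm(8, 2)))
```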

Face Detection Using Variance-Based Haar-Like Features and SVM

This paper proposes a new approach to the problem of real-time face detection. The proposed method combines the primitive Haar-like feature with a variance value to construct a new feature, the so-called variance-based Haar-like feature. With this feature, a face in an image can be represented with a small number of features. We used an SVM instead of AdaBoost for training and classification. For training, we built a database containing 5,000 face samples and 10,000 non-face samples extracted from real images; the 5,000 face samples include many images captured under widely varying lighting conditions. Experiments showed that a face detection system using the variance-based Haar-like feature and SVM can be much more efficient than one using the primitive Haar-like feature and AdaBoost. We tested our method on two face databases and one non-face database, obtaining a correct detection rate of 96.17% on the YaleB face database, which is 4.21% higher than that of the primitive Haar-like feature with AdaBoost.
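
As an illustration of the feature itself (assumed details; the authors' exact definition may differ), the sketch below computes a two-rectangle Haar-like response normalized by the local variance of the detection window, using integral images. A vector of such responses per window would then be fed to the SVM.

```python
# Illustrative sketch: a two-rectangle Haar-like feature normalized
# by local window variance, computed with integral images.
import numpy as np

def integral(img):
    return img.cumsum(0).cumsum(1)

def rect_sum(ii, r0, c0, r1, c1):
    """Pixel sum over the inclusive rectangle [r0:r1, c0:c1]."""
    s = ii[r1, c1]
    if r0 > 0: s -= ii[r0 - 1, c1]
    if c0 > 0: s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
    return s

def variance_haar(window):
    w64 = window.astype(np.float64)
    ii, ii2 = integral(w64), integral(w64 ** 2)
    h, w = window.shape
    n = h * w
    mean = rect_sum(ii, 0, 0, h - 1, w - 1) / n
    var = max(rect_sum(ii2, 0, 0, h - 1, w - 1) / n - mean ** 2, 1e-12)
    # Two-rectangle feature: left half minus right half of the window,
    # divided by the window's standard deviation.
    left = rect_sum(ii, 0, 0, h - 1, w // 2 - 1)
    right = rect_sum(ii, 0, w // 2, h - 1, w - 1)
    return (left - right) / (n * np.sqrt(var))
```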

A Robust Frequency Offset Estimation Scheme for OFDM System with Cyclic Delay Diversity

Cyclic delay diversity (CDD) is a simple technique to intentionally increase the frequency selectivity of channels for orthogonal frequency division multiplexing (OFDM). This paper proposes a residual carrier frequency offset (RFO) estimation scheme for an OFDM-based broadcasting system using CDD. To improve the RFO estimation, this paper addresses how to choose the amount of cyclic delay and the pilot pattern used to estimate the RFO. Computer simulations show that the proposed estimator benefits from a properly chosen delay parameter and performs robustly.
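
For background, the CDD operation itself is simple to state in code; the sketch below (with placeholder delay values, not the parameters selected in the paper) has each transmit antenna send a cyclically shifted copy of the same OFDM symbol.

```python
# Sketch of cyclic delay diversity at the transmitter. The delay
# values are placeholders, not the paper's chosen parameters.
import numpy as np

def cdd_transmit(freq_symbols, delays=(0, 4), n_fft=64):
    """Each antenna transmits a cyclically shifted copy of the same
    OFDM time-domain symbol; the shift increases the channel's
    apparent frequency selectivity at the receiver."""
    time_sig = np.fft.ifft(freq_symbols, n_fft)
    return [np.roll(time_sig, d) for d in delays]
```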

Development of Heterogeneous Parallel Genetic Simulated Annealing Using Multi-Niche Crowding

In this paper, a new hybrid of the genetic algorithm (GA) and simulated annealing (SA), referred to as GSA, is presented. In this algorithm, SA is incorporated into the GA to escape from local optima. The concept of the hierarchical parallel GA is employed to parallelize GSA for the optimization of multimodal functions. In addition, multi-niche crowding is used to maintain diversity in the population of the parallel GSA (PGSA). The performance of the proposed algorithms is evaluated on a standard set of multimodal benchmark functions. The multi-niche crowding PGSA and the normal PGSA show remarkable improvement over the conventional parallel genetic algorithm and the breeder genetic algorithm (BGA).
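
The core of such a hybrid can be sketched in a few lines (an illustration of the general GA/SA combination, not the paper's exact GSA): offspring replace parents under a Metropolis acceptance rule whose temperature is gradually lowered across generations.

```python
# Illustrative SA-style acceptance step inside a GA generation.
import math, random

def anneal_accept(parent_fit, child_fit, temperature):
    """Always accept improving children (minimization); accept worse
    children with Boltzmann probability to escape local optima."""
    if child_fit <= parent_fit:
        return True
    return random.random() < math.exp((parent_fit - child_fit) / temperature)
```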

A Watermarking System Using the Wavelet Technique for Satellite Images

The rapid development of new technologies and the emergence of increasingly sophisticated open communication systems create a new challenge: protecting digital content from piracy. Digital watermarking is a recent research area and a technique proposed as a solution to this problem. It consists in inserting identification information (a watermark) into digital data (audio, video, images, databases, ...) in an invisible and indelible manner, in such a way as not to degrade the quality of the original medium. Moreover, it must be possible to correctly extract the watermark despite deterioration of the watermarked medium (i.e., attacks). In this paper we propose a system for watermarking satellite images. We chose to embed the watermark in the frequency domain, specifically in the discrete wavelet transform (DWT) domain. We applied our algorithm to satellite images of central Tunisia. The experiments show satisfying results. In addition, our algorithm exhibited strong resistance to various attacks, notably compression (JPEG, JPEG2000), filtering, histogram manipulation, and geometric distortions such as rotation, cropping, and scaling.
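
A minimal additive embedding step in the DWT domain, using PyWavelets, might look as follows (an assumed scheme for illustration; the paper's exact embedding rule, subband choice, and strength are not specified in the abstract).

```python
# Sketch of additive watermark embedding in one DWT detail subband.
import numpy as np
import pywt

def embed(image, watermark_bits, alpha=2.0):
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(np.float64), 'haar')
    coeffs = HL.ravel()
    for i, bit in enumerate(watermark_bits[:coeffs.size]):
        coeffs[i] += alpha if bit else -alpha   # +/- alpha per bit
    return pywt.idwt2((LL, (LH, HL, HH)), 'haar')
```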

Enhanced Character Based Algorithm for Small Parsimony

A phylogenetic tree is a graphical representation of the evolutionary relationships among three or more genes or organisms. These trees show the relatedness of data sets, the divergence times of species or genes, and the nature of their common ancestors. The quality of a phylogenetic tree is commonly judged by the parsimony criterion, and various approaches have been proposed for constructing most parsimonious trees. This paper is concerned with calculating and minimizing the number of state changes required, the task addressed by small parsimony algorithms. We propose an enhanced small parsimony algorithm that gives a better score, based on the number of evolutionary changes needed to produce the observed sequences in the tree, and that also reconstructs the ancestors of the given input.
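
For context, the classical baseline that small parsimony methods build on is Fitch's algorithm; a compact sketch for a single character on a binary tree is shown below (illustrative only, not the enhanced algorithm itself).

```python
# Fitch's small parsimony for one character. A tree is a nested
# (left, right) tuple; a leaf is a single-state string.
def fitch(node):
    """Return (candidate state set, parsimony score), bottom-up."""
    if isinstance(node, str):                # leaf
        return {node}, 0
    (ls, lc), (rs, rc) = fitch(node[0]), fitch(node[1])
    common = ls & rs
    if common:
        return common, lc + rc               # children agree: no change
    return ls | rs, lc + rc + 1              # disagree: one state change

print(fitch((("A", "C"), ("C", "G"))))       # ({'C'}, 2)
```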

Comparison of Current Chinese and Japanese Design Specifications for Bridge Piles in Liquefied Ground

This study first outlines the wide gap between the current Chinese and Japanese seismic design specifications for bridge pile foundations in liquefiable ground and in ground subject to liquefaction-induced lateral spreading. The Chinese and Japanese seismic design methods and technical details for bridge pile foundations in liquefying and laterally spreading ground are then described and compared systematically and comprehensively; in particular, the Japanese methods for determining the coefficient of subgrade reaction and its reduction factor, as well as for computing the force applied to a pile foundation by liquefaction-induced laterally spreading soil, are introduced. The comparison indicates that the Chinese seismic design specification for bridge pile foundations in such ground offers only qualitative provisions, is too general, and lacks both systematic structure and practical applicability. Finally, the defects of the Chinese seismic design specification are summarized, the key problems of the current Chinese specifications are generalized, and corresponding improvement suggestions are proposed; improvement and revision of the specification in this field is therefore imperative for China.

Fast Dummy Sequence Insertion Method for PAPR Reduction in WiMAX Systems

In the literature, many methods have been proposed to reduce the PAPR (peak-to-average power ratio). Among them, DSI (dummy sequence insertion) is one of the most attractive for WiMAX systems because it does not require side information to be transmitted along with the user data. However, conventional DSI methods find the dummy sequence through an iterative procedure that continues until the PAPR falls below a desired threshold. This causes a significant delay in finding the dummy sequence and also affects the overall performance of WiMAX systems. In this paper, a new DSI-based method is proposed that finds the dummy sequence without an iterative procedure. This fast DSI method can reduce PAPR without delay and without side information. The simulation results confirm that the proposed method achieves PAPR performance similar to the other methods without any delay. In addition, WiMAX systems with adaptive modulation are simulated to assess the use of the proposed method under various fading schemes. The results suggest that WiMAX designers should adopt new signal-to-noise ratio (SNR) criteria for adaptation.
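
For reference, the quantity being minimized can be computed as follows (an illustrative sketch of the PAPR of one oversampled OFDM symbol; the paper's dummy-sequence construction itself is not reproduced here).

```python
# Sketch: PAPR of an OFDM symbol, with frequency-domain zero-padding
# (oversampling) so the continuous-time peak is captured.
import numpy as np

def papr_db(freq_symbols, oversample=4):
    n = len(freq_symbols)
    padded = np.concatenate([freq_symbols[:n // 2],
                             np.zeros((oversample - 1) * n, dtype=complex),
                             freq_symbols[n // 2:]])
    power = np.abs(np.fft.ifft(padded)) ** 2
    return 10 * np.log10(power.max() / power.mean())
```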

Real-time Target Tracking Using a Pan and Tilt Platform

In recent years, interest in efficient tracking systems for surveillance applications has increased. Many of the proposed techniques are designed for static-camera environments. When the camera is moving, tracking moving objects becomes more difficult, and many techniques fail to detect and track the desired targets. The problem becomes more complex when we want to track a specific object in real time using a moving pan-and-tilt camera system to keep the target within the image. This type of tracking is highly important in surveillance applications: when a target is detected in a certain zone, the ability to track it automatically and continuously, keeping it within the image until action is taken, is very important for security personnel working at sensitive sites. This work presents a real-time tracking system permitting the detection and continuous tracking of targets using a pan-and-tilt camera platform. A novel and efficient approach for dealing with occlusions is presented. A new intelligent forget factor is also introduced to take target shape variations into account and to avoid learning undesired objects. Tests conducted in outdoor operational scenarios show the efficiency and robustness of the proposed approach.
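
A common way to realize a forget factor in appearance tracking, which the intelligent variant presumably refines (our assumption; the abstract does not give the details), is a gated exponential blend of the stored template with the newly observed patch.

```python
# Sketch of a gated forgetting-factor template update (illustrative).
import numpy as np

def update_template(template, patch, similarity, base_lr=0.1, thresh=0.6):
    """Blend the new patch into the template only when it matches well,
    so occluders and background are not learned into the model."""
    lr = base_lr * similarity if similarity > thresh else 0.0
    return (1.0 - lr) * template + lr * patch
```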

Dynamic Routing to Multiple Destinations in IP Networks using Hybrid Genetic Algorithm (DRHGA)

In this paper we propose a novel dynamic least-cost multicast routing protocol for IP networks based on a hybrid genetic algorithm. The protocol finds a multicast tree of minimum cost subject to delay, degree, and bandwidth constraints. It has the following features: (i) a heuristic local search function is devised and embedded within the normal genetic operations to increase speed and obtain an optimized tree; (ii) it efficiently handles dynamic situations arising from changes in multicast group membership or from node/link failures; (iii) two different crossover and mutation probabilities are used to maintain solution diversity and achieve quick convergence. Simulation results show that the proposed protocol generates dynamic multicast trees with lower cost. They also show that the proposed algorithm has a better convergence rate, a better dynamic request success rate, and less execution time than existing algorithms. The effects of the degree and delay constraints on the multicast tree have also been analyzed in terms of search success rate.
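
The constraint handling can be illustrated with a simple penalized fitness function (a hypothetical sketch with placeholder weights, not the paper's exact formulation): infeasible trees are admitted into the population but pay a cost proportional to each constraint violation.

```python
# Illustrative penalized fitness for constrained multicast trees.
def fitness(tree_cost, max_delay, max_degree, min_bw,
            delay_bound, degree_bound, bw_demand, penalty=1e3):
    f = tree_cost                                      # lower is better
    f += penalty * max(0, max_delay - delay_bound)     # delay constraint
    f += penalty * max(0, max_degree - degree_bound)   # degree constraint
    f += penalty * max(0, bw_demand - min_bw)          # bandwidth constraint
    return f
```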

Numerical and Experimental Investigations on Jet Impingement Cooling

Effective cooling of electronic equipment has emerged as a challenging and constraining problem of the new century. In the present work, the feasibility and effectiveness of jet impingement cooling of electronics were investigated numerically and experimentally. Studies were conducted on the effects of geometrical parameters such as the jet diameter (D), the jet-to-target spacing (Z), and the ratio of jet spacing to jet diameter (Z/D) on the heat transfer characteristics. Reynolds numbers in the range 7,000 to 42,000 were considered. The results of the numerical studies were validated by experiments. The studies show that the optimum value of the Z/D ratio is 5. For a given Reynolds number, the Nusselt number increases by about 28% when the nozzle diameter is increased from 1 mm to 2 mm. Correlations for the Nusselt number in terms of the Reynolds number are proposed; these are valid for air as the cooling medium.
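
Such correlations typically take a power-law form; a generic template (the symbols below stand in for the fitted constants reported in the paper itself) is

```latex
\mathrm{Nu} = C\,\mathrm{Re}^{m}\left(\frac{Z}{D}\right)^{p},
```

with C, m, and p fitted to the measured data for air as the working fluid.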

Application of Micro-continuum Approach in the Estimation of Snow Drift Density, Velocity and Mass Transport in Hilly Bound Cold Regions

We estimate snow velocity and snow drift density on hilly terrain under the assumption that the drifting snow mass can be represented using a micro-continuum approach (i.e., a nonclassical continuum mechanics approach assuming a class of fluids for which the basic equations of mass, momentum, and energy have been derived). In our model, the theory of couple stress fluids proposed by Stokes [1] is employed to compute the flow parameters. Analyses of the bulk drift velocity, drift density, drift transport, and mass transport of snow particles have been carried out, with computations considering various parametric effects. The results are compared with those of classical mechanics (the logarithmic wind profile) and indicate that particle size affects the flow characteristics significantly.
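
For reference, the momentum balance of an incompressible couple stress fluid in Stokes' theory augments the usual viscous term with a fourth-order couple stress term:

```latex
\rho\,\frac{D\mathbf{v}}{Dt} = -\nabla p + \mu\,\nabla^{2}\mathbf{v} - \eta\,\nabla^{4}\mathbf{v} + \rho\,\mathbf{f},
```

where mu is the classical viscosity and eta is the couple stress constant; setting eta = 0 recovers the classical Navier-Stokes momentum equation.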

A Study on Service-oriented Encapsulation Methods for Legacy Systems

At present, Web services are the first choice for reusing legacy systems when implementing SOA. Based on the state of SOA implementations and of the legacy systems themselves, we propose four encapsulation strategies. Building on these strategies, we propose a service-oriented encapsulation framework in which a legacy system is encapsulated by a service-oriented encapsulation layer in three aspects: communication protocols, data, and programs. Using this framework, the reuse rate of legacy systems can be increased.
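
As a minimal illustration of the "program" aspect of such a layer (a hypothetical Python/Flask wrapper; the paper prescribes no specific technology, and legacy_report is an invented command), a legacy command-line program can be exposed as a web service as follows.

```python
# Hypothetical sketch: wrapping a legacy CLI program as a web service.
import subprocess
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/legacy/report", methods=["POST"])
def report():
    # Translate the service request into the legacy program's CLI,
    # then wrap its plain-text output in a service-friendly response.
    args = request.get_json()
    out = subprocess.run(["legacy_report", str(args["id"])],
                         capture_output=True, text=True, check=True)
    return jsonify({"result": out.stdout})
```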

Target Tracking in Sensor Networks: A Distributed Constraint Satisfaction Approach

In distributed resource allocation, a set of agents must assign their resources to a set of tasks. This problem arises in many real-world domains such as distributed sensor networks, disaster rescue, hospital scheduling, and others. Despite the variety of approaches proposed for distributed resource allocation, a systematic formalization of the problem, explaining the different sources of difficulty, and a formal explanation of the strengths and limitations of key approaches are missing. We take a step towards this goal by using a formalization of distributed resource allocation that represents both the dynamic and the distributed aspects of the problem. In this paper we present a new idea for target tracking in sensor networks and compare it with previous approaches. The central contribution of the paper is a generalized mapping from distributed resource allocation to dynamic distributed constraint satisfaction problems (DDCSP). This mapping is proven to correctly solve resource allocation problems of a given difficulty. This theoretical result is verified in practice by a simulation of a real-world distributed sensor network.
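
The flavor of such a mapping can be conveyed by a toy, brute-force stand-in (an invented encoding, not the paper's DDCSP formulation): each sensor becomes a variable whose value is the target it tracks, and a constraint requires every target to be covered by enough sensors.

```python
# Toy constraint-satisfaction encoding of sensor-to-target allocation.
from itertools import product

def allocate(sensors, targets, visible, need=2):
    """Assign each sensor a visible target (or None) so that every
    target is tracked by at least `need` sensors; brute force."""
    domains = [[t for t in targets if t in visible[s]] + [None]
               for s in sensors]
    for assignment in product(*domains):
        if all(assignment.count(t) >= need for t in targets):
            return dict(zip(sensors, assignment))
    return None                                # over-constrained
```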

Antecedent Factors of Ethical Ideologies in Moral Judgment: Evidence from the Mixed Method Study

This research investigates the factors that influence moral judgments when dealing with ethical dilemmas in an organizational context. It also investigates the antecedents of individual ethical ideology (idealism and relativism). A mixed-method study, combining qualitative (field study) and quantitative (survey) approaches, was used. An initial model was developed first and then fine-tuned based on the field studies. Data were collected from managers in large Malaysian organizations. The results reveal that in-group collectivism culture, power distance culture, parental values, and religiosity were significant antecedents of ethical ideology. However, the direct effects of these variables on moral judgment were not significant. Furthermore, the results confirm the significant effect of ethical ideology on moral judgment. This study provides valuable insight for evaluating the validity of existing theory as proposed in the literature and offers significant practical implications.

Region-Based Segmentation of Generic Video Scenes for Indexing

In this work we develop an object extraction method and propose efficient algorithms for object motion characterization. The set of proposed tools serves as a basis for the development of object-based functionalities for manipulating video content. The estimates produced by the different algorithms are compared in terms of quality and performance and are tested on real video sequences. The proposed method should be useful for the latest standards for encoding and describing multimedia content, MPEG-4 and MPEG-7.

RFU-Based Computational Unit Design for Reconfigurable Processors

Fully customized hardware provides high performance and low power consumption by specializing tasks in hardware, but it lacks design flexibility, since any change requires re-design and re-fabrication. Software-based solutions operate through software instructions, achieving great flexibility thanks to the easy development and maintenance of software code; however, instruction-based execution introduces a high overhead in performance and area. In the past few decades, the reconfigurable computing domain has emerged, overcoming the traditional trade-off between flexibility and performance: it achieves high performance while maintaining good flexibility. The dramatic gains in chip performance and design flexibility achieved by reconfigurable computing systems depend greatly on the design of their computational units and on how these are integrated with the reconfigurable logic resources. The computational unit of any reconfigurable system plays a vital role in defining its strength. In this paper, a computational unit design based on reconfigurable functional units (RFUs) is presented, using tightly coupled, multi-threaded reconfigurable cores. The proposed design has been simulated for VLIW-based architectures, and a high gain in performance has been observed compared to conventional computing systems.