Automated Particle Picking based on Correlation Peak Shape Analysis and Iterative Classification

Cryo-electron microscopy (CEM) in combination with single particle analysis (SPA) is a widely used technique for elucidating structural details of macromolecular assemblies at close-to-atomic resolutions. However, the development of automated software for SPA processing remains vital, since thousands to millions of individual particle images need to be processed. Here, we present our workflow for automated particle picking. Our approach integrates peak shape analysis into the classical correlation approach and adds an iterative classification step to separate macromolecules from background. This particle selection workflow furthermore provides a robust means for SPA with little user interaction. The performance of the presented tools is assessed on simulated and experimental data.
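
To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of correlation-based picking with a simple peak-shape criterion; the sharpness measure, the thresholds and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import maximum_filter

def pick_particles(micrograph, template, n_peaks=100, box=16):
    """Correlate a template with a micrograph, then rank local maxima by a
    peak-shape (sharpness) score so broad background ridges are rejected."""
    t = (template - template.mean()) / template.std()
    # flipping the template turns convolution into cross-correlation
    cc = fftconvolve(micrograph, t[::-1, ::-1], mode="same")
    # candidate peaks: local maxima that stand out from the global statistics
    local_max = (cc == maximum_filter(cc, size=box)) & (cc > cc.mean() + 2 * cc.std())
    scored = []
    for y, x in zip(*np.nonzero(local_max)):
        win = cc[max(y - box, 0):y + box, max(x - box, 0):x + box]
        # sharpness: peak height relative to its local neighbourhood
        sharpness = (cc[y, x] - win.mean()) / (win.std() + 1e-9)
        scored.append((float(sharpness), int(y), int(x)))
    scored.sort(reverse=True)
    return scored[:n_peaks]
```

In the paper's workflow the surviving candidates would then be classified iteratively to separate macromolecules from background; here they are simply returned ranked by sharpness.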

A Wavelet Based Object Watermarking System for Image and Video

Efficient storage, transmission and use of video information are key requirements in many multimedia applications currently being addressed by MPEG-4. To fulfill these requirements, a new approach for representing video information, which relies on an object-based representation, has been adopted. Object-based watermarking schemes are therefore needed for copyright protection. This paper proposes a novel blind object watermarking scheme for images and video using the in-place lifting shape-adaptive discrete wavelet transform (SA-DWT). In order to make the watermark robust and transparent, the watermark is embedded in the average of wavelet blocks using a visual model based on the human visual system. The n least significant bits (LSBs) of the wavelet coefficients are adjusted in concert with the average. Simulation results show that the proposed watermarking scheme is perceptually invisible and robust against many attacks such as lossy image/video compression (e.g. JPEG, JPEG2000 and MPEG-4), scaling, noise addition, filtering, etc.
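
As a rough illustration of average-based embedding, the sketch below quantizes the mean of approximation-band blocks (quantization-index modulation). It is an assumption-laden stand-in: an ordinary rectangular DWT replaces the shape-adaptive SA-DWT, and no visual model is applied.

```python
import numpy as np
import pywt  # ordinary DWT; the paper's SA-DWT additionally needs object masks

def embed_bits(image, bits, block=8, q=4.0):
    """Hide bits by quantising the mean of approximation-band blocks
    (assumes even image dimensions for the 'haar' transform)."""
    cA, detail = pywt.dwt2(image.astype(float), "haar")
    nbx = cA.shape[1] // block
    for i, bit in enumerate(bits):
        y, x = (i // nbx) * block, (i % nbx) * block
        m = cA[y:y + block, x:x + block].mean()
        step = 2 * q
        # shift the block so its mean lands on an even (bit=0) or
        # odd (bit=1) multiple of the step q
        target = step * np.round((m - bit * q) / step) + bit * q
        cA[y:y + block, x:x + block] += target - m
    return pywt.idwt2((cA, detail), "haar")

def extract_bits(image, n_bits, block=8, q=4.0):
    """Blind extraction: read back the parity of each block mean."""
    cA, _ = pywt.dwt2(image.astype(float), "haar")
    nbx = cA.shape[1] // block
    bits = []
    for i in range(n_bits):
        y, x = (i // nbx) * block, (i % nbx) * block
        m = cA[y:y + block, x:x + block].mean()
        bits.append(int(np.round(m / q)) % 2)
    return bits
```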

Optimizing Mobile Agents Migration Based on Decision Tree Learning

Mobile agents are a powerful approach to developing distributed systems, since they migrate to hosts on which they have the resources to execute individual tasks. In a dynamic environment like a peer-to-peer network, agents have to be generated frequently and dispatched into the network. They inevitably consume bandwidth on each link they traverse; if too many agents migrate through one or several links at the same time, they introduce excessive transfer overhead, and these links eventually become congested and indirectly block the network traffic. There is therefore a need for routing algorithms that take traffic load into account. In this paper we seek to combine, in a probabilistic manner, a quality measure of the network traffic situation with the agent's decision making about migration to the next hop, based on decision tree learning algorithms.
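
A minimal sketch of the decision-tree idea follows; the feature set, training data and dictionary layout are hypothetical, chosen only to illustrate how a learned model could score candidate next hops.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-hop features: [link load, latency (ms), free bandwidth
# (Mbps), hops to target]; labels mark whether past migrations succeeded
# quickly. Real training data would come from observed migrations.
X_train = np.array([[0.9, 120.0, 0.5, 3],
                    [0.2,  15.0, 8.0, 4],
                    [0.5,  40.0, 4.0, 2],
                    [0.8,  90.0, 1.0, 1]])
y_train = np.array([0, 1, 1, 0])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

def choose_next_hop(candidates):
    """Rank candidate hops by predicted migration success; the paper's
    probabilistic flavour could be kept by sampling proportionally to
    these probabilities instead of taking the argmax."""
    feats = np.array([c["features"] for c in candidates])
    probs = clf.predict_proba(feats)[:, 1]
    return candidates[int(np.argmax(probs))], probs
```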

Error-Robust Nature of Genome Profiling Applied for Clustering of Species Demonstrated by Computer Simulation

Genome profiling (GP), a genotype-based technology which exploits random PCR and temperature gradient gel electrophoresis, has been successful in the identification/classification of organisms. In this technology, spiddos (species identification dots) and PaSS (pattern similarity score) are employed for measuring the closeness (or distance) between genomes. Based on this closeness (PaSS), we can build phylogenetic trees of the organisms. We noticed that the topology of such a tree is rather robust against the experimental fluctuation conveyed by spiddos. In this study this fact was confirmed quantitatively by computer simulation, establishing the limits of reliability of this highly powerful methodology. As a result, we could demonstrate the effectiveness of the GP approach for the identification/classification of organisms.
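
For orientation, a minimal sketch of the PaSS-to-tree pipeline is given below. It assumes the spiddo pairing between genomes is already established and uses a commonly cited PaSS formulation; it is not the authors' code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def pass_score(spiddos_a, spiddos_b):
    """Pattern similarity score between two genome profiles, each given
    as matched spiddos (normalised 2-D points: mobility and melting
    temperature)."""
    a = np.asarray(spiddos_a, float)
    b = np.asarray(spiddos_b, float)
    d = np.linalg.norm(a - b, axis=1)
    s = np.linalg.norm(a, axis=1) + np.linalg.norm(b, axis=1)
    return float(np.mean(1.0 - d / s))

def species_tree(profiles):
    """UPGMA tree from pairwise PaSS-derived distances (1 - PaSS).
    Simulated fluctuations can be added to the spiddos to probe how
    robust the tree topology is, as in the paper's simulations."""
    n = len(profiles)
    dists = [1.0 - pass_score(profiles[i], profiles[j])
             for i in range(n) for j in range(i + 1, n)]
    return linkage(dists, method="average")  # condensed distance vector
```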

The Impact Behavior of the Predecessor and Successor on the Transmission of Family Businesses in Tunisia

Nowadays, financial and economic crises are growing and reaching more countries and sectors. These events have a considerable impact on the activities of firms, which find themselves unstable and in danger. Besides this heavy uncertainty weighing on all firms, the family firm, the object of our research, is confronted not only with these external difficulties but also with an internal challenge of considerable size: that of transmission. Indeed, the transmission of an organization from one generation to another can succeed as well as fail, leaving considerable damage behind. Our research addresses this problem: we seek to understand the relation between the behavior of the two main actors of the succession process, the predecessor and the successor, and the success of the transmission.

Design of Robust Fuzzy Logic Power System Stabilizer

Power system stabilizers (PSS) must be capable of providing appropriate stabilization signals over a broad range of operating conditions and disturbances. Traditional PSS rely on robust linear design methods in an attempt to cover a wider range of operating conditions. Expert or rule-based controllers have also been proposed. Recently, fuzzy logic (FL) has shown promising results as a novel robust control design method. The emphasis in fuzzy control design centers on uncertainties in the system parameters and operating conditions. In this paper a novel Robust Fuzzy Logic Power System Stabilizer (RFLPSS) design is proposed. The RFLPSS basically utilizes only one measurable signal as input, the generator shaft speed deviation Δω. The speed signal is discretized, resulting in three inputs to the RFLPSS. There are six rules for the fuzzification and two rules for the defuzzification. To provide robustness, an additional signal, namely speed, is used as input to the RFLPSS, enabling appropriate gain adjustments for the three RFLPSS inputs. Simulation studies show the superior performance of the RFLPSS compared with an optimally designed conventional PSS and a discrete-mode FLPSS.
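
To illustrate the fuzzification/defuzzification mechanics, here is a deliberately simplified single-input sketch (the paper uses three discretized speed inputs and a tuned rule base); membership ranges, rule outputs and the function name are illustrative assumptions only.

```python
def tri(x, a, b, c):
    # triangular membership: zero outside [a, c], peak 1 at b
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_pss(dw):
    """Toy fuzzy stabiliser: fuzzify the speed deviation dw (pu) into
    NEG/ZERO/POS, fire singleton rules, and defuzzify with a weighted
    average (centre-of-gravity over singletons)."""
    mu = {
        "NEG":  tri(dw, -0.02, -0.01, 0.00),
        "ZERO": tri(dw, -0.01,  0.00, 0.01),
        "POS":  tri(dw,  0.00,  0.01, 0.02),
    }
    out = {"NEG": -0.1, "ZERO": 0.0, "POS": 0.1}  # stabilising signal (pu)
    w = sum(mu.values())
    return sum(mu[k] * out[k] for k in mu) / w if w else 0.0
```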

Statistical Process Optimization Through Multi-Response Surface Methodology

In recent years, response surface methodology (RSM) has attracted the attention of many quality engineers in different industries. Most of the published literature on robust design methodology is concerned with the optimization of a single response or quality characteristic which is often most critical to consumers. For most products, however, quality is multidimensional, so it is common to observe multiple responses in an experimental situation. This paper familiarizes the interested reader with this methodology through a survey of the most cited technical papers. It is believed that the procedure proposed in this study can resolve complex parameter design problems with more than two responses. It can be applied in areas where there are large data sets and a number of responses are to be optimized simultaneously. In addition, the proposed procedure is relatively simple and can be implemented easily using ready-made standard statistical packages.
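
One standard way to optimize several responses at once is Derringer-style desirability functions; the sketch below shows the idea on two hypothetical fitted second-order response surfaces (the models, bounds and grid are illustrative assumptions, not from the paper).

```python
import numpy as np
from itertools import product

def desirability(y, lo, hi):
    """Larger-is-better desirability, linear between lo and hi."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

# Hypothetical fitted second-order models for two responses y1, y2
def y1(x1, x2): return 50 + 5*x1 - 3*x2 - 2*x1**2 - x2**2 + x1*x2
def y2(x1, x2): return 30 - 2*x1 + 4*x2 - x1**2 - 2*x2**2

def overall(p):
    # geometric mean of the individual desirabilities
    return (desirability(y1(*p), 40, 60) * desirability(y2(*p), 25, 35)) ** 0.5

grid = product(np.linspace(-1, 1, 41), repeat=2)   # coded factor levels
best = max(grid, key=overall)
print(best, overall(best))
```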

Towards a Systematic Planning of Standardization Projects in Plant Engineering

In today's economy, plant engineering faces many challenges. For instance, intensifying competition in this business is leading to cost competition and the need for a shorter time-to-market. To remain competitive, companies need to make their businesses more profitable by implementing improvement programs such as standardization projects. But they have difficulty tapping their full economic potential, for various reasons. One of them is the non-holistic planning and implementation of standardization projects. This paper describes a new conceptual framework, the layer model. The model combines and expands existing proven approaches in order to improve the design, implementation and management of standardization projects. Based on a holistic approach, it helps to systematically analyze the effects of standardization projects on different business layers and enables companies to better seize the opportunities offered by standardization.

Statistical Distributions of the Lapped Transform Coefficients for Images

Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and the availability of fast implementation algorithms. However, no proper study has been reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ² test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of the majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence achieve optimal coding efficiency.
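
The fitting-and-testing step can be sketched as follows; synthetic data stands in for actual LOT/LBT sub-band coefficients, and the parameters are illustrative.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for one sub-band of significant AC coefficients; in
# the study these would come from the LOT/LBT of real images.
coeffs = stats.gennorm.rvs(0.8, scale=5.0, size=5000, random_state=0)

# Fit a Generalized Gaussian (scipy's gennorm) and check the fit with the
# Kolmogorov-Smirnov test; the chi-square test proceeds analogously on
# binned counts.
beta, loc, scale = stats.gennorm.fit(coeffs)
ks_stat, p_value = stats.kstest(coeffs, "gennorm", args=(beta, loc, scale))
# NB: estimating the parameters from the same sample biases the p-value
# upward; a split-sample or Lilliefors-style correction is more rigorous.
print(f"beta={beta:.2f}, KS statistic={ks_stat:.4f}, p={p_value:.3f}")
```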

A Multipurpose Audio Watermarking Algorithm Based on Vector Quantization in DCT Domain

In this paper, a novel multipurpose audio watermarking algorithm is proposed, based on Vector Quantization (VQ) in the Discrete Cosine Transform (DCT) domain using codeword labeling and an index-bit constrained method. The algorithm fulfills the requirements of both copyright protection and content integrity authentication at the same time for multimedia artworks. The robust watermark is embedded in the middle-frequency coefficients of the DCT transform during the labeled-codeword vector quantization procedure. The fragile watermark is embedded into the indices of the high-frequency coefficients of the DCT transform using the constrained index vector quantization method, for the purpose of integrity authentication of the original audio signals. Both the robust and the fragile watermarks can be extracted without the original audio signals, and the simulation results show that our algorithm is effective with regard to the transparency, robustness and authentication requirements.
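
As a simplified sketch of the "robust bit in the mid-frequency DCT band" idea only: the VQ codeword-labeling machinery is replaced here by direct coefficient biasing, and the band limits and strength are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

def embed_robust_bit(frame, bit, band=(8, 16), strength=0.5):
    """Embed one robust watermark bit into the mid-frequency DCT
    coefficients of an audio frame by biasing the band's mean."""
    c = dct(frame.astype(float), norm="ortho")
    lo, hi = band
    sign = 1.0 if bit else -1.0
    c[lo:hi] += sign * strength * np.abs(c[lo:hi]).mean()
    return idct(c, norm="ortho")

def extract_robust_bit(frame, band=(8, 16)):
    """Blind extraction: the unmarked band mean is close to zero in
    expectation, so its sign after embedding reveals the bit."""
    c = dct(frame.astype(float), norm="ortho")
    return int(c[band[0]:band[1]].mean() > 0)
```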

A Method to Improve Test Process in Federal Enterprise Architecture Framework Using ISTQB Framework

Enterprise Architecture (EA) is a framework for the description, coordination and alignment of all activities across an organization in order to achieve strategic goals using ICT enablers. A number of EA-compatible frameworks have been developed. In this paper we mainly focus on the Federal Enterprise Architecture Framework (FEAF), since its reference models are plentiful. Among these models, we are interested here in its business reference model (BRM). The test process is one important aspect of an EA project which is somewhat overlooked. This lack of attention may cause drawbacks or even the failure of an enterprise architecture project. To address this issue, we use the International Software Testing Qualification Board (ISTQB) framework and standard test suites to present a method for improving the EA testing process. The main challenge is how to map between the concepts of EA and ISTQB. In this paper, we propose a method for integrating these concepts.

New Proxy Signatures Preserving Privacy and as Secure as ElGamal Signatures

Digital signatures are a useful primitive for attaining integrity and authenticity in various wired or wireless communications. The proxy signature is one type of digital signature: it allows a proxy signer to sign messages on behalf of the original signer. This is very useful when the original signer (e.g. the president of a company) is not available to sign a specific document. If the original signer cannot forge valid proxy signatures by impersonating the proxy signer, the scheme is robust in a virtual environment; the original signer thus cannot shift any illegal action she initiated onto the proxy signer. In this paper, we propose a new proxy signature scheme. The new scheme prevents the original signer from impersonating the proxy signer to sign messages. The proposed scheme is based on the regular ElGamal signature. In addition, the fair privacy of the proxy signer is maintained: the privacy of the proxy signer is preserved, and it can be revealed when necessary.
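
For reference, the regular ElGamal signature that the scheme builds on can be sketched as follows. The parameters are toy-sized and g is merely assumed to be a generator for illustration; verification is correct regardless, but security requires vetted group parameters.

```python
import hashlib
import secrets
from math import gcd

p = 2**127 - 1   # a Mersenne prime; fine for a sketch, far too small for real use
g = 3            # assumed generator for illustration only

def H(m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % (p - 1)

def keygen():
    x = secrets.randbelow(p - 2) + 1          # private key
    return x, pow(g, x, p)                    # (x, y = g^x mod p)

def sign(x, m: bytes):
    while True:
        k = secrets.randbelow(p - 2) + 1      # per-signature nonce
        if gcd(k, p - 1) == 1:
            break
    r = pow(g, k, p)
    s = (H(m) - x * r) * pow(k, -1, p - 1) % (p - 1)
    return r, s

def verify(y, m: bytes, r, s):
    # accept iff g^H(m) == y^r * r^s (mod p)
    return pow(g, H(m), p) == (pow(y, r, p) * pow(r, s, p)) % p
```

The proxy construction adds a delegation step on top of this primitive so that the proxy signer's key, not the original signer's, is what actually produces signatures.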

An Empirical Analysis of the Influence of Application Experience on Working Methods of Process Modelers

In view of growing competition in the service sector, services are as much in need of modeling, analysis and improvement as business or working processes. Graphical process models are an important means of capturing process-related know-how for the effective management of service processes. In this contribution, we conducted a human performance analysis of process model development, paying special attention to model development time and working method. It was found that modelers with higher application experience need significantly less time for mental activities than modelers with lower application experience, spend more time on labeling graphical elements, and achieve higher process model quality in terms of activity label quality.

An Image Encryption Method with Magnitude and Phase Manipulation using Carrier Images

We describe an effective method for image encryption which employs magnitude and phase manipulation using carrier images. Although it involves traditional methods like magnitude and phase encryption, the novelty of this work lies in deploying the concept of carrier images for encryption purposes. To this end, a carrier image is randomly chosen from a set of stored images. A one-dimensional (1-D) discrete Fourier transform (DFT) is then carried out on the original image to be encrypted along with the carrier image. Row-wise spectral addition and scaling is performed between the magnitude spectra of the original and carrier images on randomly selected rows. Similarly, row-wise phase addition and scaling is performed between the phase spectra of the original and carrier images on randomly selected rows. The encrypted image obtained by these two operations is further subjected to one more level of magnitude and phase manipulation, using another randomly chosen carrier image and a 1-D DFT along the columns. The resulting encrypted image is found to be fully distorted, which increases the robustness of the proposed scheme. Further, by applying the reverse process at the receiver, the decrypted image is recovered without distortion.
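
A minimal sketch of one row-wise level follows; the scale factor k, the pairing rule and the function name are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def encrypt_rows(img, carrier, rng, k=0.5):
    """One level of the scheme: 1-D DFT along rows, then addition and
    scaling of magnitude and phase spectra with a randomly paired row of
    the carrier image."""
    F = np.fft.fft(img.astype(float), axis=1)
    C = np.fft.fft(carrier.astype(float), axis=1)
    rows = rng.permutation(img.shape[0])           # random row selection
    mag = (np.abs(F) + k * np.abs(C[rows])) / (1 + k)
    ph = (np.angle(F) + k * np.angle(C[rows])) / (1 + k)
    # NB: keep the complex result; taking only the real part would be lossy
    return np.fft.ifft(mag * np.exp(1j * ph), axis=1)

rng = np.random.default_rng(42)  # a shared seed lets the receiver invert the mixing
```

The second level applies the same operation along the columns with another carrier; the receiver, knowing the carriers and the seed, inverts the add-and-scale steps to recover the image without distortion.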

A New Fast Skin Color Detection Technique

Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, people retrieval in databases and on the Internet, etc. The major problem with such skin color detection algorithms is that they are time-consuming and hence cannot be applied in a real-time system. To overcome this problem, we introduce a new fast technique for skin detection which can be applied in a real-time system. In this technique, instead of testing each image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The rationale for skipping is the high probability that the neighbors of skin-color pixels are also skin pixels, and vice versa, especially in adult images. The proposed method can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the detection process. Since many fast detection techniques are based on image resizing, we also apply our proposed pixel skipping technique together with image resizing to obtain better results. A performance evaluation of the proposed skipping and hybrid techniques in terms of measured CPU time is presented. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method.
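
The skipping idea in its simplest form is sketched below; the explicit RGB rule is a commonly used one (Peer et al.-style thresholds) and the step size is an illustrative assumption, and no refinement pass is made at skin/non-skin boundaries.

```python
import numpy as np

def is_skin(rgb):
    """Vectorised explicit RGB skin rule; thresholds are illustrative."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    return ((r > 95) & (g > 40) & (b > 20) & (mx - mn > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

def skin_mask_skipping(img, step=4):
    """Classify only every step-th pixel, then propagate each decision to
    its step x step neighbourhood -- this is where the CPU time saving
    comes from, at the cost of coarse mask boundaries."""
    sparse = is_skin(img[::step, ::step])
    mask = sparse.repeat(step, axis=0).repeat(step, axis=1)
    return mask[:img.shape[0], :img.shape[1]]
```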

Toward an Open Network Business Approach

The aim of this paper is to propose a dynamic integrated approach, based on the modularity concept and on the business ecosystem approach, that exploits different eBusiness services for SMEs under an open business network platform. The adoption of this approach enables firms to collaborate locally to deliver the best product/service to customers, as well as globally, by accessing international markets, interrelating directly with customers, creating relationships and collaborating with worldwide actors. The paper is structured as follows: we start by offering an overview of the state of the art of eBusiness platforms among SMEs in the food and tourism sectors, and then we discuss the main drawbacks that characterize them. The digital business ecosystem approach and the modularity concept are described as the theoretical ground in which our proposed integrated model is rooted. Finally, the proposed model is presented, along with a discussion of the main value-creation potential it might offer SMEs.

SOA and BPM Partnership: A Paradigm for Dynamic and Flexible Process and I.T. Management

Business Process Management (BPM) helps in optimizing the business processes inside an enterprise, but the BPM architecture does not provide any help for extending the enterprise. Modern business environments and rapidly changing technologies call for brisk changes in business processes. Service Oriented Architecture (SOA) can help enable the success of enterprise-wide BPM. SOA supports agility in software development, which is directly related to achieving loose coupling of interacting software agents. Agility is a premium concern of current software design architectures. Together, BPM and SOA provide a perfect combination for enterprise computing. SOA provides the capabilities for services to be combined to support and create an agile, flexible enterprise. But there are still many questions to answer: is BPM better, or SOA? And what is the future track of BPM and SOA? This paper tries to answer some of these important questions.

Hybrid MAC Protocol Characteristics in Multi-hop Wireless Sensor Networks

In the current decade, wireless sensor networks have emerged as a distinctive multi-disciplinary research area. Accordingly, energy efficiency is one of the fundamental research themes in the design of Medium Access Control (MAC) protocols for wireless sensor networks. In order to optimize energy consumption in these networks, a variety of MAC protocols are available in the literature. These schemes have commonly been evaluated at simple network densities, and few results have been published on their robustness at realistic network sizes. In this paper, we provide an analytical study aiming to highlight the sources of energy waste in wireless sensor networks. We then evaluate three energy-efficient hybrid CSMA/CA-based MAC protocols optimized for wireless sensor networks: Sensor-MAC (SMAC), Time-out MAC (TMAC) and Traffic-aware Energy-Efficient MAC (TEEM). We investigate these protocols at different network densities in order to discuss their end-to-end performance (in terms of energy efficiency, delay and throughput). Through Network Simulator (NS-2) implementations, we explore the behavior of these protocols with respect to network density. This study may help designers of multi-hop sensor networks to design or select the MAC layer that best matches their application's aims.
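
A back-of-the-envelope duty-cycle energy model of the kind used to reason about SMAC-style protocols is sketched below; all power figures and fractions are illustrative assumptions, not measurements from the paper.

```python
# Typical mote-radio power draws in watts (illustrative values only)
P_RX, P_TX, P_SLEEP = 0.045, 0.060, 0.00009

def avg_power(duty_cycle, tx_fraction=0.1):
    """Average radio power when awake for duty_cycle of the time and
    transmitting for tx_fraction of that awake time. Idle listening and
    overhearing dominate at low traffic, which is what duty cycling in
    SMAC/TMAC-style protocols attacks."""
    awake = duty_cycle * (tx_fraction * P_TX + (1 - tx_fraction) * P_RX)
    return awake + (1 - duty_cycle) * P_SLEEP

# a 10% duty cycle cuts average power by roughly an order of magnitude
print(avg_power(0.10) / avg_power(1.0))
```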

Influence of Type of Burner on NOx Emission Characteristics from Combustion of Palm Methyl Ester

Palm methyl ester (PME) is one of the alternative biomass fuels to liquid fossil fuels. To investigate the combustion characteristics of PME as an alternative fuel for gas turbines, combustion experiments using two types of burners were performed under atmospheric pressure. One of the burners has a configuration producing a strong non-premixed flame, whereas the other has a configuration promoting prevaporization of fuel droplets. The results show that NOx emissions can be reduced by employing the latter burner, without accumulation of soot, when PME is used as a fuel. A burner configuration promoting prevaporization of fuel droplets is therefore recommended for PME.

MinRoot and CMesh: Interconnection Architectures for Network-on-Chip Systems

The success of an electronic system in a System-on-Chip is highly dependent on the efficiency of its interconnection network, which is constructed from routers and channels (the routers move data across the channels between nodes). Since neither classical bus-based nor point-to-point architectures can provide scalable solutions and satisfy the tight power and performance requirements of future applications, the Network-on-Chip (NoC) approach has recently been proposed as a promising solution. Indeed, in contrast to the traditional solutions, the NoC approach can provide large bandwidth with moderate area overhead. The selected topology of the component interconnects plays a prime role in the performance of a NoC architecture, as do the routing and switching techniques that can be used. In this paper, we present two generic NoC architectures that can be customized to the specific communication needs of an application in order to reduce area with minimal degradation of system latency. An experimental study is performed to compare these structures with basic NoC topologies represented by the 2D mesh, the Butterfly-Fat Tree (BFT) and SPIN. It is shown that the Cluster Mesh (CMesh) and MinRoot schemes achieve significant improvements in network latency and energy consumption, with only negligible area overhead and complexity, over existing architectures. In fact, compared with the basic NoC topologies, the CMesh and MinRoot schemes provide substantial savings in area as well, because they require fewer routers. The simulation results show that CMesh and MinRoot networks outperform MESH, BFT and SPIN on the main performance metrics.
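
To see why clustering reduces router count and hop distance, the sketch below computes the average hop count of an n x n mesh under XY routing; the CMesh-style comparison (four cores per router halving each dimension) is an illustrative assumption, not the paper's exact configuration.

```python
import itertools

def avg_hops_mesh(n):
    """Average Manhattan hop count between distinct nodes of an n x n
    mesh with dimension-ordered (XY) routing."""
    nodes = list(itertools.product(range(n), repeat=2))
    total = sum(abs(a - c) + abs(b - d)
                for (a, b), (c, d) in itertools.permutations(nodes, 2))
    return total / (len(nodes) * (len(nodes) - 1))

# 64 cores: plain 8x8 mesh vs. a CMesh-style 4x4 mesh with 4 cores per
# router -- a quarter of the routers and roughly half the hop distance.
print(avg_hops_mesh(8), avg_hops_mesh(4))
```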