A Watermarking Scheme for MP3 Audio Files

In this work, we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG Audio Layer 3 (MP3) files that operates directly in the compressed data domain, while manipulating the time and subband/channel domain. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented with special care for the efficient usage of the two limited resources of computer systems: time and space. It offers the industrial user watermark embedding and detection in time directly comparable to the real playing time of the original audio file, depending on the MPEG compression, while the end user/audience perceives no artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability to compression/recompression attacks suffered by algorithms operating in the PCM data domain, as it places the watermark in the scale factors domain and not in the digitized audio data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, relies on the fact that ownership of the audio file is established not simply by detecting the bit pattern that constitutes the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.
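
The abstract does not reproduce the embedding rule itself, so the following is only an illustrative sketch of the general idea of scale-factor-domain watermarking: a bit is embedded by forcing the parity of a scale factor, and detection is blind, i.e. it reads the parities back without the original signal. The parity-modulation rule is an assumption, not the paper's scheme.

```python
# Illustrative sketch (not the paper's exact scheme): embed a watermark bit
# in each MP3 scale factor by forcing its parity, so detection needs only
# the watermarked file, never the original audio.

def embed(scalefactors, watermark_bits):
    """Force the parity of each scale factor to match a watermark bit."""
    out = list(scalefactors)
    for i, bit in enumerate(watermark_bits):
        if out[i] % 2 != bit:
            out[i] += 1          # minimal change keeps the audible impact low
    return out

def detect(scalefactors, n_bits):
    """Blind detection: read the parities back."""
    return [sf % 2 for sf in scalefactors[:n_bits]]

if __name__ == "__main__":
    sfs = [23, 40, 17, 8, 31, 12]
    wm = [1, 0, 1, 1, 0, 0]
    assert detect(embed(sfs, wm), len(wm)) == wm
```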

Morphing Human Faces: Automatic Control Points Selection and Color Transition

In this paper, we propose a morphing method by which facial color images can be freely transformed. The main focus of this work is the transformation of one face image into another. The method is fully automatic in that it morphs two face images by automatically detecting all the control points necessary to perform the morph. A face detection neural network, edge detection, and median filters are employed to detect the face position and features. Five control points, for both the source and target images, are then extracted based on the facial features. A triangulation method is then used to match and warp the source image to the target image using the control points. Finally, color interpolation is performed using a Gaussian color model that calculates the color for each frame depending on the number of frames used. A real-coded genetic algorithm is used in both the image warping and color blending steps to assist in step-size decisions and to speed up the morphing. The method produces very smooth morphs and is fast.
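
The following is a minimal sketch of the warp-and-blend skeleton of such a pipeline, under simplifying assumptions: the control points are already detected, a single affine warp from three points stands in for the full triangulation, and a linear cross-dissolve stands in for the paper's Gaussian color model and GA-assisted blending.

```python
# Simplified morph step: interpolate control points, warp both images
# toward them, then cross-dissolve. Uses OpenCV affine warping.
import cv2
import numpy as np

def morph_frame(src, dst, src_pts, dst_pts, t):
    """Return the in-between frame at time t in [0, 1]."""
    pts_t = ((1 - t) * src_pts + t * dst_pts).astype(np.float32)
    M_src = cv2.getAffineTransform(src_pts[:3].astype(np.float32), pts_t[:3])
    M_dst = cv2.getAffineTransform(dst_pts[:3].astype(np.float32), pts_t[:3])
    h, w = src.shape[:2]
    warped_src = cv2.warpAffine(src, M_src, (w, h))
    warped_dst = cv2.warpAffine(dst, M_dst, (w, h))
    # linear cross-dissolve stands in for the paper's Gaussian color model
    return cv2.addWeighted(warped_src, 1.0 - t, warped_dst, t, 0)
```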

Performance Trade-Off of File System between Overwriting and Dynamic Relocation on a Solid State Drive

Most file systems overwrite modified file data and metadata in their original locations, while the Log-structured File System (LFS) dynamically relocates them to other locations. We design and implement the Evergreen file system, which can select between overwriting and relocation for each block of a file or metadata. The Evergreen file system can therefore achieve superior write performance by sequentializing write requests (similar to LFS-style relocation) when space utilization is low and by overwriting when utilization is high. Another challenging issue is identifying the performance benefits of LFS-style relocation over overwriting on the newly introduced SSD (Solid State Drive), which consists only of Flash-memory chips and control circuits, without mechanical parts. Our experimental results measured on an SSD show that relocation outperforms overwriting when space utilization is below 80%, and vice versa.
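
A hedged sketch of the per-block selection policy the abstract describes: relocate (log-style, sequential writes) while space utilization is low, overwrite in place once utilization crosses the threshold. The 0.80 cut-off is the one reported in the authors' SSD measurements; everything else is illustrative.

```python
UTILIZATION_THRESHOLD = 0.80   # crossover point reported in the experiments

def choose_write_strategy(used_blocks: int, total_blocks: int) -> str:
    """Pick a write strategy for the next modified block."""
    utilization = used_blocks / total_blocks
    if utilization < UTILIZATION_THRESHOLD:
        return "relocate"   # LFS-style: append sequentially to a clean segment
    return "overwrite"      # update the block in its original location

print(choose_write_strategy(400, 1000))   # -> 'relocate'
print(choose_write_strategy(900, 1000))   # -> 'overwrite'
```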

Estimation of Buffer Size of Internet Gateway Server via G/M/1 Queuing Model

How to efficiently assign system resources to route client demand through gateway servers is a challenging problem. In this paper, we present an enhanced approach for the autonomous performance of gateway servers under highly dynamic traffic loads. We devise a methodology to calculate queue length and waiting time from gateway server information in order to reduce response-time variance in the presence of bursty traffic. The most widespread consideration is performance: gateway servers must offer cost-effective, high-availability services over the long term, and thus have to be scaled to meet the expected load. Performance measurements can form the basis for performance modeling and prediction. With the help of performance models, performance metrics (such as buffer size and waiting time) can be determined during the development process. This paper describes the queue models that can be applied to estimate the queue length and, from it, the required memory size. Both simulation and experimental studies using synthesized workloads, together with analysis of real-world gateway servers, demonstrate the effectiveness of the proposed system.
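
A worked sketch of the standard G/M/1 results such an estimate rests on: sigma is the root in (0, 1) of sigma = A*(mu (1 - sigma)), where A* is the Laplace-Stieltjes transform of the interarrival-time distribution, and the mean waiting time and queue length then follow in closed form. The arrival/service rates below are illustrative workload parameters, not the paper's.

```python
def solve_sigma(lst, mu, iters=200):
    """Fixed-point iteration for sigma = A*(mu * (1 - sigma))."""
    sigma = 0.5
    for _ in range(iters):
        sigma = lst(mu * (1.0 - sigma))
    return sigma

def gm1_metrics(lst, lam, mu):
    sigma = solve_sigma(lst, mu)
    wq = sigma / (mu * (1.0 - sigma))   # mean waiting time in queue
    lq = lam * wq                       # mean queue length (Little's law)
    return sigma, wq, lq

if __name__ == "__main__":
    lam, mu = 0.8, 1.0                   # assumed arrival and service rates
    exp_lst = lambda s: lam / (lam + s)  # exponential arrivals: M/M/1 check
    print(gm1_metrics(exp_lst, lam, mu)) # sigma equals rho = 0.8, Wq = 4.0
```

A buffer-size estimate then follows by choosing a queue-length percentile from the geometric queue-length distribution parameterized by sigma.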

An Algorithm for Secure Visible Logo Embedding and Removal in the Compressed Domain

Digital watermarking is the process of embedding information into a digital signal and can be used in DRM (digital rights management) systems. A visible watermark (often called a logo) indicates the copyright owner, is commonly seen in TV programs, and protects the copyright in an active way. However, most schemes do not consider the process of removing the visible watermark. To solve this problem, a visible watermarking scheme with both embedding and removal processes is proposed, under the control of a secure template. The template generates a different version of the watermark for each user, all of which are visually identical. Users with the right key can completely remove the watermark and recover the original image, while unauthorized users are prevented from removing it. Experimental results show that our watermarking algorithm achieves good visual quality and is hard for unauthorized users to remove. Additionally, authorized users can completely remove the visible watermark and recover the original image with good quality.
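
The sketch below is not the paper's compressed-domain scheme; it only illustrates the key-controlled removal property in the pixel domain: the logo is added with key-dependent pseudo-random strengths, so keyed variants look alike but only the right key inverts the embedding exactly.

```python
import numpy as np

def keyed_template(shape, key):
    rng = np.random.default_rng(key)
    return rng.uniform(0.4, 0.6, size=shape)    # visually similar strengths

def embed(image, logo, key):
    return image + keyed_template(image.shape, key) * logo   # float arrays

def remove(marked, logo, key):
    return marked - keyed_template(marked.shape, key) * logo # exact only
                                                             # with right key

if __name__ == "__main__":
    img = np.random.default_rng(0).uniform(0, 1, (8, 8))
    logo = np.zeros((8, 8)); logo[2:6, 2:6] = 1.0
    marked = embed(img, logo, key=1234)
    assert np.allclose(remove(marked, logo, key=1234), img)
```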

Identifying Attack Code through an Ontology-Based Multiagent Tool: FROID

This paper describes the design and results of FROID, an outbound intrusion detection system built with agent technology and supported by an attacker-centric ontology. The prototype features a misuse-based detection mechanism that identifies remote attack tools in execution. Misuse signatures, composed of attributes selected through entropy analysis of outgoing traffic streams and process runtime data, are derived from execution variants of attack programs. The core of the architecture is a mesh of self-contained detection cells, organized non-hierarchically, that group agents in a functional fashion. The experiments show performance gains when the ontology is enabled, as well as an increase in accuracy when correlation cells combine detection evidence received from independent detection cells.
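
A small sketch of the entropy-based attribute selection the abstract mentions: attributes whose values stay most predictable (lowest entropy) across executions of an attack tool are the best signature candidates. The field names and records are hypothetical.

```python
import math
from collections import Counter

def entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

observations = [   # one record per observed execution of an attack tool
    {"dst_port": 445, "pkt_len": 92, "proc_name": "scan.exe"},
    {"dst_port": 445, "pkt_len": 96, "proc_name": "scan.exe"},
    {"dst_port": 445, "pkt_len": 92, "proc_name": "scan.exe"},
]

ranked = sorted(
    observations[0],
    key=lambda a: entropy([rec[a] for rec in observations]),
)
print(ranked)   # lowest-entropy attributes first: dst_port, proc_name, ...
```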

Concurrent Access to Complex Entities

In this paper we present a way of controlling concurrent access to data in a distributed application using the Pessimistic Offline Lock design pattern. In our case, the application processes a complex entity, which contains various other entities (objects) in a hierarchical structure. We show how the complex entity and the contained entities must be locked in order to control concurrent access to the data.
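
A minimal sketch of the Pessimistic Offline Lock idea for a complex entity, under one simple policy (an assumption, since the paper's exact locking rules are not given in the abstract): to edit any part of the hierarchy, a session must first acquire the lock on the root entity, which implicitly covers all contained entities.

```python
import threading

class LockManager:
    """Coarse-grained pessimistic offline lock: one lock per root entity."""

    def __init__(self):
        self._owners = {}                 # root entity id -> session id
        self._mutex = threading.Lock()

    def acquire(self, root_id, session_id):
        with self._mutex:
            owner = self._owners.get(root_id)
            if owner is not None and owner != session_id:
                return False              # another session edits this hierarchy
            self._owners[root_id] = session_id
            return True

    def release(self, root_id, session_id):
        with self._mutex:
            if self._owners.get(root_id) == session_id:
                del self._owners[root_id]

mgr = LockManager()
assert mgr.acquire("order-42", "session-A")
assert not mgr.acquire("order-42", "session-B")   # blocked until released
```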

On the Reduction of Side Effects in Tomography

Since Computed Tomography (CT) normally requires hundreds of projections to reconstruct an image, patients are exposed to considerable X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, CT requires many projections for a good-quality reconstruction. In this paper, low variability of the particles in an object is exploited to obtain a good-quality reconstruction. Although the reconstructed image and the original image have the same projections, they need not, in general, be identical. If a priori information about the image is known in addition to the projections, a good-quality reconstruction becomes possible. We show experimentally why conventional algorithms fail to reconstruct from a few projections, and we give an efficient polynomial-time algorithm that reconstructs a bi-level image from its projections along rows and columns, a known sub-image of the unknown image, and smoothness constraints, by reducing the reconstruction problem to an integral max-flow problem. The paper also discusses necessary and sufficient conditions for uniqueness, and extends the 2D bi-level image reconstruction to 3D bi-level image reconstruction.
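
A sketch of the core flow reduction (the known sub-image and smoothness terms of the paper are omitted here): source to row nodes with capacity equal to the row sum, column nodes to sink with capacity equal to the column sum, and unit-capacity edges between rows and columns. A saturating max flow yields one binary image consistent with both projections.

```python
import networkx as nx

def reconstruct(row_sums, col_sums):
    """Return a binary matrix with the given row/column sums, or None."""
    G = nx.DiGraph()
    for i, r in enumerate(row_sums):
        G.add_edge("s", ("r", i), capacity=r)
        for j in range(len(col_sums)):
            G.add_edge(("r", i), ("c", j), capacity=1)
    for j, c in enumerate(col_sums):
        G.add_edge(("c", j), "t", capacity=c)
    value, flow = nx.maximum_flow(G, "s", "t")
    if value != sum(row_sums):
        return None                       # projections are inconsistent
    return [[flow[("r", i)][("c", j)] for j in range(len(col_sums))]
            for i in range(len(row_sums))]

print(reconstruct([2, 1], [1, 1, 1]))     # e.g. [[1, 1, 0], [0, 0, 1]]
```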

A New Integer Programming Formulation for the Chinese Postman Problem with Time Dependent Travel Times

The Chinese Postman Problem (CPP) is one of the classical problems in graph theory and is applicable in a wide range of fields. With the rapid development of hybrid systems and model-based testing, the Chinese Postman Problem with Time Dependent Travel Times (CPPTDT) has become more realistic than the classical problem. In earlier work, we proposed the first integer programming formulation for the CPPTDT, namely the circuit formulation, based on which some polyhedral results were investigated and a cutting-plane algorithm was designed. However, the circuit formulation has a major drawback: it is applicable only to special instances in which all circuits pass through the origin. This paper therefore proposes a new integer programming formulation that solves all general instances of the CPPTDT and, at the same time, dramatically reduces the size of the circuit formulation. This makes it possible to design more efficient algorithms for the CPPTDT in future research.
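
The abstract does not reproduce either formulation, so as background only, here is a hedged sketch of the classical, time-independent CPP as an integer program (valid for connected graphs): x[e] counts how often edge e is traversed, and auxiliary integer variables z[v] enforce even traversal degree at every vertex. The toy instance and the PuLP modeling are illustrative assumptions.

```python
import pulp

edges = {("a", "b"): 3, ("b", "c"): 2, ("a", "c"): 4}   # toy connected graph

prob = pulp.LpProblem("CPP", pulp.LpMinimize)
x = {e: pulp.LpVariable(f"x_{e[0]}{e[1]}", lowBound=1, cat="Integer")
     for e in edges}
verts = {v for e in edges for v in e}
z = {v: pulp.LpVariable(f"z_{v}", lowBound=1, cat="Integer") for v in verts}

# minimize total traversal cost
prob += pulp.lpSum(c * x[e] for e, c in edges.items())
# each vertex must be entered and left equally often: even traversal degree
for v in verts:
    prob += pulp.lpSum(x[e] for e in edges if v in e) == 2 * z[v]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({e: int(x[e].value()) for e in edges})   # optimal here: traverse each once
```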

Generational Pipelined Genetic Algorithm (PLGA) Using Stochastic Selection

In this paper, a pipelined version of the genetic algorithm, called PLGA, and a corresponding hardware platform are described. The basic operations of the conventional GA (CGA) are pipelined using an appropriate selection scheme. The selection operator used here is stochastic in nature and is called SA-selection; it helps maintain the basic generational nature of the proposed pipelined GA (PLGA). A number of benchmark problems are used to compare the performance of conventional roulette-wheel selection and SA-selection. These include unimodal and multimodal functions with dimensionality ranging from very small to very large. SA-selection performs comparably to classical roulette-wheel selection on all instances with respect to both solution quality and rate of convergence. The speedups obtained by PLGA on the different benchmarks are significant. We show that a complete hardware pipeline can be developed using the proposed scheme if parallel evaluation of the fitness expression is possible, and in this connection a low-cost but very fast hardware evaluation unit is described. Simulation results show that in a pipelined hardware environment, PLGA is much faster than CGA. In terms of efficiency, PLGA is also found to outperform the parallel GA (PGA).
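
The abstract does not spell out the SA-selection rule, so the sketch below assumes the common simulated-annealing-style form: a challenger replaces the incumbent outright when fitter, and with a Boltzmann probability otherwise. Such a rule needs only local pairwise comparisons, which is what makes it pipeline-friendly compared to roulette-wheel selection over the whole population.

```python
import math
import random

def sa_select(incumbent, challenger, fitness, temperature):
    """Stochastic, SA-style selection between two individuals."""
    fi, fc = fitness(incumbent), fitness(challenger)
    if fc >= fi:
        return challenger
    if random.random() < math.exp((fc - fi) / temperature):
        return challenger                 # occasionally accept a worse one
    return incumbent

# Example: maximizing f(x) = -(x - 3)^2 with a cooling temperature.
f = lambda x: -(x - 3.0) ** 2
current = 0.0
for temp in (2.0, 1.0, 0.5, 0.1):
    current = sa_select(current, current + random.uniform(-1, 1), f, temp)
print(current)
```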

Efficient and Effective Gabor Feature Representation for Face Detection

We propose an improved version of elastic graph matching (EGM) as a face detector, called multi-scale EGM (MS-EGM). In this improvement, a Gabor wavelet-based pyramid reduces the computational complexity of the feature representation used in conventional EGM while preserving a critical amount of image information. The MS-EGM achieves higher detection performance than the Viola-Jones object detection algorithm based on an AdaBoost cascade of Haar-like features, with comparable detection speed. We attribute these benefits of the MS-EGM to its topological feature representation of a face.
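
A sketch of the Gabor-pyramid side of such a representation: a small bank of Gabor filters applied over an image pyramid, the kind of multi-scale feature stack MS-EGM builds on. All filter parameters here are illustrative, not the paper's.

```python
import cv2
import numpy as np

def gabor_pyramid_features(gray, n_orientations=4, n_levels=3):
    """Gabor responses at several orientations and pyramid scales."""
    feats = []
    level = gray.astype(np.float32)
    for _ in range(n_levels):
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5, psi=0)
            feats.append(cv2.filter2D(level, cv2.CV_32F, kern))
        level = cv2.pyrDown(level)        # halve resolution for the next scale
    return feats
```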

Enhancement of Stereo Video Pairs Using SDNs to Aid in 3D Reconstruction

This paper presents the results of enhancing images from a left-right stereo pair in order to increase the resolution of the 3D representation of a scene generated from that same pair. A new neural network structure known as a Self Delaying Dynamic Network (SDN) is used to perform the enhancement. The advantage of SDNs over existing techniques such as bicubic interpolation is their ability to cope with motion and noise effects. SDNs are used to generate two high-resolution images, one based on frames taken from the left view of the subject and one based on frames from the right. This new high-resolution stereo pair is then processed by a disparity map generator. The resulting disparity map is compared with two other disparity maps generated from the same scene: the first from an original high-resolution stereo pair, and the second from a stereo pair enhanced using bicubic interpolation. The maps generated using the SDN-enhanced pairs match the target maps more closely. The addition of extra noise to the input images is less problematic for the SDN system, which still outperforms bicubic interpolation.
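
For concreteness, here is a minimal disparity-map stage of the kind used to compare the enhanced pairs. OpenCV block matching stands in for whichever generator the paper used, and the file names are placeholders.

```python
import cv2

# Placeholder inputs: the enhanced left/right frames as 8-bit grayscale.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)    # fixed-point, scaled by 16

# Rescale for visual inspection and comparison against a target map.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```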

Concrete Mix Design Using Neural Network

The basic ingredients of concrete are cement, fine aggregate, coarse aggregate, and water. To produce concrete with certain specified properties, optimum proportions of these ingredients are mixed. The important factors governing the mix design are the grade of concrete, the type of cement, and the size, shape, and grading of the aggregates. The conventional mix-design method is based on experimentally derived empirical relationships between these factors. Its basic drawbacks are that it does not always produce the desired strength, the calculations are cumbersome, and a number of tables must be consulted to arrive at a trial mix proportion; moreover, attainment of the desired strength is uncertain, may fall below the target, and may even fail. To address this problem, a large number of cubes of standard grades were prepared and their 28-day strengths determined for different combinations of cement, fine aggregate, coarse aggregate, and water. An artificial neural network (ANN) was built using these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape, and grading of the aggregates; the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward back-propagation model. Finally, the trained ANN was validated; it gave results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be designed from given material properties, and the proportions of these materials can be quickly evaluated using the proposed ANN.
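
A minimal sketch of the described feed-forward back-propagation mapping: mix-design factors in, ingredient proportions out. The scikit-learn regressor stands in for the authors' network, and the two training rows are placeholders, not the paper's experimental cube data.

```python
from sklearn.neural_network import MLPRegressor

# inputs: [grade (MPa), cement type code, max aggregate size (mm),
#          shape code, grading code]
# outputs: [cement, fine agg., coarse agg., water] by weight, cement = 1.0
X = [[20, 1, 20, 1, 2],
     [30, 1, 20, 1, 2]]
y = [[1.0, 1.8, 3.2, 0.55],
     [1.0, 1.4, 2.8, 0.45]]

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X, y)
print(ann.predict([[25, 1, 20, 1, 2]]))   # proportions for an unseen grade
```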

Influence of Ambiguity Cluster on Quality Improvement in Image Compression

Image coding based on clustering provides immediate access to targeted features of interest in a high-quality decoded image. This approach is useful for intelligent devices as well as for multimedia content-based description standards. The result of image clustering cannot be precise at some positions, especially at pixels carrying edge information, which produces ambiguity among the clusters. Even with a good PDE-based enhancement operator, the quality of the decoded image depends strongly on the clustering process. In this paper, we introduce an ambiguity cluster in image coding to represent pixels with vagueness properties. The presence of such a cluster preserves details inherent to edges as well as uncertain pixels. It is also very useful during the decoding phase, in which an anisotropic diffusion operator, such as Perona-Malik, enhances the quality of the restored image. This work also offers a comparative study demonstrating the effectiveness of a fuzzy clustering technique in detecting the ambiguity cluster without losing much of the essential image information. Several experiments have been carried out to demonstrate the usefulness of the ambiguity concept in image compression. The coding results and the performance of the proposed algorithms are discussed in terms of the peak signal-to-noise ratio and the quantity of ambiguous pixels.
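
For reference, a compact Perona-Malik diffusion step of the kind applied at the decoder; the iteration count and parameters are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
    """Anisotropic diffusion: smooth flat regions, preserve edges."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u    # north/south/east/west gradients
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping weight g = exp(-(|grad| / kappa)^2) per direction
        u = u + lam * sum(np.exp(-(d / kappa) ** 2) * d
                          for d in (dn, ds, de, dw))
    return u
```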

Image Modeling Using Gibbs-Markov Random Field and Support Vector Machines Algorithm

This paper introduces a novel approach to estimating the clique potentials of Gibbs-Markov random field (GMRF) models using the Support Vector Machines (SVM) algorithm and Mean Field (MF) theory. The proposed approach models the potential function associated with each clique shape of the GMRF model as a Gaussian-shaped kernel. In turn, the energy function of the GMRF takes the form of a weighted sum of Gaussian kernels. This formulation of the GMRF model motivates the use of the SVM, with Mean Field theory applied to its learning, for estimating the energy function. The approach has been tested on synthetic texture images and is shown to provide satisfactory results in retrieving the synthesizing parameters.
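
A small sketch of the energy form the abstract describes, E = sum over cliques of w_c exp(-||f_c - mu_c||^2 / (2 sigma^2)); the learned SVM/MF machinery is not reproduced, and all values are illustrative.

```python
import numpy as np

def energy(clique_features, centers, weights, sigma=1.0):
    """GMRF energy as a weighted sum of Gaussian-shaped kernels."""
    total = 0.0
    for f, mu, w in zip(clique_features, centers, weights):
        total += w * np.exp(-np.sum((f - mu) ** 2) / (2 * sigma ** 2))
    return total

feats = [np.array([0.2, 0.1]), np.array([0.9, 0.4])]
mus = [np.array([0.0, 0.0]), np.array([1.0, 0.5])]
print(energy(feats, mus, weights=[0.7, -0.3]))
```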

An N×M Version of the 5×5 Playfair Cipher for Any Natural Language (Urdu as a Special Case)

In this paper, a modified N×M version of the traditional 5×5 Playfair cipher is introduced, which enables the user to encrypt messages in any natural language by choosing a matrix size appropriate to that language's alphabet. A 5×5 matrix can store only 25 characters (I and J conventionally share a cell to cover the 26 letters of English) and cannot accommodate languages with larger alphabets. The N×M matrix overcomes this limitation. As a special case, the paper discusses Urdu, where # is used to complete an odd pair and * is used to separate repeated letters.
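
A runnable sketch of the N×M generalisation: the classical Playfair digraph rules over an arbitrary rectangular grid, with # completing an odd-length message and * splitting repeated letters, as the abstract specifies. The tiny 4×4 alphabet is illustrative; an Urdu grid would simply use a larger N×M matrix of Urdu characters.

```python
def make_grid(alphabet, cols):
    rows = [alphabet[i:i + cols] for i in range(0, len(alphabet), cols)]
    pos = {ch: (r, c) for r, row in enumerate(rows)
           for c, ch in enumerate(row)}
    return rows, pos

def prepare(msg):
    out = []
    for ch in msg:
        if out and out[-1] == ch:
            out.append("*")               # break up repeated letters
        out.append(ch)
    if len(out) % 2:
        out.append("#")                   # complete the final pair
    return out

def encrypt(msg, alphabet, cols):
    grid, pos = make_grid(alphabet, cols)
    n_rows, text, cipher = len(grid), prepare(msg), []
    for a, b in zip(text[::2], text[1::2]):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:        # same row: take the letters to the right
            cipher += [grid[ra][(ca + 1) % cols], grid[rb][(cb + 1) % cols]]
        elif ca == cb:      # same column: take the letters below
            cipher += [grid[(ra + 1) % n_rows][ca], grid[(rb + 1) % n_rows][cb]]
        else:               # rectangle rule: swap the columns
            cipher += [grid[ra][cb], grid[rb][ca]]
    return "".join(cipher)

print(encrypt("jam", "abcdefghijklmn*#", 4))   # 4x4 demo grid -> 'ibnm'
```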

Packet Forwarding with Multiprotocol Label Switching

MultiProtocol Label Switching (MPLS) is an emerging technology that aims to address many of the issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at high speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header and independently determines the next hop for the packet using the destination IP address and the interior gateway protocol. This process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on the Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain, according to their associated FECs, by routers in the core of the network, called label switch routers, which swap the labels. Simply swapping the label, instead of referencing the packet's IP header in the routing table at each hop, provides a more efficient way of forwarding packets, which in turn allows traffic to be forwarded at tremendous speeds and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results comparing the performance of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks, by far the most common application of MPLS today.
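
A toy model of the label-swap operation at a label switch router: the incoming label alone indexes the forwarding table, and the IP header is never consulted in the core. The labels and interface names below are made up.

```python
# Label Forwarding Information Base: in_label -> (out_label, out_interface)
LFIB = {
    100: (200, "ge-0/0/1"),
    101: (201, "ge-0/0/2"),
}

def forward(packet):
    """One exact-match lookup, swap the label, pick the egress interface."""
    out_label, out_if = LFIB[packet["label"]]
    packet["label"] = out_label           # swap; the IP header is untouched
    return out_if, packet

print(forward({"label": 100, "payload": "ip-packet-bytes"}))
```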

Development of a Semantic Wiki-based Feature Library for the Extraction of Manufacturing Features and Manufacturing Information

A manufacturing feature can be defined simply as a geometric shape together with the manufacturing information needed to create that shape. In a feature-based process planning system, a feature library, consisting of pre-defined manufacturing features and the manufacturing information needed to create their shapes, plays an important role in extracting manufacturing features with their proper manufacturing information. However, to manage the manufacturing information flexibly, it is important to build a feature library that can be easily modified. In this paper, an implementation based on a Semantic Wiki for the development of such a feature library is proposed.

Comparative Study of Virtual Sickness between a Single Screen and Three Screens from the Parallax Effect

Virtual environments induce simulator sickness in some users. The purpose of this research is to compare simulator sickness related to the parallax effect in one-screen and three-screen HoloStage™ systems, measured by the Simulator Sickness Questionnaire (SSQ). The results show that subjects tested with three screens experienced less sickness than with one screen, and that the oculomotor (O) component contributed more than disorientation (D), which in turn contributed more than nausea (N), i.e., O > D > N.

Finding an Optimized Discriminant Function for Internet Application Recognition

Internet usage increases every day, and a world of data is becoming accessible. Network providers do not want their services to be used for harmful or terrorist purposes, so they employ a variety of methods to protect sensitive regions from harmful data. One of the most important of these methods is the firewall. A firewall stops the transfer of such packets in several ways, but in some cases firewalls are not used because of their indiscriminate packet blocking, high processing-power requirements, and high cost. Here we propose a method for finding a discriminant function that distinguishes usual packets from harmful ones by statistical processing of network router logs, so that an administrator can alert the affected user. This method is very fast and can easily be deployed alongside Internet routers.
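
The abstract does not specify the discriminant, so the sketch below uses Fisher's linear discriminant as one standard way to fit such a function from log statistics; the feature vectors and labels are synthetic placeholders.

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Fit w and a midpoint threshold separating the two classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)      # projection direction
    threshold = w @ (m0 + m1) / 2.0       # midpoint decision boundary
    return w, threshold

rng = np.random.default_rng(0)
usual = rng.normal([100, 0.2], 0.1, size=(50, 2))    # e.g. pkt size, rate
harmful = rng.normal([400, 0.9], 0.1, size=(50, 2))
w, thr = fisher_discriminant(usual, harmful)

flag = lambda x: w @ x > thr              # alert the administrator if True
print(flag(np.array([390.0, 0.85])))      # -> True
```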