Traceable Watermarking System using SoC for Digital Cinema Delivery

As digital technology develops, digital cinema is becoming more widespread. However, content copying and attacks against digital cinema have become serious problems. To address these security problems, we propose an "additional watermarking" method for a digital cinema delivery system. With the proposed method, content copyrights are protected at the encoder and user-side information at the decoder, which makes the watermark embedded at the encoder traceable. The watermark is embedded into randomly selected frames, and the embedding positions are distributed by a hash function so that third parties cannot break the watermarking algorithm. Finally, our experimental results show that the proposed method outperforms conventional watermarking techniques in terms of robustness and image quality, while remaining simple yet hard to break.
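The abstract does not give the embedding procedure in detail; the sketch below only illustrates how embedding frame positions can be derived from a keyed hash so that they are reproducible by the legitimate decoder yet hard for a third party to predict. The function and parameter names (`select_frames`, `secret_key`, `content_id`) are illustrative assumptions, not the paper's API.

```python
import hashlib

def select_frames(secret_key: bytes, content_id: str, total_frames: int, count: int):
    """Derive a reproducible, hard-to-predict set of embedding frame indices
    from a keyed hash (illustrative sketch, not the paper's exact scheme)."""
    indices = set()
    counter = 0
    while len(indices) < count:
        digest = hashlib.sha256(
            secret_key + content_id.encode() + counter.to_bytes(4, "big")
        ).digest()
        indices.add(int.from_bytes(digest[:4], "big") % total_frames)
        counter += 1
    return sorted(indices)

# Example: pick 5 embedding frames out of 144000 (about 100 minutes at 24 fps).
print(select_frames(b"encoder-secret", "movie-0001", 144000, 5))
```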

Analysis of Effect of Pre-Logic Factoring on Cell Based Combinatorial Logic Synthesis

In this paper, an analysis is presented that demonstrates the effect pre-logic factoring can have on the automated combinational logic synthesis process that follows it. The impact of pre-logic factoring on arbitrary combinational circuits synthesized within an FPGA-based logic design environment has been analyzed previously. This paper explores a similar effect, but with the non-regenerative logic synthesized using elements of a commercial standard cell library. Overall, the results obtained for a variety of MCNC/IWLS combinational logic benchmark circuits indicate that pre-logic factoring has the potential to facilitate simultaneously power-, delay- and area-optimized synthesis solutions in many cases.
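As a concrete illustration of the factoring step itself (an example of ours, not taken from the paper), algebraic factoring rewrites a sum-of-products expression into a form with fewer literals, which the downstream cell-based synthesis can map to a smaller and potentially faster netlist:

```latex
\[
  F \;=\; ab + ac + ad \;=\; a\,(b + c + d)
\]
% The sum-of-products form on the left uses 6 literals (three 2-input ANDs
% feeding an OR); the factored form on the right uses 4 literals and shares
% the literal a through a single AND gate.
```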

Object Allocation with Replication in Distributed Systems

The design of distributed systems involves dividing the system into partitions (or components) and then allocating these partitions to physical nodes. Several techniques have been proposed for both the partitioning and allocation processes, but these existing techniques suffer from a number of limitations, including lack of support for replication. Replication is difficult to use effectively but has the potential to greatly improve the performance of a distributed system. This paper presents a new technique for allocating objects in order to improve performance in a distributed system that supports replication. The proposed technique is demonstrated and tested on an example system, and its performance is compared with that of an existing technique in order to demonstrate both the validity and the superiority of the new technique when developing a distributed system that can utilise object replication.

Application of Biometrics to Obtain High Entropy Cryptographic Keys

In this paper, a two-factor scheme is proposed to generate cryptographic keys directly from biometric data which, unlike passwords, are strongly bound to the user. The hash value of the reference iris code is used as the cryptographic key, and its length depends only on the hash function, being independent of any other parameter. The entropy of such keys is 94 bits, which is much higher than in any comparable system. The most important and distinctive feature of this scheme is that it regenerates the reference iris code when a genuine iris sample and the correct user password are provided. Since iris codes obtained from two images of the same eye are not exactly the same, error correcting codes (a Hadamard code and a Reed-Solomon code) are used to deal with the variability. The scheme proposed here can be used to provide keys for a cryptographic system and/or for user authentication. The performance of this system is evaluated on two publicly available iris biometrics databases, namely the CBS and ICE databases. The operating point of the system (the values of the False Acceptance Rate (FAR) and False Rejection Rate (FRR)) can be set by properly selecting the error correction capacity (ts) of the Reed-Solomon codes; e.g., on the ICE database, at ts = 15, the FAR is 0.096% and the FRR is 0.76%.
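The abstract does not spell out the enrollment and regeneration steps; the sketch below, under the stated simplifications, only illustrates two of them: the key is the hash of the reference iris code, and the correction capacity `t` decides whether a presented sample is close enough for the exact reference code (and hence the key) to be released. The Hadamard/Reed-Solomon decoding is replaced by a plain Hamming-distance check, and the reference code appears in clear only for brevity; in the real scheme it is reconstructed from protected helper data rather than stored.

```python
import hashlib

def derive_key(reference_code: bytes) -> str:
    # The cryptographic key is the hash of the reference iris code; its length
    # depends only on the hash function (here SHA-256, i.e. 256 bits).
    return hashlib.sha256(reference_code).hexdigest()

def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def regenerate_key(reference_code, sample_code, password, stored_password, t=15):
    """Stand-in for the Hadamard/Reed-Solomon decoding step: accept the sample
    only if it lies within the correction capacity t of the reference and the
    password is correct, then release the key derived from the exact reference."""
    if password != stored_password:
        return None
    if hamming(reference_code, sample_code) > t:
        return None        # too many bit errors: the FAR/FRR trade-off is set by t
    return derive_key(reference_code)
```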

Minimizing Examinee Collusion with a Latin-Square Treatment Structure

Cheating on standardized tests has been a major concern because it undermines measurement precision. One major way to reduce cheating by collusion is to administer multiple forms of a test. Even with this approach, the potential for collusion remains considerable. A Latin-square treatment structure for distributing multiple forms is proposed to further reduce the colluding potential. An index to measure the extent of colluding potential is also proposed. Finally, using a simple algorithm, various Latin squares are explored to find the structure that keeps the colluding potential to a minimum.
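A minimal sketch of the distribution idea (the names and the collusion index below are our illustrative assumptions, not the paper's definitions): a cyclic Latin square places each of n forms exactly once in every row and column of an n×n seating block, so no two horizontally or vertically adjacent examinees hold the same form.

```python
def latin_square(n: int):
    """Cyclic n x n Latin square: form (i + j) mod n at row i, column j."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def adjacent_same_form_pairs(seating):
    """Illustrative collusion index: count of horizontally or vertically
    adjacent examinees who hold the same form (0 is best)."""
    rows, cols = len(seating), len(seating[0])
    count = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and seating[r][c] == seating[r][c + 1]:
                count += 1
            if r + 1 < rows and seating[r][c] == seating[r + 1][c]:
                count += 1
    return count

forms = latin_square(4)                 # 4 test forms over a 4 x 4 seating block
print(forms)
print(adjacent_same_form_pairs(forms))  # 0: no neighbours share a form
```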

Design of PID Controller for Higher Order Continuous Systems using MPSO based Model Formulation Technique

This paper proposes a new algebraic scheme to design a PID controller for higher order linear time invariant continuous systems. A modified PSO (MPSO) based model order formulation technique is applied to obtain an effective formulated second order system. A controller is then tuned to meet the desired performance specification by using the pole-zero cancellation method. The proposed PID controller is attached to both the higher order system and the formulated second order system, and the closed-loop responses are observed for the stabilization process and compared with those obtained from a general PSO based formulated second order system. The proposed method is illustrated through a numerical example from the literature.
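A minimal sketch of the pole-zero cancellation step, assuming the reduced model has the standard form G(s) = K / (s^2 + a1*s + a0); the MPSO-based reduction itself and the paper's exact tuning targets are not reproduced here.

```python
def pid_by_pole_zero_cancellation(K, a1, a0, desired_bandwidth):
    """For a reduced second-order model G(s) = K / (s^2 + a1*s + a0), choose a
    PID controller C(s) = Kd*(s^2 + (Kp/Kd)*s + Ki/Kd) / s whose zeros cancel
    the model poles.  The open loop then becomes K*Kd / s, giving a first-order
    closed loop whose bandwidth K*Kd is set to the desired value.
    (Illustrative tuning only; the paper's procedure may differ.)"""
    Kd = desired_bandwidth / K
    Kp = a1 * Kd
    Ki = a0 * Kd
    return Kp, Ki, Kd

# Example: reduced model G(s) = 2 / (s^2 + 3s + 2), target bandwidth 5 rad/s.
print(pid_by_pole_zero_cancellation(K=2.0, a1=3.0, a0=2.0, desired_bandwidth=5.0))
```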

Benchmarking: Performance on ALPS and Formosa Clusters

This paper presents the benchmarking results and performance evaluation of different clusters built at the National Center for High-Performance Computing in Taiwan. The performance of the processor, memory subsystem and interconnect is a critical factor in the overall performance of high-performance computing platforms. The evaluation compares different system architectures and software platforms. Most supercomputers use HPL to benchmark their system performance, in accordance with the requirements of the TOP500 list. In this paper we consider system memory access factors that affect benchmark performance, such as processor and memory performance. We hope this work will provide useful information for the future development and construction of cluster systems.

Curbing Cybercrime by Application of Internet Users’ Identification System (IUIS) in Nigeria

Cybercrime is becoming a big challenge in Nigeria, alongside traditional crime. The inability to identify perpetrators is one of the reasons for the growing menace. This paper proposes a design for monitoring internet users' activities in order to curb cybercrime. It requires redefining the operations of Internet Service Providers (ISPs), which would mandate users to be authenticated before accessing the internet. In implementing this work, which can be adapted to a larger scale, a virtual router application is developed and configured to mimic a real router device. A sign-up portal is developed to allow users to register with the ISP. The portal asks for identification information, which includes bio-data and government-issued identification data such as the National Identity Card number, et cetera. A unique username and password are chosen by the user to enable access to the internet; they are used to link the user to the Internet Protocol address (IP address) of any system he uses on the internet and thereby associate him with any criminal act related to that IP address at that particular time. Questions such as "What happens when another user knows the password and uses it to commit a crime?" and other pertinent issues are addressed.
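A minimal sketch (our assumption of the kind of record an ISP would keep, not the paper's implementation) of the association between an authenticated subscriber, an IP address and a timestamp that makes later tracing possible:

```python
import datetime

access_log = []   # who held which IP address, starting when

def authenticate_and_log(username, password, assigned_ip, credentials):
    """Grant internet access only to registered users and record the
    username-to-IP association for later tracing."""
    if credentials.get(username) != password:
        return False                       # unregistered user or wrong password
    access_log.append({
        "username": username,
        "ip": assigned_ip,
        "start": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return True

credentials = {"ada": "s3cret"}            # collected via the sign-up portal
authenticate_and_log("ada", "s3cret", "197.210.44.12", credentials)
print(access_log)
```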

Refinement of Object-Z Specifications Using Morgan's Refinement Calculus

Morgan's refinement calculus (MRC) is one of the well-known methods that allow the formality present in a program specification to be carried all the way to code. Object-Z (OZ), on the other hand, is an extension of Z that adds support for classes and objects. There are a number of methods for obtaining code from OZ specifications, which can be categorized into refinement and animation methods. As far as we know, only one refinement method exists that refines OZ specifications into code. However, this method does not have fine-grained refinement rules and thus cannot be automated. Existing animation methods, in turn, do not present mapping rules formally and do not support the mapping of several important constructs of OZ, such as all cases of operation expressions and most of the constructs in the global paragraph. In this paper, with the aim of providing an automatic path from OZ specifications to code, we propose an approach that maps OZ specifications into their counterparts in MRC so that the fine-grained refinement rules of MRC can be used. In this way, having counterparts of our specifications in MRC, we can refine them into code automatically using MRC tools such as RED. Other advantages of our work are that the mapping rules are proposed formally, that the mapping of all important constructs of Object-Z is supported, and that dynamic instantiation of objects is considered even though OZ itself does not cover this facility.

A Software-Supported Methodology for Designing General-Purpose Interconnection Networks for Reconfigurable Architectures

Modern applications realized on FPGAs exhibit high connectivity demands. In this paper we study the routing constraints of Virtex devices and propose a systematic methodology for designing a novel general-purpose interconnection network targeting reconfigurable architectures. This network consists of multiple segment wires and switch-box (SB) patterns, appropriately selected and assigned across the device. The goal of our proposed methodology is to maximize the hardware utilization of the fabricated routing resources. The derived interconnection scheme is integrated on a Virtex-style FPGA. This device is characterized both by its high performance and by its low energy requirements. Accordingly, the design criterion that guides our architectural selections is the minimal Energy×Delay Product (EDP). The methodology is fully supported by three new software tools, which belong to the MEANDER Design Framework. Using a typical set of MCNC benchmarks, an extensive comparative study in terms of several critical parameters demonstrates the effectiveness of the derived interconnection network. More specifically, we achieve an average Energy×Delay Product reduction of 63%, a performance increase of 26%, a reduction in leakage power of 21%, and a reduction in total energy consumption of 11%, at the expense of a 20% increase in channel width.

A Parallel Architecture for the Real Time Correction of Stereoscopic Images

In this paper, we present an architecture for the implementation of a real-time approach to the correction of stereoscopic images. The architecture is parallel and makes use of several memory blocks that store precalculated data relating to the cameras used to acquire the images. The use of reduced images proves essential in the proposed approach, so the suggested architecture must also be able to carry out the real-time reduction of the original images.

Self-Organization of Clusters Having Locally Distributed Patterns for Highly Synchronized Inputs

Many experimental results suggest that precise spike timing is significant in neural information processing. We construct a self-organization model using spatiotemporal patterns, in which Spike-Timing Dependent Plasticity (STDP) tunes the conduction delays between neurons. We show that, for highly synchronized inputs, the fluctuation of conduction delays causes globally continuous and locally distributed firing patterns through this self-organization.
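The abstract does not give the delay-update rule; the sketch below shows only the standard pair-based STDP window, together with one plausible (assumed) way of applying it to a conduction delay so that the presynaptic arrival time is pulled toward the postsynaptic firing time.

```python
import math

def stdp_window(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Standard pair-based STDP window; dt = t_post - t_arrival in ms."""
    if dt > 0:    # presynaptic spike arrives before the postsynaptic spike
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

def update_delay(delay, t_pre, t_post, eta=1.0):
    """Assumed illustrative rule: shift the conduction delay so that the spike
    arrival time (t_pre + delay) moves toward the postsynaptic firing time."""
    dt = t_post - (t_pre + delay)
    return max(0.1, delay + eta * stdp_window(dt))

print(update_delay(delay=5.0, t_pre=0.0, t_post=7.0))   # delay grows toward 7 ms
```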

A New Method for Multiobjective Optimization Based on Learning Automata

The need to solve complicated multidimensional scientific problems, together with the need to optimize several objective functions at once, has been a major motivation for the development of artificial intelligence and heuristic methods. In this paper, we introduce a new method for multiobjective optimization based on learning automata. In the proposed method, the search space is divided into separate hypercubes and each cube is considered an action. All objective functions are combined with separate weights into a cumulative function, which serves as the fitness function. By applying the cumulative function to all the cubes, we calculate the reinforcement of each action, and the algorithm continues on its way to find the best solutions. In this method, a lateral memory is used to gather the significant points of each iteration of the algorithm. Finally, by considering the domination factor, the Pareto front is estimated. Results of several experiments show the effectiveness of this method in comparison with a genetic algorithm based method.
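A minimal one-dimensional sketch of the method as described in the abstract; the cell count, the reward scheme (a linear reward-inaction style update) and the archive handling below are our illustrative assumptions.

```python
import random

def weighted_sum(fs, weights, x):
    """Cumulative (fitness) function: weighted sum of all objectives at x."""
    return sum(w * f(x) for w, f in zip(weights, fs))

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def la_moo(fs, weights, bounds, cells=10, iters=2000, reward=0.02):
    """Search interval split into `cells` sub-intervals (the automaton's actions);
    actions yielding a better-than-average cumulative value are reinforced, and a
    lateral memory (archive) keeps the non-dominated points seen so far."""
    lo, hi = bounds
    width = (hi - lo) / cells
    prob = [1.0 / cells] * cells              # action probability vector
    archive = []                              # lateral memory
    avg = None
    for _ in range(iters):
        a = random.choices(range(cells), prob)[0]
        x = lo + (a + random.random()) * width    # sample inside the chosen cube
        value = weighted_sum(fs, weights, x)
        avg = value if avg is None else 0.99 * avg + 0.01 * value
        if value < avg:                           # minimization: reward action a
            prob = [p * (1 - reward) for p in prob]
            prob[a] += reward
        objs = tuple(f(x) for f in fs)
        if not any(dominates(o, objs) for _, o in archive):
            archive = [(p, o) for p, o in archive if not dominates(objs, o)]
            archive.append((x, objs))
    return archive

# Example: two conflicting objectives on [0, 2] (Schaffer-type problem).
front = la_moo([lambda x: x**2, lambda x: (x - 2)**2], [0.5, 0.5], (0.0, 2.0))
print(len(front), "non-dominated points found")
```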

A Robust Data Hiding Technique based on LSB Matching

Many researchers are working on information hiding techniques that use different ideas and domains to hide secret data. This paper introduces a robust technique for hiding secret data in an image based on LSB insertion and RSA encryption. The key idea of the proposed technique is to first encrypt the secret data. The encrypted data are then converted into a bit stream and divided into a number of segments. The cover image is divided into the same number of segments. Each data segment is compared with each image segment to find the best-matching segment, in order to create a new random sequence of segments that is then inserted into the cover image. Experimental results show that the proposed technique has a high security level and produces better stego-image quality.
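A minimal sketch of the segment-matching step described above (the RSA encryption of the secret is assumed to have been done already, and the cover is represented only by the LSBs of its pixels, split into fixed-length segments; the names and segment length are our assumptions):

```python
import random

def to_bits(data: bytes):
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def plan_embedding(cover_lsb_segments, encrypted_secret: bytes, seg_len=8):
    """Split the (already encrypted) secret into bit segments and assign each one
    to the unused cover segment whose current LSBs match it best, so that fewer
    LSBs have to change.  Returns (cover segment index, bits) pairs to embed."""
    bits = to_bits(encrypted_secret)
    data_segs = [bits[i:i + seg_len] for i in range(0, len(bits), seg_len)]
    free = set(range(len(cover_lsb_segments)))
    plan = []
    for seg in data_segs:
        best = min(free, key=lambda j: hamming(seg, cover_lsb_segments[j]))
        free.remove(best)
        plan.append((best, seg))
    return plan

# Example: LSBs of a cover image flattened into 64 segments of 8 pixels each.
cover = [[random.randint(0, 1) for _ in range(8)] for _ in range(64)]
print(plan_embedding(cover, b"hi"))
```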

Handwritten Character Recognition Using Multiscale Neural Network Training Technique

Advancement in artificial intelligence has led to the development of various "smart" devices. A character recognition device is one such smart device: it acquires partial human intelligence with the ability to capture and recognize various characters in different languages. First, multiscale neural network training with modifications to the input training vectors is adopted in this paper to take advantage of its strength in training on higher resolution character images. Second, selective thresholding using a minimum distance technique is proposed to increase the accuracy of character recognition. A simulator program (a GUI) is designed so that the characters can be located anywhere on the blank page on which they are written. The results show that these methods, with a moderate number of training epochs, can produce accuracies of at least 85% for handwritten upper-case English characters and numerals.
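The abstract does not define the selective thresholding rule precisely; the sketch below shows one common form of minimum-distance classification with a rejection threshold, which is our assumed reading of the idea: a character is assigned to the nearest class mean only when that nearest distance is small enough, so ambiguous inputs are not forced into a class.

```python
def classify_with_rejection(features, class_means, threshold):
    """Minimum-distance classification with selective thresholding (sketch)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_label, best_d = None, float("inf")
    for label, mean in class_means.items():
        d = dist2(features, mean)
        if d < best_d:
            best_label, best_d = label, d
    return best_label if best_d <= threshold ** 2 else None   # None = rejected

means = {"A": [0.9, 0.1], "B": [0.1, 0.9]}
print(classify_with_rejection([0.8, 0.2], means, threshold=0.5))   # 'A'
print(classify_with_rejection([0.5, 0.5], means, threshold=0.3))   # None (rejected)
```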

MAP-Based Image Super-resolution Reconstruction

From a set of shifted, blurred, and decimated images, super-resolution image reconstruction can recover a high-resolution image, so it has become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be incorporated to make the problem well-posed, which gives rise to various regularization methods. In current regularization methods, however, the regularization parameter is in some cases selected by experience, while other techniques for computing the parameter are computationally expensive. In this paper, we construct a new super-resolution algorithm by transforming the solution of the linear system ε = Aη into the solution of the nonlinear matrix equation X + A*X⁻¹A = I, and we propose an inverse iterative method.
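The paper's inverse iterative method is not given in the abstract; the sketch below uses one standard fixed-point iteration for matrix equations of this form, X_{k+1} = I - A*X_k^{-1}A starting from X_0 = I, purely to illustrate the kind of computation involved.

```python
import numpy as np

def solve_matrix_equation(A, iters=100, tol=1e-12):
    """Fixed-point sketch for X + A*X^{-1}A = I (A* = conjugate transpose):
    iterate X_{k+1} = I - A^H X_k^{-1} A from X_0 = I.  Illustrative only;
    the paper's specific inverse iterative method may differ."""
    n = A.shape[0]
    I = np.eye(n)
    X = I.copy()
    for _ in range(iters):
        X_next = I - A.conj().T @ np.linalg.solve(X, A)
        if np.linalg.norm(X_next - X) < tol:
            return X_next
        X = X_next
    return X

A = 0.3 * np.array([[0.5, 0.2], [0.1, 0.4]])
X = solve_matrix_equation(A)
print(np.allclose(X + A.conj().T @ np.linalg.inv(X) @ A, np.eye(2)))   # True
```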

An Enhanced Cryptanalytic Attack on Knapsack Cipher using Genetic Algorithm

With the exponential growth of networked systems and applications such as e-commerce, the demand for effective internet security is increasing. Cryptology is the science and study of systems for secret communication; it consists of two complementary fields of study, cryptography and cryptanalysis. The application of genetic algorithms to the cryptanalysis of knapsack ciphers was suggested by Spillman [7]. In order to improve the efficiency of the genetic algorithm attack on the knapsack cipher, the previously published attack was enhanced and re-implemented with variations of the initial assumptions, and the results are compared with Spillman's results. The experimental results of this research indicate that the efficiency of the genetic algorithm attack on the knapsack cipher can indeed be improved by varying the initial assumptions.
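A minimal sketch of a genetic algorithm attack of this kind (the knapsack weights, ciphertext, fitness scaling and GA parameters below are illustrative assumptions; the fitness only follows the general Spillman-style idea of rewarding candidate subset sums close to the intercepted ciphertext):

```python
import random

# Illustrative public knapsack weights and an intercepted ciphertext value.
WEIGHTS = [295, 592, 301, 14, 28, 353, 120, 236]
TARGET = sum(w for w, bit in zip(WEIGHTS, [1, 0, 1, 1, 0, 0, 1, 0]) if bit)

def fitness(candidate):
    """Closeness of the candidate subset sum to the ciphertext, scaled to [0, 1]."""
    s = sum(w for w, bit in zip(WEIGHTS, candidate) if bit)
    return 1.0 - abs(TARGET - s) / max(TARGET, sum(WEIGHTS) - TARGET)

def ga_attack(pop_size=40, generations=200, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in WEIGHTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 1.0:                # subset sum matches the ciphertext
            return pop[0]
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(WEIGHTS))
            child = a[:cut] + b[cut:]             # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(ga_attack())   # candidate plaintext bits whose weights sum to the ciphertext
```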

Investigation of Time Delay Factors in Global Software Development

Global Software Development (GSD) projects cross the boundaries of companies, countries and even continents, where time zones differ between sites. Besides the many benefits of such development, research has reported plenty of negative impacts on GSD projects. It is therefore important to understand the problems that may arise during the execution of a GSD project spanning different time zones. This research discusses and presents different issues related to time delays in GSD projects. In this paper, the authors investigate some of the time delay factors that commonly arise in GSD projects with different time zones. This investigation is carried out through a systematic review of the literature. Furthermore, the practices for overcoming these delay factors that have already been reported in the literature and by GSD organizations are explored through a literature survey and case studies.

Theoretical Considerations for Software Component Metrics

We have defined two suites of metrics, which cover static and dynamic aspects of component assembly. The static metrics measure the complexity and criticality of component assembly, where complexity is measured using the Component Packing Density and Component Interaction Density metrics. Further, four criticality conditions, namely Link, Bridge, Inheritance and Size criticalities, have been identified and quantified. The complexity and criticality metrics are combined to form a Triangular Metric, which can be used to classify the type and nature of applications. Dynamic metrics are collected during the runtime of a complete application; they are useful for identifying super-components and for evaluating the degree of utilisation of various components. In this paper both static and dynamic metrics are evaluated using Weyuker's set of properties. The results show that the metrics provide a valid means of measuring issues in component assembly. We relate our metrics suite to McCall's Quality Model and illustrate its impact on product quality and on the management of component-based product development.