Objective Performance of Compressed Image Quality Assessments

Measurement of image compression quality is important for image processing applications. In this paper, we propose an objective image quality assessment for measuring the quality of grayscale compressed images that correlates well with the subjective quality measurement (mean opinion score, MOS) and requires little computation time. The new objective image quality measurement is developed from a few fundamental objective measurements used to evaluate the quality of images compressed with JPEG and JPEG2000. The reliability between each fundamental objective measurement and the subjective measurement (MOS) is determined. From the experimental results, we find that the Maximum Difference measurement (MD) and a newly proposed measurement, Structural Content Laplacian Mean Square Error (SCLMSE), are the suitable measurements for evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, rating compressed image quality from 1 to 5 (unacceptable to excellent quality).
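As a concrete illustration of the fundamental objective measurements the abstract builds on, the sketch below computes the standard Maximum Difference (MD) metric together with Structural Content (SC) and Laplacian Mean Square Error (LMSE), the two classical metrics whose names SCLMSE combines. The exact way the paper fuses SC and LMSE into SCLMSE is not given in the abstract, so only the standard building blocks are shown.

```python
import numpy as np
from scipy.ndimage import laplace

def md(ref: np.ndarray, comp: np.ndarray) -> float:
    """Maximum Difference: largest absolute pixel error."""
    return float(np.max(np.abs(ref.astype(float) - comp.astype(float))))

def sc(ref: np.ndarray, comp: np.ndarray) -> float:
    """Structural Content: ratio of reference to compressed signal energy."""
    return float(np.sum(ref.astype(float) ** 2) / np.sum(comp.astype(float) ** 2))

def lmse(ref: np.ndarray, comp: np.ndarray) -> float:
    """Laplacian MSE: error energy of the Laplacian-filtered images,
    normalized by the Laplacian energy of the reference."""
    lr, lc = laplace(ref.astype(float)), laplace(comp.astype(float))
    return float(np.sum((lr - lc) ** 2) / np.sum(lr ** 2))
```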

Information Security in E-Learning through Identification of Humans

During recent years, traditional learning approaches have undergone fundamental changes due to the emergence of new technologies such as multimedia, hypermedia, and telecommunication. E-learning is a modern-world phenomenon that has come into existence in the information age and in a knowledge-based society. E-learning has developed significantly within a short period of time. Thus it is of great significance to secure information, allow confident access, and prevent unauthorized access. Making use of individuals' physiological or behavioral (biometric) properties is a reliable method of securing information. Among biometrics, the fingerprint is widely accepted, and most countries use it as an efficient method of identification. This article presents a new method for fingerprint comparison based on pattern recognition and image processing techniques. To verify fingerprints, the shortest distance method is used together with a multilayer perceptron neural network operating on minutiae. This method is highly accurate in the extraction of minutiae, accelerates comparisons by eliminating false minutiae, and is more reliable than methods that merely use directional images.
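The abstract does not detail the shortest distance method, so the following is only a minimal sketch of one plausible reading: each minutia is a point, and a probe minutia counts as matched when its nearest gallery minutia lies within a distance threshold. The coordinates and tolerance are illustrative assumptions.

```python
import numpy as np

def match_score(probe: np.ndarray, gallery: np.ndarray, tol: float = 10.0) -> float:
    """probe, gallery: arrays of minutiae (x, y) rows. Returns the fraction of
    probe minutiae whose nearest gallery minutia is within `tol` pixels."""
    matched = 0
    for p in probe:
        dists = np.linalg.norm(gallery - p, axis=1)  # shortest-distance search
        if dists.min() <= tol:
            matched += 1
    return matched / len(probe)

# toy example with hypothetical minutiae coordinates
probe = np.array([[10, 12], [40, 41], [80, 75]], float)
gallery = np.array([[11, 13], [42, 40], [200, 180]], float)
print(match_score(probe, gallery))  # 2 of 3 minutiae match -> 0.666...
```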

Design of Reliable and Low Cost Substrate Heater for Thin Film Deposition

The substrate heater designed for this investigation is a front-side substrate heating system. It consists of 10 conventional tungsten halogen lamps and an aluminum reflector, with a total input electrical power of 5 kW. The substrate is heated by radiation from the tungsten halogen lamps directed at the substrate through a glass window. This design allows easy replacement of the lamps and maintenance of the system. Within 2 to 6 minutes the substrate temperature reaches 500 to 830 °C, depending on the vertical distance between the glass window and the substrate holder. Moreover, the substrate temperature can be easily controlled by controlling the input power to the system. This design provides an excellent opportunity to deposit many different films at different temperatures in the same deposition run. The substrate heater was successfully used for Chemical Vapor Deposition (CVD) of many thin films, such as silicon and iron.

Overload Control in a SIP Signaling Network

Internet telephony is a type of Internet communication in which mutual communication is realized by establishing sessions. The Session Initiation Protocol (SIP) is used to establish sessions between end-users. Over unreliable transport (UDP), a SIP message must be retransmitted when it is lost. The retransmissions increase the load on the SIP signaling network and can lead to performance degradation when the network is overloaded. This paper proposes an overload control for a SIP signaling network to protect it from performance degradation. By introducing two thresholds in the queue of a SIP proxy server, the server detects congestion. Once congestion is detected, the SIP signaling network restricts the admission of new calls. The proposed overload control is evaluated using the network simulator ns-2. The simulation results show that the proposed overload control works well.
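The abstract names a two-threshold queue mechanism without specifying it; below is a minimal sketch of one standard reading, hysteresis-based congestion detection. The threshold values and the choice to reject only new INVITE requests are assumptions for illustration.

```python
class TwoThresholdControl:
    """Hysteresis congestion detector for a SIP proxy queue: congestion is
    declared above `high` and cleared only once the queue falls below `low`."""

    def __init__(self, low: int = 50, high: int = 80):
        self.low, self.high = low, high
        self.congested = False

    def update(self, queue_length: int) -> None:
        if queue_length >= self.high:
            self.congested = True
        elif queue_length <= self.low:
            self.congested = False

    def admit(self, message: str) -> bool:
        # during congestion, only new call setups (INVITE) are rejected
        return not (self.congested and message == "INVITE")

ctrl = TwoThresholdControl()
ctrl.update(90)              # queue has exceeded the high threshold
print(ctrl.admit("INVITE"))  # False: new calls are restricted
print(ctrl.admit("BYE"))     # True: in-progress dialogs still proceed
```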

Influence of Non-Structural Elements on Dynamic Response of Multi-Storey RC Building to Mining Shock

This paper presents the results of calculations of the dynamic response of a multi-storey reinforced concrete building to a strong mining shock originating from the main region of mining activity in Poland (the Legnica-Glogow Copper District). Representative time histories of accelerations registered in three directions were used as ground motion data in the calculations of the dynamic response of the structure. Two variants of a numerical model were applied: a model including only the structural elements of the building, and a model including both structural and non-structural elements (i.e. partition walls and ventilation ducts made of brick). It turned out that the non-structural elements of multi-storey RC buildings have a small impact, of about 10%, on the natural frequencies of these structures. It was also shown that the dynamic response of the building to the mining shock obtained when all non-structural elements are included in the numerical model is about 20% smaller than when only structural elements are considered. The principal stresses obtained from the calculations of the dynamic response of the multi-storey building to the strong mining shock are at a level of about 30% of the values obtained from static analysis (dead load).

Modeling Stress-Induced Regulatory Cascades with Artificial Neural Networks

Yeast cells live in a constantly changing environment that requires the continuous adaptation of their genomic program in order to sustain their homeostasis, survive, and proliferate. Due to the advancement of high-throughput technologies, there is currently a large amount of data, such as gene expression, gene deletion, and protein-protein interaction data, for S. cerevisiae under various environmental conditions. Mining these datasets requires efficient computational methods capable of integrating different types of data, identifying inter-relations between different components, and inferring functional groups or 'modules' that shape intracellular processes. This study uses computational methods to delineate some of the mechanisms used by yeast cells to respond to environmental changes. The GRAM algorithm is first used to integrate gene expression data and ChIP-chip data in order to find modules of co-expressed and co-regulated genes as well as the transcription factors (TFs) that regulate these modules. Since transcription factors are themselves transcriptionally regulated, a three-layer regulatory cascade consisting of the TF-regulators, the TFs, and the regulated modules is subsequently considered. This three-layer cascade is then modeled quantitatively using artificial neural networks (ANNs), where the input layer corresponds to the expression of the upstream transcription factors (TF-regulators) and the output layer corresponds to the expression of genes within each module. This work (a) shows that the expression of at least 33 genes over time and under different stress conditions is well predicted by the expression of the top-layer transcription factors, including cases in which the effect of upstream regulators is shifted in time, and (b) identifies at least 6 novel regulatory interactions that were not previously associated with stress-induced changes in gene expression. These findings suggest that the combination of gene expression and protein-DNA interaction data with artificial neural networks can successfully model biological pathways and capture quantitative dependencies between distant regulators and downstream genes.
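A minimal sketch of the quantitative modeling step, on synthetic stand-in data (the study's actual expression matrices and network sizes are not given in the abstract): the input layer holds the expression of the upstream TF-regulators, and the output layer the expression of the genes in one module.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_timepoints, n_regulators, n_module_genes = 40, 5, 8

# synthetic stand-ins: TF-regulator expression -> module gene expression
X = rng.normal(size=(n_timepoints, n_regulators))
Y = X @ rng.normal(size=(n_regulators, n_module_genes)) \
    + 0.1 * rng.normal(size=(n_timepoints, n_module_genes))

# one hidden layer plays the role of the middle (TF) layer of the cascade
ann = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
ann.fit(X, Y)
print("training R^2:", ann.score(X, Y))
```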

Numerical Simulation of CNT Incorporated Cement

Cement, the most widely used construction material, is very brittle and characterized by low tensile strength and strain capacity. Fibers ranging from the macro to the nano scale are added to cement to provide tensile strength and ductility. The Carbon Nanotube (CNT), one of the nanofibers, has proven to be a promising reinforcing material in cement composites because of its outstanding mechanical properties and its ability to close cracks at the nano level. Experimental investigation of CNT-reinforced cement is costly, time-consuming, and involves a huge number of trials. Mathematical modeling of CNT-reinforced cement can be done effectively and efficiently to arrive at the mechanical properties and to reduce the number of experimental trials. Hence, an attempt is made to numerically study the effective mechanical properties of CNT-reinforced cement using the Representative Volume Element (RVE) method. The enhancement in its mechanical properties for different percentages of CNTs is studied in detail.
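The abstract does not spell out the RVE formulation, so the sketch below shows only the classical first-order Voigt/Reuss bounds that any RVE homogenization result for the effective modulus must fall between; the moduli and volume fraction are illustrative assumed values, not the paper's data.

```python
def voigt_reuss_bounds(E_fiber: float, E_matrix: float, vf: float):
    """Upper (Voigt, iso-strain) and lower (Reuss, iso-stress) bounds on the
    effective Young's modulus of a two-phase composite."""
    E_voigt = vf * E_fiber + (1 - vf) * E_matrix
    E_reuss = 1.0 / (vf / E_fiber + (1 - vf) / E_matrix)
    return E_reuss, E_voigt

# assumed values: CNT ~ 1000 GPa, cement paste ~ 20 GPa, 1% CNT by volume
lo, hi = voigt_reuss_bounds(1000.0, 20.0, 0.01)
print(f"effective modulus bounds: {lo:.1f} .. {hi:.1f} GPa")
```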

Immobilization of Aspergillus awamori 1-8 for Subsequent Pectinase Production

The overall objective of this research is a strain improvement technology for efficient pectinase production. A novel cell cultivation technology based on the immobilization of fungal cells was studied in long-term continuous fermentations. Immobilization was achieved using a new carrier material for the adsorption of the immobilized cultures, which was used for the immobilization of microorganisms for the first time. The effects of various nitrogen and carbon nutrition conditions on the biosynthesis of pectolytic enzymes in the Aspergillus awamori 1-8 strain were studied. The proposed cultivation technology, along with the optimization of media components for pectinase overproduction, increased pectinase productivity in Aspergillus awamori 1-8 by 7 to 8 times. The proposed technology can be applied successfully to the production of major industrial enzymes such as α-amylase, protease, and collagenase.

Multidimensional Visualization Tools for Analysis of Expression Data

Expression data analysis is based mostly on statistical approaches, which are indispensable for the study of biological systems. The large amounts of multidimensional data produced by high-throughput technologies are not completely served by biostatistical techniques and are usually complemented with visual, knowledge discovery, and other computational tools. In many cases in biological systems we can only speculate on the processes causing the observed changes, and it is during visual exploratory analysis of the data that a hypothesis is formed. We would like to show the usability of multidimensional visualization tools and promote their use in the life sciences. We survey and demonstrate some of the multidimensional visualization tools used in the process of data exploration, such as parallel coordinates and RadViz, and we extend them by combining them with the self-organizing map algorithm. We use a time course data set of transitional cell carcinoma of the bladder in our examples. Analysis of data with these tools has the potential to uncover additional relationships and non-trivial structures.
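A minimal sketch of one of the surveyed techniques, parallel coordinates, using pandas' built-in plot on synthetic data (the bladder carcinoma data set itself is not reproduced here):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(1)
# synthetic 'expression' profiles over four time points, two sample classes
df = pd.DataFrame(rng.normal(size=(30, 4)), columns=["t1", "t2", "t3", "t4"])
df["class"] = np.where(df["t1"] + df["t4"] > 0, "up", "down")

# each sample becomes a polyline across the four time-point axes
parallel_coordinates(df, "class", colormap="coolwarm")
plt.ylabel("expression (a.u.)")
plt.show()
```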

Biometric Methods and Implementation of Algorithms

Biometric measures of one kind or another have been used to identify people since ancient times, with handwritten signatures, facial features, and fingerprints being the traditional methods. More recently, systems have been built that automate the task of recognition using these methods and newer ones, such as hand geometry, voiceprints, and iris patterns. These systems have different strengths and weaknesses. This work consists of two sections. In the first section, we present an analytical and comparative study of common biometric techniques. The performance of each is reviewed and tabulated. The second section covers the actual implementation of the techniques under consideration, carried out using the state-of-the-art tool MATLAB. This tool makes it possible to effectively present the corresponding results.

A Novel In-Place Sorting Algorithm with O(n log z) Comparisons and O(n log z) Moves

In-place sorting algorithms play an important role in many fields, such as very large database systems, data warehouses, and data mining. Such algorithms maximize the amount of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the unsorted input array in place, producing segments that are ordered relative to each other but whose elements are yet to be sorted. This first phase requires linear time, while in the second phase the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. In the worst case, for an array of size n, the algorithm performs O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and the required number of moves are presented, along with the auxiliary storage requirements of the proposed algorithm.
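The abstract does not give the rearrangement rule, so the following is only a structural sketch of the two-phase idea: phase 1 partitions the array in place into segments that are ordered relative to each other, and phase 2 sorts each segment independently. Python's sorted is used in phase 2 for brevity; the paper's algorithm would use an O(1)-storage in-place sort such as heapsort there.

```python
def two_phase_sort(a: list) -> list:
    """Illustrative two-phase in-place sort (not the paper's exact algorithm)."""
    if len(a) < 2:
        return a

    # phase 1 (linear time): Lomuto partition around a pivot, so that every
    # element of the left segment is <= every element of the right segment
    pivot, i = a[-1], 0
    for j in range(len(a) - 1):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[-1] = a[-1], a[i]

    # phase 2: sort each segment of size z in O(z log z)
    a[:i] = sorted(a[:i])
    a[i + 1:] = sorted(a[i + 1:])
    return a

print(two_phase_sort([5, 2, 8, 3, 9, 1, 4]))  # [1, 2, 3, 4, 5, 8, 9]
```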

Interoperability in Component Based Software Development

Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM, and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate the reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communication, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns: specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on the assembly of components available on a local area network or on the net. These components must be located and identified in terms of available services and communication protocols before any request is made. The first part of the article introduces the basic concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction, and the last section shows how different models can communicate among themselves.

Effects of Multimedia-based Instructional Designs for Arabic Language Learning among Pupils of Different Achievement Levels

The purpose of this study is to investigate the effects of modality principles in instructional software on first-grade pupils' achievement in learning the Arabic language. Two modes of instructional software were systematically designed and developed: audio with images (AI), and text with images (TI). A quasi-experimental design was used in the study. The sample consisted of 123 male and female pupils from the IRBED Education Directorate, Jordan. The pupils were randomly assigned to one of the two modes. The independent variables comprised the two modes of the instructional software, the pupils' achievement levels in the Arabic language class, and gender. The dependent variable was the pupils' achievement in the Arabic language test. The theoretical framework of this study was based on Mayer's Cognitive Theory of Multimedia Learning. Four hypotheses were postulated and tested. Analyses of Variance (ANOVA) showed that pupils using the AI mode performed significantly better than those using the TI mode. This study concluded that the audio-with-images mode was an important aid to learning compared with the text-with-images mode.
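A minimal sketch of the kind of one-way ANOVA comparison reported here, on made-up scores (the study's actual data are not reproduced):

```python
from scipy.stats import f_oneway

# hypothetical test scores for the two instructional modes
ai_scores = [78, 85, 90, 74, 88, 92, 81]  # audio with images
ti_scores = [65, 70, 72, 68, 75, 71, 69]  # text with images

f_stat, p_value = f_oneway(ai_scores, ti_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p -> modes differ
```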

Research of the Main Indexes of Freshness of Anchovy (Engraulis encrasicolus Linnaeus, 1758) and Sardine (Sardina pilchardus Walbaum, 1792) of the Mediterranean

Anchovy (Engraulis encrasicolus) and sardine (Sardina pilchardus) are blue fish linked to the alimentary tradition of the Mediterranean. In this work we tested, for the first time, physical and enzymatic methods to verify the freshness of these two Mediterranean blue fish species, anchovy and sardine. In connection with the lowering of pH after the post-mortem stage, we observed an increase in the proteolytic activity of calpain and cathepsin; a significant increase was already present 2 h post-mortem.

The Hardware Implementation of a Novel Genetic Algorithm

This paper presents a novel genetic algorithm, termed the Optimum Individual Monogenetic Algorithm (OIMGA), and describes its hardware implementation. As the monogenetic strategy retains only the optimum individual, the memory requirement is dramatically reduced and no crossover circuitry is needed, thereby ensuring that the requisite silicon area is kept to a minimum. Consequently, depending on application requirements, OIMGA allows the investigation of solutions that warrant either larger GA populations or individuals of greater length. The results given in this paper demonstrate that both the performance of OIMGA and its convergence time are superior to those of existing hardware GA implementations. Local convergence is achieved in OIMGA by retaining elite individuals, while population diversity is ensured by continually searching for the best individuals in fresh regions of the search space.
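Since the abstract only outlines the strategy, the following is a software sketch of a monogenetic search in the spirit described: no crossover, only the best individual is retained, and fresh random restarts supply diversity. The parameters and toy fitness function are illustrative assumptions, not the paper's hardware design.

```python
import random

def monogenetic_search(fitness, length, generations=2000, restarts=8):
    """Mutation-only search in the spirit of OIMGA: a single individual is
    kept (no population, no crossover); random restarts provide diversity."""
    best, best_fit = None, float("-inf")
    for _ in range(restarts):
        ind = [random.randint(0, 1) for _ in range(length)]
        for _ in range(generations // restarts):
            cand = ind[:]
            cand[random.randrange(length)] ^= 1  # single-bit mutation
            if fitness(cand) >= fitness(ind):    # keep only the better one
                ind = cand
        if fitness(ind) > best_fit:
            best, best_fit = ind[:], fitness(ind)
    return best, best_fit

# toy objective: maximize the number of ones in a 32-bit string
print(monogenetic_search(sum, 32))
```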

An Investigation on Efficient Spreading Codes for Transmitter Based Techniques to Mitigate MAI and ISI in TDD/CDMA Downlink

We investigate efficient spreading codes for transmitter-based techniques in code division multiple access (CDMA) systems. The channel is considered to be known at the transmitter, as is usual in a time division duplex (TDD) system, where the channel is assumed to be the same on the uplink and downlink. For such a TDD/CDMA system, both bitwise and blockwise multiuser transmission schemes are considered, in which complexity is transferred to the transmitter side so that the receiver has minimal complexity. Different spreading codes are considered at the transmitter to spread the signal efficiently over the entire spectrum. The bit error rate (BER) curves portray the efficiency of the codes in the presence of multiple access interference (MAI) as well as inter-symbol interference (ISI).
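As a sketch of one standard family of spreading codes that such a comparison would plausibly include (the abstract does not list the specific codes studied), the snippet below generates orthogonal Walsh-Hadamard codes and checks their mutual orthogonality:

```python
import numpy as np

def walsh_codes(order: int) -> np.ndarray:
    """Return a 2**order x 2**order matrix whose rows are Walsh-Hadamard
    spreading codes (entries +1/-1, mutually orthogonal)."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh_codes(3)  # 8 codes of length 8
print(codes @ codes.T)  # 8 * identity matrix -> rows are orthogonal
```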

Spreading Dynamics of a Viral Infection in a Complex Network

We report a computational study of the spreading dynamics of a viral infection in a complex (scale-free) network. The final epidemic size distribution (FESD) was found to be unimodal or bimodal depending on the value of the basic reproductive number R0. The FESDs occurred on time scales long enough for the intermediate-time epidemic size distributions (IESDs) to be important for control measures. The usefulness of R0 for deciding on the timeliness and intensity of control measures was found to be limited by the multimodal nature of the IESDs and by its inability to inform on the speed at which the infection spreads through the population. A reduction of the transmission probability at the hubs of the scale-free network decreased the occurrence of the larger-sized epidemic events of the multimodal distributions. For effective epidemic control, an early reduction in transmission at the index cell and its neighbors was essential.
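A minimal sketch of the kind of simulation behind such results, under assumptions not stated in the abstract (discrete-time SIR dynamics with a one-step infectious period, a Barabási-Albert scale-free graph, and an illustrative transmission probability):

```python
import random
import networkx as nx

def final_size(G: nx.Graph, beta: float, seed: int) -> int:
    """Discrete-time SIR outbreak: each infected node infects each susceptible
    neighbor with probability beta, then recovers. Returns the epidemic size."""
    infected, recovered = {seed}, set()
    while infected:
        new = {v for u in infected for v in G.neighbors(u)
               if v not in infected and v not in recovered
               and random.random() < beta}
        recovered |= infected
        infected = new
    return len(recovered)

G = nx.barabasi_albert_graph(1000, 2)  # scale-free contact network
sizes = [final_size(G, 0.12, random.randrange(1000)) for _ in range(300)]
print("mean final epidemic size:", sum(sizes) / len(sizes))  # FESD sample
```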

Integrating Fast Karnaugh Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

In this paper a new fast simplification method is presented. The method realizes a Karnaugh map with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for the fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross-correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross-correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design of neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion, which is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural networks (MNNs) that divide the input space into several homogeneous regions. The approach is applied to implement the XOR function, 16 logic functions at the one-bit level, and a 2-bit digital multiplier. Compared with previous non-modular designs, a clear reduction in the order of computations and in hardware requirements is achieved.
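A one-dimensional sketch of the frequency-domain search idea (the paper's full K-map procedure is multi-variable; here a template of adjacent ones is located in a binary row via FFT-based cross-correlation):

```python
import numpy as np

def xcorr_fft(signal: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Cross-correlation computed in the frequency domain: correlation equals
    convolution with the reversed template, done as a product of FFTs."""
    n = len(signal) + len(template) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) *
                        np.fft.rfft(template[::-1], n), n)

row = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0, 0], float)  # one row of a map
template = np.ones(3)                                   # a group of 3 ones
corr = xcorr_fft(row, template)
print(int(corr.argmax()) - (len(template) - 1))         # 1: group starts here
```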

Complexity Analysis of Some Known Graph Coloring Instances

Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs whose optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) that are theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 vertices to a high of 864 vertices. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size, and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
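As a small sketch of the matrix operation the method is built on (the paper's full coloring procedure is not described in the abstract), the snippet below squares an adjacency matrix: entry (i, j) of A² counts walks of length two, i.e. common neighbors, which is the kind of structural information such a method can exploit when grouping vertices into color classes.

```python
import numpy as np

# adjacency matrix of a 5-cycle (chromatic number 3)
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]])

A2 = A @ A
print(A2)            # (A^2)[i, j] = number of common neighbors of i and j
print(np.diag(A2))   # the diagonal recovers the vertex degrees
```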

A Microstrip Antenna Design and Performance Analysis for RFID High Bit Rate Applications

Lately, interest has grown greatly in the use of RFID in an unprecedented range of applications. This is shown by the adoption of RFID capabilities by major software companies such as Microsoft, IBM, and Oracle in their major software products. For example, the Microsoft SharePoint 2010 workflow is now fully compatible with the RFID platform. In addition, Microsoft BizTalk Server is also capable of RFID sensor data acquisition. This will lead to applications that require high bit rates, long range, and multimedia content. Higher frequencies of operation have been designated for RFID tags, among them 2.45 and 5.8 GHz. A higher frequency means longer range and a higher bit rate, but the drawback is greater cost. In this paper we present a single-layer, low-profile patch antenna that operates at 5.8 GHz with a purely resistive input impedance of 50 Ω and close-to-directive radiation. We also propose a modification to the design in order to improve the operating bandwidth from 8.7 to 13.8.
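A minimal design sketch using the standard transmission-line model equations for a rectangular microstrip patch; this is the textbook starting point, not necessarily the paper's exact design, and the substrate permittivity and height are assumed values since the abstract does not state them.

```python
from math import sqrt

c = 3e8                    # speed of light (m/s)
f0 = 5.8e9                 # operating frequency, from the abstract
eps_r, h = 4.4, 1.6e-3     # assumed FR-4 substrate; not stated in the abstract

# patch width, effective permittivity, fringing extension, and patch length
W = c / (2 * f0) * sqrt(2 / (eps_r + 1))
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / sqrt(1 + 12 * h / W)
dL = 0.412 * h * (eps_eff + 0.3) * (W / h + 0.264) \
     / ((eps_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f0 * sqrt(eps_eff)) - 2 * dL

print(f"patch width  W = {W * 1e3:.2f} mm")
print(f"patch length L = {L * 1e3:.2f} mm")
```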