The Tag Authentication Scheme using Self-Shrinking Generator on RFID System

Since communication between tag and reader in an RFID system takes place over radio, anyone can access the tag and read its information. Moreover, a tag always replies with the same ID, so it is hard to distinguish a real tag from a fake one. Today's RFID systems therefore face several security problems. First, an unauthorized reader can easily read the ID information of any tag. Second, an adversary can easily cheat a legitimate reader using collected tag ID information, impersonating any legitimate tag. These problems are typically solved by encrypting the messages transmitted between tag and reader and by authenticating the tag. In this paper, to address these security problems in RFID systems, we propose a tag authentication scheme based on the self-shrinking generator (SSG). The SSG algorithm used in our scheme was proposed by W. Meier and O. Staffelbach at EUROCRYPT '94. It consists of only one LFSR and a selection logic that together generate a random stream, so it is well suited to hardware implementation on devices with extremely limited resources; furthermore, the SSG output at each step acts as a random stream, which allows us to design a lightweight authentication scheme that is secure against several network attacks. We therefore propose a novel tag authentication scheme that uses the SSG to encrypt the tag ID transmitted from tag to reader and thereby authenticates the tag.
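
As a concrete illustration of the shrinking rule, the sketch below generates an SSG keystream from a small LFSR and XORs it with a tag ID. The 16-bit register width, the tap positions, the seed, and the tag ID are all illustrative assumptions rather than parameters of the proposed scheme; a real design would use a longer register with a primitive feedback polynomial.

```python
# Hedged sketch of the self-shrinking generator (SSG). The 16-bit LFSR
# and its taps are illustrative only, not the scheme's actual parameters.

def lfsr_bits(state, taps=(0, 2, 3, 5), width=16):
    """Endless Fibonacci LFSR output stream (LSB-first)."""
    while True:
        out = state & 1
        fb = 0
        for t in taps:                              # XOR the tapped bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))
        yield out

def ssg_keystream(seed, nbits):
    """SSG rule: read LFSR bits in pairs (a, b); output b only when a == 1."""
    bits = lfsr_bits(seed)
    out = []
    while len(out) < nbits:
        a, b = next(bits), next(bits)
        if a == 1:
            out.append(b)
    return out

# Example: encrypt a 16-bit tag ID by XOR with the SSG keystream.
tag_id = 0xBEEF                                     # hypothetical tag ID
ks = ssg_keystream(seed=0xACE1, nbits=16)
cipher = tag_id ^ int("".join(map(str, ks)), 2)
print(hex(cipher))
```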

Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm

The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and for increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. Each feature value is first normalized by a linear equation, then scaled by an associated weight prior to training, testing, and classification. A kNN classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes the vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be significantly reduced.
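
A minimal sketch of the idea, assuming scikit-learn and the Iris data as stand-ins for the paper's classifiers and data sets: a simple truncation-selection GA evolves a weight vector, and each candidate is scored by kNN cross-validation accuracy on the weight-scaled features. The population size, mutation scale, and generation count are illustrative choices.

```python
# GA-driven feature weighting for a kNN classifier (illustrative sketch).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

def fitness(w):
    # Score a kNN classifier on the weight-scaled features.
    return cross_val_score(KNeighborsClassifier(5), X * w, y, cv=3).mean()

pop = rng.random((20, X.shape[1]))               # 20 random weight vectors
for gen in range(30):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]      # truncation selection
    mutants = parents[rng.integers(0, 10, size=10)] \
              + rng.normal(0.0, 0.1, size=(10, X.shape[1]))
    pop = np.vstack([parents, np.clip(mutants, 0.0, 1.0)])

best = pop[np.argmax([fitness(w) for w in pop])]
print(np.round(best, 2))   # near-zero weights mark droppable features
```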

Statistics over Lyapunov Exponents for Feature Extraction: Electroencephalographic Changes Detection Case

A new approach, based on the consideration that electroencephalogram (EEG) signals are chaotic, is presented for the automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for detecting electroencephalographic changes. Three types of EEG signals (recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs of an MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential for detecting electroencephalographic changes.
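
The dimensionality-reduction step can be pictured as follows; the sketch uses synthetic stand-in "exponents" rather than real EEG features, and scikit-learn's MLPClassifier, whose Adam/L-BFGS solvers differ from the Levenberg-Marquardt training used in the paper.

```python
# Reduce a set of Lyapunov exponents to summary statistics, then classify.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Suppose each signal yields 128 Lyapunov exponents; three EEG classes.
raw = rng.normal(size=(300, 128)) + np.repeat([0.0, 0.3, 0.6], 100)[:, None]
labels = np.repeat([0, 1, 2], 100)

def summarize(exponents):
    # Statistics over the exponent set: a 4-dimensional feature vector.
    return [exponents.max(), exponents.min(), exponents.mean(), exponents.std()]

features = np.array([summarize(row) for row in raw])
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(features, labels)
print(clf.score(features, labels))
```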

Investigation of Some Ergonomic and Psychological Strains of Common Military Protective Clothing

Protective clothing limits heat transfer and hampers task performance due to its increased weight, yet military protective clothing enables humans to operate in adverse environments. In the selection and evaluation of military protective clothing, attention should be given to heat strain, ergonomics, and fit, in addition to the actual protection it offers. Fifty healthy male subjects participated in the study. The subjects were dressed in shorts, T-shirts, socks, sneakers, and four different kinds of military protective clothing: CS, CSB, CS with NBC protection, and CS with NBC protection added. The ergonomic and psychological strains of each of the four outfits were investigated while the subjects walked on a treadmill (7 km/h) with a 19.7 kg backpack. The tests showed that the highest heart rate was found wearing the NBC-protection-added outfit; the highest temperatures were also observed wearing the NBC-protection-added outfit, followed by CS with NBC protection, CSB, and CS, respectively; and the highest value for thermal comfort (implying the worst thermal comfort) was observed wearing the NBC-protection-added outfit.

An Economical Operation Analysis Optimization Model for Heavy Equipment Selection

Optimizing equipment selection in heavy earthwork operations is critical to the success of any construction project. The objective of this research was to develop a computer model to assist contractors and construction managers in estimating the cost of heavy earthwork operations. An economical operation analysis was conducted for an equipment fleet, taking into consideration the owning and operating costs involved in earthwork operations. The model is developed in a Microsoft environment and can be integrated with other estimating and optimization models. In this study, the Caterpillar® Performance Handbook [5] was the main resource used to obtain specifications of the selected equipment. The model yields an optimum selection of the equipment fleet based not only on cost effectiveness but also on versatility. To validate the model, a case study of an actual dam construction project was used to quantify its degree of accuracy.
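
A toy version of the owning-and-operating comparison such a model automates is shown below; every figure is invented for illustration and is not drawn from the Caterpillar® Performance Handbook.

```python
# Toy owning-and-operating cost comparison; all figures are invented.
fleet = [
    # (name, owning $/hr, operating $/hr, production m3/hr)
    ("Scraper A", 95.0, 60.0, 180.0),
    ("Scraper B", 120.0, 70.0, 260.0),
]
for name, owning, operating, production in fleet:
    unit_cost = (owning + operating) / production   # $ per cubic metre moved
    print(f"{name}: {unit_cost:.2f} $/m3")
# The lowest unit cost wins on cost effectiveness; the model also
# weighs versatility across earthwork tasks.
```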

A New Heuristic Approach for Optimal Network Reconfiguration in Distribution Systems

This paper presents a novel approach for the optimal reconfiguration of radial distribution systems. Optimal reconfiguration involves selecting the best set of branches to be opened, one from each loop, such that the resulting radial distribution system achieves the desired performance. In this paper an algorithm based on simple heuristic rules is proposed that identifies an effective switch-status configuration of the distribution system for minimum loss. The proposed algorithm consists of two parts: the first determines the best switching combinations in all loops with minimum computational effort, and the second computes the power loss of the best switching combination found in the first part using load flows. To demonstrate the validity of the proposed algorithm, computer simulations are carried out on the 33-bus test system. The results show that the performance of the proposed method is better than that of other methods.
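
The two-part structure might look like the following sketch, where the branch names, the per-branch scores, and both scoring functions are invented stand-ins for the paper's heuristic ranking and load-flow loss calculation.

```python
# Schematic two-stage search: stage one ranks candidate switch
# combinations (one open branch per loop) with a cheap heuristic; stage
# two evaluates only the top candidates with a full load-flow loss solve.
from itertools import product

loops = [("s7", "s8", "s33"), ("s9", "s10", "s34")]   # hypothetical loop branches
heuristic = {"s7": 3.1, "s8": 2.4, "s33": 4.0,        # stand-in per-branch scores
             "s9": 1.7, "s10": 2.9, "s34": 3.5}

def estimate_loss(open_set):          # stage one: cheap heuristic ranking
    return sum(heuristic[s] for s in open_set)

def load_flow_loss(open_set):         # stage two: a load flow would run here
    return estimate_loss(open_set)    # placeholder only

candidates = sorted(product(*loops), key=estimate_loss)[:3]
best = min(candidates, key=load_flow_loss)
print("open branches:", best)
```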

A Novel Architecture for Wavelet based Image Fusion

In this paper, we focus on the fusion of images from different sources using multiresolution wavelet transforms. Based on reviews of popular image fusion techniques used in data analysis, different pixel- and energy-based methods are evaluated experimentally. A novel architecture with a hybrid algorithm is proposed, which applies a pixel-based maximum selection rule to the low-frequency approximations and a filter-mask-based fusion to the high-frequency details of the wavelet decomposition. The key feature of the hybrid architecture is that it combines the advantages of pixel- and region-based fusion in a single image, which can support the development of sophisticated algorithms that enhance edges and structural details. A Graphical User Interface is developed for image fusion to make the research outcomes available to the end user. To make the GUI usable for medical, industrial, and commercial activities without a MATLAB installation, a standalone executable application is also developed using the MATLAB Compiler Runtime.
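
A rough re-sketch of the hybrid rule in Python (the paper's implementation is in MATLAB) using PyWavelets: maximum selection on the approximation coefficients and an energy-mask fusion on the detail coefficients. The 3×3 mean-energy mask is an assumption; the paper's filter mask may differ.

```python
# Hybrid wavelet fusion sketch: max-selection on approximations,
# local-energy mask fusion on details.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse(img_a, img_b, wavelet="db2"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, wavelet)
    cA = np.maximum(cA1, cA2)                 # pixel-wise maximum selection

    def mask_fuse(d1, d2):
        # Keep the coefficient whose local (3x3) energy is larger.
        e1, e2 = uniform_filter(d1**2, 3), uniform_filter(d2**2, 3)
        return np.where(e1 >= e2, d1, d2)

    details = tuple(mask_fuse(a, b)
                    for a, b in [(cH1, cH2), (cV1, cV2), (cD1, cD2)])
    return pywt.idwt2((cA, details), wavelet)

# Example (hypothetical inputs): two equal-size grayscale float arrays.
# fused = fuse(image_source_1, image_source_2)
```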

A Soft Set based Group Decision Making Method with Criteria Weight

Molodtsov's soft set theory was originally proposed as a general mathematical tool for dealing with uncertainty problems. A matrix form of soft sets has been introduced and some of its properties have been discussed. However, existing soft-matrix formulations of the group decision making problem assume equal importance weights for all criteria, which does not reflect the decision makers' true opinion of each criterion. The aim of this paper is to propose a method for solving the group decision making problem that incorporates the importance of the criteria by using soft matrices in a more objective manner. The weight of each criterion is calculated using the Analytic Hierarchy Process (AHP). An example of a house selection process is given to illustrate the effectiveness of the proposed method.
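
The AHP weighting step can be sketched as follows: derive the criteria weights from a pairwise comparison matrix via its principal eigenvector. The 3×3 matrix below (say, price vs. location vs. size in a house selection) is invented for illustration.

```python
# AHP criteria weights from a pairwise comparison matrix (illustrative).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # price vs. location vs. size
              [1/3, 1.0, 2.0],     # reciprocal judgments
              [1/5, 1/2, 1.0]])
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])  # principal eigenvector
w = w / w.sum()                                 # normalize to sum to 1
print(np.round(w, 3))                           # criteria weights
```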

District Selection for Geotechnical Settlement Suitability Using GIS and Multi Criteria Decision Analysis: A Case Study in Denizli, Turkey

Multi criteria decision analysis (MCDA) draws on both data and experience, and is commonly used to solve problems with many parameters and uncertainties. GIS-supported solutions improve and speed up the decision process. Weighted grading, an MCDA method, is employed here to solve a geotechnical problem. In this study, the geotechnical parameters soil type, SPT (N) blow count, shear wave velocity (Vs), and depth of the underground water level (DUWL) were combined in MCDA within GIS. The settlement suitability of the municipal area was analyzed by this method in terms of geotechnical aspects. The MCDA results were compatible with geotechnical observations and experience. The method can be employed in geotechnically oriented microzoning studies provided the criteria are well evaluated.
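
A toy weighted-grading computation of the kind used here is sketched below; the criteria weights and district grades are invented and do not come from the Denizli case study.

```python
# Toy weighted grading for settlement suitability; all values invented.
weights = {"soil": 0.35, "spt_n": 0.25, "vs": 0.25, "duwl": 0.15}
districts = {
    "District A": {"soil": 4, "spt_n": 3, "vs": 5, "duwl": 2},
    "District B": {"soil": 2, "spt_n": 4, "vs": 3, "duwl": 5},
}
for name, grades in districts.items():
    score = sum(weights[c] * grades[c] for c in weights)  # weighted sum
    print(f"{name}: suitability score {score:.2f}")
```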

Selection and Exergy Analysis of Fuel Cell System to Meet all Energy Needs of Residential Buildings

In this paper a polymer electrolyte membrane (PEM) fuel cell power system, including a burner, steam reformer, heat exchanger, and water heater, is considered to meet the electrical, heating, cooling, and domestic hot water loads of a residential building located in Tehran. The system uses natural gas as fuel and works in CHP mode. Design and operating conditions of a PEM fuel cell system are considered in this study. The energy requirements of the residential building and the number of fuel cell stacks needed to meet them have been estimated. The method involves exergy analysis and entropy generation throughout the months of the year. Results show that all the energy needs of the building can be met with 12 fuel cell stacks at a nominal capacity of 8.5 kW. Exergy analysis of the CHP system shows that increasing the ambient air temperature from 1 °C to 40 °C increases entropy generation by 5.73%. The maximum entropy generation, occurring at hour 15 on June 15th and July 15th, is estimated at 12624 kW/K, and the entropy generation of the system over a year is estimated at 1004.54 GJ/K per year.

Mutation Rate for Evolvable Hardware

Evolvable hardware (EHW) refers to a self-reconfiguring hardware design in which the configuration is under the control of an evolutionary algorithm (EA). A lot of research has been done in this area, and several different EAs have been introduced. Every time a specific EA is chosen for solving a particular problem, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected in order to achieve the best results. In the last three decades much research has been carried out to identify the best parameters for these EA components on different test problems; however, different researchers propose different solutions. In this paper the behaviour of the mutation rate in a (1+λ) evolution strategy (ES) for designing logic circuits, which has not been analyzed before, is studied in depth. The mutation operator for an EHW system modifies the values of a logic cell's inputs, the cell type (for example, from AND to NOR), and the circuit output. Its behaviour is analyzed in terms of the number of generations, genotype redundancy, and the number of logic gates used in the evolved circuits. The experimental results indicate which mutation rates to use during evolution for the design and optimization of logic circuits. Research on the best mutation rate over the last 40 years is also summarized.
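
A hedged sketch of the mutation step in such a (1+λ) ES follows. The genotype layout (a feed-forward list of two-input cells over two primary inputs) and the gate set are assumptions for illustration; `fitness` would score a candidate circuit against a target truth table.

```python
# (1+lambda) ES mutation for logic-circuit evolution (illustrative layout).
import random

GATES = ["AND", "OR", "NAND", "NOR", "XOR"]

def mutate(genome, rate):
    g = {"cells": [dict(gate=c["gate"], ins=list(c["ins"]))
                   for c in genome["cells"]],
         "out": genome["out"]}
    for i, cell in enumerate(g["cells"]):
        if random.random() < rate:
            cell["gate"] = random.choice(GATES)           # change cell type
        for j in range(2):
            if random.random() < rate:
                cell["ins"][j] = random.randrange(2 + i)  # rewire a cell input
    if random.random() < rate:
        g["out"] = random.randrange(2 + len(g["cells"]))  # move circuit output
    return g

def es_step(parent, fitness, lam=4, rate=0.05):
    """(1+lambda): replace the parent only if a mutant is at least as fit."""
    best = max((mutate(parent, rate) for _ in range(lam)), key=fitness)
    return best if fitness(best) >= fitness(parent) else parent
```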

Iterative Process to Improve Simple Adaptive Subdivision Surfaces Method with Butterfly Scheme

Subdivision surfaces are applied to entire meshes in order to produce smooth surface refinements of a coarse mesh. Several schemes have been introduced in this area, each providing a set of rules that converge to smooth surfaces. However, computing and rendering all the vertices is inconvenient in terms of memory consumption and runtime during the subdivision process, and leads to a heavy computational load, especially at higher levels of subdivision. Adaptive subdivision is a method that subdivides only certain areas of the mesh while the rest is kept at a lower polygon count. Although adaptive subdivision acts only on selected areas, the smoothness of the produced surfaces can be preserved at a level similar to regular subdivision. Nevertheless, the adaptive subdivision process is burdened by two costs: the calculations needed to define the areas that require subdivision, and the removal of cracks created by the difference in subdivision depth between selected and unselected areas. Unfortunately, at higher levels of subdivision, adaptive subdivision still suffers from high memory consumption. This research introduces an iterative process for adaptive subdivision, applied to triangular meshes, that improves the previous adaptive method by reducing memory consumption. The iterative process performs acceptably better in memory usage and appearance, producing fewer polygons while preserving smooth surfaces.
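
For reference, the classic (unmodified) butterfly scheme inserts a new vertex on each selected edge as a fixed weighted sum of eight nearby vertices; the sketch below writes out that stencil. The vertex naming follows the usual butterfly layout and is not specific to the method proposed here.

```python
# Classic butterfly stencil for a new edge vertex: a, b are the edge
# endpoints, c, d the opposite vertices of the two triangles sharing the
# edge, and e, f, g, h the four outer "wing" vertices.
import numpy as np

def butterfly_edge_point(a, b, c, d, e, f, g, h):
    a, b, c, d, e, f, g, h = map(np.asarray, (a, b, c, d, e, f, g, h))
    return 0.5 * (a + b) + 0.125 * (c + d) - 0.0625 * (e + f + g + h)
```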

Deployment of Service Quality Characteristics

This work discusses an innovative methodology for the deployment of service quality characteristics. Four groups of organizational features that may influence the quality of services are identified: human resources, technology, planning, and organizational relationships. A House of Service Quality (HOSQ) matrix is built to extract the desired improvements in the service quality characteristics and to translate them into a hierarchy of important organizational features. The Mean Square Error (MSE) criterion enables the pinpointing of the few essential service quality characteristics to be improved as well as the selection of the vital organizational features. The method was implemented in an engineering supply enterprise and provided useful information on its vital service dimensions.

The Effect of Increment in Simulation Samples on a Combined Selection Procedure

Statistical selection procedures are used to select the best simulated system from a finite set of alternatives. In this paper, we present a procedure that can be used to select the best system when the number of alternatives is large. The proposed procedure combines Ranking and Selection with Ordinal Optimization. In order to improve the performance of Ordinal Optimization, the Optimal Computing Budget Allocation (OCBA) technique is used to determine the best simulation lengths for all simulated systems and to reduce the total computation time. We also examine the effect of increasing the number of simulation samples on the combined procedure. The results of a numerical illustration clearly show the effect of this increase on the proposed combined selection procedure.
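
For reference, the OCBA allocation can be sketched as below, following the standard ratios from Chen et al.: for non-best designs, N_i/N_j = (s_i/d_i)^2 / (s_j/d_j)^2, where d_i is the mean gap to the best design b, and N_b = s_b * sqrt(sum over i of (N_i/s_i)^2). The sample means and standard deviations are invented, and a smaller mean is taken as better.

```python
# OCBA allocation ratios (illustrative data).
import numpy as np

means = np.array([1.00, 1.20, 1.50, 2.00])   # smaller is better here
stds  = np.array([0.30, 0.40, 0.20, 0.50])
b = int(means.argmin())                      # current best design
mask = np.arange(len(means)) != b
delta = means - means[b]                     # gaps to the best

ratio = np.empty_like(means)
ratio[mask] = (stds[mask] / delta[mask]) ** 2
ratio[b] = stds[b] * np.sqrt(np.sum((ratio[mask] / stds[mask]) ** 2))
alloc = ratio / ratio.sum()                  # fraction of budget per design
print(np.round(alloc, 3))
```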

The Effects of Shot and Grit Blasting Process Parameters on Steel Pipes Coating Adhesion

The adhesion strength of the exterior or interior coating of steel pipes is very important. Increasing coating adhesion extends the coating's lifetime, raises the safety factor of the transmission pipeline, and reduces the corrosion rate and costs. Steel pipe surfaces are prepared before coating by shot and grit blasting, a mechanical method. Parameters affecting this process include abrasive particle size, distance to the surface, abrasive flow rate, the abrasive's physical properties and shape, abrasive selection, the type and power of the machine, the required surface cleanliness standard, roughness, blasting time, and ambient humidity. This research aimed to find conditions that improve surface preparation, adhesion strength, and the corrosion resistance of the coating. This paper therefore studies the effects of varying the abrasive flow rate, abrasive particle size, and blasting time on steel surface roughness, as well as the effect of over-blasting, using a centrifugal blasting machine. A number of steel samples (according to API 5L X52) were prepared and coated with epoxy powder, and the adhesion strength of the coating was compared using the pull-off test. The results show that increasing the abrasive particle size and flow rate increases the steel surface roughness and coating adhesion strength, but increasing the blasting time leads to over-blasting, which raises the surface temperature and hardness and decreases the steel surface roughness and coating adhesion strength.

Distributed Relay Selection and Channel Choice in Cognitive Radio Network

In this paper, we study cooperative communications in which multiple cognitive radio (CR) transmit-receive pairs competitively maximize their own throughputs. In CR networks, the influence of primary users and the spectrum availability usually differ among CR users. Because of the existence of multiple relay nodes and the differing spectrum availability, each CR transmit-receive pair must not only select a relay node but also choose an appropriate channel. For this distributed problem, we propose a game theoretic framework and apply a regret-matching learning algorithm that converges to a correlated equilibrium. We further formulate a modified regret-matching learning algorithm that is fully distributed and uses only the local information of each CR transmit-receive pair, making it more practical and suitable for cooperative communications in CR networks. Simulation results show that the algorithm converges and that the modified learning algorithm achieves performance comparable to the original regret-matching learning algorithm.
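
A minimal regret-matching sketch in the Hart and Mas-Colell style is given below for a single CR pair choosing a joint (relay, channel) action. The payoff function is a random stand-in for locally measured throughput; in particular, the counterfactual payoffs computed here would, in the paper's modified algorithm, be estimated from local information only.

```python
# Regret matching over joint (relay, channel) actions (illustrative).
import numpy as np

rng = np.random.default_rng(0)
actions = [(r, c) for r in range(3) for c in range(2)]  # 3 relays x 2 channels
regret = np.zeros(len(actions))

def payoff(a):
    # Stand-in throughput: relay index and channel index shift the mean.
    return a[0] * 0.2 + a[1] * 0.5 + rng.normal(scale=0.1)

for t in range(1000):
    pos = np.maximum(regret, 0.0)
    p = pos / pos.sum() if pos.sum() > 0 else np.full(len(actions), 1 / len(actions))
    k = rng.choice(len(actions), p=p)        # play proportional to positive regret
    u = payoff(actions[k])
    # Regret update: gain foregone by not having played each alternative.
    regret += np.array([payoff(a) for a in actions]) - u

# The action with the largest cumulative regret is the long-run favourite.
print("preferred action:", actions[int(np.argmax(regret))])
```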

On Speeding Up Support Vector Machines: Proximity Graphs Versus Random Sampling for Pre-Selection Condensation

Support vector machines (SVMs) are considered to be the best machine learning algorithms for minimizing the predictive probability of misclassification. However, their drawback is that for large data sets the computation of the optimal decision boundary is a time-consuming function of the size of the training set. Hence several methods have been proposed to speed up the SVM algorithm. Here three methods used to speed up the computation of SVM classifiers are compared experimentally on a musical genre classification problem. The simplest method pre-selects a random sample of the data before applying the SVM algorithm. Two further methods use proximity graphs to pre-select data that lie near the decision boundary: one uses k-Nearest Neighbor graphs and the other Relative Neighborhood Graphs.
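
Two of the pre-selection ideas can be sketched as follows with scikit-learn on synthetic data (the paper's task is musical genre classification): random sampling keeps an arbitrary 10%, while the kNN-graph variant keeps only points with an opposite-class neighbour, i.e. points likely to lie near the decision boundary. The sample sizes and k are illustrative choices.

```python
# Random-sample vs. kNN-graph condensation before SVM training.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

# 1) random-sample condensation: keep 10% of the data.
idx = rng.choice(len(X), size=400, replace=False)
svm_rand = SVC(kernel="rbf").fit(X[idx], y[idx])

# 2) kNN-graph condensation: keep points with an opposite-class neighbour.
_, nbrs = NearestNeighbors(n_neighbors=6).fit(X).kneighbors(X)
border = np.array([(y[n[1:]] != y[i]).any() for i, n in enumerate(nbrs)])
svm_knn = SVC(kernel="rbf").fit(X[border], y[border])

print(svm_rand.score(X, y), svm_knn.score(X, y))
```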

A Subjectively Influenced Router for Vehicles in a Four-Junction Traffic System

A subjectively influenced router for vehicles in a four-junction traffic system is presented. The router is based on a 3-layer Backpropagation Neural Network (BPNN) and a greedy routing procedure. The BPNN determines vehicle priorities based on subjective criteria; these criteria and the routing procedure depend on the user's routing plan for the vehicles. The routing procedure selects vehicles from their junctions based on their priorities and routes them concurrently through the traffic system. That is, when the router is provided with a desired vehicle selection criterion and routing procedure, it routes vehicles with a reasonable junction clearing time. The cost evaluation of the router determines its efficiency. In the case of a routing conflict, the router routes the vehicles in consecutive order and quarantines faulty vehicles. The simulations presented indicate that this approach is an effective strategy for structuring a subjective vehicle router.

Enhanced Performance of Fading Dispersive Channel Using Dynamic Frequency Hopping (DFH)

Techniques are examined to overcome the performance degradation caused by channel dispersion, using slow frequency hopping (SFH) with dynamic frequency hopping (DFH) pattern adaptation. In DFH systems, the frequency slots are selected by continuously monitoring the quality of all frequencies available in the system and modifying the hopping pattern of each individual link, replacing slots whose signal-to-interference ratio (SIR) measurement falls below a required threshold. Simulation results show the improvements in BER obtained by DFH in comparison with matched frequency hopping (MFH), random frequency hopping (RFH), and multi-carrier code division multiple access (MC-CDMA) in multipath slowly fading dispersive channels, using a generalized bandpass two-path transfer-function model, and show the improvement obtained according to the threshold selection.
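
A toy version of the DFH adaptation rule reads as follows: monitor the SIR of every frequency and swap out any slot in the hopping pattern whose SIR falls below the threshold, drawing replacements from the best currently unused frequencies. All numbers below are invented.

```python
# Toy DFH slot replacement based on per-frequency SIR monitoring.
import numpy as np

rng = np.random.default_rng(3)
n_freqs, pattern_len, threshold_db = 20, 6, 5.0
pattern = [int(f) for f in rng.choice(n_freqs, pattern_len, replace=False)]

sir_db = rng.normal(8.0, 3.0, n_freqs)     # monitored SIR per frequency (dB)
for i, f in enumerate(pattern):
    if sir_db[f] < threshold_db:
        unused = [g for g in np.argsort(sir_db)[::-1] if g not in pattern]
        pattern[i] = int(unused[0])        # swap in the best unused slot
print(pattern)
```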

A Semi-Fragile Watermarking Scheme for Color Image Authentication

In this paper, a semi-fragile watermarking scheme is proposed for color image authentication. In this scheme, the color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. Each channel is divided into 4×4 non-overlapping blocks, and from each block a 2×2 sub-block is selected. The embedding space is created by setting the two LSBs of the selected sub-block to zero; this space holds the authentication and recovery information. For verification, authentication and parity bits, denoted 'a' and 'p', are computed for each 2×2 sub-block. For recovery, the intensity mean of each 2×2 sub-block is computed and encoded in six to eight bits, depending on the channel selected. The size of the sub-block is important for correct localization and fast computation. For watermark distribution, a 2D torus automorphism is implemented using a private key to obtain a secure mapping of blocks. The perceptibility of the watermarked image is quite reasonable both subjectively and objectively. Our scheme is oblivious, correctly localizes tampering, and recovers the original work with probability near one.
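
The embedding-space idea can be pictured with the sketch below: zero the two LSBs of a 2×2 sub-block (creating room for the authentication and recovery bits) and quantize the sub-block's intensity mean for later recovery. The scheme's actual per-channel bit layout (six to eight bits) and the computation of the 'a' and 'p' bits are not reproduced here; this is illustrative only.

```python
# Create LSB embedding space in a 2x2 sub-block and quantize its mean.
import numpy as np

sub_block = np.array([[200, 198],
                      [197, 201]], dtype=np.uint8)   # hypothetical pixels
cleared = sub_block & 0b11111100      # two LSBs zeroed: room for payload bits
mean6 = int(cleared.mean()) >> 2      # intensity mean quantized to 6 bits
print(cleared.tolist(), format(mean6, "06b"))
```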