Fuzzy Fingerprint Vault using Multiple Polynomials

Fuzzy fingerprint vault is a recently developed cryptographic construct based on the polynomial reconstruction problem to secure critical data with fingerprint data. However, previous approaches are not applicable to fingerprints with only a few minutiae, since they use a fixed polynomial degree without considering the number of minutiae. To solve this problem, we adapt the degree of the polynomial to the number of minutiae extracted from each user. We also apply multiple polynomials to avoid the possible security degradation of the simple solution (i.e., using a single low-degree polynomial). Based on the experimental results, our method makes a possible attack 2^192 times more difficult than using a low-degree polynomial, while still being able to verify users with only a few minutiae.
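
As a rough sketch of the vault idea under stated assumptions (the field size, the degree rule, and the chaff generation below are illustrative, not the authors' parameters): the secret is encoded as polynomial coefficients, the polynomial is evaluated at each encoded minutia, and the genuine points are hidden among chaff; adapting the degree to the number of minutiae keeps reconstruction feasible for users with few minutiae.

```python
import random

P = 65537  # small prime field, chosen only for illustration

def lock_vault(minutiae, coeffs, n_chaff=200):
    """Evaluate the secret polynomial at each encoded minutia and hide
    the genuine points among random chaff points."""
    genuine = [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
               for x in minutiae]
    used = set(minutiae)
    chaff = []
    while len(chaff) < n_chaff:
        x, y = random.randrange(P), random.randrange(P)
        if x not in used:          # chaff must not reuse a genuine abscissa
            used.add(x)
            chaff.append((x, y))
    vault = genuine + chaff
    random.shuffle(vault)
    return vault

minutiae = [1021, 2301, 4099, 5120, 6007, 7331]            # encoded minutia values
degree = max(2, len(minutiae) - 2)                         # degree adapted to minutia count
secret = [random.randrange(P) for _ in range(degree + 1)]  # polynomial coefficients
print(len(lock_vault(minutiae, secret)), "vault points, polynomial degree", degree)
```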

Trust Management for Pervasive Computing Environments

Trust is essential for further and wider acceptance of contemporary e-services. It was first addressed almost thirty years ago in the Trusted Computer System Evaluation Criteria standard by the US DoD. But this and other approaches proposed in that period actually addressed security. Roughly ten years ago, methodologies followed that addressed the trust phenomenon at its core; they were based on Bayesian statistics and its derivatives, while some approaches were based on game theory. However, trust is a manifestation of judgment and reasoning processes. It has to be dealt with in accordance with this fact and adequately supported in the cyber environment. On the basis of results in the field of psychology and our own findings, a methodology called qualitative algebra has been developed that deals with so-far-overlooked elements of the trust phenomenon. It complements existing methodologies and provides a basis for a practical technical solution that supports the management of trust in contemporary computing environments. Such a solution is presented at the end of this paper.

Application of Reliability Prediction Model Adapted for the Analysis of the ERP System

This paper presents the possibilities of using the Weibull statistical distribution to model the distribution of defects in ERP systems. A case study follows, which examines helpdesk records of defects that were reported as the result of one ERP subsystem upgrade. The applied modeling produces a reliability model of the ERP system from the user perspective, with estimated parameters such as the expected maximum number of defects in one day or the predicted minimum number of defects between two upgrades. The applied measurement-based analysis framework proves suitable for predicting future states of the reliability of the observed ERP subsystems.
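
A minimal sketch of the kind of Weibull fit described above; the defect report times are invented placeholders, not the paper's helpdesk records, and SciPy's weibull_min is used for a two-parameter fit:

```python
import numpy as np
from scipy.stats import weibull_min

# hypothetical times (days after the upgrade) at which defects were reported
report_times = np.array([0.5, 1.2, 1.4, 2.0, 2.5, 3.1, 4.0, 5.5, 7.2, 9.8,
                         13.0, 17.5, 22.0, 29.0, 37.0])

# fit the two-parameter Weibull distribution (location fixed at 0)
shape, loc, scale = weibull_min.fit(report_times, floc=0)
print(f"shape={shape:.2f}  scale={scale:.2f}")

# e.g. probability that a defect is reported within the first 7 days
print("P(report <= 7 days) =",
      weibull_min.cdf(7.0, shape, loc=0, scale=scale))
```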

A Propagator Method-like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies

In this paper, a novel method is proposed for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak search approach. The paper presents a variant of the Propagator Method (PM), in which a collaborative approach of SUMWE and the Propagator Method is applied to estimate multiple real-valued sine wave frequencies. A new data model is proposed in which the dimension of the signal subspace equals the number of frequencies present in the observation, whereas in the conventional MUSIC method for real-valued sinusoids the signal subspace dimension is twice the number of frequencies. A statistical analysis of the proposed method is carried out, and an explicit expression for the asymptotic (large-sample) mean-squared error (MSE), i.e., the variance of the estimation error, is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated through numerical examples. The proposed method achieves high estimation accuracy and frequency resolution at low SNR, which is verified by simulations comparing it with the conventional MUSIC, ESPRIT, and Propagator Methods.
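
Since the abstract contrasts the proposed estimator with conventional MUSIC (where the signal subspace of K real sinusoids has dimension 2K), a brief baseline sketch of conventional MUSIC for real sinusoids may help fix ideas; this is not the proposed PM/SUMWE-based method, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
freqs, N, L, snr_db = [0.12, 0.27], 256, 32, 10        # normalized frequencies
n = np.arange(N)
x = sum(np.cos(2 * np.pi * f * n) for f in freqs)
x += rng.standard_normal(N) * 10 ** (-snr_db / 20)      # additive Gaussian noise

# sample covariance from overlapping length-L snapshots
snaps = np.array([x[i:i + L] for i in range(N - L + 1)])
R = snaps.T @ snaps / snaps.shape[0]

# noise subspace: for K real sinusoids the signal subspace has dimension 2K
K = len(freqs)
w, V = np.linalg.eigh(R)             # eigenvalues in ascending order
En = V[:, :L - 2 * K]                # noise-subspace eigenvectors

grid = np.linspace(0.01, 0.49, 2000)
a = np.exp(-2j * np.pi * np.outer(np.arange(L), grid))  # steering vectors
pseudo = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

# simple peak search: keep the K largest local maxima of the pseudospectrum
peaks = sorted(((pseudo[i], grid[i]) for i in range(1, len(grid) - 1)
                if pseudo[i - 1] < pseudo[i] > pseudo[i + 1]), reverse=True)
print("estimated frequencies:", sorted(f for _, f in peaks[:K]))
```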

Challenges of Irrigation Water Supply in Croplands of Arid Regions and their Environmental Consequences – A Case Study in the Dez and Moghan Command Areas of Iran

Renewable water resources are crucial production variables in arid and semi-arid regions where intensive agriculture is practiced to meet the ever-increasing demand for food and fiber. This is crucial for the Dez and Moghan command areas, where water delivery problems and adverse environmental issues are widespread. This paper aims to identify major problem areas using on-farm surveys of 200 farmers, agricultural extensionists, and water suppliers, complemented by secondary data and field observations during the 2010-2011 cultivation season. The SPSS package was used to analyze and synthesize the data. Results indicated inappropriate canal operations in both schemes, though there was no unanimity about the underlying causes. Inequitable and inflexible distribution was found to be rooted in deficient hydraulic structures, particularly in the main and secondary canals. The inadequacy and inflexibility of the water scheduling regime were the underlying causes of recurring pest and disease outbreaks, which often led to declines in crop yield and quality. Although these effects were not disputed, the water suppliers were not prepared to link them to deficiencies in the operation of the main and secondary canals. They rather attributed them to the prevailing salinity, alkalinity, water table fluctuations, and the leaching of valuable agro-chemical inputs from the plants' root zone, with far-reaching consequences. Examples include the pollution of ground and surface water resources due to over-irrigation at the farm level, which falls under the growers' own responsibility. Poor irrigation efficiency and adverse environmental problems were attributed to deficient and outdated farming practices that were in turn rooted in poor extension programs and irrational water charges.

A New Hybrid Optimization Method for Optimum Distribution Capacitor Planning

This work presents a new algorithm based on a combination of fuzzy reasoning (FUZ), Dynamic Programming (DP), and a Genetic Algorithm (GA) for capacitor allocation in distribution feeders. The problem formulation considers two distinct objectives related to the total cost of power loss and the total cost of capacitors, including purchase and installation costs. The resulting formulation is a multi-objective, non-differentiable optimization problem. The proposed method uses fuzzy reasoning for siting capacitors in radial distribution feeders, DP for sizing, and finally GA for finding the optimum shape of the membership functions used in the fuzzy reasoning stage. The proposed method has been implemented in a software package, and its effectiveness has been verified on a 9-bus radial distribution feeder. A comparison between the proposed method and similar methods from other works shows its effectiveness for solving the optimum capacitor planning problem.
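
For reference, a typical capacitor-planning objective (generic notation, not necessarily the exact formulation of this paper) combines the two cost terms mentioned above, the cost of energy losses and the fixed plus size-dependent cost of the installed capacitors:

```latex
\min F \;=\; K_e \sum_{i=1}^{L} T_i \, P_{\mathrm{loss},i}
\;+\; \sum_{j=1}^{n_c} \left( K_j^{\mathrm{inst}} + K_j^{\mathrm{cap}} \, Q_{cj} \right)
```

where K_e is the cost of energy, T_i and P_loss,i are the duration and power loss of load level i, and Q_cj is the size of the capacitor installed at candidate bus j.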

Data Gathering Protocols for Wireless Sensor Networks

Sensor network applications are often data centric and involve collecting data from a set of sensor nodes for delivery to various consumers. Typically, nodes in a sensor network are resource-constrained, and hence the algorithms operating in these networks must be efficient. There may be several algorithms available implementing the same service, and efficiency considerations may require a sensor application to choose the best-suited one. In this paper, we present a systematic evaluation of a set of algorithms implementing the data gathering service. We propose a modular infrastructure for implementing such algorithms in TOSSIM, with separate configurable modules for tasks such as interest propagation, data propagation, aggregation, and path maintenance. By appropriately configuring these modules, we obtain a number of data gathering algorithms, each of which incorporates a different set of heuristics for optimizing performance. We have performed comprehensive experiments to evaluate the effectiveness of these heuristics, and we present results from our experimentation efforts.
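
A schematic sketch, in Python rather than the TOSSIM/nesC setting of the paper, of how such a protocol might be assembled from separately configurable modules; all module names and behaviors here are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DataGatheringProtocol:
    interest_propagation: Callable[[List[int]], None]   # e.g. flooding vs. gossiping
    data_propagation: Callable[[int, float], None]      # e.g. single-path vs. multipath
    aggregation: Callable[[List[float]], float]         # e.g. average, min, max
    path_maintenance: Callable[[], None]                 # e.g. periodic vs. on-failure repair

def flood(nodes): print("flooding interest to", len(nodes), "nodes")
def single_path(node, value): print(f"node {node} forwards {value} toward the sink")
def average(values): return sum(values) / len(values)
def repair_on_failure(): print("repairing routes only when a link breaks")

# one possible configuration of the four modules
protocol = DataGatheringProtocol(flood, single_path, average, repair_on_failure)
protocol.interest_propagation(list(range(20)))
print("aggregated reading:", protocol.aggregation([21.5, 22.0, 20.7]))
```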

In vitro Anti-tubercular Screening of Newly Synthesized Benzimidazole Derivatives

A series of 1-(1H-benzimidazol-2-yl)-3-(substituted phenyl)-2-propen-1-ones were allowed to react with hydrazine hydrate and phenyl hydrazine in subsequent reactions to obtain pyrazoline and phenyl pyrazoline derivatives. All the compounds were entered for screening at the Tuberculosis Antimicrobial Acquisition and Coordinating Facility (TAACF) for their in vitro antibacterial activity against the Mycobacterium tuberculosis H37Rv strain (ATCC 27294) using the Microplate Alamar Blue Assay (MABA) susceptibility test. The results are expressed as MIC (minimum inhibitory concentration) in μg/mL. Among the fifteen compounds, eight were found to have MIC values less than 10 μg/mL. These were subjected to a cytotoxicity assay in VERO cells to determine CC50 (50% cytotoxic concentration) values, and finally the SI (selectivity index) was calculated. Compound (XV), 2-[5-(4-fluorophenyl)-1-phenyl-4,5-dihydro-1H-3-pyrazolyl]-1H-benzimidazole, was considered the best candidate of the series and could be a good starting point for developing new lead compounds in the fight against tuberculosis.

Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise

Support Vector Machine (SVM) is a statistical learning tool built on the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive colored Gaussian noise (ACGN). The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to a conventional optimal model-based detector for binary signaling driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially in the low SNR (signal-to-noise ratio) range. For large SNR, the performance of the SVM is similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurs at a higher SNR as the noise environment becomes more hostile.
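
A minimal sketch of the comparison in a plain AWGN setting; the Rayleigh fading, colored noise, and Doppler effects studied in the paper are not modeled here, and in pure AWGN the matched filter is optimal, so the SVM can only be expected to approach it. All parameters are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
L = 8                          # samples per BPSK bit (rectangular pulse)
sigma = 1.0                    # noise standard deviation per sample

def make_data(n_bits):
    bits = rng.choice([-1.0, 1.0], size=n_bits)
    waveforms = np.repeat(bits[:, None], L, axis=1)
    return waveforms + sigma * rng.standard_normal((n_bits, L)), bits

X_train, y_train = make_data(2000)
X_test, y_test = make_data(5000)

# SVM detector trained on noisy received waveforms
svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
ber_svm = np.mean(svm.predict(X_test) != y_test)

# matched filter baseline: correlate with the unit pulse, threshold at zero
ber_mf = np.mean(np.sign(X_test.sum(axis=1)) != y_test)
print(f"BER  SVM={ber_svm:.4f}  matched filter={ber_mf:.4f}")
```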

A Study of Underactuated Dynamic Systems by Comparing Minimum Energy and Minimum Jerk Problems

This paper deals with underactuated dynamic systems, such as a spring-mass-damper system, in which the number of control variables is less than the number of state variables. In order to apply optimal control, controllability must first be checked. Many objective functions can be selected as the goal of the optimal control, such as minimum energy, maximum energy, and minimum jerk. Since the objective function is the first priority, a desired secondary goal may not fit into the objective function format, and a vector-valued cost should also be avoided. This paper therefore illustrates the underactuated dynamic system problem through the simplest comparison to deal with: minimum energy versus minimum jerk.
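
For reference, typical forms of the two competing objectives (generic expressions, not necessarily the exact formulation used in the paper) are the integrated squared control effort and the integrated squared jerk, both minimized subject to the system dynamics and boundary conditions:

```latex
J_{\text{energy}} = \int_{0}^{T} u(t)^{2}\,dt ,
\qquad
J_{\text{jerk}} = \int_{0}^{T} \dddot{x}(t)^{2}\,dt .
```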

3D Network-on-Chip with on-Chip DRAM: An Empirical Analysis for Future Chip Multiprocessor

With the increasing number of on-chip components and the critical requirement for processing power, the Chip Multiprocessor (CMP) has gained wide acceptance in both academia and industry during the last decade. However, conventional bus-based on-chip communication schemes suffer from very high communication delay and low scalability in large-scale systems. The Network-on-Chip (NoC) has been proposed to solve the bottleneck of parallel on-chip communication by applying different network topologies that separate the communication phase from the computation phase. Observing that the memory bandwidth of the communication between on-chip components and off-chip memory has become a critical problem even in NoC-based systems, in this paper we propose a novel 3D NoC with on-chip Dynamic Random Access Memory (DRAM) in which different layers are dedicated to different functionalities such as processors, cache, or memory. Results show that, by using the proposed architecture, average link utilization is reduced by 10.25% for SPLASH-2 workloads, and the proposed design requires 1.12% fewer execution cycles than the traditional design on average.

Developing a Statistical Model for Electromagnetic Environment for Mobile Wireless Networks

The analysis of the electromagnetic environment using deterministic mathematical models is limited by the impossibility of analyzing a large number of interacting network stations with a priori unknown parameters, a situation characteristic, for example, of mobile wireless communication networks. One of the tasks of the tools used in the design, planning, and optimization of mobile wireless networks is to simulate the electromagnetic environment using mathematical modelling methods, including computer experiments, and to estimate its effect on radio communication devices. This paper proposes the development of a statistical model of the electromagnetic environment of a mobile wireless communication network by describing the parameters and factors affecting it, including the propagation channel, and their statistical models.

Building Facade Study in Lahijan City, Iran: The Impact of Facade's Visual Elements on Historical Image

Buildings are a significant part of cities and play a main role in organizing and arranging the city's appearance, which in turn affects the city image. Building facades, as a connection between inner and outer space, have a major role in the city image, and people's evaluations classify them as conveying a rich or a poor image, depending on the visual architectural and urban elements of the facades. The buildings on Karimi Street in Lahijan, a city in the north of Iran, contain a variety of facade types that have shaped the city image in the historical part of Lahijan while reflecting the identity of Iranian cities. This study attempts to identify, based on public evaluation, the architectural and urban elements that influence the image of building facades in the historical area. A quantitative method was used, and the data were collected through a questionnaire survey. The results show that architectural style, color, shape, and design were evaluated by people as the most important factors and should be considered in future development. In fact, a rich architectural style with strong design creates a strong city image, while weak design creates a poor city image.

Speech Activated Automation

This article presents a simple way to perform programmed voice commands for interfacing with commercial digital and analogue input/output PCI cards used in robotics and automation applications. Robots and automation equipment can "listen" to voice commands and perform several different tasks, approaching human behavior and improving human-machine interfaces for the automation industry. Since most PCI digital and analogue input/output cards are sold with several DLLs included (for use with different programming languages), it is possible to add speech recognition capability using a standard speech recognition engine compatible with the programming languages used. In this work, a Visual Basic 6 (the world's most popular language) application was created that listens to several voice commands and is capable of communicating directly with several standard 128-channel digital I/O PCI cards, used to control complete automation systems with up to (number of boards used) x 128 sensors and/or actuators.
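
An analogous sketch in Python rather than Visual Basic 6 (an assumption made purely for illustration); speech recognition uses the SpeechRecognition package, and write_digital_output is a hypothetical stand-in for the card vendor's DLL call:

```python
import speech_recognition as sr

def write_digital_output(board, channel, value):
    # placeholder for the vendor-specific PCI card DLL call
    print(f"board {board}, channel {channel} -> {value}")

# hypothetical mapping from recognized phrases to digital outputs
COMMANDS = {
    "start conveyor": (0, 5, 1),
    "stop conveyor": (0, 5, 0),
    "open valve": (1, 12, 1),
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    print("Say a command...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio).lower()
    if text in COMMANDS:
        write_digital_output(*COMMANDS[text])
    else:
        print("unknown command:", text)
except sr.UnknownValueError:
    print("could not understand audio")
```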

Using Fuzzy Numbers in Heavy Aggregation Operators

We consider different types of aggregation operators such as the heavy ordered weighted averaging (HOWA) operator and the fuzzy ordered weighted averaging (FOWA) operator. We introduce a new extension of the OWA operator called the fuzzy heavy ordered weighted averaging (FHOWA) operator. The main characteristic of this aggregation operator is that it deals with uncertain information represented in the form of fuzzy numbers (FN) in the HOWA operator. We develop the basic concepts of this operator and study some of its properties. We also develop a wide range of families of FHOWA operators such as the fuzzy push up allocation, the fuzzy push down allocation, the fuzzy median allocation and the fuzzy uniform allocation.
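
A minimal sketch of the FHOWA idea under simplifying assumptions: arguments are triangular fuzzy numbers ordered by a centroid defuzzification, and the heavy OWA weight vector is allowed to sum to any value between 1 and n; the weights and data below are illustrative:

```python
# triangular fuzzy number (TFN) represented as a triple (a, b, c)

def defuzzify(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0       # centroid of a triangular fuzzy number

def fhowa(tfns, weights):
    # order arguments from largest to smallest defuzzified value
    ordered = sorted(tfns, key=defuzzify, reverse=True)
    # component-wise weighted sum of the ordered TFNs
    agg = [0.0, 0.0, 0.0]
    for w, (a, b, c) in zip(weights, ordered):
        agg[0] += w * a
        agg[1] += w * b
        agg[2] += w * c
    return tuple(agg)

arguments = [(2, 3, 4), (5, 6, 7), (1, 2, 3)]
w = [0.6, 0.5, 0.4]   # heavy OWA: weights sum to 1.5, between 1 and n = 3
print(fhowa(arguments, w))
```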

Active Cyber Defense within the Concept of NATO’s Protection of Critical Infrastructures

Cyber attacks pose a serious threat to all states. Therefore, states constantly seek various methods to counter those threats. In addition, recent changes in the nature of cyber attacks and their more complicated methods have created a new concept: active cyber defense (ACD). This article first tries to answer why ACD is important to NATO and to establish NATO's viewpoint on ACD. Secondly, infrastructure protection is essential to cyber defense, and protecting critical infrastructure by ACD means is even more important. It is assumed that by implementing active cyber defense, NATO may not only be able to repel attacks but also act as a deterrent. Hence, the use of ACD has a direct positive effect on the future of all international organizations, including NATO.

Improvement of the Evaluation Method for Information Security Levels of CIIP (Critical Information Infrastructure Protection)

As the information society and social development progress, dysfunctions such as intrusion problems (malicious replies, spam mail, private information leakage, phishing, and pharming) and side effects such as the spread of unwholesome information and privacy invasion are becoming serious social problems. Illegal access to information is also becoming a problem as the exchange and sharing of information increases over the extended communication network. On the other hand, as the communication network has been constructed as an international, global system, the legal response to invasion and cyber-attack from abroad is reaching its limits. In addition, in an environment where important infrastructures are managed and controlled on the basis of the information communication network, such problems pose a threat to national security. Countermeasures to such threats are developed and implemented on a yearly basis to protect the major information communication infrastructures. As a part of such measures, we have developed a methodology for assessing the information protection level, which can be used to establish the quantitative objective-setting method required for improving the information protection level.

Stochastic Resonance in Nonlinear Signal Detection

Stochastic resonance (SR) is a phenomenon whereby signal transmission or signal processing through certain nonlinear systems can be improved by adding noise. This paper discusses SR in nonlinear signal detection by a simple test statistic that can be computed from multiple noisy data in a binary decision problem based on a maximum a posteriori probability criterion. The performance of detection is assessed by the probability of detection error P_er. When the input signal is subthreshold, we establish that a benefit can be gained from different kinds of noise and further confirm that subthreshold SR exists in nonlinear signal detection. The efficacy of SR improves significantly, and the minimum of P_er approaches zero as the number of samples increases. These results show the robustness of SR in signal detection and extend the applicability of SR in signal processing.
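
A minimal, illustrative sketch (not the paper's exact test statistic or noise models): a subthreshold binary signal is passed through a hard threshold, the detector counts threshold crossings over N samples, and the error rate is evaluated for several levels of added Gaussian noise; the non-monotonic behaviour of the error rate versus the noise level is the stochastic resonance effect:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
A, theta, N, trials = 0.5, 1.0, 31, 5000     # amplitude A below threshold theta

def error_rate(sigma):
    # threshold-crossing probabilities under each hypothesis
    p_plus = norm.sf((theta - A) / sigma)
    p_minus = norm.sf((theta + A) / sigma)
    k = N * (p_plus + p_minus) / 2.0         # midpoint decision threshold
    errors = 0
    for _ in range(trials):
        s = A if rng.random() < 0.5 else -A
        crossings = np.sum(s + sigma * rng.standard_normal(N) > theta)
        decision = A if crossings > k else -A
        errors += decision != s
    return errors / trials

for sigma in (0.05, 0.2, 0.4, 0.8, 1.6):
    print(f"sigma={sigma:4.2f}  P_er={error_rate(sigma):.3f}")
```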

On the Noise Distance in Robust Fuzzy C-Means

In recent decades, a number of robust fuzzy clustering algorithms have been proposed to partition data sets affected by noise and outliers. Robust fuzzy C-means (robust-FCM) is certainly one of the best known among these algorithms. In robust-FCM, noise is modeled as a separate cluster characterized by a prototype that has a constant distance δ from all data points. The distance δ determines the boundary of the noise cluster and is therefore a critical parameter of the algorithm. Though some approaches have been proposed to automatically determine the most suitable δ for a specific application, to date an efficient and fully satisfactory solution does not exist. The aim of this paper is to propose a novel method to compute the optimal δ based on the analysis of the distribution of the percentage of objects assigned to the noise cluster in repeated executions of robust-FCM with decreasing values of δ. The extremely encouraging results obtained on some data sets from the literature are shown and discussed.
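
A minimal sketch of a noise-clustering FCM with a sweep over δ, used only to illustrate how the fraction of points captured by the noise cluster grows as δ decreases; the update rule follows the standard noise-clustering formulation, while the data and the δ values are illustrative assumptions rather than the paper's procedure:

```python
import numpy as np

def robust_fcm(X, c=2, delta=1.0, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    V = X[rng.choice(n, c, replace=False)]                        # initial prototypes
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12   # (n, c) squared distances
        inv = d2 ** (-1.0 / (m - 1.0))
        inv_noise = delta ** (-2.0 / (m - 1.0))                   # constant noise distance
        U = inv / (inv.sum(axis=1, keepdims=True) + inv_noise)    # memberships to real clusters
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]                    # prototype update
    u_noise = 1.0 - U.sum(axis=1)                                 # membership to the noise cluster
    return U, u_noise

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (50, 2)),
                  rng.normal(3, 0.3, (50, 2)),
                  rng.uniform(-5, 8, (10, 2))])                   # two clusters plus outliers
for delta in (4.0, 2.0, 1.0, 0.5):
    U, u_noise = robust_fcm(data, delta=delta)
    frac = np.mean(u_noise > U.max(axis=1))
    print(f"delta={delta:3.1f}  fraction assigned to noise cluster={frac:.2f}")
```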

On the Solution of the Interval-Valued Intuitionistic Fuzzy Assignment Problem Using a Similarity Measure and Score Function

The primary objective of this paper is to propose a new method for solving the assignment problem under uncertainty. In the classical assignment problem (AP), z_pq denotes the cost of assigning the qth job to the pth person and is deterministic in nature. Here, in an uncertain situation, we assign a cost in the form of a composite relative degree F_pq instead of z_pq, and this replaced cost is in maximization form. A new mathematical formulation of the IVIF assignment problem is presented, in which the cost is considered to be an IVIFN and the membership of elements in the set is described by positive and negative evidence; the problem is solved and validated by two proposed algorithms. The concepts of similarity measure and score function are used to determine the composite relative degree of similarity of IVIFSs and to validate the solution obtained by the composite relative similarity degree method. Further, a hypothetical numerical illustration is conducted to clarify the effectiveness and feasibility of the method developed in the study. Finally, conclusions and suggestions for future work are given.
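
A minimal sketch of the general idea under stated assumptions: the score function below is one common IVIFN score, not the composite-relative-degree measure proposed in the paper, the crisp maximization assignment is solved with the Hungarian algorithm, and the data are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def score(ivifn):
    # IVIFN given as ([mu_l, mu_u], [nu_l, nu_u]); a common score in [-1, 1]
    (mu_l, mu_u), (nu_l, nu_u) = ivifn
    return (mu_l + mu_u - nu_l - nu_u) / 2.0

# hypothetical 3 persons x 3 jobs IVIFN "costs" (higher score = better fit)
F = [
    [((.6, .7), (.1, .2)), ((.3, .4), (.4, .5)), ((.5, .6), (.2, .3))],
    [((.2, .3), (.5, .6)), ((.7, .8), (.1, .2)), ((.4, .5), (.3, .4))],
    [((.5, .6), (.2, .3)), ((.4, .5), (.3, .4)), ((.6, .8), (.1, .2))],
]
S = np.array([[score(f) for f in row] for row in F])
rows, cols = linear_sum_assignment(S, maximize=True)   # maximization assignment
print(list(zip(rows, cols)), "total score:", S[rows, cols].sum())
```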