A Cross-Layer Approach for Cooperative MIMO Multi-hop Wireless Sensor Networks

In this work, we study the problem of determining the minimum scheduling length that can satisfy end-to-end (ETE) traffic demands in scheduling-based multi-hop WSNs with a cooperative multiple-input multiple-output (MIMO) transmission scheme. Specifically, we present a cross-layer formulation of the joint routing, scheduling, and stream control problem that incorporates various power and rate adaptation schemes and accounts for an antenna beam pattern model and the signal-to-interference-plus-noise ratio (SINR) constraint at the receiver. In this context, we also propose column generation (CG) solutions to avoid the complexity of enumerating all possible sets of scheduling links.
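
As a rough illustration of the CG idea, the sketch below solves a toy minimum-schedule-length problem: a restricted master LP over link configurations plus a brute-force pricing step. The conflict-graph interference model, link rates, and demands are illustrative assumptions; the paper's formulation uses SINR and beam-pattern constraints instead.

```python
# A minimal column-generation sketch for the minimum scheduling-length
# problem, assuming a toy conflict-graph interference model instead of
# the paper's SINR/beam-pattern model; all data are illustrative.
import itertools
import numpy as np
from scipy.optimize import linprog

rates = np.array([2.0, 1.5, 1.0, 2.5])        # per-slot rate of each link (assumed)
demand = np.array([4.0, 3.0, 2.0, 5.0])       # traffic routed on each link (assumed)
conflicts = {(0, 1), (1, 2), (2, 3)}          # pairs that cannot be scheduled together

def independent(s):
    return all((a, b) not in conflicts and (b, a) not in conflicts
               for a, b in itertools.combinations(s, 2))

# Start with singleton configurations so the master problem is feasible.
columns = [frozenset([l]) for l in range(len(rates))]

for _ in range(50):
    # Restricted master: min sum(t) s.t. sum_c t_c * r_l * [l in c] >= d_l.
    A = np.array([[-(rates[l] if l in c else 0.0) for c in columns]
                  for l in range(len(rates))])
    res = linprog(np.ones(len(columns)), A_ub=A, b_ub=-demand,
                  bounds=[(0, None)] * len(columns), method="highs")
    y = -res.ineqlin.marginals            # dual prices of the demand rows
    # Pricing by brute force (fine for a toy): most "profitable" independent set.
    best, best_val = None, 1.0 + 1e-9
    for k in range(1, len(rates) + 1):
        for s in itertools.combinations(range(len(rates)), k):
            if independent(s):
                val = sum(y[l] * rates[l] for l in s)
                if val > best_val:
                    best, best_val = frozenset(s), val
    if best is None or best in columns:
        break                             # no improving column: LP optimal
    columns.append(best)

print("minimum schedule length:", res.fun)
```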

Chaotic Properties of Hemodynamic Response in Functional Near Infrared Spectroscopic Measurement of Brain Activity

Functional near infrared spectroscopy (fNIRS) is a practical, non-invasive optical technique for detecting characteristics of the hemoglobin density dynamics during functional activation of the cerebral cortex. In this paper, fNIRS measurements were made over the motor cortex at the C4 position of the international 10-20 system. Three subjects, aged 23-30 years, participated in the experiment. The aim of this paper was to evaluate the effects of different motor activation tasks on the hemoglobin density dynamics of the fNIRS signal. The chaotic concept, based on deterministic dynamics, is an important feature in biological signal analysis. This paper employs chaotic properties, a novel method of nonlinear analysis, to analyze and quantify the chaotic behavior in the time series of the hemoglobin dynamics recorded during various motor imagery tasks. Hemoglobin density in the human cortex usually changes slowly in time, and noise from various sources is inevitably included in the signal, so the principal component analysis (PCA) method is utilized to remove the high-frequency components. The phase space is then reconstructed, and the Lyapunov spectrum and Lyapunov dimensions are evaluated. From the experimental results, it can be concluded that the signals measured by fNIRS are chaotic.
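
For readers unfamiliar with the analysis pipeline, the sketch below shows delay-coordinate phase-space reconstruction and a Rosenstein-style estimate of the largest Lyapunov exponent; the embedding dimension, delay, and the synthetic test signal are assumptions, not the paper's settings.

```python
# A minimal sketch of delay embedding and a Rosenstein-style estimate of
# the largest Lyapunov exponent; parameters and signal are illustrative.
import numpy as np

def embed(x, dim, tau):
    """Reconstruct the phase space by the method of delays."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def largest_lyapunov(x, dim=5, tau=4, horizon=30, theiler=10):
    Y = embed(x, dim, tau)
    n = len(Y) - horizon
    d = np.linalg.norm(Y[:n, None, :] - Y[None, :n, :], axis=2)
    # Exclude temporally close points (Theiler window) before the
    # nearest-neighbor search.
    for i in range(n):
        d[i, max(0, i - theiler): min(n, i + theiler + 1)] = np.inf
    nn = d.argmin(axis=1)
    # Average log-divergence of neighbor pairs; its slope over time
    # approximates the largest Lyapunov exponent.
    div = [np.mean(np.log(np.linalg.norm(Y[np.arange(n) + k] - Y[nn + k],
                                         axis=1) + 1e-12))
           for k in range(1, horizon)]
    return np.polyfit(np.arange(1, horizon), div, 1)[0]  # > 0 suggests chaos

# Toy usage on a noisy slow oscillation standing in for an fNIRS trace.
t = np.linspace(0, 60, 900)
signal = np.sin(0.5 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print("estimated largest Lyapunov exponent:", largest_lyapunov(signal))
```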

Image Contrast Enhancement Based on a Sub-histogram Equalization Technique without Over-equalization Noise

This paper presents a new histogram equalization scheme that enhances contrast in regions where pixels have similar intensities. Conventional global equalization schemes over-equalize such regions, producing excessively bright or dark pixels, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms by brightness level and equalizes each sub-histogram to a limited extent determined by its mean and variance. The final image is the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also preserves feature information in low-density histogram regions, since these regions are equalized separately. The paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with various contrasts, and the results are compared with conventional approaches to show its superiority.
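
The sketch below illustrates the core idea under simplifying assumptions: the histogram is cut at fixed brightness points and each segment is equalized only within its own range. The paper instead derives the segmentation points and range limits from each sub-histogram's mean and variance and blends the results as a weighted sum.

```python
# A minimal numpy sketch of brightness-segmented sub-histogram equalization;
# the fixed cut points are illustrative assumptions, not the paper's
# mean/variance-based rules.
import numpy as np

def sub_histogram_equalize(img, cuts=(85, 170)):
    """Equalize each brightness segment only within its own range."""
    out = np.zeros_like(img, dtype=np.float64)
    bounds = [0, *cuts, 256]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo, range=(lo, hi))
        cdf = np.cumsum(hist) / hist.sum()
        # Map pixels onto [lo, hi) only, so one segment cannot spill into
        # another -- this is what bounds the over-equalization effect.
        out[mask] = lo + cdf[img[mask] - lo] * (hi - 1 - lo)
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
image = rng.integers(60, 110, size=(64, 64), dtype=np.uint8)  # low-contrast toy
print(image.std(), sub_histogram_equalize(image).std())       # contrast increases
```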

A New Method for Fault Location in Power Transformers

Power transformers are among the most important and expensive pieces of equipment in electric power systems; consequently, transformer protection is an essential part of system protection. This paper presents a new method for locating transformer winding faults such as turn-to-turn, turn-to-core, turn-to-transformer-body, turn-to-earth, and high-voltage-winding to low-voltage-winding faults. In this study, the current and voltage signals at the input and output terminals of the transformer are measured; the Fourier transform of the measured signals and harmonic analysis then determine the fault location.
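
The harmonic-analysis step can be pictured with the short sketch below: take the FFT of a sampled terminal current and read off the magnitudes of the first few harmonics. The 50 Hz fundamental, the sampling rate, and the synthetic "faulted" waveform are assumptions for illustration only.

```python
# FFT-based harmonic analysis of a synthetic terminal current.
import numpy as np

fs, f0 = 5000.0, 50.0                      # sampling rate and fundamental (assumed)
t = np.arange(0, 0.2, 1 / fs)              # ten full cycles of signal
# Healthy current plus 3rd/5th-harmonic distortion standing in for a fault.
i_t = (np.sin(2 * np.pi * f0 * t)
       + 0.25 * np.sin(2 * np.pi * 3 * f0 * t)
       + 0.10 * np.sin(2 * np.pi * 5 * f0 * t))

spectrum = np.abs(np.fft.rfft(i_t)) * 2 / len(i_t)   # single-sided amplitudes
freqs = np.fft.rfftfreq(len(i_t), 1 / fs)
for h in range(1, 6):                      # fundamental through 5th harmonic
    k = np.argmin(np.abs(freqs - h * f0))
    print(f"harmonic {h} ({h * f0:.0f} Hz): amplitude {spectrum[k]:.3f}")
```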

Consumer Acceptance of Various Tempeh and a Comparison of Protein Content

This research studies consumer acceptance of tempeh made from various raw materials (types of beans) and compares their protein contents. Tempeh was made from soybean, peanut, white kidney bean, and sesame in the ratios soybean:sesame = 1:0.1, soybean:white kidney bean:sesame = 1:1:0.1, soybean:peanut:sesame = 1:1:0.1, and peanut:white kidney bean:sesame = 1:1:0.1. The study found that consumers were most satisfied with the appearance of the soybean, white kidney bean, and black sesame tempeh (3.98); with the texture of the soybean, peanut, and black sesame tempeh (4.00); and with the odor (4.04) and flavor (4.2) of the peanut, white kidney bean, and black sesame tempeh. Among the products, soybean tempeh had the highest protein content. Adding sesame seeds decreased the protein content slightly (by 1.86% and 0.6%), using peanut as a raw material decreased it by 15.3%, and using white kidney bean decreased it by 22.77-26.11%.

Modeling of Reusability of Object Oriented Software System

Automatic reusability appraisal helps in evaluating the quality of developed or developing reusable software components and in identifying reusable components in existing legacy systems, which can save the cost of developing software from scratch. However, the issue of how to identify reusable components in existing systems has remained relatively unexplored. In this work, structural attributes of software components are captured using software metrics, and software quality is inferred by different neural-network-based approaches that take the metric values as input. The calculated reusability value makes it possible to identify good-quality code automatically. The reusability values obtained are found to be close to those of the manual analysis traditionally performed by programmers or repository managers, so the developed system can be used to enhance the productivity and quality of software development.
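
A hedged sketch of the metrics-to-reusability mapping follows: a small feed-forward network regressing a reusability score from structural metric values. The metric set and the synthetic training targets are assumptions, not the paper's data or architecture.

```python
# Toy neural-network reusability estimator from software metrics.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Columns: [cyclomatic complexity, coupling, cohesion, normalized LOC] (assumed).
X = rng.uniform(0, 1, size=(200, 4))
# Toy ground truth: low complexity/coupling and high cohesion -> more reusable.
y = np.clip(1.0 - 0.4 * X[:, 0] - 0.3 * X[:, 1] + 0.3 * X[:, 2] - 0.1 * X[:, 3]
            + rng.normal(0, 0.02, 200), 0, 1)

model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
component = [[0.2, 0.1, 0.9, 0.3]]         # metrics of a candidate component
print("predicted reusability:", model.predict(component)[0])
```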

Throughput Enhancement of Unplanned Wireless Mesh Network Deployments Using Partitioning Hierarchical Clustering (PHC)

Wireless mesh networks (WMNs) based on IEEE 802.11 technology are a scalable and efficient solution for next-generation wireless networking, providing wide-area broadband Internet access to a significant number of users. These networks may be deployed by different authorities and without any planning, so they potentially overlap partially or completely in the same service area, and this unplanned deployment degrades their performance. This paper proposes a new model to enhance the throughput of unplanned WMN deployments using a partitioning hierarchical cluster (PHC) based architecture: the unplanned deployment problem is modeled with a throughput-optimization approach, and bridge nodes that allow interworking traffic between the overlapping WMNs are introduced as the remedy for the performance degradation.
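
As an illustration of the partitioning step, the sketch below clusters mesh routers hierarchically from their coordinates; the node layout, cluster count, and the use of Ward linkage are assumptions standing in for the paper's PHC scheme.

```python
# Illustrative hierarchical partitioning of mesh nodes into clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 1000, size=(40, 2))       # 40 mesh routers in a 1 km^2 area

Z = linkage(nodes, method="ward")                # build the cluster hierarchy
labels = fcluster(Z, t=4, criterion="maxclust")  # cut it into 4 clusters

# A node near the boundary between two clusters is a natural bridge-node
# candidate for carrying interworking traffic between overlapping WMNs.
for c in np.unique(labels):
    members = np.where(labels == c)[0]
    print(f"cluster {c}: nodes {members.tolist()}")
```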

Enthalpies of Dissociation of Pure Methane and Carbon Dioxide Gas Hydrate

In this study, the enthalpies of dissociation for pure methane and pure carbon dioxide hydrates were calculated using hydrate equilibrium data obtained in this study. The enthalpy of dissociation was determined using the Clausius-Clapeyron equation, and the results were compared with values reported in the literature that were obtained using various techniques.
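
The calculation step can be sketched as follows: the dissociation enthalpy follows from the slope of ln P versus 1/T along the equilibrium curve, dH = -z R d(ln P)/d(1/T). The equilibrium points below are illustrative round numbers, not the data measured in the study, and the compressibility factor z is approximated as a constant.

```python
# Clausius-Clapeyron estimate of a hydrate dissociation enthalpy.
import numpy as np

R = 8.314          # J/(mol K)
z = 0.9            # assumed mean compressibility factor of the guest gas
T = np.array([274.0, 278.0, 282.0, 286.0])     # K, illustrative equilibrium temps
P = np.array([2.9, 4.3, 6.8, 10.6])            # MPa, illustrative pressures

slope = np.polyfit(1.0 / T, np.log(P), 1)[0]   # d(ln P)/d(1/T)
dH = -z * R * slope                            # J/mol
print(f"enthalpy of dissociation ~ {dH / 1000:.1f} kJ/mol")
```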

Influence of Apo E Polymorphism on Coronary Artery Disease

The ε4 allele of the ε2/ε3/ε4 protein isoform polymorphism in the gene encoding apolipoprotein E (Apo E) has previously been associated with increased coronary artery disease (CAD); we therefore investigated the significance of this polymorphism in the pathogenesis of CAD in Iranian patients with stenosis and in control subjects. To investigate the association between the Apo E polymorphism and coronary artery disease, we performed a comparative case-control study of the frequency of the Apo E polymorphism in one hundred CAD patients with stenosis who underwent coronary angiography (>50% stenosis) and 100 control subjects (

Wireless Distributed Load-Shedding Management System for Non-Emergency Cases

In this paper, we present a cost-effective wireless distributed load-shedding system for non-emergency scenarios. In power transformer locations where a SCADA system cannot be used, the proposed solution provides a reasonable alternative that combines microcontrollers with the existing GSM infrastructure to send early-warning SMS messages advising users to proactively reduce their power consumption before system capacity is reached and a systematic power shutdown takes place. A novel communication protocol and message set have been devised to handle the messaging between the transformer sites, where the microcontrollers are located and the measurements take place, and the central processing site, where the database server is hosted. Moreover, the system sends warning messages to the end users' mobile devices, which serve as communication terminals. The system has been implemented and validated through a series of experiments.
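
To make the messaging concrete, here is a purely hypothetical sketch of a site-to-center load report and the resulting warning text; the field layout, site IDs, and the 90% threshold are invented for illustration and differ from the paper's actual protocol and message set.

```python
# Hypothetical load-report message format and warning-SMS trigger.
from dataclasses import dataclass

@dataclass
class LoadReport:
    site_id: str
    load_kw: float
    capacity_kw: float

def encode(r: LoadReport) -> str:
    """Serialize a report as a compact SMS-friendly line (assumed format)."""
    return f"LR,{r.site_id},{r.load_kw:.1f},{r.capacity_kw:.1f}"

def decode(msg: str) -> LoadReport:
    tag, site, load, cap = msg.split(",")
    assert tag == "LR", "unknown message type"
    return LoadReport(site, float(load), float(cap))

def warning_sms(r: LoadReport, threshold: float = 0.9) -> str | None:
    """Early-warning text sent to subscribers before shedding is needed."""
    if r.load_kw / r.capacity_kw >= threshold:
        return (f"Site {r.site_id} at {100 * r.load_kw / r.capacity_kw:.0f}% "
                "capacity. Please reduce consumption to avoid shutdown.")
    return None

report = decode(encode(LoadReport("TX-07", 460.0, 500.0)))
print(warning_sms(report))
```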

The Problem of Using the Calculation of the Critical Path to Solve Instances of the Job Shop Scheduling Problem

A procedure commonly used in the Job Shop Scheduling Problem (JSSP) to evaluate the neighborhood functions employed by non-deterministic algorithms is the calculation of the critical path in a digraph. This paper presents an experimental study of the computational cost of calculating the critical path in solutions of large JSSP instances. The results indicate that if the critical path is used to generate neighborhoods in the metaheuristics applied to the JSSP, the computational cost is high, despite the fact that calculating the critical path in any digraph has polynomial complexity.
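
For reference, the polynomial-time computation in question is a longest-path pass over the solution digraph; a minimal sketch follows, with a tiny toy digraph in place of the large benchmark instances.

```python
# Longest (critical) path in a weighted DAG via one topological pass: O(V + E).
from collections import defaultdict, deque

def critical_path(n, edges):
    """edges: (u, v, w) arcs of the solution digraph; returns length, path."""
    adj, indeg = defaultdict(list), [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    dist, pred = [0] * n, [None] * n
    q = deque(i for i in range(n) if indeg[i] == 0)
    while q:                                  # Kahn's topological order
        u = q.popleft()
        for v, w in adj[u]:
            if dist[u] + w > dist[v]:
                dist[v], pred[v] = dist[u] + w, u
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    end = max(range(n), key=dist.__getitem__)
    path, node = [], end
    while node is not None:
        path.append(node)
        node = pred[node]
    return dist[end], path[::-1]

# Toy solution digraph with processing times as arc weights.
print(critical_path(6, [(0, 1, 3), (1, 2, 2), (0, 3, 4), (3, 4, 1),
                        (4, 2, 5), (1, 4, 2)]))   # -> (10, [0, 1, 4, 2])
```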

Visual Object Tracking in 3D with Color Based Particle Filter

This paper addresses the problem of determining the current 3D location of a moving object and robustly tracking it from a sequence of camera images. The approach presented here uses a particle filter and does not perform any explicit triangulation. Only the color of the object to be tracked is required, not a precise motion model. The observation model we have developed avoids color filtering of the entire image; this, together with the Monte Carlo techniques inside the particle filter, provides real-time performance. Experiments with two real cameras are presented and lessons learned are discussed. The approach scales easily to more than two cameras and to new sensor cues.
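
A condensed single-camera, 2D sketch of one color-based particle filter cycle is shown below: diffuse the particles, weight them by color similarity at their projected pixels only, and resample. The Gaussian motion noise, the RGB-distance likelihood, and the synthetic frame are simplifying assumptions; the paper fuses two cameras and tracks in 3D without explicit triangulation.

```python
# One color-based particle filter loop on a synthetic frame.
import numpy as np

rng = np.random.default_rng(0)
H, W = 120, 160
frame = np.zeros((H, W, 3))
frame[50:70, 90:110] = [1.0, 0.2, 0.2]          # reddish object to track
target_color = np.array([1.0, 0.2, 0.2])

particles = rng.uniform([0, 0], [H, W], size=(500, 2))   # (row, col) hypotheses
for _ in range(10):                                      # a few filter cycles
    particles += rng.normal(0, 5, particles.shape)       # motion diffusion
    particles = np.clip(particles, 0, [H - 1, W - 1])
    r, c = particles.astype(int).T
    # Likelihood from the color distance at each particle's pixel only --
    # no filtering of the entire image is needed.
    dist2 = ((frame[r, c] - target_color) ** 2).sum(axis=1)
    weights = np.exp(-dist2 / 0.05)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]                           # multinomial resampling

print("estimated object center (row, col):", particles.mean(axis=0).round(1))
```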

The Incorporation of In in GaAsN as a Means of N Fraction Calibration

InGaAsN and GaAsN epitaxial layers with similar nitrogen compositions within each sample were successfully grown on GaAs (001) substrates by solid-source molecular beam epitaxy. An electron cyclotron resonance nitrogen plasma source was used to generate atomic nitrogen during growth of the nitride layers. The indium composition was varied from sample to sample to give compressively and tensilely strained InGaAsN layers. Layer characteristics were assessed by high-resolution x-ray diffraction to determine the relationship between the lattice constant of the GaAs1-yNy layer and the In fraction x. The objective was to determine the In fraction x in an InxGa1-xAs1-yNy epitaxial layer that exactly cancels the strain present in a GaAs1-yNy epitaxial layer with the same nitrogen content when grown on a GaAs substrate.
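
As a back-of-envelope illustration of the strain-cancellation criterion: under a linear (Vegard) interpolation of lattice constants and equal layer thicknesses, the In fraction x that gives the InGaAsN layer a mismatch equal and opposite to that of a GaAs1-yNy layer with the same y satisfies x(aInAs - aGaAs) = 2y(aGaAs - aGaN). The binary lattice constants and the equal-thickness assumption below are mine, not values fitted from the paper's x-ray data.

```python
# Vegard's-law estimate of the strain-cancelling In fraction (illustrative).
A_GAAS, A_INAS, A_GAN = 5.6533, 6.0583, 4.50   # Angstrom; zincblende GaN assumed

def in_fraction_for_strain_cancellation(y):
    """In fraction x for an InxGa1-xAs1-yNy layer balancing GaAs1-yNy."""
    return 2.0 * y * (A_GAAS - A_GAN) / (A_INAS - A_GAAS)

for y in (0.01, 0.02, 0.03):
    print(f"y = {y:.2f}  ->  x ~ {in_fraction_for_strain_cancellation(y):.3f}")
```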

Objective Assessment of Psoriasis Lesion Thickness for PASI Scoring using 3D Digital Imaging

Psoriasis is a chronic inflammatory skin condition that affects 2-3% of the population worldwide. The Psoriasis Area and Severity Index (PASI) is the gold standard for assessing psoriasis severity as well as treatment efficacy. Although a gold standard, PASI is rarely used because it is tedious and complex; in practice, the PASI score is determined subjectively by dermatologists, so inter- and intra-rater variations can occur even among expert dermatologists. This research develops an algorithm to assess psoriasis lesions objectively for PASI scoring, focusing on thickness, one of the four PASI parameters alongside area, erythema, and scaliness. Lesion thickness is measured by averaging the total elevation from the lesion base to the lesion surface. Thickness values from 122 3D images taken from 39 patients are grouped into the 4 PASI thickness scores using K-means clustering. Validation of the lesion base construction, performed using twelve body curvature models, shows good results, with a coefficient of determination (R²) equal to 1.
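
A brief sketch of the scoring step follows: cluster per-lesion mean elevations into four groups and order the clusters so that greater thickness maps to a higher PASI thickness score. The synthetic thickness values (in mm) are illustrative, not the 122 measured images.

```python
# K-means grouping of lesion thickness values into 4 PASI thickness scores.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
thickness = np.concatenate([rng.normal(m, 0.05, 30)
                            for m in (0.1, 0.3, 0.6, 1.0)]).reshape(-1, 1)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(thickness)
order = np.argsort(km.cluster_centers_.ravel())        # thin -> thick
score = {cluster: rank + 1 for rank, cluster in enumerate(order)}
pasi_thickness = np.array([score[c] for c in km.labels_])
print("cluster centers (mm):", np.sort(km.cluster_centers_.ravel()).round(2))
print("first five scores:", pasi_thickness[:5])
```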

Extensiveness and Effectiveness of Corporate Governance Regulations in South-Eastern Europe

The purpose of this article is to illustrate the main characteristics of the corporate governance challenge facing the countries of South-Eastern Europe (SEE) and then to determine and assess the extensiveness and effectiveness of corporate governance regulations in these countries. We start with an overview of the key problems of corporate governance in transition. We then address the issue of corporate governance measurement for SEE countries, including a review of the methodological framework for determining both the extensiveness and the effectiveness of corporate governance legislation. We then analyze the quality of corporate governance codes and the effectiveness of legal institutions, and provide a measure of corporate governance in Romania and other SEE emerging markets. The paper concludes by emphasizing the corporate governance enforcement gap and by identifying research issues that require further study.

A Novel Multiplex Real-Time PCR Assay Using TaqMan MGB Probes for Rapid Detection of Trisomy 21

Cytogenetic analysis remains the gold-standard method for prenatal diagnosis of trisomy 21 (Down syndrome, DS). Nevertheless, conventional cytogenetic analysis requires live cultured cells and is too time-consuming for clinical application. In contrast, molecular methods such as FISH, QF-PCR, MLPA, and quantitative real-time PCR are rapid assays with results available within 24 h. In the present study, we successfully used a novel MGB TaqMan probe-based real-time PCR assay for rapid diagnosis of trisomy 21 in Down syndrome samples, and compared the results of this molecular method with the corresponding results obtained by cytogenetic analysis. Blood samples from DS patients (n=25) and normal controls (n=20) were tested by quantitative real-time PCR in parallel with standard G-banding analysis. Genomic DNA was extracted from peripheral blood lymphocytes. A high-precision TaqMan probe quantitative real-time PCR assay was developed to determine the gene dosage of DSCAM (target gene on 21q22.2) relative to PMP22 (reference gene on 17p11.2). The DSCAM/PMP22 ratio was calculated according to the formula ratio = 2^(-ΔΔCt). The quantitative real-time PCR was able to distinguish trisomy 21 samples from normal controls, with gene ratios of 1.49±0.13 and 1.03±0.04, respectively (p value
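
A small sketch of the comparative-Ct calculation behind the reported ratios follows, with DSCAM as the target and PMP22 as the reference; the Ct values are illustrative, not the study's measurements.

```python
# Relative gene dosage via the 2^-ddCt method.
def ddct_ratio(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt gene dosage relative to a normal (diploid) calibrator sample."""
    dct_sample = ct_target - ct_ref
    dct_calibrator = ct_target_cal - ct_ref_cal
    return 2.0 ** -(dct_sample - dct_calibrator)

# Trisomic sample: ~0.58 fewer target cycles than the diploid calibrator,
# i.e. about 1.5x the DSCAM dose relative to PMP22.
print(round(ddct_ratio(24.42, 25.0, 25.0, 25.0), 2))   # ~1.5
```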

Integrating a Fast Karnaugh Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

This paper presents a new fast simplification method that handles Karnaugh maps with a large number of variables. To accelerate the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation, and simulation results in MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given: the simplified functions are implemented using a new neural network design. Neural networks are used because they are fault tolerant and can therefore recognize signals even with noise or distortion, which is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum number of components by using modular neural networks (MNNs) that divide the input space into several homogeneous regions. This approach is applied to implement the XOR function, 16 one-bit logic functions, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and in hardware requirements is achieved.
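
A compact sketch of the frequency-domain search idea follows: cross-correlating a map row (as a 0/1 vector) with a template of adjacent ones via the FFT scores every shift in O(n log n) instead of O(nm). The 8-cell row and 4-wide template are toy stand-ins for a large Karnaugh map, and Python/numpy stands in for the paper's MATLAB implementation.

```python
# Locating a group of ones by FFT-based circular cross correlation.
import numpy as np

row = np.array([0, 1, 1, 1, 1, 0, 0, 1], dtype=float)   # one map row
template = np.ones(4)                                    # group of four ones

n = len(row)
# Circular cross correlation via FFT (Karnaugh maps wrap around).
scores = np.fft.ifft(np.fft.fft(row) * np.conj(np.fft.fft(template, n))).real
best = int(np.argmax(scores))                            # score[k] counts ones
print("shift scores:", scores.round(1))                  # in the window at k
print(f"best group of four starts at cell {best} (score {scores[best]:.0f})")
```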

A Hybrid Metaheuristic Framework for Evolving the PROAFTN Classifier

In this paper, a new learning algorithm based on a hybrid metaheuristic integrating Differential Evolution (DE) and Reduced Variable Neighborhood Search (RVNS) is introduced to train the classification method PROAFTN. To apply PROAFTN, the values of several parameters, including the interval boundaries and the relative weights of each attribute, must be determined prior to classification. Based on these requirements, the hybrid approach, named DEPRO-RVNS, is presented in this study. A major problem when applying DE to some classification problems is the premature convergence of some individuals to local optima. To eliminate this shortcoming and to improve the exploration and exploitation capabilities of DE, such individuals are iteratively re-explored using RVNS. Results generated on both training and testing data show that the performance of PROAFTN is significantly improved, and the experimental study shows that DEPRO-RVNS outperforms well-known machine learning classifiers on a variety of problems.
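
For orientation, here is a bare-bones sketch of the DE/rand/1/bin step that such a hybrid builds on, minimizing a toy objective in place of PROAFTN's interval/weight fitting; the population size, F, CR, and the sphere objective are assumptions.

```python
# Minimal DE/rand/1/bin loop on a toy objective.
import numpy as np

def objective(v):                       # stand-in for a classification loss
    return np.sum(v ** 2)

rng = np.random.default_rng(0)
NP, D, F, CR = 20, 5, 0.8, 0.9
pop = rng.uniform(-5, 5, size=(NP, D))

for _ in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i],
                                 3, replace=False)]
        mutant = a + F * (b - c)                       # differential mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True                  # guarantee one mutant gene
        trial = np.where(cross, mutant, pop[i])        # binomial crossover
        if objective(trial) <= objective(pop[i]):      # greedy selection
            pop[i] = trial
    # An RVNS-style re-exploration of stagnant individuals would slot in here.

best = min(pop, key=objective)
print("best objective:", objective(best).round(6))
```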

Development of a Catchment Water Quality Model for Continuous Simulations of Pollutants Build-up and Wash-off

Estimation of runoff water quality parameters is required to determine appropriate water quality management options. Various models are used to estimate runoff water quality parameters; however, most provide event-based estimates for specific sites. The work presented in this paper describes the development of a model that continuously simulates the accumulation and wash-off of water quality pollutants in a catchment, allowing estimation of pollutant build-up during dry periods and pollutant wash-off during storm events. The model was developed by integrating two individual models: a rainfall-runoff model and a catchment water quality model. The rainfall-runoff model is based on the time-area runoff estimation method and allows users to estimate the time of concentration using a range of established methods, as well as to estimate the continuing runoff losses using any of the available estimation methods (constant, linearly varying, or exponentially varying). Pollutant build-up in a catchment is represented by one of three pre-defined functions: power, exponential, or saturation. Similarly, pollutant wash-off is represented by one of three functions: power, rating curve, or exponential. The developed runoff water quality model was set up to simulate the build-up and wash-off of total suspended solids (TSS), total phosphorus (TP), and total nitrogen (TN). The application of the model was demonstrated using available runoff and TSS field data from road and roof surfaces in the Gold Coast, Australia. The model provided an excellent representation of the field data, demonstrating the simplicity yet effectiveness of the proposed model.
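
The sketch below shows one build-up/wash-off pair from the model's menu: power build-up between storms and exponential wash-off during an event. The coefficients and rainfall series are illustrative, not calibrated values from the Gold Coast data.

```python
# Power build-up and exponential wash-off of a surface pollutant load.
import numpy as np

def buildup_power(t_dry, a=12.0, b=0.5, b_max=60.0):
    """Pollutant mass on the surface (kg/ha) after t_dry antecedent dry days."""
    return np.minimum(a * t_dry ** b, b_max)

def washoff_exponential(b0, runoff, k=0.18, dt=1.0):
    """Step the surface load through a storm; returns load and wash-off series."""
    load, washed = b0, []
    for q in runoff:                         # runoff rate per time step (mm/h)
        w = load * (1.0 - np.exp(-k * q * dt))
        load -= w
        washed.append(w)
    return load, np.array(washed)

b0 = buildup_power(t_dry=7)                  # a week of dry-weather accumulation
remaining, event = washoff_exponential(b0, runoff=[2.0, 6.0, 4.0, 1.0])
print(f"initial load {b0:.1f}, washed off {event.sum():.1f}, left {remaining:.1f}")
```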

Information Security Risk in Financial Institutions

The history of technology and banking is examined as it relates to risk and technological determinism. It is proposed that the services banks offer are determined by technology and that banks must adopt new technologies to remain competitive. Paradoxically, the adoption of technologies forces the adoption of further technologies to protect the bank from the increased risk the first ones introduce. This cycle will lead bank examiners and regulators to focus on human behavior, not on the ever-changing technology.