A Quantitative Tool for Analyzing Process Design

Some quality control tools rely on non-metric, subjective information from experts, who qualify the intensity of the relations existing inside processes without quantifying them. In this paper we develop an analytic quality control tool that measures the impact, or strength, of the relationships between process operations and product characteristics. The tool comprises two models: a qualitative model, which allows the relationships to be described and analysed, and a formal quantitative model, through which the relationships are quantified. In the first, concepts from graph theory are applied to identify the process elements that can be sources of variation, that is, the quality characteristics or operations that have some sort of precedence over the others and that should become control items. The most dependent elements can also be identified, that is, the elements receiving the effects of the elements identified as variation sources. If controls focus on these dependent elements, the efficiency of control is compromised, because effects rather than causes are being controlled. The second model adapts the multivariate statistical technique of covariance structure analysis, which allowed us to quantify the relationships. The computer package LISREL was used to obtain the statistics and to validate the model.
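
A minimal sketch (not the authors' implementation) of the qualitative step: model process elements as a directed graph and rank them by reachability, so that elements influencing many others surface as candidate control items. The element names and edges below are hypothetical.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("op_mixing", "temperature"),
    ("temperature", "viscosity"),
    ("op_mixing", "viscosity"),
    ("viscosity", "surface_finish"),
])

for node in G.nodes:
    influenced = nx.descendants(G, node)   # elements this node can affect
    influencers = nx.ancestors(G, node)    # elements that can affect this node
    print(f"{node}: influences {len(influenced)}, influenced by {len(influencers)}")

# Candidate variation sources: many descendants, few ancestors.
sources = [n for n in G.nodes
           if len(nx.descendants(G, n)) > len(nx.ancestors(G, n))]
print("control candidates:", sources)
```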

A Systematic Method for Performance Analysis of SOA Applications

The successful implementation of Service-Oriented Architecture (SOA) is not confined to Information Technology systems; it requires changes across the whole enterprise. In order to align IT with the business, the enterprise needs adequate and measurable methods. The adoption of SOA creates new problems with regard to measuring and analysing performance. In fact, the enterprise should investigate to what extent the development of services will increase the value of the business, and every business needs to measure how well SOA adoption fits the goals of the enterprise. Moreover, precise performance metrics, combined with advanced evaluation methodologies, should be defined as a solution. The aim of this paper is to present a systematic methodology for designing a measurement system at the technical and business levels, so that (1) the measurement metrics are determined precisely, and (2) the results can be analysed by mapping the identified metrics to the measurement tools.
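
A hypothetical sketch of steps (1) and (2) of such a methodology: declare metrics precisely at the technical and business levels, then map each metric to the tool that will collect it. All names, units and targets are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    level: str        # "technical" or "business"
    unit: str
    target: float
    tool: str         # measurement tool the metric is mapped to

metrics = [
    Metric("service_response_time", "technical", "ms", 200.0, "app_monitor"),
    Metric("service_reuse_ratio", "business", "%", 30.0, "registry_report"),
    Metric("process_cycle_time", "business", "h", 24.0, "bpm_suite"),
]

by_tool = {}
for m in metrics:
    by_tool.setdefault(m.tool, []).append(m.name)  # metric-to-tool mapping
print(by_tool)
```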

Energy Efficient and Reliable Geographic Routing in Wireless Sensor Networks

Wireless links can be unreliable in realistic wireless sensor networks (WSNs). Energy-efficient and reliable data forwarding is important because each node has limited resources; an optimal solution should therefore exploit information about the nodes' characteristics. Previous routing protocols were unsuited to realistic, asymmetric WSNs. In this paper, we propose a Protocol that considers Both sides of Link-quality and Energy (PBLE), an optimal routing protocol that balances modified link quality, distance and energy. Additionally, we propose a node scheduling method. PBLE achieves a longer lifetime than previous routing protocols and is more energy-efficient. PBLE uses energy, local information and the packet reception ratio (PRR) of both link directions within a 1-hop distance. We explain how data packets are forwarded to the destination node using this node information. Simulations show that PBLE improves the delivery rate and network lifetime compared to previous schemes, across various WSN environments.
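
A simplified, hypothetical sketch of PBLE-style next-hop selection: score each neighbor by combining the packet reception ratio (PRR) of both link directions, progress towards the destination, and residual energy. The weights and the scoring form are assumptions, not the paper's exact formula.

```python
import math

def progress(node, neighbor, dest):
    """Reduction in Euclidean distance to the destination."""
    return math.dist(node, dest) - math.dist(neighbor, dest)

def pble_score(node, nb, dest, w=(0.4, 0.3, 0.3)):
    link = nb["prr_fwd"] * nb["prr_rev"]          # both sides of link quality
    prog = max(progress(node, nb["pos"], dest), 0.0)
    return w[0] * link + w[1] * prog / 100.0 + w[2] * nb["energy"]

node, dest = (0.0, 0.0), (100.0, 0.0)
neighbors = [
    {"pos": (30.0, 5.0), "prr_fwd": 0.9, "prr_rev": 0.7, "energy": 0.8},
    {"pos": (45.0, 0.0), "prr_fwd": 0.6, "prr_rev": 0.5, "energy": 0.3},
]
best = max(neighbors, key=lambda nb: pble_score(node, nb, dest))
print("forward to neighbor at", best["pos"])
```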

Shape Memory Alloy Actuator System Optimization for New Hand Prostheses

Shape memory alloy (SMA) actuators have found a wide range of applications due to their unique properties, such as high force, small size, light weight and silent operation. This paper presents the development of a compact SMA actuator and cooling system in one unit, developed for a multi-fingered hand. It consists of nickel-titanium (Nitinol) SMA wires in a compact arrangement. The new arrangement insulates the SMA wires from the human body by housing them in a heat sink, and uses a thermoelectric device to reject heat and improve the actuator's performance. The study uses optimization methods to select the geometrical parameters of the SMA wires and the material of the heat sink. The experimental work implements the actuator prototype and measures its response.
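
A toy sketch of the geometric selection step: grid-search Nitinol wire diameters and pick the smallest cross-section that still meets the finger's force requirement, since thinner wires cool (and thus cycle) faster. The recovery stress and force target are assumed values for illustration only, not the paper's data.

```python
import math

RECOVERY_STRESS = 200e6   # Pa, assumed usable recovery stress for Nitinol
FORCE_REQUIRED = 10.0     # N, assumed fingertip force target

for d_mm in (0.1, 0.15, 0.2, 0.25, 0.31, 0.38, 0.51):
    area = math.pi * (d_mm * 1e-3 / 2) ** 2
    force = RECOVERY_STRESS * area          # F = sigma_rec * A
    if force >= FORCE_REQUIRED:
        print(f"smallest adequate wire: {d_mm} mm ({force:.1f} N)")
        break
else:
    print("no single wire meets the target; consider parallel wires")
```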

A Study of the Variability of Very Low Resolution Characters and the Feasibility of Their Discrimination Using Geometrical Features

Current OCR technology cannot accurately recognize small text images, such as those found in web images. Our goal is to investigate new approaches to recognizing very low resolution text images containing antialiased character shapes. This paper presents a preliminary study of the variability of such characters and the feasibility of discriminating them using geometrical features. In the first stage we analyze the distribution of these features. In the second stage we study their discriminative power for recognizing isolated characters, using various rendering methods and font properties. Finally, we present the results of our evaluation tests, leading to our conclusions and future focus.
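
A minimal sketch of the kind of geometrical features studied: treat the antialiased glyph as a grey-level mass distribution and compute its total ink, centroid and second-order moments. The 5x5 array below is an invented stand-in for a rendered low-resolution character.

```python
import numpy as np

glyph = np.array([
    [0.0, 0.4, 0.8, 0.4, 0.0],
    [0.4, 0.6, 0.0, 0.6, 0.4],
    [0.8, 0.9, 0.9, 0.9, 0.8],
    [0.4, 0.6, 0.0, 0.0, 0.0],
    [0.0, 0.4, 0.8, 0.8, 0.4],
])

mass = glyph.sum()
ys, xs = np.indices(glyph.shape)
cy, cx = (ys * glyph).sum() / mass, (xs * glyph).sum() / mass
mu_yy = ((ys - cy) ** 2 * glyph).sum() / mass   # vertical spread
mu_xx = ((xs - cx) ** 2 * glyph).sum() / mass   # horizontal spread
print(f"mass={mass:.2f} centroid=({cy:.2f},{cx:.2f}) "
      f"moments=({mu_yy:.2f},{mu_xx:.2f})")
```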

The Giant Component in a Random Subgraph of a Weak Expander

In this paper, we investigate the appearance of the giant component in random subgraphs G(p) of a given large finite graph family Gn = (Vn, En), in which each edge is present independently with probability p. We show that if the graph Gn satisfies a weak isoperimetric inequality and has bounded degree, then the probability p at which G(p) has a giant component of linear order with some constant probability is bounded away from zero and one. In addition, we prove that the probability of an abnormally large giant component decays exponentially. When a contact graph is modeled as Gn, our result is of special interest for studying the spread of infectious diseases and identifying communities in various social networks.
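
A quick empirical companion to the result: keep each edge of a bounded-degree base graph independently with probability p and track the relative size of the largest component. The grid graph merely stands in for a weak expander here; it is an assumed example, not one from the paper.

```python
import random
import networkx as nx

def giant_fraction(G, p, seed=0):
    rng = random.Random(seed)
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from(e for e in G.edges if rng.random() < p)
    return max(len(c) for c in nx.connected_components(H)) / len(G)

G = nx.grid_2d_graph(60, 60)   # bounded degree, weak (polynomial) expansion
for p in (0.3, 0.5, 0.6, 0.7):
    print(p, round(giant_fraction(G, p), 3))
```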

Adaptive Kernel Filtering for Video Processing

In this paper we present a noise reduction filter for video processing. It is based on the recently proposed two-dimensional steering kernel, extended to three dimensions and further augmented to suit the spatio-temporal domain of video processing. Two alternative filters are proposed: the time-symmetric kernel and the time-asymmetric kernel. The first reduces noise within a single scene; the asymmetric kernel is introduced to handle the problems that arise at scene shifts. The performance of both is tested on simulated data and on a real video sequence, together with the existing steering kernel. The proposed kernels improve the root mean squared error (RMSE) compared to the original steering kernel method on video material.
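
A heavily simplified sketch of spatio-temporal kernel filtering. A true steering kernel orients its covariance along local gradients; here a fixed 3D Gaussian kernel illustrates only the weighted-average structure, and the asymmetric variant simply drops future frames (as one would at a scene shift). All parameters are assumptions.

```python
import numpy as np

def kernel_filter(video, r=2, h_s=1.0, h_t=1.0, symmetric=True):
    T, H, W = video.shape
    out = np.zeros_like(video)
    t_lo = -r
    t_hi = r if symmetric else 0          # asymmetric: past frames only
    for t in range(T):
        for y in range(H):
            for x in range(W):
                num = den = 0.0
                for dt in range(t_lo, t_hi + 1):
                    for dy in range(-r, r + 1):
                        for dx in range(-r, r + 1):
                            tt, yy, xx = t + dt, y + dy, x + dx
                            if 0 <= tt < T and 0 <= yy < H and 0 <= xx < W:
                                w = np.exp(-(dy*dy + dx*dx) / (2*h_s**2)
                                           - dt*dt / (2*h_t**2))
                                num += w * video[tt, yy, xx]
                                den += w
                out[t, y, x] = num / den
    return out

noisy = np.random.default_rng(0).normal(0.5, 0.1, size=(5, 16, 16))
print(kernel_filter(noisy, symmetric=False).shape)
```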

Software Maintenance Severity Prediction for Object Oriented Systems

Since the majority of faults are found in a few modules, there is a need to identify the modules that are affected most severely, so that proper maintenance can be done in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are non-linear, sophisticated modeling techniques able to model complex functions, and they are used when the exact nature of the inputs and outputs is not known; a key feature is that they learn the relationship between input and output through training. In the present work, various neural network based techniques are explored, and a comparative analysis is performed for predicting the level of maintenance needed, by predicting the severity level of the faults present in NASA's public domain defect dataset. The algorithms are compared on the basis of mean absolute error, root mean square error and accuracy. It is concluded that the generalized regression neural network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying the modules that are most heavily affected by faults.
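
A compact sketch of the winning model type: a generalized regression neural network (GRNN) is essentially Nadaraya-Watson kernel regression over the training set. The tiny metric vectors and severity labels below are invented, and sigma is an assumed smoothing parameter.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma=0.5):
    # Gaussian kernel weights between each test and training pattern
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

X = np.array([[1.0, 0.2], [0.9, 0.3], [0.1, 0.8], [0.2, 0.9]])  # module metrics
y = np.array([3.0, 3.0, 1.0, 1.0])                              # severity level
x_new = np.array([[0.8, 0.25], [0.15, 0.85]])
pred = grnn_predict(X, y, x_new)
print(np.round(pred))        # -> severity classes ~[3, 1]
mae = np.abs(pred - np.array([3.0, 1.0])).mean()
print("MAE:", round(mae, 3))
```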

Multi-criteria Optimization of a Square Beam Using a Linear Weighted Average Model

Increasing energy absorption is a significant goal in vehicle design, since absorbing more energy reduces occupant injury. Limiting the deflection in a side impact decreases the specific energy absorption (SEA) and increases the peak load (PL); a high crash force therefore jeopardizes passenger safety and vehicle integrity. The aims of this paper are to determine suitable dimensions and material for a square beam subjected to side impact, in order to maximize SEA and minimize PL. To achieve this goal, the geometric parameters of the square beam are optimized using the response surface method (RSM). Multi-objective optimization is performed, and the optimum design for the different response features is obtained.
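
A small sketch of the linear weighted average step: normalize both responses and combine them into one score, maximizing SEA while minimizing PL. The candidate designs, response values and weights are purely illustrative.

```python
import numpy as np

# columns: SEA (kJ/kg, maximize), PL (kN, minimize) for candidate designs
responses = np.array([
    [14.0, 95.0],
    [16.5, 120.0],
    [15.2, 88.0],
])
w_sea, w_pl = 0.6, 0.4    # assumed preference weights, summing to 1

sea_n = (responses[:, 0] - responses[:, 0].min()) / np.ptp(responses[:, 0])
pl_n = (responses[:, 1] - responses[:, 1].min()) / np.ptp(responses[:, 1])
score = w_sea * sea_n + w_pl * (1.0 - pl_n)   # higher is better for both terms
print("best design index:", int(score.argmax()))
```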

Position-Based Routing Protocol with Improved Reliability in Mobile Ad Hoc Networks

Position-based routing protocols use node location information, rather than link information, for routing. In these protocols, it is assumed that the packet's source node knows the positions of itself, its neighbors and the packet's destination node. Greedy is a very important position-based routing protocol. In one of its variants, named MFR (Most Forward within Radius), the source node or packet-forwarding node sends the packet to the neighbor with the most forward progress towards the destination node (the neighbor closest to the destination). Using distance as the only decision metric for choosing the forwarding neighbor, as Greedy does, is not suitable for all conditions. If the neighbor closest to the destination moves at high speed relative to the source or intermediate forwarding node, or has very low remaining battery power, the probability of packet loss increases. The proposed strategy combines the metrics of distance, velocity similarity and power to decide which neighbor the packet is given to. Simulation results show that the proposed strategy loses fewer packets on average than Greedy, and so is more reliable.
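
A hypothetical sketch of the proposed forwarding rule: instead of MFR's pure distance criterion, score each neighbor by forward progress, velocity similarity to the forwarder, and remaining battery power. The weights and the scoring form are assumptions for illustration.

```python
import math

def combined_score(fwd, nb, dest, w=(0.4, 0.3, 0.3)):
    prog = math.dist(fwd["pos"], dest) - math.dist(nb["pos"], dest)
    dv = math.dist(fwd["vel"], nb["vel"])
    vel_sim = 1.0 / (1.0 + dv)             # 1 when velocities match
    return w[0] * max(prog, 0) / 100.0 + w[1] * vel_sim + w[2] * nb["power"]

fwd = {"pos": (0, 0), "vel": (5, 0)}
dest = (100, 0)
neighbors = [
    {"pos": (40, 0), "vel": (30, 10), "power": 0.2},   # closest, but fast and weak
    {"pos": (30, 5), "vel": (6, 1), "power": 0.9},
]
best = max(neighbors, key=lambda nb: combined_score(fwd, nb, dest))
print("next hop:", best["pos"])
```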

A Current-mode Continuous-time Sigma-delta Modulator Based on the Translinear Loop Principle

In this paper, a new approach to the design of a fully differential, second-order, current-mode, continuous-time sigma-delta modulator is presented. For the circuit implementation, a square-root domain (SRD) translinear loop based on floating-gate MOS transistors operating in the saturation region is employed. The modulator features a low supply voltage, low power consumption (8 mW) and a high dynamic range (55 dB). Simulation results confirm that this design is suitable for data converters.
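
A behavioral (discrete-time) sketch of a generic second-order sigma-delta modulator, intended only to illustrate the noise-shaping loop such a circuit realizes; it models none of the current-mode or translinear-loop circuit details, and all signal parameters are assumed.

```python
import numpy as np

def sigma_delta_2nd(x):
    i1 = i2 = 0.0
    bits = np.empty_like(x)
    for n, u in enumerate(x):
        y = 1.0 if i2 >= 0.0 else -1.0      # 1-bit quantizer
        i1 += u - y                         # first integrator with feedback
        i2 += i1 - y                        # second integrator with feedback
        bits[n] = y
    return bits

fs, f0, n = 1.0, 1.0 / 256, 4096
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * f0 * t / fs)   # oversampled test tone
bits = sigma_delta_2nd(x)
# crude decimation: a moving average recovers the tone from the bitstream
rec = np.convolve(bits, np.ones(64) / 64, mode="same")
print("recovery error (RMS):", round(float(np.sqrt(np.mean((rec - x)**2))), 3))
```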

Flow and Heat Transfer Mechanism Analysis in Outward Convex Asymmetrical Corrugated Tubes

The flow and heat transfer mechanisms in outward convex corrugated tubes are investigated through numerical simulations in this paper. Two tube types, the symmetric corrugated tube (SCT) and the asymmetric corrugated tube (ACT), are modeled and studied numerically based on the RST model. The predictive capability of the RST model is first examined for the corrugated wall, in order to check its reliability under this condition: the RST predictions are compared with the existing direct numerical simulation (DNS) of Maaß and Schumann [14], and the pressure coefficients at different profiles from RST and DNS match well. The influence of large corrugation trough radii on the heat transfer and flow characteristics is then considered, and the flow and heat transfer of the SCT and ACT are compared. The numerical results show that the ACT exhibits higher overall heat transfer performance than the SCT.
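
A tiny sketch of the validation step: compare pressure coefficients, Cp = (p - p_ref) / (0.5 rho U^2), from the RST solution against DNS reference data at matching profile locations. All numbers are placeholders; the real data would come from the solver and from Maaß and Schumann [14].

```python
import numpy as np

rho, U, p_ref = 1.2, 10.0, 0.0              # assumed flow conditions
q = 0.5 * rho * U**2                         # dynamic pressure

x = np.linspace(0.0, 1.0, 6)                 # positions along one corrugation
p_rst = np.array([5.0, -12.0, -30.0, -18.0, 2.0, 6.0])   # placeholder RST
p_dns = np.array([4.6, -11.4, -31.2, -17.1, 2.3, 5.7])   # placeholder DNS

cp_rst, cp_dns = (p_rst - p_ref) / q, (p_dns - p_ref) / q
rms_diff = np.sqrt(np.mean((cp_rst - cp_dns) ** 2))
print("RMS Cp difference:", round(float(rms_diff), 4))
```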

Feature Extraction of Dorsal Hand Vein Pattern Using a Fast Modified PCA Algorithm Based on Cholesky Decomposition and Lanczos Technique

The dorsal hand vein pattern is an emerging biometric that has lately been attracting the attention of researchers. Research is being carried out on existing techniques in the hope of improving them or finding more efficient ones. In this work, principal component analysis (PCA), a successful method originally applied to face biometrics, is modified using Cholesky decomposition and the Lanczos algorithm to extract dorsal hand vein features. This modified technique decreases the number of computations and hence the processing time. The eigenveins were successfully computed and projected onto the vein space. The system was tested on a database of 200 images, using a threshold value of 0.9, to obtain the False Acceptance Rate (FAR) and False Rejection Rate (FRR). This modified algorithm is desirable when developing biometric security systems, since it significantly decreases the matching time.
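
A sketch of the general idea (not necessarily the authors' exact scheme): work with the small Gram matrix of the training images and use a Lanczos eigensolver (scipy's eigsh) to obtain only the leading eigenveins cheaply. Random arrays stand in for the vein images.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
images = rng.random((20, 64 * 64))          # 20 vein images, stand-in data
A = images - images.mean(axis=0)            # centered data, rows = images

gram = A @ A.T                               # 20x20 instead of 4096x4096
vals, vecs = eigsh(gram, k=5, which="LM")    # Lanczos: 5 leading eigenpairs
eigenveins = (A.T @ vecs) / np.sqrt(np.maximum(vals, 1e-12))

features = A @ eigenveins                    # project onto the vein space
print(features.shape)                        # (20, 5)
```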

Hand Vein Image Enhancement with a Radon-Like Features Descriptor

Hand vein recognition has recently attracted increasing attention in biometric identification systems. Hand vein images are generally acquired with low contrast and under irregular illumination. Accordingly, with good preprocessing of the hand vein image, the features can be extracted easily, even with simple binarization. In this paper, an approach is proposed to improve the quality of hand vein images. First, a brief survey of existing enhancement methods is given. Then a Radon-like features method is applied to preprocess the hand vein image. Finally, experimental results show that the proposed method is effective and reliable in improving hand vein images.
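
The Radon-like features computation itself is involved; as a point of comparison only, this sketch shows a generic enhancement baseline (contrast stretching plus CLAHE, followed by Otsu binarization) of the kind typically surveyed, not the paper's method. The input image is a random stand-in.

```python
import numpy as np
from skimage import exposure, filters

rng = np.random.default_rng(1)
vein = rng.random((128, 128)) * 0.3 + 0.3   # stand-in low-contrast image

stretched = exposure.rescale_intensity(vein, in_range="image")
enhanced = exposure.equalize_adapthist(stretched, clip_limit=0.02)  # CLAHE
binary = enhanced > filters.threshold_otsu(enhanced)                # simple
print(binary.mean())   # fraction of pixels marked as vein candidates
```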

Evaluation of the Zero Sequence Impedance of Overhead High Voltage Lines

As is known, the guard wires of overhead high voltage lines are usually grounded through the grounding systems of the supports and of the terminal stations. They affect the zero-sequence impedance of the line, Z0, which is generally calculated assuming that the guard wires are at ground potential; this neglects the effect of the earthing resistances of the supports and stations. In this work, a formula for calculating Z0 that takes these resistances into account is derived. A method is also proposed for calculating the zero-sequence impedance of overhead lines in which, in various sections or spans, the guard wires are connected to the supports, isolated from them, or absent. A parametric analysis of 220 kV and 400 kV lines shows the extent of the errors made by traditional calculation methods.
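
A numerical sketch of the classical guard-wire treatment this paper refines: build the 4x4 per-km series impedance matrix (three phases plus guard wire), Kron-reduce the guard wire assuming it is at ground potential, and form Z0. The impedance values below are assumed, order-of-magnitude figures, not results from the paper.

```python
import numpy as np

zs = 0.06 + 0.40j   # ohm/km, phase self impedance (earth return), assumed
zm = 0.05 + 0.20j   # ohm/km, phase-phase mutual impedance, assumed
zgg = 0.30 + 0.75j  # ohm/km, guard wire self impedance, assumed
zpg = 0.05 + 0.18j  # ohm/km, phase-guard mutual impedance, assumed

Z = np.full((4, 4), zm, dtype=complex)
np.fill_diagonal(Z, zs)
Z[3, :3] = Z[:3, 3] = zpg
Z[3, 3] = zgg

Zp = Z[:3, :3] - np.outer(Z[:3, 3], Z[3, :3]) / Z[3, 3]   # Kron reduction
z_s = Zp.diagonal().mean()
z_m = (Zp.sum() - Zp.trace()) / 6
print("Z0 =", z_s + 2 * z_m, "ohm/km")   # zero-sequence impedance
```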

A Novel Metric for Performance Evaluation of Image Fusion Algorithms

In this paper, we present a novel objective, non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factor. Experimental results confirm that the values of the proposed metric correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
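
A sketch of the metric's general structure, not its exact definition: compute the Universal Image Quality Index (UIQI) between each source block and the fused block, and weight by local saliency (block variance here, an assumed choice). Random arrays stand in for the source images A, B and fused image F.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2) + eps)

def fusion_quality(a, b, f, bs=8):
    num = den = 0.0
    for i in range(0, a.shape[0] - bs + 1, bs):
        for j in range(0, a.shape[1] - bs + 1, bs):
            pa, pb, pf = (im[i:i+bs, j:j+bs] for im in (a, b, f))
            wa, wb = pa.var(), pb.var()          # saliency weights
            lam = wa / (wa + wb + 1e-12)
            num += (wa + wb) * (lam * uiqi(pa, pf) + (1 - lam) * uiqi(pb, pf))
            den += wa + wb
    return num / den

rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))
f = 0.5 * (a + b)
print(round(fusion_quality(a, b, f), 3))
```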

Burstiness Reduction of a Doubly Stochastic AR-Modeled Uniform Activity VBR Video

Stochastic modeling of network traffic is an area of significant research activity for current and future broadband communication networks. Multimedia traffic is statistically characterized by a bursty variable bit rate (VBR) profile. In this paper, we develop an improved model for uniform activity level video sources in ATM, using a doubly stochastic autoregressive model driven by an underlying spatial point process. We then examine a number of burstiness metrics, such as the peak-to-average ratio (PAR), the temporal autocovariance function (ACF) and the traffic measurement histogram, and find that the first of these is most suitable for capturing the burstiness of single-scene video traffic. In the last phase of this work, we analyse the statistical multiplexing of several constant-scene video sources. As expected, this proves advantageous in reducing the burstiness of the traffic, as long as the sources are statistically independent. The burstiness diminishes rapidly, with the largest gain occurring when only around five sources are multiplexed. The novel model used in this paper for characterizing uniform activity video is thus found to be accurate.
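
A small sketch of the burstiness measure on simulated traffic: generate N independent AR(1) bit-rate traces (a deliberate simplification of the doubly stochastic model), multiplex them, and watch the peak-to-average ratio (PAR) fall. All trace parameters are assumptions.

```python
import numpy as np

def ar1_trace(n, phi=0.9, mean=1.0, sigma=0.2, rng=None):
    rng = rng or np.random.default_rng()
    x = np.empty(n)
    x[0] = mean
    for t in range(1, n):
        x[t] = mean + phi * (x[t-1] - mean) + rng.normal(0, sigma)
    return np.clip(x, 0, None)

rng = np.random.default_rng(42)
for n_src in (1, 2, 5, 10, 20):
    agg = sum(ar1_trace(5000, rng=rng) for _ in range(n_src))
    par = agg.max() / agg.mean()          # burstiness of the multiplexed flow
    print(f"{n_src:2d} sources: PAR = {par:.2f}")
```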

Reliable Capacitated Facility Location Problem Considering Maximal Covering

This paper provides a framework that simultaneously incorporates reliability, as a response to disruptions in distribution systems, and partial covering theory, as a response to limited coverage radii and economic preferences, into the traditional literature on capacitated facility location problems. We develop a bi-objective, discrete-scenario model for expected cost minimization and demand coverage maximization over a three-echelon supply chain network, providing multiple capacity levels for the provider-side layers and imposing a gradual coverage function on the distribution centers (DCs). In addition to aggregating the objectives and solving the model with the LINGO software, a branch of the LP-metric method called the Min-Max approach is proposed, and different aspects of the corresponding model are explored.
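
A toy sketch of the Min-Max branch of the LP-metric: among candidate solutions with known (cost, coverage) values, pick the one minimizing the largest weighted relative deviation from each objective's ideal value. The candidate values and weights are invented for illustration.

```python
import numpy as np

cost = np.array([120.0, 150.0, 135.0])     # expected cost (minimize)
cover = np.array([0.80, 0.95, 0.90])       # demand coverage (maximize)
w = np.array([0.5, 0.5])                   # assumed objective weights

dev_cost = (cost - cost.min()) / cost.min()        # deviation from ideal cost
dev_cover = (cover.max() - cover) / cover.max()    # deviation from ideal cover
minmax = np.maximum(w[0] * dev_cost, w[1] * dev_cover)
print("compromise solution index:", int(minmax.argmin()))
```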

Experimental Investigation of the Effect of Hydrogen Manifold Injection on the Performance of Compression Ignition Engines

Experiments were carried out to evaluate the influence of adding hydrogen to the inlet air on the performance of a single-cylinder, direct injection diesel engine. Hydrogen was injected into the inlet manifold, and the addition was made on an energy replacement basis. It was found that the addition of hydrogen improves the combustion process, owing to the superior combustion characteristics of hydrogen compared with conventional diesel fuels. It was also found that a 10% energy replacement improves the engine thermal efficiency by about 40% and reduces the specific fuel consumption (sfc) by about 35%; however, the volumetric efficiency is reduced by about 35%.
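
A worked sketch of the "energy replacement basis": for a 10% replacement, find the hydrogen mass flow that carries 10% of the baseline fuel energy. The lower heating values are textbook figures and the baseline flow is assumed; none of these numbers come from the paper.

```python
LHV_DIESEL = 42.5e6   # J/kg, typical diesel lower heating value
LHV_H2 = 120.0e6      # J/kg, hydrogen lower heating value

m_diesel_base = 1.0e-3          # kg/s, assumed baseline diesel flow
x = 0.10                        # energy replacement fraction

m_h2 = x * m_diesel_base * LHV_DIESEL / LHV_H2
m_diesel = (1 - x) * m_diesel_base
print(f"H2 flow: {m_h2*1e3:.3f} g/s, diesel flow: {m_diesel*1e3:.3f} g/s")
```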

Numerical Optimization within Vector of Parameters Estimation in Volatility Models

In this paper, the usefulness of a quasi-Newton iteration procedure for estimating the parameters of the conditional variance equation within the BHHH algorithm is presented. Analytically maximizing the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm over other optimization algorithms is that it requires no third derivatives while assuring convergence. To simplify the optimization procedure, the BHHH algorithm approximates the matrix of second derivatives using the information identity. However, parameter estimation in symmetric and asymmetric GARCH(1,1) models assuming normally distributed returns is not simple, i.e. it is difficult to solve analytically. The maximum of the likelihood function can be found by iterating until no further increase is achieved. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined; the number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated by modeling volatility on a daily basis for the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry) and Ericsson Nikola Tesla (information and communications industry).
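
A compact sketch of BHHH for a zero-mean GARCH(1,1) with normal errors: the Hessian is replaced by the outer product of the per-observation scores, here obtained by finite differences. Simulated returns stand in for the stock data, and the starting values are deliberately chosen near typical estimates, in the spirit of the paper's recommendation.

```python
import numpy as np

def loglik_t(theta, r):
    """Per-observation Gaussian log-likelihood contributions."""
    omega, alpha, beta = theta
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t-1]**2 + beta * h[t-1]
    return -0.5 * (np.log(2*np.pi) + np.log(h) + r**2 / h)

def bhhh(r, theta, steps=50, eps=1e-6, lam=1.0):
    theta = np.asarray(theta, float)
    for _ in range(steps):
        base = loglik_t(theta, r)
        G = np.empty((len(r), 3))
        for i in range(3):                      # numerical scores
            tp = theta.copy(); tp[i] += eps
            G[:, i] = (loglik_t(tp, r) - base) / eps
        step = np.linalg.solve(G.T @ G, G.sum(axis=0))   # OPG direction
        new = theta + lam * step
        if new[0] > 0 and np.all(new[1:] >= 0) and new[1] + new[2] < 1 \
           and loglik_t(new, r).sum() >= base.sum():
            theta = new
        else:
            lam *= 0.5                          # simple step-halving safeguard
        if np.linalg.norm(lam * step) < 1e-8:
            break
    return theta

# simulate GARCH(1,1) returns with known parameters
rng = np.random.default_rng(0)
n = 3000
omega, alpha, beta = 0.05, 0.08, 0.90
r = np.empty(n)
h_prev, r_prev = 1.0, 0.0
for t in range(n):
    h_t = omega + alpha * r_prev**2 + beta * h_prev
    r[t] = np.sqrt(h_t) * rng.standard_normal()
    h_prev, r_prev = h_t, r[t]

print(bhhh(r, theta=(0.1, 0.1, 0.8)))   # estimates near (omega, alpha, beta)
```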