An Efficient Energy Adaptive Hybrid Error Correction Technique for Underwater Wireless Sensor Networks

Variable channel conditions in underwater networks, and variable distances between sensors due to water currents, lead to a variable bit error rate (BER). This variability in BER greatly affects the energy efficiency of the error correction technique used. In this paper, an efficient energy-adaptive hybrid error correction technique (AHECT) is proposed. AHECT adaptively switches from pure retransmission (ARQ) in low-BER conditions to a hybrid technique with variable encoding rates (ARQ & FEC) in high-BER conditions. An adaptation algorithm is proposed that depends on a precalculated packet acceptance rate (PAR) look-up table, the current BER, the packet size and the error correction technique in use. Based on this adaptation algorithm, a periodic 3-bit feedback is added to the acknowledgment packet to indicate which error correction technique suits the current channel conditions and distance. Comparative studies show that AHECT is more energy efficient and has a higher probability of success than the other techniques considered.
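
To make the adaptation step concrete, the following sketch (with entirely hypothetical PAR and energy figures, not the paper's table) shows how a sender could pick the cheapest error correction mode whose precomputed packet acceptance rate still meets a target at the current BER:
```python
# Hypothetical PAR look-up table: (mode, BER bucket) -> packet acceptance rate.
PAR_TABLE = {
    ("ARQ",         1e-5): 0.98, ("ARQ",         1e-4): 0.78, ("ARQ",         1e-3): 0.10,
    ("ARQ+FEC_3/4", 1e-5): 0.99, ("ARQ+FEC_3/4", 1e-4): 0.95, ("ARQ+FEC_3/4", 1e-3): 0.60,
    ("ARQ+FEC_1/2", 1e-5): 0.99, ("ARQ+FEC_1/2", 1e-4): 0.97, ("ARQ+FEC_1/2", 1e-3): 0.90,
}
# Relative energy cost per delivered packet (hypothetical; stronger codes add redundancy).
ENERGY_COST = {"ARQ": 1.0, "ARQ+FEC_3/4": 1.3, "ARQ+FEC_1/2": 1.6}
MODES = ["ARQ", "ARQ+FEC_3/4", "ARQ+FEC_1/2"]  # a 3-bit feedback can index up to 8 modes

def select_mode(ber_bucket, par_target=0.9):
    """Return the cheapest mode whose PAR meets the target at this BER."""
    feasible = [m for m in MODES if PAR_TABLE[(m, ber_bucket)] >= par_target]
    if not feasible:                     # channel too bad: fall back to the strongest code
        return MODES[-1]
    return min(feasible, key=ENERGY_COST.get)

for ber in (1e-5, 1e-4, 1e-3):
    print(ber, "->", select_mode(ber))   # ARQ, ARQ+FEC_3/4, ARQ+FEC_1/2
```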

A Survey on Principal Aspects of Secure Image Transmission

This paper reviews the aspects and approaches involved in designing an image cryptosystem. First, a general introduction to cryptography and image encryption is given, followed by a survey of the different techniques in image encryption and the related work for each technique. Finally, general security analysis methods for encrypted images are presented.

Application of Artificial Intelligence Techniques for Dissolved Gas Analysis of Transformers-A Review

The gases generated in oil-filled transformers can be used for qualitative determination of incipient faults. Dissolved Gas Analysis has been widely used by utilities throughout the world as the primary diagnostic tool for transformer maintenance. In this paper, the various Artificial Intelligence techniques used by researchers in the past are reviewed, some conclusions are drawn and a sequential hybrid system is proposed. The synergy of ANN and FIS can provide reliable fault predictions, because one should not rely on a single technology when dealing with real-life applications.
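
As a purely illustrative rendition of what such a sequential hybrid could look like (the weights, membership function and rule below are invented stand-ins, not the proposed system), an ANN-style scoring stage can feed a fuzzy refinement stage:
```python
# Minimal sketch of a sequential ANN -> FIS hybrid for DGA (all numbers hypothetical).
import math

def ann_score(gas_ppm):
    """Stand-in for a trained ANN: one logistic unit over key fault gases (hypothetical weights)."""
    w = {"H2": 0.004, "CH4": 0.003, "C2H2": 0.02, "C2H4": 0.006}
    z = sum(w[g] * gas_ppm[g] for g in w) - 2.0
    return 1.0 / (1.0 + math.exp(-z))            # crude "fault likelihood"

def fis_refine(score, c2h2_c2h4_ratio):
    """Toy fuzzy stage: refine the ANN output with a single ratio-based rule."""
    high = min(1.0, max(0.0, (c2h2_c2h4_ratio - 0.5) / 2.5))  # membership of 'ratio is high'
    arcing = min(score, high)                     # IF likelihood high AND ratio high THEN arcing
    return "arcing suspected" if arcing > 0.5 else "thermal/normal, inspect further"

sample = {"H2": 300, "CH4": 120, "C2H2": 60, "C2H4": 80}   # ppm, made up
s = ann_score(sample)
print(round(s, 2), fis_refine(s, sample["C2H2"] / sample["C2H4"]))
```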

A Systematic Review for the Latest Development in Requirement Engineering

Requirement engineering has been the subject of a large volume of research due to the significant role it plays in the software development life cycle. However, the software industry evolves much faster than requirements engineering approaches advance. Therefore, this paper aims to systematically review and evaluate current research in requirement engineering and to identify new research trends and directions in this field. In addition, the various research methods associated with evaluation-based techniques and empirical studies in the requirements engineering field are highlighted. Finally, challenges and recommendations on future research directions are presented, based on the research team's observations during this study.

Agent-based Framework for Energy Efficiency in Wireless Sensor Networks

Wireless sensor networks consist of hundreds or thousands of small sensors with limited resources, which makes energy efficiency a central concern. This paper proposes an energy-efficient agent-based framework for wireless sensor networks that adopts biologically inspired approaches. Each agent operates autonomously, with its behavior policies encoded as a gene. Each agent selects a next-hop node using neighbor information and its behavior policies; agents aggregate with other agents to reduce communication and give high priority to nodes that have enough energy to communicate. The behavior policies are optimized by genetic operations at the base station. Simulation results show that the proposed framework increases the lifetime of each node and provides self-healing, self-configuration and self-optimization properties to sensor nodes.
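
The sketch below illustrates the general idea under simple assumptions (the gene layout, scoring rule and fitness function are invented for illustration): a two-weight gene drives next-hop selection, and the base station evolves the gene pool with averaging crossover and Gaussian mutation:
```python
import random
random.seed(1)

def select_next_hop(neighbors, gene):
    """Score neighbors by residual energy vs. distance using the gene's weights."""
    w_energy, w_dist = gene
    return max(neighbors, key=lambda n: w_energy * n["energy"] - w_dist * n["dist"])

def genetic_step(population, fitness):
    """One generation at the base station: keep the fitter half, then crossover + mutate."""
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(ranked) // 2]
    children = []
    while len(parents) + len(children) < len(population):
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover: average the weights
        child = [w + random.gauss(0, 0.05) for w in child]   # mutation
        children.append(child)
    return parents + children

neighbors = [{"id": 1, "energy": 0.9, "dist": 4.0}, {"id": 2, "energy": 0.4, "dist": 1.0}]
pop = [[random.random(), random.random()] for _ in range(6)]
fitness = lambda g: select_next_hop(neighbors, g)["energy"]  # toy fitness: chosen hop's energy
pop = genetic_step(pop, fitness)
print(select_next_hop(neighbors, pop[0]))
```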

Analytical and Finite Element Analysis of Hydroforming Deep Drawing Process

This paper gives an overview of a deep drawing process in which a pressurized liquid medium, separated from the sheet by a rubber diaphragm, forms the part. Hydroforming deep drawing of sheet metal parts provides a number of advantages over conventional techniques: it generally increases the achievable depth-to-diameter ratio in cup drawing and minimizes the thickness variation of the drawn cup. To explore the deformation mechanism, analytical and numerical simulations are used to analyze the drawing process of an AA6061-T4 blank. The effects of key process parameters such as the coefficient of friction, the initial thickness of the blank and the radius between the cup wall and flange are investigated analytically and numerically. The simulated results were in good agreement with the analytical model. According to the finite element simulations, the hydroforming deep drawing method provides a more uniform thickness distribution than conventional deep drawing and decreases the risk of tearing during the process.

Development of New Control Techniques for Vibration Isolation of Structures using Smart Materials

In this paper, the effects of a restoring force device on the response of a space frame structure resting on sliding bearings are studied. The NS component of the El-Centro earthquake and harmonic ground acceleration are considered as earthquake excitation. The structure is modeled with six degrees of freedom (three translations and three rotations) at each node. The sliding support is modeled as a fictitious spring with two horizontal degrees of freedom. The response quantities considered are the top floor acceleration, base shear, bending moment and base displacement. It is concluded from the study that the restoring force device reduces the displacement of the structure, and that the peak values of acceleration, bending moment and base shear also decrease. The simulation results show the effectiveness of the proposed method.

Concepts for Designing Low Power Wireless Sensor Network

Wireless sensor networks are used in a wide range of applications and have become an attractive research area in recent years. Because of the limited energy storage capability of sensor nodes, energy consumption is one of the most challenging aspects of these networks, and different strategies and protocols address it. This paper presents general methods for designing low-power wireless sensor networks. Different sources of energy consumption in these networks are discussed, and techniques for reducing energy consumption are presented.
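
A back-of-the-envelope sketch of the duty-cycling arithmetic such design methods rest on (all currents and the battery capacity below are hypothetical):
```python
# Estimate average current draw and node lifetime for different duty cycles.
battery_mah = 2400.0
i_active_ma, i_sleep_ma = 20.0, 0.005      # radio on vs. deep sleep (hypothetical)
for duty in (1.0, 0.01, 0.001):            # fraction of time the node is active
    i_avg = duty * i_active_ma + (1 - duty) * i_sleep_ma
    print(f"duty {duty:.3f}: avg {i_avg:.4f} mA, lifetime {battery_mah / i_avg / 24:.0f} days")
```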

Predictive Clustering Hybrid Regression(pCHR) Approach and Its Application to Sucrose-Based Biohydrogen Production

A predictive clustering hybrid regression (pCHR) approach was developed and evaluated using a dataset from an H2-producing sucrose-based bioreactor operated for 15 months. The aim was to model and predict the H2 production rate using the information available about the envirome and metabolome of the bioprocess. Self-organizing maps (SOM) and a Sammon map were used to visualize the dataset and to identify the main metabolic patterns and clusters in the bioprocess data. Three metabolic clusters were detected: acetate coupled with other metabolites, butyrate only, and transition phases. The developed pCHR model combines the principles of k-means clustering, kNN classification and regression techniques. The model performed well in modeling and predicting the H2 production rate, with mean square error values of 0.0014 and 0.0032, respectively.
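
A minimal sketch of the pCHR pipeline on synthetic data (not the bioreactor dataset) might look as follows: k-means partitions the training set, a kNN classifier assigns new samples to a cluster, and each cluster keeps its own regression model:
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                    # stand-in for envirome/metabolome features
y = np.where(X[:, 0] > 0, 2.0, -1.0) * X[:, 1] + rng.normal(0, 0.1, 300)  # regime-dependent target

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, km.labels_)   # cluster-assignment step
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(3)}

def predict(x):
    c = knn.predict(x.reshape(1, -1))[0]          # 1) find the metabolic cluster
    return models[c].predict(x.reshape(1, -1))[0] # 2) apply that cluster's regression

print(predict(rng.normal(size=4)))
```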

Computation of Probability Coefficients using Binary Decision Diagram and their Application in Test Vector Generation

This paper deals with the efficient computation of probability coefficients, which offer computational simplicity compared to spectral coefficients and eliminate the need for inner product evaluations when determining the signature of a combinational circuit realizing a given Boolean function. Methods for computing probability coefficients using a transform matrix, a fast transform and a BDD are given. Theoretical relations are developed for the achievable computational advantage, in terms of the additions required to compute all 2^n probability coefficients of an n-variable function. It is shown that for n ≥ 5, only 50% of the additions are needed to compute all probability coefficients compared to spectral coefficients. Fault detection techniques based on spectral signatures can also be used with probability signatures to gain this computational advantage.
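
The BDD route can be illustrated with the standard signal probability recursion: for independent, uniformly random inputs, each BDD node's one-probability is the average of its children's, so a single bottom-up pass avoids inner product evaluations. The hand-built diagram below is a toy example, not the paper's construction:
```python
# BDD nodes as tuples (var, low, high); terminals are the constants 0 and 1.
# Example function: f = x1 AND (x2 OR x3), with variable order x1 < x2 < x3.
N3 = ("x3", 0, 1)        # subgraph for f = x3
N2 = ("x2", N3, 1)       # subgraph for f = x2 OR x3
BDD = ("x1", 0, N2)      # f = x1 AND (x2 OR x3)

def prob_one(node):
    """P(f = 1) for independent inputs with P(x = 1) = 0.5, one visit per node."""
    if node in (0, 1):
        return float(node)
    _, low, high = node
    return 0.5 * prob_one(low) + 0.5 * prob_one(high)

print(prob_one(BDD))     # 0.375 = P(x1) * P(x2 OR x3) = 1/2 * 3/4
```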

A Study on Remote On-Line Diagnostic System for Vehicles by Integrating the Technology of OBD, GPS, and 3G

This paper presents a remote on-line diagnostic system for vehicles that integrates On-Board Diagnostics (OBD), GPS and 3G techniques. The main parts of the proposed system are an on-board computer, a vehicle monitor server and a vehicle status browser. First, the on-board computer obtains the driver's location and the vehicle status from the GPS receiver and the OBD interface, respectively. The on-board computer then connects to the vehicle monitor server over the 3G network to transmit the real-time vehicle status. Finally, the vehicle status browser shows the remote vehicle status, including vehicle speed, engine rpm, battery voltage, engine coolant temperature and diagnostic trouble codes. The experimental results show that the proposed system helps fleet managers and mechanics understand the remote vehicle status, and it can reduce the time spent on fleet management and vehicle repair because they can find the diagnostic trouble messages in time.
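
Purely as an illustration of the data flow (field names, values and the endpoint are hypothetical, not the paper's implementation), the on-board computer's role could be sketched as assembling OBD and GPS readings into a status record and pushing it to the monitor server:
```python
import json, socket, time

def build_status(obd, gps):
    """Assemble one vehicle status record from OBD and GPS readings."""
    return {
        "ts": int(time.time()),
        "lat": gps["lat"], "lon": gps["lon"],
        "speed_kph": obd["speed_kph"],
        "rpm": obd["rpm"],
        "battery_v": obd["battery_v"],
        "coolant_c": obd["coolant_c"],
        "dtc": obd["dtc"],               # diagnostic trouble codes, e.g. ["P0301"]
    }

def push(status, host="monitor.example.com", port=9000):
    """Send one record to the vehicle monitor server (stand-in for the 3G uplink)."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall((json.dumps(status) + "\n").encode())

status = build_status(
    obd={"speed_kph": 62, "rpm": 2100, "battery_v": 13.8, "coolant_c": 88, "dtc": []},
    gps={"lat": 24.99, "lon": 121.30},
)
print(json.dumps(status))   # push(status) would transmit it to the server
```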

A K-Means Based Clustering Approach for Finding Faulty Modules in Open Source Software Systems

Prediction of fault-prone modules provides one way to support software quality engineering. Clustering is used to determine the intrinsic grouping in a set of unlabeled data, and among the various clustering techniques in the literature, the K-Means approach is the most widely used. This paper introduces a K-Means based clustering approach for finding the fault proneness of object-oriented software systems. The contribution of this paper is that metric values of the JEdit open source software are used to generate rules for categorizing software modules as faulty or non-faulty, and the rules are then empirically validated. The results are measured in terms of accuracy of prediction, probability of detection and probability of false alarm.
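
A minimal sketch of the cluster-then-label idea on synthetic module metrics (the data, the cutoff and the choice of metrics are invented; the paper uses JEdit metric values):
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
clean = rng.normal([10, 50], [3, 15], size=(80, 2))     # low complexity / LOC (synthetic)
faulty = rng.normal([30, 200], [5, 40], size=(20, 2))   # high complexity / LOC (synthetic)
X = np.vstack([clean, faulty])
truth = np.array([0] * 80 + [1] * 20)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
cluster_is_faulty = km.cluster_centers_[:, 0] > 20       # cutoff on the complexity metric
pred = cluster_is_faulty[km.labels_].astype(int)

tp = np.sum((pred == 1) & (truth == 1)); fp = np.sum((pred == 1) & (truth == 0))
fn = np.sum((pred == 0) & (truth == 1)); tn = np.sum((pred == 0) & (truth == 0))
print("accuracy", (tp + tn) / len(truth), "PD", tp / (tp + fn), "PF", fp / (fp + tn))
```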

Three Computational Mathematics Techniques: Comparative Determination of Area under Curve

The objective of this manuscript is to find the area under the plasma concentration-time curve (AUC) for multiple doses of salbutamol sulphate sustained release tablets (Ventolin® oral tablets SR 8 mg, GSK, Pakistan) in a group of 18 healthy adults by using computational mathematics techniques. Following the administration of 4 doses of Ventolin® tablets 12 hourly to 24 healthy human subjects and bioanalysis of the obtained plasma samples, the plasma drug concentration-time profile was constructed. AUC, an important pharmacokinetic parameter, was measured using the integrated equation for multiple oral dose regimens. The approximated AUC was also calculated using computational mathematics techniques, namely the repeated rectangular, repeated trapezium and repeated Simpson's rules, and compared with the exact value of AUC calculated from the integrated equation, to find which computational method gives AUC values closest to the exact ones. The exact AUC values for the four consecutive doses of Ventolin® oral tablets were 150.5819473, 157.8131756, 164.4178231 and 162.78 ng.h/ml, while the closest approximated AUC values were 149.245962, 157.336171, 164.2585768 and 162.289224 ng.h/ml, respectively, as found by the repeated rectangular rule. The errors in the approximated values of AUC were negligible. It is concluded that all the computational tools approximated the AUC accurately, but the repeated rectangular rule gives slightly better approximations than the repeated trapezium and repeated Simpson's rules.
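
The three quadrature rules compared in the study are easy to state in code; the sketch below applies them to a toy concentration-time profile, not the study's data:
```python
import math

def rect(t, c):      # repeated rectangular rule (left endpoints)
    return sum(c[i] * (t[i + 1] - t[i]) for i in range(len(t) - 1))

def trap(t, c):      # repeated trapezium rule
    return sum((c[i] + c[i + 1]) / 2 * (t[i + 1] - t[i]) for i in range(len(t) - 1))

def simpson(t, c):   # repeated Simpson's rule (needs an even number of equal steps)
    h = t[1] - t[0]
    n = len(t) - 1
    return h / 3 * (c[0] + c[-1]
                    + 4 * sum(c[i] for i in range(1, n, 2))
                    + 2 * sum(c[i] for i in range(2, n, 2)))

t = [i * 1.5 for i in range(9)]                                 # 0..12 h, equal 1.5 h steps
c = [10 * (math.exp(-0.1 * x) - math.exp(-0.9 * x)) for x in t] # toy one-compartment curve
print(rect(t, c), trap(t, c), simpson(t, c))                    # three AUC estimates (ng.h/ml)
```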

Langmuir–Blodgett Films of Polyaniline for Efficient Detection of Uric Acid

Langmuir–Blodgett (LB) films of polyaniline (PANI) grown on ITO-coated glass substrates were utilized to fabricate a uric acid biosensor by immobilizing uricase via EDC–NHS coupling. The modified electrodes were characterized by atomic force microscopy (AFM). The response characteristics after immobilization of uricase were studied using cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). The uricase/PANI/ITO/glass bioelectrode studied by CV and EIS detected uric acid over a wide range of 0.05 mM to 1.0 mM, covering the physiological range in blood. A low Michaelis–Menten constant (Km) of 0.21 mM indicates the high affinity of the immobilized uricase for its analyte (uric acid). The fabricated uric acid biosensor based on PANI LB films exhibits an excellent sensitivity of 0.21 mA/mM with a response time of 4 s, good reproducibility, a long shelf life (8 weeks) and high selectivity.

Support Vector Machine based Intelligent Watermark Decoding for Anticipated Attack

In this paper, we present an innovative scheme for blindly extracting message bits from an image distorted by an attack. A Support Vector Machine (SVM) is used to nonlinearly classify the bits of the embedded message. Traditionally, a hard decoder is used under the assumption that the underlying model of the Discrete Cosine Transform (DCT) coefficients does not change appreciably. In the case of an attack, however, the distribution of the image coefficients is heavily altered: the distributions of the sufficient statistics corresponding to the antipodal signals overlap at the receiving end, and a simple hard decoder fails to classify them properly. We therefore treat message retrieval of the antipodal signal as a binary classification problem and use machine learning techniques such as SVM to retrieve the message when a certain specific class of attacks is most probable. To validate the SVM-based decoding scheme, we take Gaussian noise as a test case and generate a data set using 125 images and 25 different keys. The polynomial kernel of the SVM achieved 100 percent accuracy on the test data.
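
A small synthetic rendition of the experiment (dimensions, embedding strength and noise level are illustrative, not the paper's setup): antipodal message bits perturb stand-in "DCT coefficients", Gaussian noise plays the attack, and a polynomial-kernel SVM learns to classify the bits:
```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d, alpha = 2000, 8, 0.8                     # samples, coeffs per bit, embedding strength
bits = rng.integers(0, 2, n)
carrier = rng.choice([-1.0, 1.0], size=d)      # key-dependent spreading pattern
X = rng.normal(0, 1, (n, d))                   # host DCT coefficients (synthetic)
X += np.outer(2 * bits - 1, alpha * carrier)   # antipodal embedding: +/- alpha * carrier
X += rng.normal(0, 0.5, (n, d))                # Gaussian "attack"

clf = SVC(kernel="poly", degree=3).fit(X[:1500], bits[:1500])
print("test accuracy:", clf.score(X[1500:], bits[1500:]))
```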

Forward Kinematics Analysis of a 3-PRS Parallel Manipulator

In this article, the homotopy continuation method (HCM) is used to solve the forward kinematic problem of the 3-PRS parallel manipulator. Since there are many difficulties in solving the system of nonlinear equations arising in manipulator kinematics, numerical methods such as Newton-Raphson are inevitably used. Any numerical solution faces two troublesome problems: good initial guesses are not easy to find, and the method may not converge to useful solutions. The results of this paper reveal that the homotopy continuation method can alleviate these drawbacks of traditional numerical techniques.
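
The following sketch shows the Newton-homotopy idea on a toy 2x2 system rather than the 3-PRS kinematic equations: H(x, t) = F(x) - (1 - t) F(x0) is tracked from t = 0, where the arbitrary start point x0 is trivially a root, to t = 1, with Newton corrections at each increment:
```python
import numpy as np

def F(x):
    """Toy nonlinear system standing in for the kinematic equations."""
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])

def J(x):
    """Jacobian of F."""
    return np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])

x = np.array([2.0, 1.0])                 # arbitrary start point x0 (no good guess needed)
Fx0 = F(x)
for t in np.linspace(0.0, 1.0, 51)[1:]:  # march the homotopy parameter
    target = (1.0 - t) * Fx0             # at step t, solve F(x) = (1 - t) * F(x0)
    for _ in range(5):                   # Newton correction at this t
        x = x - np.linalg.solve(J(x), F(x) - target)

print(x, F(x))                           # a root of F reached at t = 1
```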

Making Businesses Work Smarter with Mobile Business Intelligence

In this paper we outline how mobile Business Intelligence (m-BI) can help businesses work smarter and improve their agility. Analyzing the industry from the usage perspective, i.e. how interaction with the enterprise BI system happens via mobile devices, shows that there are two major types of mobile BI: passive and active. Active mobile BI lets users interact with the BI systems on the fly, and it often works as a combination of both "push" and "pull" techniques. We also discuss rather broadly the mistakes made so far in the progress of mobile technologies and mobile BI, as well as some problems that still have to be resolved.

Optimal Risk Reduction in the Railway Industry by Using Dynamic Programming

This paper suggests, for the first time, the use of dynamic programming techniques for optimal risk reduction in the railway industry. It is shown that by using the concept of the 'amount of removed risk by a risk reduction option', the problem of optimally allocating a fixed budget to achieve maximum risk reduction in the railway industry can be reduced to a dynamic programming optimisation problem. For n risk reduction options and an available risk reduction budget B (expressed as an integer), the worst-case running time of the proposed algorithm is O(n × (B+1)), which makes the proposed method a very efficient tool for solving the optimal risk reduction problem in the railway industry.
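
The reduction is the classic 0/1 knapsack recurrence; a sketch with hypothetical costs and removed-risk amounts:
```python
def max_removed_risk(options, budget):
    """options: list of (cost, removed_risk); O(n * (B + 1)) time, O(B + 1) space."""
    best = [0.0] * (budget + 1)
    for cost, risk in options:
        for b in range(budget, cost - 1, -1):  # iterate downwards so each option is used once
            best[b] = max(best[b], best[b - cost] + risk)
    return best[budget]

options = [(3, 40.0), (5, 65.0), (2, 20.0), (4, 45.0)]  # (cost, removed risk), hypothetical
print(max_removed_risk(options, budget=7))              # -> 85.0
```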

Magnesium Waste Evaluation in Moderate Temperature (70oC) Magnesium Borate Synthesis

Waste is becoming a growing problem all over the world. Magnesium wastes, which can be used in recycling processes, are produced by many industrial activities. Magnesium borates have useful properties such as high heat resistance, corrosion resistance, supermechanical strength, superinsulation, light weight and a high coefficient of elasticity. In addition, magnesium borates have great potential in the ceramic and detergent industries, in whisker-reinforced composites, and as antiwear and friction-reducing additives. In this study, using waste magnesium and H3BO3 as the starting materials, the hydrothermal method was applied at a moderate temperature of 70°C with different reaction times of 30, 60, 120 and 240 minutes. After the synthesis, X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) techniques were applied to the products. As a result, Admontite [MgO(B2O3)3·7(H2O)] and Mcallisterite [Mg2(B6O7(OH)6)2·9(H2O)] were synthesized.

The Use of S Curves in Technology Forecasting and its Application On 3D TV Technology

S-curves are commonly used in technology forecasting. They show the path of product performance in relation to time or investment in R&D, and they are a useful tool for describing the inflection points and the improvement limit of a technology. Companies use this information as a basis for their innovation strategies. However, inadequate use and some limitations of this technique lead to problems in decision making. In this paper, technology forecasting and its importance for company-level strategies are first discussed. Secondly, the S-curve and its place among other forecasting techniques are introduced. Thirdly, its use in technology forecasting is discussed in terms of its advantages, disadvantages and limitations. Finally, an application of the S-curve to 3D TV technology using patent data is presented and the results are discussed.
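
As a sketch of the underlying fitting step (with made-up cumulative patent counts, not the paper's 3D TV data), a logistic S-curve can be fitted to estimate the improvement limit and the inflection point:
```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(t, L, k, t0):
    """Logistic: L is the limit, k the growth rate, t0 the inflection point."""
    return L / (1.0 + np.exp(-k * (t - t0)))

years = np.arange(2000, 2013)
patents = np.array([5, 9, 16, 30, 55, 95, 150, 220, 300, 370, 420, 450, 465])  # cumulative, synthetic

(L, k, t0), _ = curve_fit(s_curve, years, patents, p0=[500, 0.5, 2007])
print(f"limit ~{L:.0f} patents, inflection year ~{t0:.1f}")
```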