A New Recognition Scheme for Machine-Printed Arabic Texts Based on Neural Networks

This paper presents a new approach to the problem of recognizing machine-printed Arabic text. Because of the difficulty of recognizing cursive Arabic words, the text must be normalized and segmented before the recognition stage. The proposed scheme for recognizing Arabic characters relies on a classifier composed of multiple parallel neural networks and operates in two phases. The first phase categorizes the input character into one of eight groups; the second phase classifies the character into one of the Arabic character classes within that group. The system achieves a high recognition rate.
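
A minimal sketch of such a two-phase arrangement, assuming a generic feature vector and hypothetical placeholder networks (the paper's actual network topologies and training are not specified here):

```python
# Two-phase routing sketch: a "group" network picks one of eight groups,
# then a group-specific network picks the character class within that group.
# Weights here are random placeholders; in practice each network is trained.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """Single-hidden-layer forward pass with a softmax output."""
    h = np.tanh(x @ w1 + b1)
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

FEATS, HIDDEN, GROUPS = 64, 32, 8
CLASSES_PER_GROUP = 16  # illustrative; real group sizes differ

# Phase-1 network: feature vector -> one of eight groups
g_params = (rng.normal(size=(FEATS, HIDDEN)), np.zeros(HIDDEN),
            rng.normal(size=(HIDDEN, GROUPS)), np.zeros(GROUPS))

# Phase-2 networks: one per group, feature vector -> class within the group
c_params = [(rng.normal(size=(FEATS, HIDDEN)), np.zeros(HIDDEN),
             rng.normal(size=(HIDDEN, CLASSES_PER_GROUP)), np.zeros(CLASSES_PER_GROUP))
            for _ in range(GROUPS)]

def classify(features):
    group = int(np.argmax(mlp_forward(features, *g_params)))
    cls = int(np.argmax(mlp_forward(features, *c_params[group])))
    return group, cls

print(classify(rng.normal(size=FEATS)))
```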

An Enhanced Artificial Neural Network for Air Temperature Prediction

The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. An improved model for temperature prediction in Georgia was developed by including information on seasonality and modifying parameters of an existing artificial neural network model. Alternative models were compared by instantiating and training multiple networks for each model. The inclusion of up to 24 hours of prior weather information and inputs reflecting the day of year were among improvements that reduced average four-hour prediction error by 0.18°C compared to the prior model. Results strongly suggest model developers should instantiate and train multiple networks with different initial weights to establish appropriate model parameters.
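
The recommendation to instantiate and train multiple networks with different initial weights can be illustrated with a short sketch; synthetic data and scikit-learn's MLPRegressor stand in for the paper's inputs and network:

```python
# Sketch: evaluate one model configuration by training several networks that
# differ only in their random initial weights, then report the error spread.
# Synthetic data stands in for the prior-weather / day-of-year inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 26))          # e.g. 24 h of prior observations + 2 day-of-year terms
y = X[:, :24].mean(axis=1) + 0.3 * np.sin(X[:, 24])  # toy target
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

errors = []
for seed in range(10):                    # ten instantiations of the same architecture
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=seed)
    net.fit(X_train, y_train)
    errors.append(mean_absolute_error(y_test, net.predict(X_test)))

print(f"MAE over seeds: mean={np.mean(errors):.3f}, std={np.std(errors):.3f}")
```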

Layered Multiple Description Coding For Robust Video Transmission Over Wireless Ad-Hoc Networks

This paper presents a video transmission system using layered multiple description coding (MDC) and multi-path transport for reliable video communications in wireless ad-hoc networks. The proposed MDC extends a quality-scalable H.264/AVC video coding algorithm to generate two independent descriptions. The two descriptions are transmitted over different paths to a receiver in order to alleviate the effect of the unstable channel conditions of wireless ad-hoc networks. If one description is lost due to transmission errors, the correctly received description is used to estimate the lost information of the corrupted description. The proposed MD coder maintains adequate video quality as long as both descriptions are not lost simultaneously. Simulation results show that the proposed MD coding combined with multi-path transport is largely immune to packet losses and can therefore be a promising solution for robust video communications over wireless ad-hoc networks.
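
As a much-simplified illustration of the MDC principle only (temporal odd/even splitting with interpolation, not the paper's quality-scalable H.264/AVC extension), the sketch below generates two descriptions and estimates a lost one from the other:

```python
# Simplified MDC illustration: split a frame sequence into odd/even descriptions
# and reconstruct a lost description by temporal interpolation from the other.
# This is NOT the paper's quality-scalable H.264/AVC-based coder.
import numpy as np

frames = [np.full((4, 4), t, dtype=float) for t in range(10)]  # toy "video"

desc_even = frames[0::2]   # description 1
desc_odd  = frames[1::2]   # description 2, sent over a different path

def reconstruct(received_even, received_odd):
    """If one description is lost, estimate its frames from the other."""
    n = len(frames)
    out = [None] * n
    if received_even is not None:
        out[0::2] = received_even
    if received_odd is not None:
        out[1::2] = received_odd
    for t in range(n):                      # fill gaps by neighbour interpolation
        if out[t] is None:
            prev = out[t - 1] if t > 0 and out[t - 1] is not None else None
            nxt  = out[t + 1] if t + 1 < n and out[t + 1] is not None else None
            if prev is not None and nxt is not None:
                out[t] = 0.5 * (prev + nxt)
            else:
                out[t] = prev if prev is not None else nxt
    return out

rec = reconstruct(desc_even, None)          # description 2 lost in transit
print([float(f.mean()) for f in rec])
```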

GridNtru: High Performance PKCS

Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. It is clear that information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form part of the solution. The efficiency of a public key cryptosystem is mainly measured in computational overhead, key size, and bandwidth. In particular, the RSA algorithm is used in many applications for providing security. Although the security of RSA is beyond doubt, the evolution in computing power has caused a growth in the necessary key length. The fact that most smart-card chips cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials. This allows NTRU to achieve high speeds with minimal computing power. NTRU (Nth degree Truncated Polynomial Ring Unit) is the first secure public key cryptosystem not based on the factorization or discrete logarithm problems. This means that, given sufficient computational resources and time, an adversary should not be able to break the key. Multi-party communication and the requirement of optimal resource utilization drive the present-day demand for applications that enforce security and can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) and enterprise grids have proven to be one approach to building high-end computing systems, and by utilizing them one can improve the performance of NTRU through parallel execution. In this paper we propose and develop an application for NTRU using the enterprise grid middleware Alchemi. An analysis and comparison of its performance for various text files is presented.
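
NTRU's arithmetic rests on multiplying polynomials with small coefficients in the truncated ring Z_q[x]/(x^N - 1). A minimal sketch of that core "star" operation (toy parameters, not a secure parameter set, and not the grid-parallelized implementation described above) is:

```python
# Core NTRU operation: cyclic (star) multiplication of two polynomials in
# Z_q[x]/(x^N - 1), represented as coefficient lists of length N.
# Toy parameters only; real NTRU parameter sets are much larger.
import numpy as np

N, q = 11, 32

def star_multiply(a, b, n=N, mod=q):
    """Return c = a * b mod (x^n - 1), coefficients reduced mod `mod`."""
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c % mod

rng = np.random.default_rng(2)
f = rng.integers(-1, 2, size=N)   # small "ternary" polynomial, coefficients in {-1, 0, 1}
g = rng.integers(-1, 2, size=N)
print(star_multiply(f, g))
```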

Admission Control Approaches in the IMS Presence Service

In this research, we propose a weighted class-based queuing (WCBQ) mechanism to provide class differentiation and to reduce the load on the IMS (IP Multimedia Subsystem) presence server (PS). The tasks of the admission controller for the PS are demonstrated. Analysis and simulation models are developed to quantify the performance of the WCBQ scheme. An optimized dropping time frame has been developed, based on which some of the pre-existing messages are dropped from the PS buffer. Cost functions are developed, and a simulation comparison has been performed with the FCFS (First Come First Served) scheme. The results show that the PS benefits significantly from the proposed queuing and dropping algorithm (WCBQ) during heavy traffic.
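
A generic sketch of the underlying idea, weighted class-based queuing with age-based dropping, is given below; the paper's actual WCBQ weights, cost functions, and optimized dropping time frame are not reproduced, so the constants here are purely illustrative:

```python
# Generic sketch of weighted class-based queuing with age-based dropping.
# CLASS_WEIGHTS and T_DROP are illustrative, not the paper's WCBQ parameters.
import collections, random

CLASS_WEIGHTS = {0: 4, 1: 2, 2: 1}   # higher weight -> served more often
T_DROP = 5.0                          # illustrative dropping time frame (seconds)

queues = {c: collections.deque() for c in CLASS_WEIGHTS}

def enqueue(msg_class, arrival_time, payload):
    queues[msg_class].append((arrival_time, payload))

def drop_stale(now):
    """Drop messages that have waited longer than the dropping time frame."""
    for q in queues.values():
        while q and now - q[0][0] > T_DROP:
            q.popleft()

def serve(now):
    """Weighted round-robin: serve each class proportionally to its weight."""
    drop_stale(now)
    for c, w in sorted(CLASS_WEIGHTS.items(), key=lambda kv: -kv[1]):
        for _ in range(w):
            if queues[c]:
                yield queues[c].popleft()

# toy usage
for i in range(20):
    enqueue(random.choice(list(CLASS_WEIGHTS)), arrival_time=i * 0.5, payload=f"msg{i}")
served = list(serve(now=10.0))
print(len(served), "messages served after stale drops")
```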

Evaluation of Multilevel Modulation Formats for 100 Gbps Transmission with Direct Detection

This paper evaluates multilevel modulation formats based on different techniques, such as multilevel amplitude shift keying (M-ASK), M-ASK combined with differential phase shift keying (M-ASK-Bipolar), Quaternary Amplitude Shift Keying (QASK), and Quaternary Polarization ASK (QPol-ASK), at a total bit rate of 107 Gbps. The aim is to find a cost-effective very high speed transport solution. The numerical investigation was performed using Monte Carlo simulations. The obtained results indicate that some modulation formats can be operated at 100 Gbps in optical communication systems with low implementation effort and high spectral efficiency.
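
A toy example of the Monte Carlo approach for a multilevel amplitude format is sketched below; it uses a plain additive-Gaussian-noise channel with threshold detection as a stand-in for the paper's optical system model:

```python
# Toy Monte Carlo sketch: symbol error rate of 4-level ASK with threshold
# (direct) detection over an additive-Gaussian-noise channel. This is a
# simplified stand-in for the paper's optical-system simulations.
import numpy as np

rng = np.random.default_rng(3)
levels = np.array([0.0, 1.0, 2.0, 3.0])     # 4-ASK amplitude levels
n_symbols, sigma = 100_000, 0.25

tx = rng.integers(0, 4, size=n_symbols)
rx = levels[tx] + rng.normal(scale=sigma, size=n_symbols)

# Threshold detection: decide the nearest nominal level.
decided = np.clip(np.rint(rx), 0, 3).astype(int)
ser = np.mean(decided != tx)
print(f"Symbol error rate ~ {ser:.4f}")
```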

Regional Stability Analysis of Rotor-Ball Bearing and Rotor-Roller Bearing Systems Considering Switching Phenomena

In this study, the regional stability of a rotor system supported on rolling bearings with radial clearance is examined. The rotor is assumed to be rigid. Due to the radial clearance of the bearings and the dynamic configuration of the system, each rolling element of the bearings may either be in contact with both races (under compression) or lose contact. As a result, this change in the system dynamics makes it a switching system, which is a type of hybrid system. By adopting the Multiple Lyapunov Function theorem and using the Hamiltonian function as a candidate Lyapunov function, the stability of the system is studied. The purpose of this study is to inspect the regional stability of rotor-roller bearing and rotor-ball bearing systems.
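
For reference, a compact statement of the Multiple Lyapunov Function condition in the switched-system setting described above (standard form; the paper's specific rotor-bearing Hamiltonian and clearance model are not reproduced here):

```latex
% Switched (hybrid) model of the rotor-bearing system:
\[
  \dot{x} = f_{\sigma(t)}(x), \qquad \sigma(t) \in \{1,\dots,m\},
\]
% where each mode corresponds to one contact / loss-of-contact configuration
% of the rolling elements. With candidate functions $V_i(x)$ (here the
% Hamiltonian, $V_i = T + U$), stability follows if, for every mode $i$,
\[
  \dot{V}_i\bigl(x(t)\bigr) \le 0 \quad \text{while mode } i \text{ is active},
\]
% and the values of $V_i$ at the successive instants $t_{i,k}$ at which mode $i$
% is (re-)activated form a non-increasing sequence:
\[
  V_i\bigl(x(t_{i,k+1})\bigr) \le V_i\bigl(x(t_{i,k})\bigr).
\]
```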

Traffic Load Based Performance Analysis of DSR and STAR Routing Protocols

A wireless ad-hoc network is composed of wireless nodes that can move freely and are connected among themselves without central infrastructure. Due to the limited transmission range of wireless interfaces, in most cases communication has to be relayed over intermediate nodes. Thus, in such a multihop network, each node (also acting as a router) is independent, self-reliant, and capable of routing messages over the dynamic network topology. Various protocols have been reported in this field, and it is very difficult to decide which is best. A key issue in deciding which type of routing protocol is best for ad-hoc networks is the communication overhead incurred by the protocol. In this paper, STAR (a table-driven protocol) and DSR (an on-demand protocol), both based on IEEE 802.11, are analyzed for their performance on different performance metrics under varying CBR traffic load using the QualNet 5.0.2 network simulator.

The CEO Mission II, Rescue Robot with Multi-Joint Mechanical Arm

This paper presents the design features of a rescue robot, named CEO Mission II. Its body is of the tracked-wheel type with double front flippers for climbing over collapsed structures and rough terrain. A 125 cm long, 5-joint mechanical arm installed on the robot body is deployed not only for surveillance from a top view but also for easier and faster access to victims to obtain their vital signs. Two cameras and sensors for detecting vital signs are set up at the tip of the multi-joint mechanical arm; a third camera at the back of the robot is used for driving control. The hardware and software of the system, which controls and monitors the rescue robot, are explained. The control system is used for controlling the robot locomotion and the 5-joint mechanical arm, and for turning devices on and off. The monitoring system gathers information from 7 distance sensors, IR temperature sensors, 3 CCD cameras, a voice sensor, robot wheel encoders, yaw/pitch/roll angle sensors, a laser range finder, and 8 spare A/D inputs. All sensor and control data are communicated with a remote control station via IEEE 802.11b Wi-Fi. The audio and video data are compressed and sent via another IEEE 802.11g Wi-Fi transmitter to obtain real-time response. At the remote control station, the robot locomotion and the mechanical arm are controlled by joystick. Moreover, a user-friendly GUI control program based on click-and-drag interaction is developed to easily control the movement of the arm. The robot traveling map is plotted by computing information from the wheel encoders and the yaw/pitch data, and a 2D obstacle map is plotted from the laser range finder data. The concept and design of this robot can be adapted to suit many other applications. The robot received the Best Technique award at the Thailand Rescue Robot Championship 2006, and all testing results were satisfactory.
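
A generic dead-reckoning sketch of how a traveling map can be built from wheel-encoder increments and a yaw sensor is shown below; it is a stand-in for the map computation mentioned in the abstract, with illustrative tick and scale values:

```python
# Generic dead-reckoning sketch: estimate the robot's travelled path from
# wheel-encoder distance increments and a yaw-angle sensor.
import math

def integrate_path(encoder_ticks, yaw_deg, ticks_per_meter=1000.0):
    """encoder_ticks[i]: tick increment between samples; yaw_deg[i]: heading."""
    x, y, path = 0.0, 0.0, [(0.0, 0.0)]
    for ticks, yaw in zip(encoder_ticks, yaw_deg):
        d = ticks / ticks_per_meter
        x += d * math.cos(math.radians(yaw))
        y += d * math.sin(math.radians(yaw))
        path.append((x, y))
    return path

# toy data: drive straight, then turn 90 degrees and continue
ticks = [500] * 6
yaws  = [0, 0, 0, 90, 90, 90]
for px, py in integrate_path(ticks, yaws):
    print(f"({px:.2f}, {py:.2f})")
```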

A Multimedia Telemonitoring Network for Healthcare

The TELMES project aims to develop a secure multimedia system devoted to medical teleconsultation services. It will be finalized with a pilot system for a regional telecenter network that connects local telecenters supported by multimedia platforms. This network will enable the implementation of complex medical teleservices (teleconsultations, telemonitoring, homecare, emergency medicine, etc.) for a broader range of patients and medical professionals, mainly family doctors and people living in rural or isolated regions. Thus, a scalable multimedia network based on modern IT&C paradigms will result. It will gather two inter-connected regional telecenters, in Iaşi and Piteşti, Romania, each of them also permitting local connections of hospitals, diagnostic and treatment centers, as well as local networks of family doctors, patients, and even educational entities. As communication infrastructure, we aim to develop combined fixed, mobile, and broadband internet links. Other possible communication environments are GSM/GPRS/3G and radio waves. Electrocardiogram (ECG) acquisition, internet transmission, and local analysis using embedded technologies have already been carried out successfully for patient telemonitoring.

Centralized Monitoring and Self-Protection against Fiber Fault in FTTH Access Network

This paper presents a new approach for centralized monitoring and self-protection against fiber faults in fiber-to-the-home (FTTH) access networks using Smart Access Network Testing, Analyzing and Database (SANTAD). SANTAD is installed with the optical line terminal (OLT) at the central office (CO) for in-service transmission surveillance and fiber fault localization in FTTH with a point-to-multipoint (P2MP) configuration, downstream from the CO towards customer residential locations, based on the graphical user interface (GUI) processing capabilities of MATLAB. SANTAD is able to detect any fiber fault as well as identify the failure location in the network. SANTAD enables the status of every line connected to an optical network unit (ONU) to be displayed on a single screen, with the capability to configure the attenuation and detect failures simultaneously. The analysis results and information are delivered to field engineers for prompt action, while the failed line is diverted to a protection line to keep traffic flowing continuously. This approach has bright prospects for improving survivability and reliability as well as increasing efficiency and monitoring capability in FTTH.

Estimation of Individual Power of Noise Sources Operating Simultaneously

Noise has an adverse effect on human health and comfort. Noise not only causes hearing impairment but also acts as a causal factor for stress and raised systolic blood pressure. Additionally, it can be a causal factor in work accidents, both by masking hazards and warning signals and by impeding concentration. Industrial workers also suffer psychological and physical stress as well as hearing loss due to industrial noise. This paper proposes an approach that enables engineers to quantitatively identify the noisiest source for modification while multiple machines are operating simultaneously. A model with point sources and spherical radiation in a free field was adopted to formulate the problem. The procedure works very well in the ideal case (point source and free field). However, most industrial noise problems are complicated by the fact that the noise is confined in a room. Reflections from the walls, floor, ceiling, and equipment in a room create a reverberant sound field that alters the sound wave characteristics from those of the free field. The model was therefore validated for a relatively low-absorption room at the NIT Kurukshetra Central Workshop. The validation results showed that the sound powers of noise sources estimated under simultaneous operating conditions were on the lower side, within error limits of 3.56-6.35%, suggesting that the methodology is suitable for practical implementation in industry. To demonstrate the application of the analytical procedure for estimating the sound power of noise sources under simultaneous operating conditions, a manufacturing facility (Railway Workshop at Yamunanagar, India) having five sound sources (machines) on its workshop floor is considered in this study. The case study identified the two most effective candidates (noise sources) for noise control in the Railway Workshop, Yamunanagar, India. The study suggests that modification of the design and/or replacement of these two identified noisiest sources (machines) would be necessary to achieve an effective reduction in noise levels. Further, the estimated data allow engineers to better understand the noise situation of the workplace and to revise the noise map when changes occur due to a workplace re-layout.
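
One way to set up such an estimation under the free-field point-source assumption (not necessarily the paper's exact formulation) is sketched below: the mean-square pressures of incoherent sources add, and each contribution is linear in that source's sound power, so combined measurements at several points yield a linear system for the individual powers.

```python
# Sketch: recover individual source sound powers from combined SPL measurements,
# assuming incoherent point sources radiating spherically in a free field.
# Layout, powers, and measurement points are illustrative.
import numpy as np

rho_c, p_ref = 410.0, 2e-5          # air impedance (Pa.s/m), reference pressure (Pa)
W_true = np.array([1e-3, 5e-4, 2e-3])          # "unknown" source powers, watts

# distances r[k, i] from measurement point k to source i
r = np.array([[2.0, 5.0, 8.0],
              [6.0, 2.5, 4.0],
              [9.0, 6.0, 2.0],
              [4.0, 4.0, 4.0]])

A = rho_c / (4.0 * np.pi * r**2)               # p_k^2 = A @ W
p2_measured = A @ W_true                       # simulated combined measurements
Lp_measured = 10 * np.log10(p2_measured / p_ref**2)

# Recover the individual powers by (least-squares) inversion of the linear system
W_est, *_ = np.linalg.lstsq(A, p2_measured, rcond=None)
Lw_est = 10 * np.log10(W_est / 1e-12)          # sound power level, dB re 1 pW
print("Estimated sound power levels (dB):", np.round(Lw_est, 1))
```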

Conceptual Multidimensional Model

Data are available in abundance in any business organization, including records for finance, maintenance, inventory, progress reports, and so on. As time progresses, the data keep accumulating, and the challenge is to extract information from this data bank. Knowledge discovery from these large and complex databases is the key problem of this era. Data mining and machine learning techniques are needed that can scale to the size of the problem and can be customized to the business application. To develop accurate and relevant information for a particular problem, business analysts need multidimensional models that provide reliable information so that they can make the right decisions. If the multidimensional model does not possess advanced features, accuracy cannot be expected. The present work involves the development of a multidimensional data model incorporating advanced features. The computation criteria are based on data precision and on the inclusion of a slowly changing time dimension. The final results are displayed in graphical form.
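
As a generic illustration of slowly changing dimension handling (a common Type-2 versioning mechanism; the paper's own model, features, and dimension design are not reproduced here):

```python
# Generic sketch of a Type-2 "slowly changing dimension" update: when a tracked
# attribute changes, the current row is closed and a new versioned row is added.
from datetime import date

dim_customer = [
    # each row: surrogate key, business key, attribute, validity window, current flag
    {"sk": 1, "customer_id": "C001", "city": "Pune",
     "valid_from": date(2020, 1, 1), "valid_to": None, "current": True},
]

def scd2_update(dim, customer_id, new_city, change_date):
    """Close the current row and append a new versioned row on attribute change."""
    for row in dim:
        if row["customer_id"] == customer_id and row["current"]:
            if row["city"] == new_city:
                return                       # no change, nothing to version
            row["valid_to"], row["current"] = change_date, False
    dim.append({"sk": len(dim) + 1, "customer_id": customer_id, "city": new_city,
                "valid_from": change_date, "valid_to": None, "current": True})

scd2_update(dim_customer, "C001", "Mumbai", date(2023, 6, 1))
for row in dim_customer:
    print(row)
```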

Effective Software-Based Solution for Processing Mass Downstream Data in Interactive Push VOD System

An interactive push VOD system is a new kind of system that incorporates push technology and interactive techniques. It can push movies to users at high speed during off-peak hours for optimal network usage, thereby saving bandwidth. This paper presents an effective software-based solution for processing mass downstream data at the terminals of an interactive push VOD system, where the service can download a movie according to a viewer's selection. The downstream data are divided into two categories: (1) carousel data delivered according to the DSM-CC protocol; (2) IP data delivered according to the Euro-DOCSIS protocol. In order to accelerate download speed and reduce the data loss rate at terminals, the software strategy introduces caching, multi-threading, and resuming mechanisms. The experiments demonstrate the advantages of the software-based solution.
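
The caching / multi-thread / resume idea can be illustrated with the small sketch below, where a local byte buffer stands in for the downstream streams; this is not the paper's actual DSM-CC or Euro-DOCSIS handling:

```python
# Illustrative sketch: multi-threaded chunk fetching with a cache that also
# serves as resume state (chunks already cached are skipped on restart).
import threading, queue

SOURCE = bytes(range(256)) * 4096          # pretend 1 MiB downstream object
CHUNK = 64 * 1024
cache = {}                                  # chunk index -> bytes (resume state)
tasks = queue.Queue()

def worker():
    while True:
        idx = tasks.get()
        if idx is None:
            return
        if idx not in cache:                # resume: skip chunks already cached
            cache[idx] = SOURCE[idx * CHUNK:(idx + 1) * CHUNK]
        tasks.task_done()

n_chunks = (len(SOURCE) + CHUNK - 1) // CHUNK
cache[0] = SOURCE[:CHUNK]                   # pretend chunk 0 survived a previous run
for i in range(n_chunks):
    tasks.put(i)

threads = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
for t in threads:
    t.start()
tasks.join()
for _ in threads:
    tasks.put(None)

assert b"".join(cache[i] for i in range(n_chunks)) == SOURCE
print(f"reassembled {n_chunks} chunks")
```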

Protein Secondary Structure Prediction Using Parallelized Rule Induction from Coverings

Protein 3D structure prediction has always been an important research area in bioinformatics. In particular, the prediction of secondary structure has been a well-studied research topic. Despite the recent breakthrough of combining multiple sequence alignment information and artificial intelligence algorithms to predict protein secondary structure, the Q3 accuracy of various computational prediction algorithms has rarely exceeded 75%. In a previous paper [1], this research team presented a rule-based method called RT-RICO (Relaxed Threshold Rule Induction from Coverings) to predict protein secondary structure. The average Q3 accuracy on the sample datasets using RT-RICO was 80.3%, an improvement over comparable computational methods. Although this demonstrated that RT-RICO might be a promising approach for predicting secondary structure, the algorithm's computational complexity and program running time limited its use. Herein, a parallelized implementation of a slightly modified RT-RICO approach is presented. This new version of the algorithm facilitated the testing of a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO achieved a Q3 score of 74.6%, which is higher than the consensus prediction accuracy of 72.9% achieved for the same test dataset by a combination of four secondary structure prediction methods [2].
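
The sketch below illustrates only the parallelization pattern, distributing independent per-residue windows across processes; the RT-RICO rule-induction logic itself is not reproduced, and the toy rule inside `predict_window` is purely hypothetical:

```python
# Parallelization pattern only: independent work units (sliding sequence
# windows) mapped across processes with multiprocessing.
from multiprocessing import Pool

def predict_window(window):
    """Placeholder for a rule lookup on one sliding window of residues."""
    centre = window[len(window) // 2]      # hypothetical toy rule, not RT-RICO
    return "H" if centre in "AELM" else ("E" if centre in "VIFY" else "C")

def windows(sequence, size=5):
    pad = "X" * (size // 2)
    s = pad + sequence + pad
    return [s[i:i + size] for i in range(len(sequence))]

if __name__ == "__main__":
    seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
    with Pool(processes=4) as pool:
        prediction = "".join(pool.map(predict_window, windows(seq)))
    print(prediction)
```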

Creating the Color Panoramic View using Medley of Grayscale and Color Partial Images

Panoramic view generation has always offered novel and distinct challenges in the field of image processing. Panoramic view generation is essentially the construction of a larger mosaic image from a set of partial images of the desired view. This paper presents a solution to one of the problems of panorama formation, namely when some of the partial images are in color and others are grayscale. The simplest solution would be to convert all image parts into grayscale and fuse them to obtain a grayscale panorama. But in a multihued world, obtaining a colored panorama will always be preferred. This can be achieved by picking colors from the color parts and injecting them into the grayscale parts of the panorama. So, first the grayscale image parts are colored with the help of the color image parts, and then these parts are fused to construct the panoramic image. The problem of coloring grayscale images has no exact solution. In the proposed technique of panoramic view generation, the job of transferring color traits from a reference color image to a grayscale image is done by a palette-based method. In this technique, the color palette is prepared using pixel windows of a certain size taken from the color image parts. The grayscale image part is then divided into pixel windows of the same size. For every window of the grayscale image part, the palette is searched and equivalent color values are found, which are used to color the grayscale window. For palette preparation we have used the RGB color space and Kekre's LUV color space; Kekre's LUV color space gives better coloring quality. The search time through the color palette is improved over exhaustive search by using Kekre's fast search technique. After coloring the grayscale image pieces, the next job is the fusion of all these pieces to obtain the panoramic view. For similarity estimation between partial images, the correlation coefficient is used.
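
A simplified sketch of palette-based color transfer is given below; it uses plain RGB luminance windows with an exhaustive nearest-neighbour match, not Kekre's LUV color space or Kekre's fast search technique described in the abstract:

```python
# Simplified palette-based color transfer: the palette maps small luminance
# windows from the color parts to their average color, and each grayscale
# window is colored from its nearest palette entry.
import numpy as np

rng = np.random.default_rng(4)
W = 4                                            # window size (W x W pixels)

color_part = rng.integers(0, 256, size=(32, 32, 3)).astype(float)
gray_part = rng.integers(0, 256, size=(32, 32)).astype(float)

def luminance(rgb):
    return rgb @ np.array([0.299, 0.587, 0.114])

# Build the palette: (flattened luminance window, mean RGB of that window)
palette_keys, palette_colors = [], []
for i in range(0, color_part.shape[0], W):
    for j in range(0, color_part.shape[1], W):
        block = color_part[i:i + W, j:j + W]
        palette_keys.append(luminance(block).ravel())
        palette_colors.append(block.reshape(-1, 3).mean(axis=0))
palette_keys = np.array(palette_keys)
palette_colors = np.array(palette_colors)

# Color each grayscale window from its closest luminance-window match
colored = np.zeros(gray_part.shape + (3,))
for i in range(0, gray_part.shape[0], W):
    for j in range(0, gray_part.shape[1], W):
        key = gray_part[i:i + W, j:j + W].ravel()
        best = np.argmin(np.linalg.norm(palette_keys - key, axis=1))
        colored[i:i + W, j:j + W] = palette_colors[best]

print(colored.shape, colored.min(), colored.max())
```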

I2Navi: An Indoor Interactive NFC Navigation System for Android Smartphones

The advancement of smartphones, wireless networking, and Near Field Communication (NFC) technology has opened up a new approach to indoor navigation. Although NFC technology has been used to support electronic commerce, access control, and ticketing, there is a lack of research on building NFC-based indoor navigation systems for smartphone users. This paper presents an indoor interactive navigation system (named I2Navi) based on NFC technology that allows users to navigate within a building with ease using their smartphones. The I2Navi system has been implemented at the Faculty of Engineering (FOE), Multimedia University (MMU) to enable students, parents, and visitors who own NFC-enabled Android smartphones to navigate themselves within the faculty. An evaluation was carried out, and the results show a positive response to the proposed indoor navigation system using NFC and smartphone technologies.

Statistical Evaluation of Nonlinear Distortion using the Multi-Canonical Monte Carlo Method and the Split Step Fourier Method

In high-powered dense wavelength division multiplexed (WDM) systems with low chromatic dispersion, four-wave mixing (FWM) can prove to be a major source of noise. The Multi-Canonical Monte Carlo method (MCMC) and the Split Step Fourier Method (SSFM) are combined to accurately evaluate the probability density function of the decision variable of a receiver limited by FWM. The combination of the two methods leads to more accurate results and offers the possibility of adding other optical noises, such as Amplified Spontaneous Emission (ASE) noise.
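
A minimal split-step Fourier propagation of the scalar nonlinear Schrodinger equation is sketched below as an illustration of the SSFM building block; it is a single-channel toy with illustrative parameters and one common sign convention, with fibre loss and ASE noise omitted (and without the MCMC sampling layer):

```python
# Minimal split-step Fourier propagation of the scalar NLSE:
# alternating dispersion (frequency domain) and Kerr nonlinearity (time domain).
import numpy as np

# time grid
n, t_window = 1024, 400e-12                   # samples, window length (s)
t = np.linspace(-t_window / 2, t_window / 2, n, endpoint=False)
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)   # angular frequency grid

# fibre / signal parameters (illustrative)
beta2 = -21e-27        # s^2/m, anomalous dispersion
gamma = 1.3e-3         # 1/(W*m), nonlinearity
dz, n_steps = 100.0, 500                      # step size (m), number of steps

# initial field: Gaussian pulse, 1 mW peak power
A = np.sqrt(1e-3) * np.exp(-(t / 20e-12) ** 2)

for _ in range(n_steps):
    # linear (dispersion) half step in the frequency domain
    A = np.fft.ifft(np.exp(0.5j * beta2 * omega**2 * (dz / 2)) * np.fft.fft(A))
    # nonlinear full step in the time domain
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)
    # second linear half step
    A = np.fft.ifft(np.exp(0.5j * beta2 * omega**2 * (dz / 2)) * np.fft.fft(A))

print(f"output peak power: {np.max(np.abs(A))**2 * 1e3:.3f} mW")
```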

Spatial Distribution and Risk Assessment of As, Hg, Co and Cr in Kaveh Industrial City, Using Geostatistics and GIS

The concentrations of As, Hg, Co, Cr, and Cd were measured for each soil sample, and their spatial patterns were analyzed by the semivariogram approach of geostatistics and geographical information system technology. Multivariate statistical approaches (principal component analysis and cluster analysis) were used to identify heavy metal sources and their spatial patterns. Principal component analysis, coupled with the correlations between heavy metals, showed that the primary inputs of As, Hg, and Cd were anthropogenic, while Co and Cr were associated with pedogenic factors. Ordinary kriging was carried out to map the spatial patterns of the heavy metals. The evaluated high-pollution sources were related to the use of urban and industrial wastewater. The results of this study are helpful for environmental pollution risk assessment and for decision making on industrial adjustment and soil pollution remediation.
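
A compact sketch of the empirical semivariogram computation that underlies such a geostatistical analysis is shown below; synthetic sample coordinates and concentrations are used, and the kriging interpolation step itself is not shown:

```python
# Empirical (isotropic) semivariogram: gamma(h) is the mean of 0.5*(z_i - z_j)^2
# over sample pairs whose separation distance falls in each lag bin.
import numpy as np

rng = np.random.default_rng(5)
xy = rng.uniform(0, 1000, size=(120, 2))          # sample locations (m)
z = 5 + 0.002 * xy[:, 0] + rng.normal(scale=1.0, size=120)  # e.g. metal concentration

def empirical_semivariogram(xy, z, lag_width=100.0, n_lags=8):
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)             # count each pair once
    d, sq = d[iu], sq[iu]
    lags, gammas = [], []
    for k in range(n_lags):
        mask = (d >= k * lag_width) & (d < (k + 1) * lag_width)
        if mask.any():
            lags.append((k + 0.5) * lag_width)
            gammas.append(sq[mask].mean())
    return np.array(lags), np.array(gammas)

for h, g in zip(*empirical_semivariogram(xy, z)):
    print(f"lag {h:5.0f} m  gamma {g:.3f}")
```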

Speaker Identification Using Admissible Wavelet Packet Based Decomposition

Mel Frequency Cepstral Coefficient (MFCC) features are widely used as acoustic features for speech recognition as well as speaker recognition. In the MFCC feature representation, the Mel frequency scale is used to obtain high resolution in the low-frequency region and low resolution in the high-frequency region. This kind of processing is good for obtaining stable phonetic information but is not suitable for speaker features, which are located in the high-frequency regions. Speaker-specific information, which is non-uniformly distributed in the high frequencies, is equally important for speaker recognition. Based on this fact, we propose an admissible wavelet packet based filter structure for speaker identification. The multiresolution capabilities of the wavelet packet transform are used to derive the new features. The proposed scheme differs from previous wavelet-based work mainly in the design of the filter structure; unlike others, the proposed filter structure does not follow the Mel scale. Closed-set speaker identification experiments performed on the TIMIT database show improved identification performance compared to other commonly used Mel-scale-based filter structures using wavelets.
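
A generic sketch of wavelet-packet subband-energy feature extraction with PyWavelets is given below to illustrate the feature-derivation idea; it uses a uniform full decomposition and does not reproduce the paper's specific admissible (non-Mel) filter structure:

```python
# Generic wavelet-packet subband log-energy features for one speech frame.
import numpy as np
import pywt

rng = np.random.default_rng(6)
frame = rng.normal(size=512)                 # stand-in for one speech frame

def wavelet_packet_features(x, wavelet="db4", level=4):
    """Log-energy of each terminal wavelet-packet node (frequency-ordered)."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return np.log(energies + 1e-12)

features = wavelet_packet_features(frame)
print(features.shape, features[:4])
```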