Abstract: LDPC codes could be used in magnetic storage devices because of their better decoding performance compared to other error correction codes. However, their hardware implementation results in large and complex decoders; this is one of the main obstacles to incorporating the decoders in magnetic storage devices. We construct small, high-girth, column-weight-2 codes from cage graphs. Though these codes have lower performance compared to higher-column-weight codes, they are easier to implement, which makes them more suitable for applications such as magnetic recording. Cages are the smallest known regular graphs of a given degree and girth, which gives us the smallest known column-weight-2 codes for a given size, girth and rate of the code.
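The graph-based construction can be sketched in a few lines: each edge of the graph becomes a code bit (a column of the parity-check matrix) and each vertex a check (a row), so every column has weight exactly 2. The particular cage used below, the Petersen graph (the (3,5)-cage), is an illustrative choice, not necessarily one of the codes from the paper.

```python
import numpy as np

# Petersen graph: the (3,5)-cage (3-regular, girth 5), 10 vertices, 15 edges.
# Outer 5-cycle, inner pentagram, and five spokes.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),   # outer cycle
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),   # inner pentagram
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]   # spokes

# Parity-check matrix H: one row per vertex, one column per edge.
# Each edge touches exactly two vertices, so every column has weight 2;
# each vertex has degree 3, so every row has weight 3.
H = np.zeros((10, 15), dtype=int)
for col, (u, v) in enumerate(edges):
    H[u, col] = 1
    H[v, col] = 1

assert all(H[:, c].sum() == 2 for c in range(15))  # column weight 2
assert all(H[r, :].sum() == 3 for r in range(10))  # row weight 3
```

For this incidence-matrix construction, every cycle in the Tanner graph corresponds to a cycle of half the length in the underlying graph, so a high-girth cage directly yields a high-girth column-weight-2 code.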
Abstract: A feed-forward, back-propagation Artificial Neural
Network (ANN) model has been used to forecast the occurrences of
wastewater overflows in a combined sewerage reticulation system.
This approach was tested to evaluate its applicability as an
alternative to the common practice of developing a complete
conceptual, mathematical hydrological-hydraulic model of the
sewerage system to enable such forecasts. The ANN approach
obviates the need for a priori understanding and mathematical
representation of the underlying hydrological-hydraulic phenomena,
and instead learns the characteristics of a sewer overflow from
historical data.
The performance of the standard feed-forward, back-propagation
of error algorithm was enhanced by a modified data normalizing
technique that enabled the ANN model to extrapolate beyond the
range covered by the training data. The algorithm and the
data normalizing method are presented along with the ANN model
output results, which indicate good accuracy in the forecast sewer
overflow rates. However, it was revealed that accurate
forecasting of the overflow rates is heavily dependent on the
availability of real-time flow monitoring at the overflow structure
to provide antecedent flow-rate data. The ability of the ANN to
forecast the overflow rates without the antecedent flow rates (as is
the case with traditional conceptual reticulation models) was found to
be quite poor.
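The paper's exact "modified data normalizing technique" is not reproduced here; the sketch below shows one common variant of such a modification, purely to illustrate the idea: mapping the training range onto an interior interval such as [0.1, 0.9] instead of [0, 1] leaves the network headroom to represent values somewhat beyond the extremes seen in training.

```python
def make_normalizer(values, lo=0.1, hi=0.9):
    """Min-max normalizer mapping the training range onto [lo, hi] rather
    than [0, 1], leaving headroom so the trained network can represent
    values beyond the training extremes. (Illustrative assumption; the
    paper's exact modification may differ.)"""
    vmin, vmax = min(values), max(values)
    scale = (hi - lo) / (vmax - vmin)
    norm = lambda v: lo + (v - vmin) * scale        # data -> network scale
    denorm = lambda u: vmin + (u - lo) / scale      # network -> data scale
    return norm, denorm

flows = [5.0, 12.0, 30.0]        # hypothetical training-set flow rates
norm, denorm = make_normalizer(flows)
# A value slightly above the training maximum still maps inside (0, 1):
print(norm(5.0), norm(30.0), norm(32.0))
```

Because the training extremes land at 0.1 and 0.9 rather than 0 and 1, an unseen flow somewhat above the training maximum still maps to a value a sigmoid output unit can produce.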
Abstract: The drying characteristics of rough rice (Lenjan variety) with an initial moisture content of 25% dry basis (d.b.) were studied in a hot-air dryer assisted by infrared heating. Three inlet air temperatures (30, 40 and 50 °C), four infrared radiation intensities (0, 0.2, 0.4 and 0.6 W/cm2) and three inlet air velocities (0.1, 0.15 and 0.2 m/s) were studied. The bending strength of the brown rice kernel, the percentage of cracked kernels and the drying time were measured and evaluated. The results showed that increasing the inlet air temperature and the infrared radiation intensity decreased the drying time. High bending strength and a low percentage of cracked kernels were obtained when paddy was dried in the infrared-assisted hot-air dryer. These factors and their interaction effects showed significant differences (p
Abstract: Satellite communication channels are characterized by, among
other factors, a high bit error rate. We present a system for
still image transmission over noisy satellite channels. The system
couples image compression together with error control codes to
improve the received image quality while maintaining its bandwidth
requirements. The proposed system is tested using a high resolution
satellite imagery simulated over the Rician fading channel. Evaluation
results show improvement in overall system performance, including image
quality and bandwidth requirements, compared to similar systems with
different coding schemes.
Abstract: Since the majority of faults are found in a few of a system's modules, there is a need to identify the modules that are affected more severely than others and to perform proper maintenance on time, especially for critical applications. In this paper, we apply different predictor models to NASA's public-domain defect dataset, coded in the Perl programming language. Different machine learning algorithms belonging to the different learner categories of the WEKA project, including a Mamdani-based fuzzy inference system and a neuro-fuzzy system, have been evaluated for modeling maintenance severity, i.e., the impact of fault severity. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy model provides relatively better prediction accuracy than the other models and can therefore be used for maintenance severity prediction of the software.
Abstract: Schema matching plays a key role in many different
applications, such as schema integration, data integration, data
warehousing, data transformation, E-commerce, peer-to-peer data
management, ontology matching and integration, semantic Web,
semantic query processing, etc. Manual matching is expensive and
error-prone, so it is important to develop techniques to
automate the schema matching process. In this paper, we present a
solution to the automated XML schema matching problem that
produces semantic mappings between corresponding schema
elements of given source and target schemas. This solution
contributes to solving the automated XML schema matching problem
more comprehensively and efficiently. Our solution is based on
combining linguistic similarity, data type compatibility and structural
similarity of XML schema elements. After describing our solution,
we present experimental results that demonstrate the effectiveness of
this approach.
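A minimal sketch of combining the three similarity components named above. The weights, the data-type compatibility table and the simple string measure are illustrative assumptions, not the paper's actual functions:

```python
from difflib import SequenceMatcher

# Hypothetical data-type compatibility scores (illustrative values only).
TYPE_COMPAT = {
    ("string", "string"): 1.0,
    ("integer", "decimal"): 0.8,
    ("integer", "string"): 0.4,
}

def linguistic_sim(a, b):
    """Name similarity via a plain string ratio (a stand-in for a
    dictionary/thesaurus-based linguistic measure)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def type_sim(ta, tb):
    """Data-type compatibility, symmetric lookup with exact match = 1."""
    if ta == tb:
        return 1.0
    return TYPE_COMPAT.get((ta, tb), TYPE_COMPAT.get((tb, ta), 0.0))

def element_sim(e1, e2, w_ling=0.5, w_type=0.2, w_struct=0.3):
    """Weighted combination of the three components. e1/e2 are
    (name, type, children) triples; structural similarity is approximated
    here by the Jaccard overlap of child-element names."""
    s_ling = linguistic_sim(e1[0], e2[0])
    s_type = type_sim(e1[1], e2[1])
    c1, c2 = set(e1[2]), set(e2[2])
    s_struct = len(c1 & c2) / len(c1 | c2) if (c1 | c2) else 1.0
    return w_ling * s_ling + w_type * s_type + w_struct * s_struct

src = ("CustomerName", "string", ["first", "last"])
tgt = ("ClientName", "string", ["first", "last"])
print(round(element_sim(src, tgt), 3))
```

A matcher would evaluate this score for every source/target element pair and keep the pairs above a threshold as candidate mappings.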
Abstract: A new method for color image segmentation using fuzzy logic is proposed in this paper. Our aim is to automatically produce a fuzzy system for color classification and image segmentation with the fewest possible rules and a minimum error rate. Particle swarm optimization is a subclass of evolutionary algorithms inspired by the social behavior of fish, bees, birds and other animals that live together in colonies. We use the comprehensive learning particle swarm optimization (CLPSO) technique to find optimal fuzzy rules and membership functions because it discourages premature convergence. Here, each particle of the swarm encodes a set of fuzzy rules. During evolution, a population member tries to maximize a fitness criterion, which here rewards a high classification rate and a small number of rules. Finally, the particle with the highest fitness value is selected as the best set of fuzzy rules for image segmentation. Using this method for soccer-field image segmentation in RoboCup contests, we obtained 89% performance. Less computational load is needed with this method compared with methods such as ANFIS, because it generates a smaller number of fuzzy rules. A large and varied training dataset makes the proposed method invariant to illumination noise.
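The fitness trade-off described above (high classification rate, few rules) can be sketched as a scalar criterion that each particle tries to maximize. The penalty weight is a hypothetical value, and a real CLPSO run would evaluate this over the fuzzy rule set encoded by each particle:

```python
def fitness(classification_rate, n_rules, alpha=0.05):
    """Fitness a particle (an encoded fuzzy rule set) tries to maximize:
    reward classification accuracy, penalize the rule count.
    alpha is an illustrative trade-off weight, not the paper's value."""
    return classification_rate - alpha * n_rules

# A slightly less accurate but much smaller rule set wins under this
# criterion, which is what drives the search toward compact systems:
print(fitness(0.90, 8), fitness(0.92, 15))
```

In CLPSO the particles' velocity updates would then pull the swarm toward the rule sets scoring highest on this criterion, while the comprehensive-learning strategy keeps the swarm from collapsing prematurely onto one candidate.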
Abstract: A new code synchronization algorithm is proposed in
this paper for the secondary cell-search stage in wideband CDMA
systems. Rather than using the Cyclically Permutable (CP) code in the
Secondary Synchronization Channel (S-SCH) to simultaneously
determine the frame boundary and scrambling code group, the new
synchronization algorithm implements the same function with lower
system complexity and a shorter Mean Acquisition Time (MAT). The
Secondary Synchronization Code (SSC) is redesigned by splitting it into
two sub-sequences. We treat the information of the scrambling code group
as data bits and use simple time diversity BCH coding for further
reliability. This avoids involved and time-costly Reed-Solomon (RS)
code computations and comparisons. Analysis and simulation results
show that the Synchronization Error Rate (SER) yielded by the new
algorithm in Rayleigh fading channels is close to that of the
conventional algorithm in the standard. This new synchronization
algorithm reduces system complexities, shortens the average
cell-search time and can be implemented in the slot-based cell-search
pipeline. By exploiting antenna diversity and pipelining the correlation
processes, the new algorithm also lends itself to flexible application in
multiple-antenna systems.
Abstract: Load forecasting has in recent years become one of the major areas of research in electrical engineering, and most traditional forecasting models and artificial-intelligence techniques have been tried in this task. Artificial neural networks (ANN) have lately received much attention, and a great number of papers have reported successful experiments and practical tests. This article presents the development of an ANN-based short-term load forecasting model with an improved generalization technique for the Regional Power Control Center of the Saudi Electricity Company, Western Operation Area (SEC-WOA). The proposed ANN is trained with weather-related data and historical electric load data from the calendar years 2001 through 2004. The model was tested for one week in each of five seasons, namely winter, spring, summer, Ramadan and fall, and the mean absolute error for one-hour-ahead load forecasting was found to be 1.12%.
Abstract: THEOS is the first Earth observation spacecraft of Thailand; it was launched on 1 October 2008 and is currently operated by GISTDA. The transfer phase was performed by the Astrium flight dynamics team, leading to a handover to GISTDA teams starting in mid-October 2008. The THEOS spacecraft's orbit is LEO with the same repeat cycle (14+5/26) as the SPOT spacecraft, i.e. the same altitude of 822 km, but a different mean local solar time (LST). Ground track maintenance manoeuvres are performed to keep the ground track within a predefined control band around the reference ground track; the band is ±40 km for the THEOS spacecraft. This paper presents the first ground track maintenance manoeuvre of the THEOS spacecraft and the detailed results. In addition, it covers one and a half years of operation as seen by GISTDA operators. It finally describes the foreseeable activities in preparation for the next orbit control manoeuvre (OCM).
Abstract: Malware is software designed to do harm to computers, and it is becoming a significant threat to computer networks nowadays. A malware attack does not only involve financial loss; in some cases it can cause fatal errors that may cost lives. As the new Internet Protocol version 6 (IPv6) emerged, many people believed this protocol could solve most malware propagation issues due to its broader addressing scheme. Because IPv6 is still new compared to native IPv4, transition mechanisms have been introduced to promote a smoother migration. Unfortunately, these transition mechanisms allow some malware to propagate its attack from IPv4 to IPv6 network environments. In this paper, a proof of concept is presented to show that some existing IPv4 malware detection techniques need to be improved in order to detect malware attacks in dual-stack networks more efficiently. A testbed of a dual-stack network environment was deployed, and genuine malware samples were released in it to observe their behaviors. The results of the different scenarios are analyzed and discussed in terms of behavior and propagation methods. They show that malware behaves differently on IPv6 than on the IPv4 network protocol in the dual-stack environment. A new detection technique is called for to address this problem in the near future.
Abstract: The tree structured approach of non-uniform filterbank
(NUFB) is normally used to achieve perfect reconstruction (PR). PR is
not always feasible due to certain limitations, i.e., constraints in
selecting design parameters, design complexity, and the fact that the
output is severely affected by aliasing error if the necessary and
sufficient conditions of PR are not satisfied perfectly. Therefore, there
has been general interest among researchers in near-perfect
reconstruction (NPR). In this proposed work, an optimized tree
structure technique is used for the design of NPR non-uniform
filterbank. Window functions of Blackman family are used to design
the prototype FIR filter. A single variable linear optimization is used
to minimize the amplitude distortion. The main feature of the
proposed design is its simplicity with linear phase property.
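A linear-phase FIR prototype of the kind described, designed with a Blackman-family window, can be sketched as a windowed-sinc design. The tap count and cutoff below are illustrative, and a real design would then tune the single optimization variable (e.g. the cutoff) to minimize amplitude distortion:

```python
import numpy as np

def fir_lowpass_blackman(num_taps, cutoff):
    """Linear-phase FIR lowpass prototype: windowed-sinc design with a
    Blackman window. `cutoff` is a fraction of the Nyquist frequency."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = cutoff * np.sinc(cutoff * n)   # ideal (infinite) lowpass response
    h *= np.blackman(num_taps)         # Blackman-family window truncates it
    return h / h.sum()                 # normalize to unity gain at DC

h = fir_lowpass_blackman(33, 0.5)
# Linear phase: the impulse response is exactly symmetric.
assert np.allclose(h, h[::-1])
```

Because the symmetric impulse response guarantees linear phase by construction, the optimization only has to address amplitude distortion, which matches the simplicity the design aims for.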
Abstract: Expert witness testimony (EWT) is a kind of
information given by an expert specialized in the field (here, battered
woman syndrome, or BWS) to the jury
in order to help the court better understand the case. EWT
does not always work in favor of the battered women. Two main
decision-making models are discussed in the paper: the Mathematical
model and the Explanation model. In the first model, the jurors
calculate "the importance and strength of each piece of evidence"
whereas in the second model they try to integrate the EWT with the
evidence and create a coherent story that would describe the crime.
The jury often misunderstands and misjudges battered women for
their action (or, in this case, inaction), assuming that these women
are masochists who accept being mistreated, since if a man abuses a
woman constantly she could and should divorce him or simply leave
at any time. Research in the domain has found that, indeed, expert
witness testimony has a powerful influence on jurors' decisions, and thus
its quality needs to be further explored. One of the important factors
that need further studies is a bias called the dispositionist worldview
(a belief that what happens to people is of their own doing). This
kind of attributional bias represents a tendency to think that a
person’s behavior is due to his or her disposition, even when the
behavior is clearly attributable to the situation. Hypothesis: The
hypothesis of this paper is that if a juror has a dispositionist
worldview then he or she will blame the rape victim for triggering the
assault. The juror would therefore commit the fundamental
attribution error and believe that the victim’s disposition caused the
rape and not the situation she was in. Methods: The subjects in the
study were 500 randomly sampled undergraduate students from
McGill, Concordia, Université de Montréal and UQAM.
Dispositional Worldview was scored on the Dispositionist
Worldview Questionnaire. After reading the Rape Scenarios, each
student was asked to play the role of a juror and answer a
questionnaire consisting of 7 questions about the responsibility,
causality and fault of the victim. Results: The results confirm the
hypothesis which states that if a juror has a dispositionist worldview
then he or she will blame the rape victim for triggering the assault.
By doing so, the juror commits the fundamental attribution error,
believing that the victim's disposition, and not the
constraints or opportunities of the situation, caused the rape.
Abstract: Orthogonal frequency division multiplexing (OFDM)
has developed into a popular scheme for wideband digital
communications used in consumer applications such as digital broadcasting, wireless networking and broadband internet access. In
the OFDM system, carrier frequency offset (CFO) causes intercarrier
interference (ICI), which significantly degrades the system error
performance. In this paper, we provide an exact evaluation method for
the error performance analysis of arbitrary 2-D modulation OFDM
systems with CFO, and analyze the effect of CFO on error performance.
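The mechanism analyzed above can be illustrated with a short simulation (the subcarrier count, CFO value and QPSK mapping are illustrative choices): a carrier frequency offset of ε subcarrier spacings rotates the time-domain samples, and after the receiver FFT each subcarrier is attenuated and picks up leakage from all the others.

```python
import numpy as np

N = 64        # subcarriers
eps = 0.1     # normalized CFO (fraction of the subcarrier spacing)
rng = np.random.default_rng(0)

# QPSK symbols on all subcarriers (an example of 2-D modulation).
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

x = np.fft.ifft(X)                             # OFDM time-domain signal
n = np.arange(N)
y = x * np.exp(1j * 2 * np.pi * eps * n / N)   # CFO rotates the samples
Y = np.fft.fft(y)                              # receiver demodulation

# Without CFO, Y equals X exactly; with CFO, every subcarrier leaks into
# the others (ICI) and the desired term is attenuated and rotated.
ici_power = np.mean(np.abs(Y - X) ** 2)
print(ici_power)   # nonzero whenever eps != 0
```

Repeating this for a grid of ε values and averaging symbol decisions over noise realizations is the brute-force counterpart of the exact error-performance analysis the abstract describes.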
Abstract: Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. The advantage of using received signal strength indicator (RSSI) values as an indoor location measurement is that it is a cost-effective approach that requires no extra hardware. Because the accuracy of many positioning schemes based on RSSI values is limited by interference factors and the environment, it is challenging to design RFID location techniques that integrate a positioning algorithm. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines the different factors that affect location accuracy by integrating a backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. First, the training phase uses the coordinates obtained from the LANDMARC algorithm, which relies on RSSI values, together with the real coordinates of the reference tags as training data for constructing an appropriate BPN architecture and training length. Second, in the online phase, the LANDMARC algorithm calculates the coordinates of the tracking tags, which are then used as BPN inputs to obtain refined location estimates. The results show that the proposed scheme estimates locations more accurately than LANDMARC alone, without extra devices.
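The LANDMARC step of the scheme can be sketched as follows; the reader layout and RSSI numbers are invented for illustration. Each reference tag is weighted by the inverse square of its distance to the tracking tag in RSSI space, and the estimate is the weighted sum of the k nearest reference-tag coordinates:

```python
import numpy as np

def landmarc(track_rssi, ref_rssi, ref_xy, k=4):
    """LANDMARC k-nearest-neighbor location estimate.
    track_rssi: RSSI of the tracking tag at each reader, shape (R,)
    ref_rssi:   RSSI of each reference tag at each reader, shape (M, R)
    ref_xy:     known coordinates of the M reference tags, shape (M, 2)
    """
    # Euclidean distance in RSSI space to each reference tag.
    E = np.sqrt(((ref_rssi - track_rssi) ** 2).sum(axis=1))
    nearest = np.argsort(E)[:k]
    # Inverse-square weighting: tags closer in signal space count more.
    w = 1.0 / (E[nearest] ** 2 + 1e-12)
    w /= w.sum()
    return w @ ref_xy[nearest]

# Toy example: 4 readers, 4 reference tags on the corners of a unit square.
ref_xy = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
ref_rssi = np.array([[-40, -55, -55, -70],
                     [-55, -40, -70, -55],
                     [-55, -70, -40, -55],
                     [-70, -55, -55, -40]], dtype=float)
track_rssi = np.array([-50, -50, -60, -60], dtype=float)  # near the left edge
print(landmarc(track_rssi, ref_rssi, ref_xy))
```

In the proposed scheme these LANDMARC estimates become the BPN inputs, and the network learns to correct the residual error against the known reference coordinates.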
Abstract: Automatic control of a robotic manipulator involves
the study of kinematics and dynamics as a major issue. This paper
addresses the forward and inverse kinematics of a 2-DOF robotic
manipulator with revolute joints. In this study the Denavit-
Hartenberg (D-H) convention is used to model the robot links and joints,
and the forward and inverse kinematics solutions are obtained using
artificial neural networks for the 2-DOF robotic manipulator. The results
show that the artificial neural network provides a solution that is
faster, acceptable and has zero error.
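The kinematics the network learns has a closed form in the planar 2-DOF case, which provides the ground truth an ANN solution is judged against. The link lengths below are illustrative:

```python
import math

def forward(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-DOF arm with revolute joints."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y, l1=1.0, l2=1.0, elbow_up=True):
    """Closed-form inverse kinematics (elbow-up / elbow-down branches)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    s2 = math.sqrt(1 - c2 * c2) * (1 if elbow_up else -1)
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

# Round trip: IK followed by FK recovers the target point.
t1, t2 = inverse(1.2, 0.5)
x, y = forward(t1, t2)
print(round(x, 6), round(y, 6))  # → 1.2 0.5
```

Training pairs for the ANN can be generated by sampling joint angles, running `forward`, and using the (x, y) results as inputs with the angles as targets.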
Abstract: In this paper, an algorithm is used to detect color defects in ceramic tiles. First, the image of a normal tile is clustered using GCMA, the genetic C-means clustering algorithm, which yields the best cluster centers. C-means is a common clustering algorithm that optimizes an objective function based on a distance measure between the data points and the cluster centers in the data space; here the objective function is the mean square error. After finding the best centers, each pixel of the image is assigned to the cluster with the closest center. Then the maximum error of each cluster is computed: for each cluster, the max error is the maximum distance between its center and the pixels that belong to it. After computing the errors, all pixels of the defective tile image are clustered based on the centers obtained from the normal tile image in the previous stage. Pixels whose distance from their cluster center is greater than the maximum error of that cluster are considered defective.
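The pipeline above can be sketched with plain k-means standing in for the genetic C-means (GCMA); the colors, cluster count and synthetic "tile" pixels are illustrative:

```python
import numpy as np

def fit_reference(pixels, k=4, iters=20):
    """Cluster pixels of a defect-free tile and record, per cluster, the
    maximum distance from any member to its center (the 'max error').
    Plain k-means stands in here for the paper's genetic C-means (GCMA)."""
    # Simplistic init: evenly spaced sample pixels as starting centers.
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    max_err = np.array([d[labels == j, j].max() if (labels == j).any() else 0.0
                        for j in range(k)])
    return centers, max_err

def find_defects(pixels, centers, max_err):
    """Flag a pixel when it lies farther from its nearest reference center
    than that cluster's recorded maximum error."""
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return d[np.arange(len(pixels)), labels] > max_err[labels]

# Synthetic 'normal tile': RGB pixels around two dominant colors.
rng = np.random.default_rng(1)
normal = np.vstack([rng.normal(c, 2.0, (200, 3))
                    for c in ([200, 40, 40], [40, 40, 200])])
centers, max_err = fit_reference(normal, k=2)
test = np.vstack([normal[:5], [[10, 250, 10]]])   # last pixel is off-color
print(find_defects(test, centers, max_err))
```

Pixels from the normal tile fall within their cluster's recorded maximum error by construction, so only the off-color pixel is flagged.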
Abstract: A complex-valued neural network is a neural network
with complex-valued inputs and/or weights and/or thresholds
and/or activation functions. Complex-valued neural networks
have been widening the scope of applications not only in electronics
and informatics, but also in social systems. One of the most important
applications of the complex-valued neural network is in signal
processing. In neural networks, the generalized mean neuron (GMN)
model is often discussed and studied. The GMN includes a new
aggregation function based on the concept of generalized mean of all
the inputs to the neuron. This paper presents exhaustive results
of using the generalized mean neuron model in a complex-valued neural
network model that uses the back-propagation algorithm (called
"Complex-BP") for learning. Our experimental results demonstrate the
effectiveness of the generalized mean neuron model in the complex
plane for signal processing over a real-valued neural network. We
report various observations, such as the effect of learning
rates, the ranges of the randomly selected initial weights, the error
functions used, and the number of iterations required for the error to
converge in the generalized mean neural network model. Some inherent
properties of this complex back-propagation algorithm are also studied and
discussed.
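The generalized-mean aggregation at the heart of the GMN, together with the "split" activation commonly used in Complex-BP style networks, can be sketched as follows. The weighted form and the parameter values are illustrative assumptions; the paper's exact formulation may differ:

```python
import numpy as np

def generalized_mean(x, w, r):
    """Weighted generalized (power) mean: (sum_i w_i * x_i**r)**(1/r),
    with weights summing to 1. r = 1 gives the weighted arithmetic mean,
    r = -1 the weighted harmonic mean; larger r emphasizes large inputs."""
    return (np.sum(w * x ** r)) ** (1.0 / r)

def complex_sigmoid(z):
    """'Split' activation used by Complex-BP style networks: the real
    sigmoid applied independently to the real and imaginary parts."""
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    return sig(z.real) + 1j * sig(z.imag)

# GMN-style neuron on real inputs: generalized-mean aggregation, then squash.
x = np.array([0.2, 0.5, 0.9])
w = np.array([0.3, 0.3, 0.4])   # weights summing to 1
for r in (1, 2, -1):
    print(r, generalized_mean(x, w, r))
```

By the power-mean inequality the aggregated value increases with r, so r acts as an extra learnable degree of freedom between AND-like (small r) and OR-like (large r) aggregation.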
Abstract: The Support Vector Machine (SVM) is a statistical learning tool that was initially developed by Vapnik in 1979 and later extended to the more complex concept of structural risk minimization (SRM). The SVM plays an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, the SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was done for single-look and multi-look speckle models to give a complete overview of and insight into the newly proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance in terms of the mean square error (MSE) metric was calculated. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality. The quality of the processed speckled images improved for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original noise-free images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
Abstract: The increasing importance of software quality has led to the development of new, sophisticated techniques that can be used to construct models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examines the application of ANNs to software quality prediction using Object-Oriented (OO) metrics. Quality estimation here includes estimating the maintainability of software. The dependent variable in our study was maintenance effort; the independent variables were the principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. We thus found the ANN method useful in constructing software quality models.
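The MARE metric reported above is commonly defined as the mean of the absolute errors taken relative to the actual values; the effort numbers below are made up for illustration:

```python
def mare(actual, predicted):
    """Mean Absolute Relative Error: mean of |actual - predicted| / actual.
    Assumes strictly positive actual values (e.g. maintenance effort)."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

effort_actual    = [10.0, 20.0, 40.0]   # hypothetical maintenance-effort values
effort_predicted = [12.0, 18.0, 38.0]   # hypothetical ANN predictions
print(round(mare(effort_actual, effort_predicted), 4))  # → 0.1167
```

Because each error is scaled by its own actual value, MARE is comparable across modules whose effort spans very different magnitudes, which is why it suits this kind of maintainability study.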