Abstract: High blood sugar, which can progress to diabetes mellitus, is becoming a rampant disorder in our community. A lack of awareness among most people has made this disease a silent killer in recent times. The situation calls for urgency, hence the need to design a monitoring device, in the form of a wrist watch, that alerts those living with high blood glucose to danger ahead of time and introduces a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons (including a bias) at the input stage, 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of obtaining an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java and run for 1000 epochs to reduce the errors to the barest minimum. The internal circuitry of the device comprises hardware compatible with the nature of each of the input neurons. Red, green, and yellow light-emitting diodes (LEDs) are used as the neural network outputs to show pattern recognition for severe cases, pre-diabetic cases, and normal cases without traces of diabetes mellitus. The research concluded that the neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional methods of carrying out blood tests.
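As a minimal illustration of the architecture described above, the following numpy sketch trains an 8-15-10 feedforward network with plain backpropagation for 1000 epochs; the training data, learning rate, and activation choices are assumptions for illustration, not the authors' Java implementation.

```python
# Minimal sketch (not the authors' Java code): an 8-15-10 feedforward network
# trained with plain backpropagation for 1000 epochs on illustrative data.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 15, 10            # 8-15-10 configuration from the abstract

# Illustrative training set: binary symptom vectors and one-hot symptom classes.
X = rng.integers(0, 2, size=(100, n_in)).astype(float)
y = np.eye(n_out)[rng.integers(0, n_out, size=100)]

W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(1000):                    # 1000 epochs as stated in the abstract
    h = sigmoid(X @ W1 + b1)                 # hidden-layer activations
    o = sigmoid(h @ W2 + b2)                 # output-layer activations
    err = o - y                              # output error
    d_o = err * o * (1 - o)                  # gradient at the output layer
    d_h = (d_o @ W2.T) * h * (1 - h)         # gradient back-propagated to the hidden layer
    W2 -= lr * h.T @ d_o; b2 -= lr * d_o.sum(axis=0)
    W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(axis=0)

print("final mean squared error:", np.mean(err ** 2))
```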
Abstract: Lung CT image segmentation is a prerequisite in lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm directly compares the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined using a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are derived from this new term. The graph is then created using these weights between its nodes. Finally, the segmentation is completed with the minimum-cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient, and, unlike the standard method, can directly provide explicit lung regions without any post-processing operations.
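As an illustration of the boundary term described above, the sketch below computes a patch-based weight between two neighboring pixels; the Gaussian form, patch radius, and sigma are assumptions and not necessarily the authors' exact formulation.

```python
# Sketch (assumed formulation): patch-based boundary weight between two
# neighboring pixels p and q -- compare local patches instead of single intensities.
import numpy as np

def patch_weight(img, p, q, radius=2, sigma=10.0):
    """Boundary penalty w(p, q) = exp(-||patch_p - patch_q||^2 / (2*sigma^2))."""
    def patch(c):
        r, s = c
        return img[r - radius:r + radius + 1, s - radius:s + radius + 1].astype(float)
    d2 = np.mean((patch(p) - patch(q)) ** 2)      # mean squared patch difference
    return np.exp(-d2 / (2.0 * sigma ** 2))       # high weight -> pixels likely in same region

# Example: weights from an interior pixel of a toy image to its 4-neighbors.
img = np.random.randint(0, 255, (64, 64))
p = (32, 32)
for q in [(31, 32), (33, 32), (32, 31), (32, 33)]:
    print(q, patch_weight(img, p, q))
```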
Abstract: This paper details the utilization of artificial intelligence (AI) in the field of slope stability, whereby quick and convenient solutions can be obtained using the developed tool. The AI tool used in this study is the artificial neural network (ANN), while the slope stability analysis methods are the finite element limit analysis methods. The developed tool allows for the prompt prediction of the safety factors of fill slopes and their corresponding probability of failure (depending on the degree of variation of the soil parameters), which can give the practicing engineer a reasonable basis for decision making. In fact, the successful use of the Extreme Learning Machine (ELM) algorithm shows that slope stability analysis is no longer confined to the conventional methods of modeling, which can be tedious and repetitive during the preliminary design stage, where the focus is more on cost-saving options than on detailed design. Therefore, similar ANN-based tools can be further developed to assist engineers in this aspect.
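A minimal sketch of the Extreme Learning Machine idea mentioned above is given below: a random hidden layer followed by least-squares output weights. The feature set and "safety factor" targets are purely hypothetical stand-ins for the paper's slope data.

```python
# Minimal Extreme Learning Machine sketch (illustrative, not the authors' tool):
# random hidden layer + least-squares output weights mapping soil/slope
# parameters to a safety factor.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 6))                               # hypothetical slope/soil features
y = X @ rng.uniform(-1, 1, 6) + 0.1 * rng.normal(size=200)    # hypothetical safety factors

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (never trained)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                           # hidden-layer outputs
beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights by least squares

y_hat = np.tanh(X @ W + b) @ beta
print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```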
Abstract: A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. First, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adaptively set according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesised via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms the conventional methods in terms of both subjective and objective measures.
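The sketch below illustrates the orthogonal matching pursuit step with a stopping residue threshold; the random dictionary and the fixed threshold are illustrative only, whereas the paper learns the dictionary with K-SVD and adapts the threshold from the estimated cross-correlation and the adjusted noise spectrum.

```python
# Sketch of orthogonal matching pursuit with a stopping residue threshold
# (illustrative random dictionary and fixed threshold; not the paper's
# K-SVD dictionary or adaptive threshold).
import numpy as np

def omp(D, x, stop_residue):
    """Greedy sparse coding: stop once the residual norm falls below stop_residue."""
    residual, support = x.copy(), []
    coeffs = np.zeros(D.shape[1])
    while np.linalg.norm(residual) > stop_residue and len(support) < D.shape[0]:
        k = int(np.argmax(np.abs(D.T @ residual)))       # most correlated atom
        if k in support:
            break
        support.append(k)
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, x, rcond=None)    # refit on selected atoms
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - sub @ sol
    return coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                           # unit-norm atoms
x = D[:, [3, 40]] @ np.array([1.0, -0.5])                # toy "clean spectrum" frame
print(np.nonzero(omp(D, x, stop_residue=1e-6))[0])       # typically recovers atoms 3 and 40
```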
Abstract: Test data compression is an efficient method for reducing the test application cost. The problem of reducing test data has been addressed by researchers in three different ways: test data compression, built-in self-test (BIST), and test set compaction. The latter two methods are capable of enhancing fault coverage at the cost of hardware overhead. The drawback of the conventional methods is that, although they can reduce the test storage and test power, no additional compression is applied when the test data contain redundant run lengths. This paper presents a modified run-length coding (RLC) technique combined with a multilevel selective Huffman coding (MLSHC) technique to reduce test data volume, test pattern delivery time, and power dissipation in scan test applications: when a redundant run length is encountered, the preceding run symbol is replaced with a short codeword. Experimental results show that the presented method not only improves the test data compression but also reduces the overall test data volume compared to recent schemes. Experiments on the six largest ISCAS-89 benchmarks show that our method outperforms most known techniques.
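The following sketch shows only the basic run-length step assumed by such schemes, counting runs of 0s in a scan vector; the mapping of runs to multilevel selective Huffman codewords is not reproduced here.

```python
# Sketch (illustrative only): run-length coding of a scan test vector -- count
# runs of 0s terminated by a 1, which a scheme like the one above then maps to
# short codewords with multilevel selective Huffman coding.
def run_lengths(bits):
    runs, count = [], 0
    for b in bits:
        if b == '0':
            count += 1
        else:                      # a '1' terminates the current run of 0s
            runs.append(count)
            count = 0
    if count:
        runs.append(count)         # trailing run of 0s
    return runs

test_vector = "0000" + "1" + "000000000" + "1" + "00" + "1"
print(run_lengths(test_vector))    # -> [4, 9, 2], the run lengths to be encoded
```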
Abstract: Choosing good features is an essential part of machine learning. Recent techniques aim to automate this process. For instance, feature learning intends to learn the transformation of raw data into a representation useful for machine learning tasks. In automatic audio classification tasks, this is interesting since audio, usually complex information, needs to be transformed into a computationally convenient input to process. Another technique generates features by searching a feature space. Genetic algorithms, for instance, have been used to generate audio features by combining or modifying them. We find this approach particularly interesting and, despite the undeniable advances of feature learning approaches, we take a step forward in the use of genetic algorithms to find audio features, combining them with more conventional methods, like PCA, and inserting search control mechanisms, such as constraints over a confusion matrix. This work presents the results obtained on particular audio classification problems.
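A minimal sketch of a genetic-algorithm search over binary feature masks is given below; the fitness function is a deliberately simple placeholder, whereas the paper's fitness involves classification performance, PCA-derived features, and constraints over a confusion matrix.

```python
# Minimal genetic-algorithm sketch over binary feature masks (illustrative; the
# fitness below is a placeholder target, not the paper's classifier-based fitness).
import numpy as np

rng = np.random.default_rng(0)
n_features, pop_size, n_gen = 20, 30, 50
true_mask = rng.integers(0, 2, n_features)      # hypothetical "good" feature subset

def fitness(mask):
    # Placeholder fitness: agreement with the hidden target mask.
    return int(np.sum(mask == true_mask))

pop = rng.integers(0, 2, (pop_size, n_features))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fitter half
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = int(rng.integers(1, n_features))
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        child[rng.random(n_features) < 0.05] ^= 1        # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents] + children)

best = pop[int(np.argmax([fitness(ind) for ind in pop]))]
print("best mask matches target on", fitness(best), "of", n_features, "features")
```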
Abstract: The purpose of this study is to compare the conventional crop monitoring system with the satellite-based crop monitoring system in Pakistan. The study was conducted for SUPARCO (Space and Upper Atmosphere Research Commission) and focused on the wheat crop, as it is the main cash crop of Pakistan and of the province of Punjab. It answers the question: which system is better in terms of cost, time, and manpower? The manpower calculated for the Punjab CRS (Crop Reporting Service) is 1,418 personnel, and for SUPARCO, 26 personnel. The total cost calculated for SUPARCO is almost 13.35 million, while that for the CRS is 47.705 million. The man hours calculated for the CRS are 1,543,200 hrs (136 days) and for SUPARCO 8,320 hrs (40 days), meaning that SUPARCO workers finish their work 96 days earlier than CRS workers. The results show that the satellite-based crop monitoring system is more efficient in terms of manpower, cost, and time than the conventional system, and also generates early crop forecasts and estimations. The research instruments used included interviews, physical visits, group discussions, questionnaires, and the study of reports and workflows. A total of 93 employees were selected using Yamane's formula for data collection, which was carried out with the help of questionnaires and interviews. Comparative graphing was used for the analysis of the data to formulate the results of the research. The research findings also demonstrate that although conventional methods still dominate crop monitoring in Pakistan, it is time to bring about change through technology so that agriculture can also develop along modern lines.
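For reference, Yamane's sample-size formula mentioned above is n = N / (1 + N e^2); the population size and margin of error in the sketch below are hypothetical, since the abstract reports only the resulting sample of 93 employees.

```python
# Yamane's sample-size formula n = N / (1 + N * e**2); the population size and
# margin of error below are hypothetical, not the study's actual figures.
def yamane(population, margin_of_error=0.05):
    return population / (1 + population * margin_of_error ** 2)

print(round(yamane(120)))   # e.g. a population of 120 at 5% error -> sample of ~92
```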
Abstract: Evaluation of dynamic earth pressure on retaining walls is a topic of primary importance. In the present paper, the dynamic active earth pressure and displacement of a flexible cantilever retaining wall are evaluated analytically using a 2-DOF mass-spring-dashpot model incorporating both wall and backfill properties. The effects of wall flexibility on dynamic active earth pressure and wall displacement are studied and presented in graphical form. The obtained results are then compared with various conventional methods, experimental analyses, and also with PLAXIS analysis. It is observed that the dynamic active earth pressure decreases with increasing wall flexibility, while the wall displacement increases linearly with the flexibility of the wall. The results obtained by the proposed 2-DOF analytical model are found to be more realistic and economical.
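As a generic illustration of a 2-DOF mass-spring-dashpot response under base excitation (not the paper's wall-backfill model; all parameter values are assumptions), consider the following sketch.

```python
# Generic 2-DOF mass-spring-dashpot response sketch under harmonic base excitation
# (illustrative only; masses, stiffnesses, damping and excitation are assumptions).
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 1.0e3, 2.0e3          # masses (kg)
k1, k2 = 4.0e5, 6.0e5          # spring stiffnesses (N/m)
c1, c2 = 2.0e3, 3.0e3          # dashpot coefficients (N.s/m)

def accel_ground(t):
    return 0.2 * 9.81 * np.sin(2 * np.pi * 2.0 * t)   # harmonic base acceleration

def rhs(t, y):
    # Relative displacements/velocities of the two masses with respect to the base.
    x1, v1, x2, v2 = y
    a1 = (-k1 * x1 - c1 * v1 + k2 * (x2 - x1) + c2 * (v2 - v1)) / m1 - accel_ground(t)
    a2 = (-k2 * (x2 - x1) - c2 * (v2 - v1)) / m2 - accel_ground(t)
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0, 10), [0, 0, 0, 0], max_step=1e-3)
print("peak relative displacements (m):", np.abs(sol.y[0]).max(), np.abs(sol.y[2]).max())
```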
Abstract: One of the basic issues of development management is performance measurement as a prerequisite for identifying the achievement of development objectives. The aim of our research is to develop an improved model for assessing a company's development results. The model should take into account the cyclical nature of development and the high degree of uncertainty in dealing with numerous management tasks. Our hypotheses may be formulated as follows. Hypothesis 1: the cycle of a company's development may be studied from the standpoint of a project cycle; to do that, methods and tools of project analysis are to be used. Hypothesis 2: the problem of uncertainty when justifying managerial decisions within the framework of a company's development cycle can be solved through the use of the mathematical apparatus of fuzzy logic. A reasoned justification of the validity of these hypotheses is given in this article. The fuzzy logic toolkit is applied to the case of a technology shift within an enterprise. It is shown that some restrictions in performance measurement inherent in conventional methods can be eliminated by implementing the fuzzy logic apparatus in performance measurement models.
Abstract: In this talk, we introduce a newly developed quantile
function model that can be used for estimating conditional
distributions of financial returns and for obtaining multi-step ahead
out-of-sample predictive distributions of financial returns. Since we
forecast the whole conditional distributions, any predictive quantity
of interest about the future financial returns can be obtained simply
as a by-product of the method. We also show an application of the
model to the daily closing prices of the Dow Jones Industrial Average
(DJIA) series over the period from 2 January 2004 to 8 October 2010.
We obtained the predictive distributions up to 15 days ahead for
the DJIA returns, which were further compared with the actually
observed returns and those predicted from an AR-GARCH model.
The results show that the new model can capture the main features
of financial returns and provide a better fitted model together with
improved mean forecasts compared with conventional methods. We
hope this talk will help the audience see that this new model has the
potential to be very useful in practice.
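As a small illustration of how such quantile forecasts can be scored against realized returns, the sketch below uses the standard pinball (check) loss; the forecasts and return series are purely illustrative, not the DJIA results of the talk.

```python
# Sketch: scoring predictive quantiles of returns with the pinball (check) loss,
# the standard scoring rule for quantile forecasts (the quantile model itself is
# not shown; forecasts and returns below are illustrative).
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Average check loss of quantile forecasts q_pred at level tau."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=250) * 0.01                # heavy-tailed toy returns
q05_forecast = np.full(250, np.quantile(returns, 0.05))        # naive 5% quantile forecast
print("pinball loss at tau=0.05:", pinball_loss(returns, q05_forecast, 0.05))
```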
Abstract: STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is regarded as a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world datasets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity of rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
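A sketch of the statistical idea underlying such rule induction, testing whether a decision class occurs significantly more often than its base rate among samples matching a candidate if-part, is shown below; the counts and the choice of a binomial test are illustrative, not STRIM's exact procedure.

```python
# Sketch of the statistical idea behind rule significance testing (illustrative,
# not the authors' exact test): among samples matching a candidate condition,
# does the target decision class occur significantly more often than its base rate?
from scipy.stats import binomtest

n_matching = 120        # samples in the decision table matching the if-part (hypothetical)
n_hits = 95             # of those, samples with the target decision class (hypothetical)
base_rate = 1 / 6       # base rate of that class, e.g. 6 equally likely classes

result = binomtest(n_hits, n_matching, base_rate, alternative="greater")
print("p-value:", result.pvalue)   # small p-value -> statistically significant rule
```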
Abstract: The north-eastern part of India, which receives heavier rainfall than other parts of the subcontinent, is nowadays of great concern with regard to climate change. High-intensity rainfall of short duration and longer dry spells, occurring due to the impact of climate change, also affect river morphology. In the present study, an attempt is made to delineate the north-eastern region of India into homogeneous clusters based on the fuzzy clustering concept and to compare the clusters obtained using conventional and nonconventional methods of clustering. The concept of clustering is adopted in view of the fact that the impact of climate change can be studied in a homogeneous region without much variation, which can be helpful in studies related to water resources planning and management. Ten IMD (India Meteorological Department) stations, situated in various regions of the North-east, have been selected for forming the clusters. The results of the Fuzzy C-Means (FCM) analysis show different clustering patterns for different conditions. From the analysis and comparison it can be concluded that the nonconventional method of using GCM data gives somewhat better results than the others. However, further analysis can be carried out by taking daily data instead of monthly means to reduce the effect of standardization.
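For reference, a minimal Fuzzy C-Means implementation of the kind used for such clustering is sketched below; the station data, number of clusters, and fuzzifier are assumptions, not the study's IMD dataset.

```python
# Minimal Fuzzy C-Means sketch (illustrative; random stand-in data, not the
# study's IMD station records).
import numpy as np

def fcm(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                     # fuzzy memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: U_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U

X = np.random.default_rng(1).normal(size=(10, 12))        # e.g. 10 stations x 12 monthly means
centers, U = fcm(X)
print("membership matrix shape:", U.shape)                # 10 stations x 3 clusters
```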
Abstract: According to the scientific information management literature, the improper use of information technology (e.g. personal computers) by employees is one main cause of operational and information security loss events. Therefore, organizations implement information security awareness programs to increase employees' awareness and prevent loss events. However, in many cases these information security awareness programs use conventional delivery methods such as posters, leaflets, or internal messages to make employees aware of information security policies. We assume that a viral information security awareness video might be a more effective medium than the conventional methods commonly used by organizations. The purpose of this research is to develop a viral video artifact to improve employee security behavior concerning information technology.
Abstract: Image fusion is the process in which complementary information from multiple images is integrated to produce a composite image containing more information than the original input images. Medical image fusion provides useful information from multimodality medical images, giving the doctor additional information for better diagnosis of diseases. This paper presents a wavelet-based medical image fusion algorithm applied to different multimodality medical images. In order to fuse the medical images, the images are decomposed using the Redundant Wavelet Transform (RWT). The high-frequency coefficients are convolved with a morphological operator followed by the maximum-selection (MS) rule. The low-frequency coefficients are processed by the MS rule. The fused image is reconstructed by the inverse RWT. The quantitative measures considered for evaluating the fused images include mean, standard deviation, average gradient, spatial frequency, and edge-based similarity measure. The performance of the proposed method is compared with pixel averaging, PCA, and DWT fusion methods. Compared with these conventional methods, the proposed framework provides better performance for the analysis of multimodality medical images.
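A reduced sketch of this kind of fusion scheme, using the stationary wavelet transform from PyWavelets and the maximum-selection rule, is given below; it omits the morphological processing of the high-frequency coefficients and uses random stand-in images instead of registered medical modalities.

```python
# Sketch of redundant-wavelet (stationary wavelet transform) image fusion with a
# maximum-selection rule, using PyWavelets (illustrative; omits the morphological
# step described above and uses random stand-in images).
import numpy as np
import pywt

def fuse_ms(a, b):
    """Maximum-selection rule: keep the coefficient with larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

img1 = np.random.rand(128, 128)       # stand-ins for two registered modality images
img2 = np.random.rand(128, 128)

(cA1, (cH1, cV1, cD1)), = pywt.swt2(img1, "db2", level=1)
(cA2, (cH2, cV2, cD2)), = pywt.swt2(img2, "db2", level=1)

fused_coeffs = [(fuse_ms(cA1, cA2),
                 (fuse_ms(cH1, cH2), fuse_ms(cV1, cV2), fuse_ms(cD1, cD2)))]
fused = pywt.iswt2(fused_coeffs, "db2")
print("fused image shape:", fused.shape)
```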
Abstract: Since the output characteristics of a photovoltaic (PV) system depend on the ambient temperature, solar radiation, and load impedance, its maximum power point (MPP) is not constant. Under each condition the PV module has a point at which it produces its maximum power. Therefore, a maximum power point tracking (MPPT) method is needed to keep the PV panel operating at its MPP. This paper presents a comparative study of the conventional MPPT methods used in PV systems, Perturb and Observe (P&O) and Incremental Conductance (IncCond), against a Particle Swarm Optimization (PSO) algorithm for MPPT of a PV system. To evaluate the study, the proposed PSO MPPT is implemented on a DC-DC Cuk converter and compared with the P&O and IncCond methods in terms of tracking speed, accuracy, and performance using the MATLAB/Simulink tool. The simulation results show that the proposed algorithm is simple and is superior to the P&O and IncCond methods.
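For illustration, the Perturb and Observe step compared in the study can be sketched as follows; the toy power-voltage curve and step size are assumptions, and the PSO and IncCond counterparts are not reproduced here.

```python
# Sketch of the Perturb-and-Observe MPPT step (illustrative; toy PV curve and
# step size are assumptions, not the paper's converter model).
def perturb_and_observe(v, p, v_prev, p_prev, v_ref, step=0.05):
    """Move the operating-voltage reference in the direction that increased power."""
    if p > p_prev:
        v_ref += step if v > v_prev else -step   # keep moving the same way
    else:
        v_ref -= step if v > v_prev else -step   # reverse direction
    return v_ref

# Toy PV power curve p(v) with a single maximum near v = 17 V.
def pv_power(v):
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 150.0)

v_ref, v_prev, p_prev = 12.0, 12.0, pv_power(12.0)
for _ in range(500):
    v = v_ref
    p = pv_power(v)
    v_ref = perturb_and_observe(v, p, v_prev, p_prev, v_ref)
    v_prev, p_prev = v, p
print("tracked voltage (V):", round(v_ref, 2))   # oscillates near the ~17 V maximum
```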
Abstract: This paper introduces a new point estimation algorithm, with particular focus on coherent noise suppression, given several measurements of the device under test, where it is assumed that 1) the noise is first-order stationary and 2) the device under test is linear and time-invariant. The algorithm exploits the robustness of the Pitman estimator of the Cauchy location parameter through an initial scaling of the test signal by a centred Gaussian variable of predetermined variance. Mathematical derivations and simulation results illustrate that the proposed algorithm is more accurate and consistently more robust to outliers, for density functions with different tail behaviours, than the conventional methods of the sample mean (coherent averaging technique) and sample median search.
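The sketch below computes the Pitman location estimate for a Cauchy model by numerical integration over a grid (it is the generalized Bayes posterior mean under a flat prior); the data are illustrative, and the paper's additional Gaussian scaling of the test signal is not included.

```python
# Sketch of the Pitman location estimator for a Cauchy model, computed by
# numerical integration on a grid (illustrative data; the paper's Gaussian
# pre-scaling of the test signal is not included).
import numpy as np

def pitman_cauchy_location(x, grid_half_width=50.0, n_grid=20001):
    theta = np.linspace(np.median(x) - grid_half_width,
                        np.median(x) + grid_half_width, n_grid)
    # Log-likelihood of the Cauchy location model at every grid point.
    loglik = -np.sum(np.log1p((x[None, :] - theta[:, None]) ** 2), axis=1)
    w = np.exp(loglik - loglik.max())                 # unnormalised posterior weights
    return np.sum(theta * w) / np.sum(w)              # posterior mean = Pitman estimate

rng = np.random.default_rng(0)
x = 3.0 + rng.standard_cauchy(20)                     # heavy-tailed toy measurements
print("Pitman estimate:", pitman_cauchy_location(x))
print("sample mean:", x.mean(), " sample median:", np.median(x))
```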
Abstract: In this paper, we validate crater detection in lunar surface images using Fisher Linear Discriminant Analysis (FLDA). This proposal assumes application to the SLIM (Smart Lander for Investigating Moon) project, which aims at pin-point landing on the lunar surface. The point where the lander should land is judged from the positional relations of the craters observed by the camera, so real-time image processing becomes an important element. Moreover, the SLIM project assumes a 400 kg-class lander; therefore, high-performance computers for image processing cannot be carried. We have studied various crater detection methods, such as Haar-like features, LBP, and PCA. We consider these methods appropriate to the project; however, their ability to identify unlearned images obtained in actual operation is insufficient. In this paper, we examine crater detection using FLDA and compare it with the conventional methods.
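A minimal Fisher linear discriminant sketch of the crater/non-crater classification step is given below using scikit-learn; the features and labels are synthetic stand-ins, not the project's image data.

```python
# Minimal Fisher linear discriminant sketch for crater vs. non-crater patches
# (illustrative synthetic features and labels; not the project's image pipeline).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical feature vectors extracted from image patches (e.g. flattened pixels).
crater = rng.normal(loc=1.0, size=(200, 16))
non_crater = rng.normal(loc=0.0, size=(200, 16))
X = np.vstack([crater, non_crater])
y = np.array([1] * 200 + [0] * 200)

clf = LinearDiscriminantAnalysis().fit(X, y)          # Fisher linear discriminant
print("training accuracy:", clf.score(X, y))
```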
Abstract: Because of their high ductility, aluminum alloys have been widely used as an important base for the metal forming industries. However, the main weak point of these alloys is their low strength, so forming them with conventional methods such as deep drawing and hydroforming has always faced problems such as fracture during the forming process. For this reason, the explosive forming method has recently been recommended for forming these plates. In this paper, the free explosive forming of A2024 aluminum alloy is numerically simulated and the explosion wave propagation process is studied. The results of this simulation can be effective in predicting product quality. These results are compared with an experimental test and show the superiority of this method over similar methods such as hydroforming and deep drawing.
Abstract: Generally, in order to create 3D sound using binaural systems, we use head-related transfer functions (HRTFs), which contain the information of sounds arriving at our ears. However, because of the characteristics of the HRTF, some three-dimensional effects can be degraded in the cone-of-confusion region between the front and back directions. In this paper, we propose a new method, based on psychoacoustic theory, that reduces the confusion of sound image localization. In this method, the HRTF spectral characteristics are enhanced using the energy ratio of the Bark bands. Informal listening tests show that the proposed method improves the front-back sound localization characteristics much better than the conventional methods.
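As an illustration of the Bark-band analysis mentioned above, the sketch below computes per-Bark-band energies and their ratios for a magnitude spectrum; the Bark approximation and the stand-in spectrum are assumptions, and the actual HRTF enhancement rule is not reproduced.

```python
# Sketch: per-Bark-band energy of a magnitude spectrum (the enhancement rule
# applied to the HRTF using these band-energy ratios is not detailed above,
# so only the band analysis is shown; the spectrum is a random stand-in).
import numpy as np

def hz_to_bark(f):
    # Zwicker-style approximation of the Bark scale.
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

fs, n_fft = 44100, 1024
spectrum = np.abs(np.fft.rfft(np.random.randn(n_fft)))     # stand-in magnitude spectrum
freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)

bands = np.floor(hz_to_bark(freqs)).astype(int)            # Bark band index per FFT bin
energy = np.bincount(bands, weights=spectrum ** 2)         # energy per Bark band
ratios = energy / energy.sum()                             # band energy ratios
print("number of Bark bands:", len(energy))
```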
Abstract: The fundamental motivation of this paper is how gaze estimation can be utilized effectively in application to games. In games, precise gaze estimation is not always essential for aiming at targets; the ability to move a cursor accurately to an aimed target is also significant. From a game production point of view, expressing head movement and gaze movement separately is sometimes advantageous for conveying a sense of presence. A representative example is panning a background image according to head movement while moving a cursor according to gaze movement. On the other hand, the widely used technique of point-of-gaze (POG) estimation is based on the relative position between the center of the corneal reflections of infrared light sources and the center of the pupil. However, calculating the pupil center requires relatively complicated image processing, so computation delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method to estimate head movement using only the corneal reflections of two infrared light sources in different locations is proposed. Furthermore, a method to control a cursor using gaze movement as well as head movement is proposed. The proposed methods are evaluated using game-like applications; as a result, performance similar to that of conventional methods is confirmed, and aiming control with lower computation power and stress-free, intuitive operation is obtained.
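A toy sketch of the idea of tracking head movement from the two corneal glints alone is given below; the geometry is highly simplified, and the calibration and mapping of the real method are assumptions not detailed in the abstract.

```python
# Toy sketch of tracking head movement from two corneal glints only
# (highly simplified geometry; the real method's calibration and mapping
# are not described in the abstract and are not reproduced here).
import numpy as np

def head_offset(glint_left, glint_right, ref_midpoint):
    """Shift of the glint-pair midpoint relative to a calibrated reference point."""
    midpoint = (np.asarray(glint_left) + np.asarray(glint_right)) / 2.0
    return midpoint - np.asarray(ref_midpoint)

# Calibrated reference midpoint, then a new frame where both glints moved right.
ref = [320.0, 240.0]
print(head_offset([310.0, 238.0], [338.0, 242.0], ref))   # -> [4. 0.]
```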