Study on Extraction of Lanthanum Oxide from Monazite Concentrate

Lanthanum oxide is recovered from monazite, which contains about 13.44% lanthanum oxide. The principal objective of this study is to extract lanthanum oxide from monazite of the Moemeik Myitsone Area. The treatment of monazite in this study involves three main steps: extraction of lanthanum hydroxide from monazite using caustic soda; digestion with nitric acid and precipitation with ammonium hydroxide; and calcination of lanthanum oxalate to lanthanum oxide.

Wound Healing Effect of Ocimum sanctum Leaves Extract in Diabetic Rats

Delayed wound healing in diabetes is primarily associated with hyperglycemia, over-expression of inflammatory markers, oxidative stress and delayed collagen synthesis. Such unmanaged wounds place a high economic burden on society. Thus, research is required to develop new and effective treatment strategies to deal with this emerging issue. Our present study evaluates the wound healing effects of a 50% ethanol extract of Ocimum sanctum (OSE) in streptozotocin (45 mg/kg)-induced diabetic rats with concurrent wound ulcers. The animals showing diabetes (blood glucose level >140 and

The Household-Based Socio-Economic Index for Every District in Peninsular Malaysia

Deprivation indices are widely used in public health studies. These indices are also referred to as indices of inequality or disadvantage. Although many indices have been built before, existing indices are considered less appropriate when applied to other countries or areas with different socio-economic conditions and geographical characteristics. The objective of this study is to construct an index based on the geographical and socio-economic factors of Peninsular Malaysia, defined as the weighted household-based deprivation index. The study employs variables on household items, household facilities, school attendance and education level obtained from the Malaysia 2000 census report. Factor analysis is used to extract the latent variables from the indicators, reducing the observable variables to a smaller number of components or factors. Based on the factor analysis, two extracted factors were selected, known as the Basic Household Amenities factor and the Middle-Class Household Items factor. Districts with lower index values are located in the less developed states such as Kelantan, Terengganu and Kedah, while areas with high index values are located in developed states such as Pulau Pinang, W.P. Kuala Lumpur and Selangor.
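The construction described above (factor extraction, then a weighted combination) can be sketched minimally as follows; the function names, loadings and variance weights are illustrative assumptions, not values estimated from the census data:

```python
def factor_score(indicators, loadings):
    """Score of one latent factor: loading-weighted sum of standardized indicators."""
    return sum(x * l for x, l in zip(indicators, loadings))

def weighted_index(indicators, factor_loadings, factor_weights):
    """Weighted household-based index combining the factor scores,
    e.g. weighting each factor by its share of explained variance."""
    scores = [factor_score(indicators, L) for L in factor_loadings]
    return sum(w * s for w, s in zip(factor_weights, scores)) / sum(factor_weights)
```

With two factors, as in the study, `factor_weights` would reflect the variance explained by the Basic Household Amenities and Middle-Class Household Items factors.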

Semantic Web Technologies in e-Government

e-Government is already in its second decade. A prerequisite for its further development and adaptation to new realities is the optimal management of administrative information and of the knowledge produced by those involved, i.e. the public sector, citizens and businesses. Nowadays, the amount of information displayed or distributed on the Internet has reached enormous dimensions, resulting in serious difficulties in extracting and managing knowledge. The Semantic Web, and the technologies that support it, is expected to play an important role in solving this problem. In this article, we address some relevant issues.

Spectral Analysis of Speech: A New Technique

ICA, which is generally used for the blind source separation problem, has been tested for feature extraction in speech recognition systems to replace the phoneme-based approach of MFCC. Applying the generated cepstral coefficients to ICA as preprocessing yields a new signal processing approach. This gives much better results than MFCC and ICA used separately, both for word and speaker recognition. The mixing matrix A is different before and after MFCC, as expected, since Mel is a nonlinear scale. Cepstral coefficients generated from Linear Predictive Coefficients, being independent, prove to be the right candidates for ICA. Matlab is the tool used for all comparisons. The database used is samples from ISOLET.

Watermark-based Counter for Restricting Digital Audio Consumption

In this paper we introduce three watermarking methods that can be used to count the number of times that a user has played some content. The proposed methods are tested with audio content in our experimental system using the most common signal processing attacks. The test results show that the watermarking methods used enable the watermark to be extracted under the most common attacks with a low bit error rate.

Determination of Penicillins Residues in Livestock and Marine Products by LC/MS/MS

A multi-residue analysis method for penicillins was developed and validated in bovine muscle, chicken, milk, and flatfish. Detection was based on liquid chromatography tandem mass spectrometry (LC/MS/MS). The developed method was validated for specificity, precision, recovery, and linearity. The analytes were extracted with 80% acetonitrile and cleaned up by a single reversed-phase solid-phase extraction step. Six penicillins presented recoveries higher than 76%, with the exception of amoxicillin (59.7%). Relative standard deviations (RSDs) were not more than 10%. LOQ values ranged from 0.1 to 4.5 µg/kg. The method was applied to 128 real samples. Benzylpenicillin was detected in 15 samples, cloxacillin in 7 samples, and oxacillin in 2 samples, but all detected levels were below the MRLs for penicillins.

Beam and Diffuse Solar Energy in Zarqa City

Beam and diffuse radiation data are extracted analytically from previously measured data on a horizontal surface in Zarqa city. Moreover, radiation data on tilted surfaces with different slopes have been derived and analyzed. These data consist of the beam, diffuse, and ground-reflected contributions. Hourly radiation data for the horizontal surface possess the highest radiation values in June; the values then decay as the slope increases, with the sharpest decrease for the vertical surface. The beam radiation on a horizontal surface has the highest values compared to diffuse radiation for all days of June. The total daily radiation on the tilted surface decreases with slope. The beam radiation also decays with slope, especially for the vertical surface. Diffuse radiation slightly decreases with slope, with a sharp decrease for the vertical surface. The ground-reflected radiation grows with slope, especially for the vertical surface. It is clear that in June the highest harvesting of solar energy occurred for the horizontal surface, and the harvesting decreases as the slope increases.
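The three contributions on a tilted surface are commonly combined with the isotropic sky model; a minimal sketch follows, where the function name, the geometric factor `R_b` and the albedo `rho` are assumptions for illustration, not the paper's values:

```python
import math

def tilted_radiation(I_b, I_d, beta_deg, rho=0.2, R_b=1.0):
    """Total radiation on a surface tilted at beta_deg degrees (isotropic sky model).

    I_b, I_d : beam and diffuse radiation measured on the horizontal surface
    R_b      : geometric factor converting horizontal beam to tilted-surface beam
    rho      : ground reflectance (albedo), an assumed value here
    """
    beta = math.radians(beta_deg)
    beam = I_b * R_b
    diffuse = I_d * (1 + math.cos(beta)) / 2               # sky view factor
    ground = (I_b + I_d) * rho * (1 - math.cos(beta)) / 2  # ground view factor
    return beam + diffuse + ground
```

Consistent with the trends reported above, at beta = 0 the ground-reflected term vanishes and the full diffuse component is seen, while at beta = 90° the sky view factor halves and the ground-reflected term is largest.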

Energy Distribution of EEG Signals: EEG Signal Wavelet-Neural Network Classifier

In this paper, a wavelet-based neural network (WNN) classifier for recognizing EEG signals is implemented and tested on three sets of EEG signals (healthy subjects, patients with epilepsy, and patients with epileptic syndrome during the seizure). First, the Discrete Wavelet Transform (DWT) with Multi-Resolution Analysis (MRA) is applied to decompose the EEG signal into its component sub-bands (δ, θ, α, β and γ), and Parseval's theorem is employed to extract the percentage distribution of energy features of the EEG signal at the different resolution levels. Second, a neural network (NN) classifies these extracted features to identify the EEG type according to the percentage distribution of energy features. The performance of the proposed algorithm has been evaluated using 300 EEG signals in total. The results showed that the proposed classifier is able to recognize and classify EEG signals efficiently.
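The energy-distribution step can be sketched with a one-level Haar DWT applied recursively; the paper does not specify Haar, which is used here only as the simplest orthonormal wavelet so that Parseval's theorem holds exactly, and the function names are illustrative:

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar DWT: (approximation, detail) coefficients."""
    a = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    return a, d

def energy_percentages(signal, levels):
    """Percentage of total signal energy in each sub-band; by Parseval's theorem
    the sub-band energies of an orthonormal transform sum to the signal energy."""
    total = sum(x * x for x in signal)
    percentages = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        percentages.append(100 * sum(x * x for x in detail) / total)
    percentages.append(100 * sum(x * x for x in approx) / total)
    return percentages
```

Decomposing at enough levels that the sub-bands align with the δ, θ, α, β and γ frequency ranges yields the energy-distribution feature vector fed to the NN.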

The Performance Improvement of Automatic Modulation Recognition Using Simple Feature Manipulation, Analysis of the HOS, and Voted Decision

The use of High Order Statistics (HOS) analysis is expected to provide a large number of candidate features that can be selected for pattern recognition. More candidate features can be extracted by simple manipulation through a specific mathematical function prior to the HOS analysis. A feature extraction method using HOS analysis combined with the Difference-to-the-Nth-Power manipulation has been examined in an application of Automatic Modulation Recognition (AMR) to recognize three digital modulation schemes, i.e. QPSK, 16-QAM and 64-QAM, in the AWGN transmission channel. Simulation results are reported for HOS analysis up to order 12 and the Difference-to-the-Nth-Power manipulation up to N = 4. The AMR accuracy rate is 90% at SNR > 10 dB with the Simple Decision classifier, and 96% at SNR > 2 dB with the Voted Decision method.
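One plausible reading of the Difference-to-the-Nth-Power manipulation is sketched below: the signal's first-order differences are raised to the N-th power before the statistical features are computed. This reading and the function names are assumptions, and a simple central moment stands in for the full HOS analysis:

```python
def nth_power_difference(signal, n):
    """First-order differences of the signal raised to the n-th power."""
    return [(signal[i + 1] - signal[i]) ** n for i in range(len(signal) - 1)]

def central_moment(samples, order):
    """Higher-order central moment, an HOS-style candidate feature."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** order for s in samples) / len(samples)

def hos_features(signal, n, max_order=12):
    """Candidate feature vector: central moments of the manipulated signal up to max_order."""
    manipulated = nth_power_difference(signal, n)
    return [central_moment(manipulated, k) for k in range(2, max_order + 1)]
```

Sweeping `n` from 1 to 4 and the moment order up to 12, as in the reported simulations, yields the pool of candidate features from which the classifier's inputs are selected.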

Using Data Fusion for Biometric Verification

A wide spectrum of systems requires reliable personal recognition schemes to either confirm or determine the identity of an individual. This paper considers multimodal biometric systems and their applicability to access control, authentication and security applications. Strategies for feature extraction and sensor fusion are considered and contrasted. Issues related to performance assessment, deployment and standardization are discussed. Finally, future directions of biometric systems development are outlined.

ROC Analysis of PVC Detection Algorithm using ECG and Vector-ECG Characteristics

An ECG analysis method was developed using ROC analysis of PVC detection algorithms. ECG signals from the MIT-BIH arrhythmia database were analyzed in MATLAB. First, the baseline was removed with a median filter to preprocess the ECG signal. R peaks were detected for the ECG analysis method, and the normal VCG was extracted for the VCG analysis method. Four PVC detection parameters were analyzed by ROC curves: the maximum amplitude of the QRS complex, the width of the QRS complex, the R-R interval, and the geometric mean of the VCG. To set the cut-off value of each parameter, the ROC curve was estimated from the true-positive rate (sensitivity) and the false-positive rate. Sensitivity and specificity (the true-negative rate) were calculated from the ROC curve, and the ECG was analyzed using the cut-off values estimated from it. As a result, the PVC detection algorithm based on the VCG geometric mean showed high utility, and PVCs could be detected more accurately when combined with the amplitude and width of the QRS complex.
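Choosing a cut-off value from the ROC curve can be sketched as follows; Youden's J statistic is one common criterion, used here as an illustration rather than as the paper's exact rule, and the score lists stand for hypothetical per-beat parameter values:

```python
def roc_point(scores_pvc, scores_normal, threshold):
    """(false-positive rate, sensitivity) for one threshold: a beat is flagged
    as PVC when its parameter value exceeds the threshold."""
    tpr = sum(s > threshold for s in scores_pvc) / len(scores_pvc)
    fpr = sum(s > threshold for s in scores_normal) / len(scores_normal)
    return fpr, tpr

def best_cutoff(scores_pvc, scores_normal, thresholds):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1."""
    def j(t):
        fpr, tpr = roc_point(scores_pvc, scores_normal, t)
        return tpr - fpr
    return max(thresholds, key=j)
```

Sweeping the threshold over the observed parameter range traces the ROC curve from which the operating point is selected.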

Named Entity Recognition using Support Vector Machine: A Language Independent Approach

Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering and others. This paper reports on the development of a NER system for Bengali and Hindi using Support Vector Machines (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with a variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi, tagged with the twelve NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm to generate lexical context patterns from a part of the unlabeled Bengali news corpus. These lexical patterns have been used as SVM features to improve system performance. The NER system has been tested with gold-standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. The results show an improvement in the f-score of 5.13% with the use of context patterns. A statistical analysis (ANOVA) is also performed to compare the performance of the proposed NER system with that of an existing HMM-based system for both languages.
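The contextual information used as SVM features can be sketched as a window of surrounding words plus simple orthographic cues; the window size and feature names below are illustrative, not the paper's exact feature set:

```python
def context_features(tokens, i, window=2):
    """Feature dictionary for token i: surrounding words plus orthographic cues."""
    feats = {}
    for offset in range(-window, window + 1):
        j = i + offset
        feats[f"w[{offset}]"] = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
    word = tokens[i]
    feats["is_title"] = word.istitle()   # capitalization cue for names
    feats["is_digit"] = word.isdigit()
    feats["suffix3"] = word[-3:]         # suffix cue
    return feats
```

Each feature dictionary would then be vectorized (e.g. one-hot encoded) before training the SVM on the NE classes.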

Feature Extraction for Surface Classification – An Approach with Wavelets

Surface metrology with image processing is a challenging task with wide applications in industry. Surface roughness can be evaluated using a texture classification approach. An important aspect here is the appropriate selection of features that characterize the surface. We propose an effective combination of features for multi-scale and multi-directional analysis of engineering surfaces. The features include the standard deviation, kurtosis, and the Canny edge detector. We apply the method by analyzing the surfaces with the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DT-CWT). We use the Canberra distance metric for similarity comparison between the surface classes. Our database includes surface textures manufactured by three machining processes, namely milling, casting and shaping. The comparative study shows that DT-CWT outperforms DWT, giving a correct classification rate of 91.27% with the Canberra distance metric.
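The similarity comparison can be sketched with the Canberra distance and a nearest-template rule; the template values below are illustrative placeholders, not measured features:

```python
def canberra(u, v):
    """Canberra distance; terms where both components are zero contribute nothing."""
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(u, v) if abs(a) + abs(b) > 0)

def classify_surface(features, class_templates):
    """Assign the machining class whose template feature vector is nearest."""
    return min(class_templates, key=lambda c: canberra(features, class_templates[c]))
```

Because each term is normalized by the component magnitudes, the Canberra distance weights small feature values more heavily than a Euclidean distance would.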

A Copyright Protection Scheme for Color Images using Secret Sharing and Wavelet Transform

This paper proposes a copyright protection scheme for color images using secret sharing and wavelet transform. The scheme contains two phases: the share image generation phase and the watermark retrieval phase. In the generation phase, the proposed scheme first converts the image into the YCbCr color space and creates a special sampling plane from the color space. Next, the scheme extracts the features from the sampling plane using the discrete wavelet transform. Then, the scheme employs the features and the watermark to generate a principal share image. In the retrieval phase, an expanded watermark is first reconstructed using the features of the suspect image and the principal share image. Next, the scheme reduces the additional noise to obtain the recovered watermark, which is then verified against the original watermark to examine the copyright. The experimental results show that the proposed scheme can resist several attacks such as JPEG compression, blurring, sharpening, noise addition, and cropping. The accuracy rates are all higher than 97%.

Web Content Mining: A Solution to Consumer's Product Hunt

With the rapid growth in business size, today's businesses orient towards electronic technologies. Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous size and hugely unstructured nature of the data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from seemingly impossible-to-search unstructured data on the Internet. The application of web content mining can be very encouraging in the areas of customer relations modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact in areas specific to business user needs, focusing on the customer as well as the producer. The techniques reviewed include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, a graph-based approach to the development of a web crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produced for a particular search.

Key Frames Extraction for Sign Language Video Analysis and Recognition

In this paper we propose a method for finding the video frames representing one sign of the finger alphabet. The method is based on determining hand location, segmentation, and the use of standard video quality evaluation metrics. Metric calculation is performed only in the regions of interest. A sliding mechanism for finding local extrema and an adaptive threshold based on local averaging are used for key frame selection. The success rate is evaluated by recall, precision and the F1 measure. The method's effectiveness is compared with the same metrics applied to all frames. The proposed method is fast, effective and relatively easy to implement through simple preprocessing of the input video and the subsequent use of tools designed for video quality measurement.
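The sliding local-extrema search with an adaptive threshold can be sketched as follows; the window size and the metric sequence are illustrative, and in the paper the metric is computed only inside the hand regions of interest:

```python
def key_frames(metric, window=2):
    """Indices of frames whose metric value is a local maximum inside a sliding
    window and also exceeds an adaptive threshold: the local window average."""
    selected = []
    for i in range(window, len(metric) - window):
        neighbourhood = metric[i - window : i + window + 1]
        local_avg = sum(neighbourhood) / len(neighbourhood)
        if metric[i] == max(neighbourhood) and metric[i] > local_avg:
            selected.append(i)
    return selected
```

The local-average threshold adapts to slow drifts in the metric, so only peaks that stand out from their neighbourhood are selected as key frames.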

Mining Genes Relations in Microarray Data Combined with Ontology in Colon Cancer Automated Diagnosis System

The MATCH project [1] entails the development of an automatic diagnosis system that aims to support the treatment of colon cancer by discovering mutations that occur in tumour suppressor genes (TSGs) and contribute to the development of cancerous tumours. The system is built on a) colon cancer clinical data and b) biological information derived by data mining techniques from genomic and proteomic sources. The core mining module will consist of popular, well-tested hybrid feature extraction methods and new combined algorithms designed especially for the project. Elements of rough sets, evolutionary computing, cluster analysis, self-organizing maps and association rules will be used to discover the annotations between genes and their influence on tumours [2]-[11]. The methods used to process the data have to address its high complexity, its potential inconsistency and the problem of missing values. They must integrate all the useful information necessary to answer the expert's question. For this purpose, the system has to learn from the data, or allow a domain specialist to interactively specify the part of the knowledge structure it needs to answer a given query. The program should also take into account the importance/rank of the particular parts of the data it analyses, and adjust the algorithms used accordingly.

Palmprint based Cancelable Biometric Authentication System

The cancelable palmprint authentication system proposed in this paper is specifically designed to overcome the limitations of contemporary biometric authentication systems. In the proposed system, geometric and pseudo-Zernike moments are employed as feature extractors to transform the palmprint image into a lower-dimensional, compact feature representation. Before moment computation, a wavelet transform is adopted to decompose the palmprint image into lower-resolution frequency sub-bands, which drastically reduces the computational load of moment calculation. The generated wavelet-moment-based feature representation is combined with a set of random data to generate a cancelable verification key. This private binary key can be canceled and replaced. Besides that, the key possesses high data-capture offset tolerance, with highly correlated bit strings for the intra-class population. This property allows a clear separation of the genuine and imposter populations, as well as the achievement of a zero Equal Error Rate, which is hardly attained in conventional biometric authentication systems.

The Tyrosinase and Cyclooxygenase Inhibitory Activities and Cytotoxicity Screening of Tamarindus indica Seeds

The methanolic extract from seeds of tamarind (Tamarindus indica) was prepared by Soxhlet extraction and evaluated for total phenolic content by the Folin-Ciocalteu method. The extract was then screened in vitro for anti-melanogenic activity by the tyrosinase inhibition test, anti-inflammatory activity by the cyclooxygenase-1 (COX-1) and cyclooxygenase-2 (COX-2) inhibition tests, and cytotoxicity against Vero cells. The results showed that the total phenolic content of the extract was 27.72 mg of gallic acid equivalent per g of dry weight. Tamarind seed extract (1 mg/ml) inhibited the tyrosinase enzyme by 52.13 ± 0.42%. The extract possessed no inhibitory effect on the COX-1 and COX-2 enzymes and no cytotoxic effect on Vero cells. We conclude that the tested seed extract possesses anti-melanogenic activity without toxic effects, but exhibits no anti-inflammatory activity. Further studies should include the use of advanced biological models to confirm this biological activity, as well as the isolation and characterization of the purified compounds it contains.