Performance of Histogram-Based Skin Colour Segmentation for Arms Detection in Human Motion Analysis Applications

Arms detection is one of the fundamental problems in human motion analysis applications. The arms are considered the most challenging body part to detect, since their pose and speed vary across image sequences. Moreover, the arms are often occluded by other body parts such as the head and torso. In this paper, histogram-based skin colour segmentation is proposed to detect the arms in image sequences. Six colour spaces, namely RGB, rgb, HSI, TSL, SCT and CIELAB, are evaluated to determine the best colour space for this segmentation procedure. The evaluation is divided into three categories: single colour component, colour without luminance, and colour with luminance. Performance is measured using True Positive (TP) and True Negative (TN) rates on 250 images with manual ground truth. The best colour space is selected based on the highest TN value, followed by the highest TP value.
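
The abstract does not include an implementation; the following is a minimal sketch of histogram-based skin segmentation via histogram back-projection, here in OpenCV's HSV space (the bin counts, the threshold and the choice of HSV rather than the six spaces compared in the paper are illustrative assumptions):

```python
import cv2
import numpy as np

def build_skin_histogram(skin_patches):
    """skin_patches: list of BGR images containing only skin pixels.
    Accumulate a 2D hue-saturation histogram over all patches."""
    hist = np.zeros((30, 32), dtype=np.float32)
    for patch in skin_patches:
        hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
        hist += cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def segment_skin(frame, hist, threshold=50):
    """Back-project the skin histogram onto a frame and threshold it;
    white pixels in the mask are candidate arm/skin regions."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], scale=1)
    _, mask = cv2.threshold(backproj, threshold, 255, cv2.THRESH_BINARY)
    return mask
```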

Multiple Sequence Alignment Using Three-Dimensional Fragments

Background: DIALIGN is a DNA/protein alignment tool that performs pairwise and multiple alignments by comparing gap-free segments (fragments) between sequence pairs. An alignment of two sequences is a chain of fragments, i.e. local gap-free pairwise alignments, with the highest total score. Method: This article defines a new approach that relies on using three-dimensional fragments, i.e. local three-way alignments, in the alignment process instead of two-dimensional ones. These three-dimensional fragments are gap-free alignments consisting of equal-length segments belonging to three distinct sequences. Results: The results obtained show a clear improvement over the performance of DIALIGN.
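
No pseudocode is given in the abstract; as a rough sketch under simplified assumptions, a three-dimensional fragment can be scored by counting columns in which all three segments agree (a stand-in for DIALIGN's probability-based weights), and the best fixed-length fragment found exhaustively:

```python
def fragment_score_3d(s1, s2, s3, i, j, k, length):
    """Score a gap-free three-way fragment: equal-length segments
    starting at positions i, j, k in sequences s1, s2, s3. The score
    here is simply the number of columns where all three agree."""
    return sum(s1[i + c] == s2[j + c] == s3[k + c] for c in range(length))

def best_fragment_3d(s1, s2, s3, length):
    """Exhaustive search over all start triples -- O(n^3), fine for a
    sketch but far too slow for real sequences."""
    best = (0, None)
    for i in range(len(s1) - length + 1):
        for j in range(len(s2) - length + 1):
            for k in range(len(s3) - length + 1):
                score = fragment_score_3d(s1, s2, s3, i, j, k, length)
                if score > best[0]:
                    best = (score, (i, j, k))
    return best

print(best_fragment_3d("ACGTAC", "TACGTA", "GACGTT", 4))
```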

Creating a Maintenance Cost Model for University Buildings

Maintenance costs incurred on buildings differ. The differences can result from the type, function, age, building health index, size, form, height, location and complexity of the building. These factors contribute to the difficulty of developing a deterministic maintenance cost model. This paper reports preliminary findings on the creation of building maintenance cost distributions for universities in Malaysia. The study is triggered by the need to provide guidance on maintenance cost distributions for decision making. For this purpose, a questionnaire survey was conducted to investigate the distribution of maintenance costs in the universities. Altogether, responses were received from twenty universities, both privately and publicly owned. The research found that engineering services, roofing and finishes were the elements contributing the largest share of the maintenance costs. Furthermore, the study indicates the significance of maintenance cost distributions as a decision-making tool for maintenance management.

Inferring Hierarchical Pronunciation Rules from a Phonetic Dictionary

This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without errors all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it uses graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for the segmentation of a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words, but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and provide better performance than letter-based rule trees.
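
The paper's linear segmentation algorithm is not reproduced in the abstract; the following is a minimal sketch of one common approach, greedy longest-match segmentation of a word into graphemes from a known inventory (the small inventory below is an illustrative assumption, whereas the paper infers its inventory from the dictionary):

```python
# Greedy longest-match segmentation of a word into graphemes.
GRAPHEMES = {"tch", "sch", "ough", "th", "sh", "ch", "ph", "ee", "oo", "qu"}
MAX_LEN = max(len(g) for g in GRAPHEMES)

def segment(word):
    """Split a word into graphemes, preferring the longest known
    multi-letter grapheme at each position. Linear in the word
    length times the bounded maximum grapheme length."""
    out, i = [], 0
    while i < len(word):
        for n in range(min(MAX_LEN, len(word) - i), 1, -1):
            if word[i:i + n] in GRAPHEMES:
                out.append(word[i:i + n])
                i += n
                break
        else:  # no multi-letter grapheme matched: emit a single letter
            out.append(word[i])
            i += 1
    return out

print(segment("catch"))   # ['c', 'a', 'tch']
print(segment("school"))  # ['sch', 'oo', 'l']
```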

Ottoman Script Recognition Using Hidden Markov Model

In this study, an OCR system for the segmentation, feature extraction and recognition of Ottoman script has been developed using handwritten characters. Recognizing handwritten characters is a difficult process. The segmentation and feature extraction stages are based on geometrical feature analysis, followed by a chain code transformation of the main strokes of each character. The output of segmentation is well-defined segments that can be fed into any classification approach. The classes of main strokes are identified using a left-right Hidden Markov Model (HMM).
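
A minimal sketch of the scoring side of such a classifier: the scaled forward algorithm evaluates a chain-code observation sequence against per-class left-right HMMs (the transition and emission matrices are assumed to have been trained beforehand; their structure here is the only assumption made explicit):

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    via the scaled forward algorithm.
    obs: chain-code symbols (integers 0..7, the 8 directions)
    pi:  (S,) initial state distribution
    A:   (S, S) transitions -- upper-triangular in a left-right
         model, so states can only stay or move forward
    B:   (S, 8) emission probabilities over chain-code symbols"""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    log_p, alpha = np.log(s), alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()                    # rescale to avoid underflow
        log_p, alpha = log_p + np.log(s), alpha / s
    return log_p

def classify_stroke(obs, models):
    """models: {stroke_class: (pi, A, B)} trained beforehand."""
    return max(models, key=lambda c: forward_log_likelihood(obs, *models[c]))
```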

Design, Implementation and Testing of Mobile Agent Protection Mechanism for MANETs

In the current research, we present an operational framework and a protection mechanism that provide a secure environment to protect mobile agents against tampering. The system depends on the presence of an authentication authority. The advantage of the proposed system is that the security measures are an integral part of the design, so common security retrofitting problems do not arise. The ElGamal encryption mechanism protects the agent's confidential content, and any data collected by the agent from the visited hosts, so that eavesdropping on the agent can no longer reveal confidential information. In addition, the inherent security constraints within the framework allow the system to operate as an intrusion detection system for any mobile agent environment. The mechanism has been tested against most of the well-known severe attacks on agents and networked systems. The scheme showed promising performance, which makes it well suited to the types of transactions that need highly secure environments, e.g., business-to-business.
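
ElGamal itself is standard; for concreteness, a textbook sketch of the encryption primitive follows (demo parameters only: a deployed system would use a much larger safe prime or an elliptic-curve variant, plus hybrid encryption for bulk agent data):

```python
import secrets

P = 2**127 - 1      # demo modulus (a Mersenne prime); far too small for real use
G = 3               # generator -- an illustrative choice, not a vetted parameter

def keygen():
    x = secrets.randbelow(P - 2) + 1       # private key in [1, P-2]
    return x, pow(G, x, P)                 # (private, public)

def encrypt(y, m):
    """Encrypt an integer m < P under public key y."""
    k = secrets.randbelow(P - 2) + 1       # fresh ephemeral key per message
    return pow(G, k, P), (m * pow(y, k, P)) % P

def decrypt(x, c1, c2):
    return (c2 * pow(c1, P - 1 - x, P)) % P   # c2 / c1^x mod P (Fermat inverse)

priv, pub = keygen()
c1, c2 = encrypt(pub, 42)
assert decrypt(priv, c1, c2) == 42
```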

Integrating Low and High Level Object Recognition Steps

In pattern recognition applications, low-level segmentation and high-level object recognition are generally considered as two separate steps. This paper presents a method that bridges the gap between low-level and high-level object recognition. It is based on a Bayesian network representation and a network propagation algorithm. At the low level it uses a hierarchical structure of quadratic spline wavelet image bases. The method is demonstrated on a simple circuit diagram component identification problem.
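
The network structure is not detailed in the abstract; as a toy illustration of the kind of bottom-up evidence propagation a Bayesian network enables, here is the posterior over a high-level object hypothesis given one low-level feature detection (a hypothetical two-node network; all numbers are made up):

```python
# Posterior over an object class given a detected low-level feature,
# by Bayes' rule over a tiny class -> feature network.
prior = {"resistor": 0.5, "capacitor": 0.3, "wire": 0.2}
# P("parallel-bars" feature detected | class) -- illustrative values
likelihood = {"resistor": 0.1, "capacitor": 0.8, "wire": 0.05}

evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}
print(posterior)  # the capacitor hypothesis dominates (0.8)
```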

Color Image Segmentation and Multi-Level Thresholding by Maximization of Conditional Entropy

In this work, a novel approach for color image segmentation, using higher-order entropy as a textural feature for determining thresholds over a two-dimensional image histogram, is discussed. A similar approach is applied to achieve multi-level thresholding in both grayscale and color images. The paper discusses two methods of color image segmentation using RGB as the standard processing space. The threshold for segmentation is decided by maximizing the conditional entropy in the two-dimensional histogram of the color image, separated into the three grayscale images of R, G and B. The features are first developed independently for the three (R, G, B) channels, and then combined to obtain the segmentation of the different color components. Considering local maxima instead of the global maximum of the conditional entropy yields multiple thresholds for the same image, which forms the basis for multi-level thresholding.
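
The exact conditional-entropy criterion cannot be reconstructed from the abstract; the sketch below uses a closely related stand-in, maximizing the sum of the entropies of the background and object quadrants of the 2D (pixel value, local mean) histogram, applied per color channel:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def max_entropy_threshold_2d(channel, bins=32):
    """Pick the (s, t) pair maximizing the summed entropies of the
    background and object quadrants of the 2D histogram built from
    pixel value vs. 3x3 local mean. A simplified 2D maximum-entropy
    scheme, not the paper's exact conditional-entropy formulation."""
    local_mean = uniform_filter(channel.astype(float), size=3)
    hist, _, _ = np.histogram2d(channel.ravel(), local_mean.ravel(),
                                bins=bins, range=[[0, 255], [0, 255]])
    hist /= hist.sum()
    best, best_st = -np.inf, (0, 0)
    for s in range(1, bins):
        for t in range(1, bins):
            bg, ob = hist[:s, :t], hist[s:, t:]
            if bg.sum() == 0 or ob.sum() == 0:
                continue
            h = entropy(bg / bg.sum()) + entropy(ob / ob.sum())
            if h > best:
                best, best_st = h, (s, t)
    return tuple(v * 256 // bins for v in best_st)  # back to intensity units
```

Collecting the local maxima of the same criterion, instead of only its global maximum, would give the multiple thresholds used for multi-level thresholding.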

Developing Marketing Strategy in Nonmetallic Mineral Industry at the Business Level

This study extends research on the relationship between marketing strategy and market segmentation by investigating market segments in the cement industry. Competitive strength and rivals' distance from the factory were used to characterize the business environment. Three segments (positive, neutral or indifferent, and zero zones) were identified as strategic segments, and for each segment a marketing strategy (aggressive, defensive or decline) was developed. This study employed data from the cement industry to fulfil two objectives: the first is to give a framework for the segmentation of the cement industry, and the second is to develop marketing strategies under varying competitive strength. Fifty-six questionnaires containing closed- and open-ended questions were collected and analyzed. The results support the theory that segments tend to be more aggressive than defensive as competitive strength increases. It is concluded that high-strength segments follow total market coverage, concentric diversification and a frontal attack on their competitors. With decreased competitive strength, businesses tend to follow a multi-market strategy, product modification/improvement and a flank attack on direct competitors. Segments with weak competitive strength followed a focus strategy and a decline strategy.

Unsupervised Texture Classification and Segmentation

An unsupervised classification algorithm is derived by modeling observed data as a mixture of several mutually exclusive classes that are each described by linear combinations of independent non-Gaussian densities. The algorithm estimates the data density in each class by using parametric nonlinear functions that fit the non-Gaussian structure of the data. This improves classification accuracy compared with standard Gaussian mixture models. When applied to textures, the algorithm can learn basis functions for images that capture the statistically significant structure intrinsic to the images. We apply this technique to the problem of unsupervised texture classification and segmentation.
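
A minimal sketch of the classification step under such a model, assuming the per-class unmixing matrices W_k have already been learned (e.g. by an ICA-mixture EM) and assuming Laplacian (super-Gaussian) source densities as the non-Gaussian model:

```python
import numpy as np

def class_log_likelihood(x, W):
    """log p(x | class) for a linear ICA model with Laplacian sources:
    p(x) = |det W| * prod_i 0.5 * exp(-|(W x)_i|)."""
    s = W @ x
    return (np.log(abs(np.linalg.det(W)))
            - np.abs(s).sum() - len(s) * np.log(2.0))

def classify(x, unmixing_matrices, log_priors):
    """unmixing_matrices: {class: W} learned beforehand;
    log_priors: {class: log pi_k}. Returns the MAP class label."""
    return max(unmixing_matrices,
               key=lambda k: log_priors[k]
               + class_log_likelihood(x, unmixing_matrices[k]))
```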

Why We Are Taller in the Morning than When Going to Bed at Night – An in vivo and in vitro Study

Intradiscal and intervertebral pressure transducers were developed and used to map the pressures in the nucleus and within the annulus of human spinal segments. Stress-relaxation behaviour was recorded over time for nucleus pressure, applied load and peripheral strain. The results show that, for normal discs, pressures in the nucleus respond viscoelastically to the applied compressive load. The mechanical strains that develop around the periphery of the vertebral body are also viscoelastic under the applied compressive load, and the applied compressive load against time likewise shows viscoelastic behaviour. The annulus, however, does not respond viscoelastically to the applied load; it shows a linear response to compressive loading.

User Acceptance of Location-based Services

Location-based services (LBS) exploit the known location of a user to provide services dependent on the user's geographic context and personalized needs [1]. The development and arrival of broadband mobile data networks, together with mobile terminals equipped with new location technologies such as GPS, have finally created opportunities for the implementation of LBS applications. On the other hand, collecting location data in general raises privacy concerns. This paper presents results from two surveys of LBS acceptance in Croatia. The first survey was administered to 181 students, and the second, extended survey involved a sample of 180 Croatian citizens. We developed a questionnaire consisting of descriptions of 15 different applications, with a scale measuring users' perceptions of and attitudes towards these applications. We report the results to identify potential commercial applications for LBS in the B2C segment. Our findings suggest that some types of applications, such as emergency and safety services and navigation, have a significantly higher rate of acceptance than other types.

Automatic Vehicle Identification by Plate Recognition

Automatic Vehicle Identification (AVI) has many applications in traffic systems (highway electronic toll collection, red-light violation enforcement, border and customs checkpoints, etc.). License plate recognition is an effective form of AVI system. In this study, a smart and simple algorithm is presented for a vehicle license plate recognition system. The proposed algorithm consists of three major parts: extraction of the plate region, segmentation of the characters, and recognition of the plate characters. For extracting the plate region, edge detection and smearing algorithms are used. In the segmentation part, smearing algorithms, filtering and some morphological algorithms are used. Finally, statistically based template matching is used for recognition of the plate characters. The performance of the proposed algorithm has been tested on real images. Based on the experimental results, we note that the algorithm shows superior performance in car license plate recognition.
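
A minimal sketch of the first stage, plate-region extraction, using vertical edge detection and a morphological closing as the "smearing" step (the kernel size and the aspect-ratio test are tuning assumptions, not the paper's values):

```python
import cv2

def find_plate_candidates(bgr):
    """Locate license-plate candidates: vertical edges are smeared
    into solid plate-shaped blobs by a wide closing, then bounding
    boxes are filtered by a plate-like aspect ratio."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)     # vertical edges
    _, binary = cv2.threshold(edges, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    smeared = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(smeared, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    plates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w > 60:        # plate-like box
            plates.append((x, y, w, h))
    return plates
```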

Building a Trend-Based Segmentation Method with an SVR Model for Stock Turning Detection

This research focuses on developing a new segmentation method, called the trend-based segmentation method (TBSM), for improving forecasting models. Generally, the piecewise linear representation (PLR) can find pairs of trading points in time series data, but in a complicated stock environment it does not serve stock forecasting well, because stock prices contain many trading trends. Taking these trends of the stock price into account when generating trading signals improves the precision of the forecasting model. Therefore, TBSM combined with an SVR model is used to detect the trading points for various Taiwanese and American stocks under different trend tendencies. The experimental results show that our trading system is more profitable and can be implemented in the real-time stock market.
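
TBSM itself is not specified in the abstract; the following is a minimal sketch of the classical top-down PLR it builds on, which returns candidate turning (trading) points as segment boundaries:

```python
import numpy as np

def plr_segment(prices, max_error=1.0):
    """Top-down piecewise linear representation (PLR): recursively
    split the series at the point of largest deviation from the
    straight line joining the segment endpoints, until every segment
    fits within max_error. Boundary indices are candidate turning
    points; max_error is a tuning assumption."""
    prices = np.asarray(prices, dtype=float)
    boundaries = []

    def split(lo, hi):
        x = np.arange(lo, hi + 1)
        line = np.interp(x, [lo, hi], [prices[lo], prices[hi]])
        dev = np.abs(prices[lo:hi + 1] - line)
        k = int(dev.argmax())
        if dev[k] > max_error and 0 < k < hi - lo:
            split(lo, lo + k)
            boundaries.append(lo + k)
            split(lo + k, hi)

    split(0, len(prices) - 1)
    return [0] + sorted(boundaries) + [len(prices) - 1]
```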

Posture Recognition using Combined Statistical and Geometrical Feature Vectors based on SVM

It is hard to perceive the interaction process with machines when visual information is not available. In this paper, we address this issue to provide interaction through visual techniques. Posture recognition is performed for American Sign Language (ASL) to recognize static alphabets and numbers. 3D information is exploited to obtain the segmentation of hands and face using a normal Gaussian distribution and depth information. Features for posture recognition are computed from statistical and geometrical properties that are translation, rotation and scale invariant. Hu moments as statistical features, and circularity and rectangularity as geometrical features, are incorporated to build the feature vectors. These feature vectors are used to train an SVM classifier that recognizes the static alphabets and numbers. For the alphabets, curvature analysis is carried out to reduce misclassifications. The experimental results show that the proposed system recognizes posture symbols with recognition rates of 98.65% and 98.6% for the ASL alphabets and numbers, respectively.
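
A minimal sketch of the feature-extraction step described above, building Hu moments plus circularity and rectangularity from a binary hand mask (the log-scaling of the Hu moments and the SVM settings are assumptions, not the paper's exact choices):

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def posture_features(mask):
    """Translation/rotation/scale-invariant vector from a binary
    hand mask: 7 Hu moments + circularity + rectangularity."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(c)).ravel()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # log-scale Hu moments
    area = cv2.contourArea(c)
    perim = cv2.arcLength(c, True)
    circularity = 4 * np.pi * area / (perim ** 2 + 1e-9)
    (_, _), (w, h), _ = cv2.minAreaRect(c)
    rectangularity = area / (w * h + 1e-9)
    return np.concatenate([hu, [circularity, rectangularity]])

# Train on labelled masks, then predict (training data assumed available):
# X = np.stack([posture_features(m) for m in train_masks])
# clf = SVC(kernel="rbf").fit(X, train_labels)
```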

Strategies of Education and Training Practice of Small and Medium-Sized Enterprises

The role of knowledge is a determining factor in economic and social life. Defining knowledge is not an easy task, yet the real task is to determine the right knowledge. From this view, knowledge is a sum of experience, ideas and cognition that can help companies remain in their markets and realize maximum profit. At the same time, changing circumstances indicate in advance that the content of, and demand for, the right knowledge are also changing. In this paper we analyse a special segment on the basis of an empirical survey: the behaviour and strategies of small and medium-sized enterprises (SMEs) in the area of knowledge handling. The survey was carried out by questionnaire, and a wide range of statistical methods was used during processing. As a result, we show how prepared these companies are to operate in a knowledge-based economy and in which areas they have prominent deficiencies.

Effect Comparison of Speckle Noise Reduction Filters on 2D-Echocardiographic Images

Echocardiographic imaging is one of the most common diagnostic tests, widely used for assessing abnormalities of regional ventricular function. The main goal of the image enhancement task in 2D echocardiography (2DE) is to solve two major problems: speckle noise and low image quality. Speckle noise reduction is therefore an important pre-processing step used to reduce distortion effects in 2DE image segmentation. In this paper, we examine common filters based on some form of low-pass spatial smoothing, such as the mean, Gaussian and median filters; the Laplacian filter is used as a high-pass sharpening filter. A comparative analysis is presented to test the effectiveness of these filters when applied to original 2DE images of the 4-chamber and 2-chamber views. Three statistical quantity measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR), are used to evaluate filter performance quantitatively on the output enhanced image.
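
A minimal sketch of such a comparison: apply the three smoothing filters and compute the three stated metrics (the 5x5 kernel size and the file name are illustrative assumptions; the Laplacian sharpening step is omitted for brevity):

```python
import cv2
import numpy as np

def rmse(ref, img):
    return float(np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2)))

def psnr(ref, img):
    return float(20 * np.log10(255.0 / rmse(ref, img)))

def snr(ref, img):
    noise = ref.astype(float) - img.astype(float)
    return float(10 * np.log10((ref.astype(float) ** 2).sum() / (noise ** 2).sum()))

image = cv2.imread("echo_4chamber.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
filters = {
    "mean":     cv2.blur(image, (5, 5)),
    "gaussian": cv2.GaussianBlur(image, (5, 5), 0),
    "median":   cv2.medianBlur(image, 5),
}
for name, out in filters.items():
    print(name, rmse(image, out), psnr(image, out), snr(image, out))
```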

Medical Image Segmentation Using Deformable Model and Local Fitting Binary: Thoracic Aorta

This paper presents an application of level sets to the segmentation of abdominal and thoracic aortic aneurysms in CTA datasets. An important challenge in reliably detecting the aorta is the need to overcome problems associated with intensity inhomogeneities. Level sets belong to an important class of methods that utilize partial differential equations (PDEs) and have been extensively applied in image segmentation. A kernel function in the level set formulation aids the suppression of noise in the extracted regions of interest and then guides the motion of the evolving contour for the detection of weak boundaries. The speed of curve evolution has been significantly improved, with a resulting decrease in segmentation time compared with previous implementations of level sets, and the method is shown to be more effective than other approaches in coping with intensity inhomogeneities. We apply the Courant-Friedrichs-Lewy (CFL) condition as the stability criterion for our algorithm.
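
A minimal sketch of how the CFL condition bounds the explicit time step in a level set evolution phi_t = F * |grad phi| (central differences only; a production implementation would use upwind schemes and periodic reinitialisation of phi):

```python
import numpy as np

def evolve_level_set(phi, speed, dx=1.0, steps=100, cfl=0.5):
    """Explicitly evolve phi_t = F * |grad phi|. The time step is
    bounded by the CFL condition dt <= cfl * dx / max|F|, so the
    front never moves more than a fraction of a grid cell per step."""
    for _ in range(steps):
        gy, gx = np.gradient(phi, dx)
        grad_norm = np.sqrt(gx ** 2 + gy ** 2)
        dt = cfl * dx / (np.abs(speed).max() + 1e-12)   # CFL-bounded step
        phi = phi + dt * speed * grad_norm
    return phi
```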

Adaptive Gaussian Mixture Model for Skin Color Segmentation

Skin-color-based tracking techniques often assume a static skin color model obtained either from an offline set of library images or from the first few frames of a video stream. Such models can perform poorly in the presence of changing lighting or imaging conditions. We propose an adaptive skin color model based on the Gaussian mixture model to handle these changing conditions. Initial estimates of the number and weights of the skin color clusters are obtained using a modified form of the general expectation-maximization algorithm. The model adapts to changes in imaging conditions and refines its parameters dynamically using spatial and temporal constraints. Experimental results show that the method can be used effectively for tracking hand and face regions.
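
A minimal sketch of the skin GMM with scikit-learn, with adaptation reduced to periodic refitting on currently classified skin pixels (the component count and threshold are assumptions; the paper's modified EM and spatial/temporal constraints are more elaborate):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_skin_model(skin_pixels, n_components=4):
    """Fit a GMM to skin pixels (an N x 3 array of color values,
    e.g. in a normalised rg or HS space)."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(skin_pixels)

def skin_mask(frame_pixels, gmm, log_threshold=-10.0):
    """Classify pixels by log-likelihood under the skin GMM; the
    threshold would be tuned, or adapted frame to frame."""
    return gmm.score_samples(frame_pixels) > log_threshold

# Crude adaptation loop: refit on pixels inside the current mask,
# standing in for the paper's constrained dynamic refinement.
# gmm = fit_skin_model(frame_pixels[skin_mask(frame_pixels, gmm)])
```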

Data Mining for Cancer Management in Egypt – Case Study: Childhood Acute Lymphoblastic Leukemia

Data mining aims at discovering knowledge in data and presenting it in a form that is easily comprehensible to humans. One useful application in Egypt is cancer management, especially the management of Acute Lymphoblastic Leukemia (ALL), the most common type of cancer in children. This paper discusses the process of designing a prototype that can help in the management of childhood ALL, which has great significance in the health care field. Besides, it has a social impact through decreasing the rate of the disease among children in Egypt. It also provides valuable information about the distribution and segmentation of ALL in Egypt, which may be linked to possible risk factors. Undirected knowledge discovery is used since, in the case of this research project, there is no target field, as the data provided are mainly subjective; this is done in order to quantify the subjective variables. The computer is therefore asked to identify significant patterns in the provided medical data about ALL. This is achieved by collecting the data necessary for the system, determining the data mining technique to be used, and choosing the most suitable implementation tool for the domain. The research makes use of a data mining tool, Clementine, to apply the decision tree technique. We feed it with data extracted from real-life cases taken from specialized cancer institutes. Relevant medical case details, such as patient medical history and diagnosis, are analyzed, classified and clustered in order to improve the disease management.
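
Clementine is a proprietary SPSS workbench; a minimal sketch of the same decision-tree step with scikit-learn follows, on hypothetical field names standing in for the case data:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Column names, the file and the target field below are hypothetical
# stand-ins for the institutes' case records.
cases = pd.read_csv("all_cases.csv")
X = pd.get_dummies(cases[["age", "gender", "governorate",
                          "family_history"]])    # one-hot categoricals
y = cases["risk_group"]                          # hypothetical target

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20)
tree.fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # readable rules
```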