The Investigation of Green Roof and White Roof Cooling Potential on a Single-Storey Residential Building in the Malaysian Climate

The phenomenon of global warming, or climate change, has led to many environmental issues, including higher atmospheric temperatures, intense precipitation, increased greenhouse gas emissions, and increased indoor discomfort. Studies have shown that bringing nature to the roof, such as by constructing a green roof, or installing a highly reflective roof can help mitigate the effects of global warming and improve the thermal comfort sensation inside buildings. However, no study has compared these two types of passive roof treatment in Malaysia with respect to improving thermal comfort in buildings. Therefore, this study investigates the effect of a green roof and a white-painted roof as passive roof treatments for improving the indoor comfort of Malaysian homes. The study uses an experimental approach in which temperatures are measured on a case study building. Outdoor and indoor measurements were conducted on a flat roof under two different roof treatments, namely a green roof and a white roof. The existing black bare roof was also measured to act as a control for this study.

Virulent-GO: Prediction of Virulent Proteins in Bacterial Pathogens Utilizing Gene Ontology Terms

Prediction of bacterial virulent protein sequences can aid the identification and characterization of novel virulence-associated factors and help discover drug and vaccine targets among proteins indispensable to pathogenicity. Gene Ontology (GO) annotation, which describes the functions of genes and gene products using a controlled vocabulary of terms, has proved effective for a variety of tasks such as gene expression studies, GO annotation prediction, and protein subcellular localization. In this study, we propose a sequence-based method, Virulent-GO, which mines informative GO terms as features for predicting bacterial virulent proteins. Each protein in the datasets used by the existing method VirulentPred is annotated by using BLAST to find homologous proteins with known accession numbers, from which GO terms are retrieved. After investigating various popular classifiers under the same five-fold cross-validation scheme, Virulent-GO, using the single kind of GO term features, achieves an accuracy of 82.5%, slightly better than VirulentPred's 81.8% obtained with five kinds of sequence-based features. On the independent test, Virulent-GO also yields better results (82.0%) than VirulentPred (80.7%). When single kinds of features are evaluated with SVM, the GO term features perform much better than each of the five kinds of sequence-based features.
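To make the pipeline concrete, the following is a minimal sketch of GO-term feature encoding and five-fold evaluation, assuming binary per-term features and an SVM classifier. The GO vocabulary, toy annotations, and helper names are illustrative assumptions, not the authors' code, and the actual BLAST annotation step is omitted.

```python
# Sketch: GO-term features for virulent-protein classification (illustrative;
# the vocabulary, data, and labels are placeholders, not Virulent-GO's code).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def encode_go_terms(protein_annotations, vocabulary):
    """Binary vector per protein: 1 if it carries the GO term, else 0."""
    index = {term: j for j, term in enumerate(vocabulary)}
    X = np.zeros((len(protein_annotations), len(vocabulary)))
    for i, terms in enumerate(protein_annotations):
        for t in terms:
            if t in index:
                X[i, index[t]] = 1.0
    return X

# Toy data standing in for BLAST-derived GO annotations of each protein.
vocabulary = ["GO:0009405", "GO:0005515", "GO:0016020", "GO:0003677"]
annotations = [{"GO:0009405", "GO:0016020"}, {"GO:0005515"},
               {"GO:0009405"}, {"GO:0003677", "GO:0005515"}] * 10
labels = np.array([1, 0, 1, 0] * 10)            # 1 = virulent, 0 = non-virulent

X = encode_go_terms(annotations, vocabulary)
scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, labels, cv=5)
print("mean five-fold CV accuracy: %.3f" % scores.mean())
```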

Performance Evaluation of Para-virtualization on Modern Mobile Phone Platform

The emergence of smartphones has brought to life the concept of converged devices offering web amenities. This trend also challenges mobile device manufacturers and service providers in many respects, such as security on mobile phones, complex and lengthy design flows, and higher development costs. Among these, security on mobile phones is attracting more and more attention. Microkernel-based virtualization technology will play a critical role in addressing these challenges and meeting mobile market needs and preferences, since virtualization provides essential isolation for security and allows multiple operating systems to run on one processor, accelerating development and cutting development cost. However, the benefits of virtualization do not come for free: as an additional software layer, it adds some inevitable overhead that may decrease system performance. In this paper we evaluate and analyze the performance cost of L4 microkernel-based virtualization on a competitive mobile phone by comparing L4Linux, a para-virtualized Linux running on top of the L4 microkernel, against native Linux using lmbench and a set of typical mobile phone applications.

Zero Inflated Strict Arcsine Regression Model

The zero-inflated strict arcsine model is a newly developed model that has been found appropriate for modeling overdispersed count data. In this study, we extend the zero-inflated strict arcsine model to a zero-inflated strict arcsine regression model by taking into account the extra variability caused by excess zeros and by covariates in the count data. Maximum likelihood estimation is used to estimate the parameters of the regression model.
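For orientation, the generic zero-inflated mixture structure underlying such models can be written as below. This is a sketch in standard zero-inflated notation, with f denoting the strict arcsine probability mass function (whose exact form follows the authors' model) and with illustrative log and logit links for the covariates:

```latex
P(Y_i = 0) = \pi_i + (1 - \pi_i)\, f(0; \mu_i), \qquad
P(Y_i = y) = (1 - \pi_i)\, f(y; \mu_i), \quad y = 1, 2, \dots
```

with covariates entering through, for example,

```latex
\log \mu_i = \mathbf{x}_i^{\top} \boldsymbol{\beta}, \qquad
\operatorname{logit}(\pi_i) = \mathbf{z}_i^{\top} \boldsymbol{\gamma},
```

and the parameters estimated by maximizing the resulting log-likelihood over all observations.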

A New Hardware Implementation of Manchester Line Decoder

In this paper, we present a simple circuit for Manchester decoding that uses no complicated or programmable devices. The circuit can decode transmitted encoded data at 90 kbps; higher transmission rates can be decoded if faster devices are used. We also present a new method for extracting the embedded clock from the Manchester data stream in order to use it for serial-to-parallel conversion. All of our experimental measurements were obtained by simulation.
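The decoding rule itself is easy to model in software. The following sketch decodes pairs of half-bit samples under the IEEE 802.3 convention; the paper's circuit is hardware and its bit convention may differ, so this is illustrative only.

```python
# Software model of Manchester decoding (a sketch, not the paper's circuit).

def manchester_decode(half_bits):
    """Decode a list of half-bit levels (0/1), two samples per data bit.

    IEEE 802.3 convention: low->high mid-bit transition = 1, high->low = 0.
    """
    if len(half_bits) % 2:
        raise ValueError("need an even number of half-bit samples")
    bits = []
    for first, second in zip(half_bits[0::2], half_bits[1::2]):
        if (first, second) == (0, 1):
            bits.append(1)
        elif (first, second) == (1, 0):
            bits.append(0)
        else:
            raise ValueError("invalid pair: no mid-bit transition")
    return bits

# Every bit period is guaranteed to contain a mid-bit edge, which is the
# property the paper exploits for extracting the embedded clock.
print(manchester_decode([0, 1, 1, 0, 0, 1, 0, 1]))  # -> [1, 0, 1, 1]
```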

The Role of Private Equity during Global Crises

The term private equity usually refers to any type of equity investment in an asset whose equity is not freely tradable on a public stock market. Some researchers believe that private equity contributed to the extent of the crisis and increased the pace of its spread around the world. We do not agree with this. On the contrary, we argue that during an economic recession private equity can become an important source of funds for firms with special needs (e.g., firms seeking buyout financing, venture capital, expansion capital, or distressed debt financing). However, over-regulation of private equity in both the European Union and the US can slow down this specific funding channel to the economy and deepen the credit crunch during global crises.

Robust Face Recognition using AAM and Gabor Features

In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors, well known to be robust to small variations in shape, scale, rotation, distortion, illumination, and pose, are widely employed as feature vectors in object detection and recognition algorithms. EBGM, prominent among face recognition algorithms employing Gabor feature vectors, requires localization of the facial feature points at which Gabor feature vectors are extracted. However, the localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values, and wrong localization of facial feature points degrades the face recognition rate. AAM is known to be successful at localizing facial feature points. In this paper, we devise a facial feature point localization method that first roughly estimates the facial feature points using AAM and then refines them using Gabor jet similarity-based localization initialized at the rough AAM estimates, and we propose a face recognition algorithm that uses this localization method together with Gabor feature vectors. Experiments show that this cascaded localization method, based on both AAM and Gabor jet similarity, is more robust than localization based on Gabor jet similarity alone. They also show that the proposed face recognition algorithm performs better than conventional algorithms, such as EBGM, that use Gabor jet similarity-based localization and Gabor feature vectors.
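The refinement stage can be pictured with the following minimal sketch: starting from a point assumed to come from an AAM fit, the point is moved within a small window to maximize Gabor jet similarity to a template jet. The kernel parameters, search radius, and helper names are illustrative assumptions, not the authors' implementation; the AAM step itself is omitted, and points are assumed to lie away from image borders.

```python
import numpy as np

def gabor_kernels(size=31, wavelengths=(4, 8), n_orient=4):
    """Small bank of complex Gabor kernels (parameters are illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for lam in wavelengths:
        sigma = 0.56 * lam
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        for i in range(n_orient):
            theta = np.pi * i / n_orient
            xr = x * np.cos(theta) + y * np.sin(theta)
            kernels.append(envelope * np.exp(2j * np.pi * xr / lam))
    return kernels

def jet(img, pt, kernels):
    """Gabor jet: magnitudes of filter responses at a point (away from borders)."""
    r, c = pt
    h = kernels[0].shape[0] // 2
    patch = img[r - h:r + h + 1, c - h:c + h + 1]
    return np.array([np.abs((patch * k).sum()) for k in kernels])

def similarity(j1, j2):
    return j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12)

def refine_point(img, pt, template_jet, kernels, radius=5):
    """Refine an AAM-initialized point by maximizing jet similarity locally."""
    candidates = [(pt[0] + dr, pt[1] + dc)
                  for dr in range(-radius, radius + 1)
                  for dc in range(-radius, radius + 1)]
    return max(candidates,
               key=lambda p: similarity(jet(img, p, kernels), template_jet))

rng = np.random.default_rng(0)
img = rng.normal(size=(128, 128))
bank = gabor_kernels()
template = jet(img, (64, 64), bank)                 # jet at the "true" point
print(refine_point(img, (61, 60), template, bank))  # recovers (64, 64)
```

The cascade's benefit is that the local search only has to cover the small residual error of the AAM fit, instead of the whole face.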

Stability Issues on an Implemented All-Pass Filter Circuitry

So-called all-pass filter circuits are commonly used in the fields of signal processing, control, and measurement. When connected to capacitive loads, these circuits tend to lose their stability; therefore, a careful analysis of their dynamic behavior is necessary. Compensation methods intended to increase the stability of such circuits are discussed in this paper, with the so-called lead-lag compensation technique treated in detail. For the dynamic modeling, a two-port network model of the all-pass filter is derived. The results of the model analysis show that effective lead-lag compensation can be achieved solely by optimizing the circuit parameters; therefore, no additional electrical components are needed to fulfill the stability requirement.
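For reference, the standard textbook forms of a first-order all-pass stage and a lead-lag compensator are sketched below; these are generic, not the paper's specific two-port circuit, and the symbols T, T_c, and alpha are our notation.

```latex
H_{\text{AP}}(s) = \frac{1 - sT}{1 + sT}, \qquad
C(s) = \frac{1 + \alpha T_c s}{1 + T_c s}, \quad \alpha > 1,
```

where the all-pass stage has unit magnitude at all frequencies with phase falling from 0 to -180 degrees, and the compensator contributes maximum phase lead at the frequency \(\omega_m = 1/(T_c\sqrt{\alpha})\), which is where it is placed to raise the phase margin.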

Performance Analysis of Software Reliability Models using Matrix Method

This paper presents a computational methodology based on matrix operations for a computer-based solution to the problem of performance analysis of software reliability models (SRMs). A set of seven comparison criteria has been formulated to rank the various non-homogeneous Poisson process software reliability models proposed during the past 30 years for estimating software reliability measures such as the number of remaining faults, the software failure rate, and software reliability. Selecting the optimal SRM for a particular case has long been of interest to researchers in the field of software reliability. Tools and techniques for software reliability model selection found in the literature cannot be used with a high level of confidence, as they rely on a limited number of selection criteria. A real data set from a medium-sized software project, taken from published papers, is used to demonstrate the matrix method. The result of this study is a ranking of SRMs based on the permanent of the criteria matrix formed for each model from the comparison criteria; the model with the highest permanent is ranked first, and so on.
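Ranking by the permanent can be sketched as follows. The Ryser-formula implementation is standard; the toy criteria matrices and model names are placeholders, since the paper's construction of each per-model criteria matrix is not reproduced here.

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Matrix permanent via Ryser's formula, O(2^n * n); fine for small matrices."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# Toy 3x3 criteria matrices standing in for the per-model matrices built
# from the comparison criteria; values and model names are illustrative.
models = {"G-O":  [[0.9, 0.4, 0.7], [0.2, 0.8, 0.5], [0.6, 0.3, 0.9]],
          "Musa": [[0.7, 0.5, 0.6], [0.4, 0.6, 0.3], [0.5, 0.2, 0.8]]}

ranking = sorted(models, key=lambda m: -permanent(models[m]))
for rank, name in enumerate(ranking, 1):
    print(rank, name, round(permanent(models[name]), 4))  # highest permanent first
```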

Improved Closed Set Text-Independent Speaker Identification by Combining MFCC with Evidence from Flipped Filter Banks

A state-of-the-art speaker identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for a generalized representation of these features. Over the years, Mel-frequency cepstral coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This paper proposes a new feature set based on a complementary filter bank structure that improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features, which are difficult to extract, the proposed feature set incurs little computational burden during extraction. When combined with MFCC via a parallel implementation of speaker models, the proposed feature set significantly outperforms the MFCC baseline. This proposition is validated by experiments conducted on two different kinds of public databases, YOHO (microphone speech) and POLYCOST (telephone speech), with Gaussian mixture models (GMM) as the classifier at various model orders.
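One plausible reading of the complementary "flipped" filter bank is a mirror image of the standard mel bank, so that the narrow, densely spaced filters sit at the high-frequency end. The sketch below builds a conventional triangular mel filter bank and flips it; all parameters are illustrative, and this is our interpretation rather than the paper's exact construction.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Standard triangular mel filter bank over the positive FFT bins."""
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        if center > left:
            fb[i, left:center] = (np.arange(left, center) - left) / (center - left)
        if right > center:
            fb[i, center:right] = (right - np.arange(center, right)) / (right - center)
    return fb

fb = mel_filterbank(n_filters=20, n_fft=512, sr=16000)
flipped_fb = fb[::-1, ::-1]   # mirror image: fine frequency resolution
                              # now covers the high-frequency end
print(fb.shape, flipped_fb.shape)
```

Cepstral features then follow by taking a DCT of the log filter-bank energies, exactly as for MFCC, and the two streams can be fused by running parallel speaker models.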

Precise Measurement of Displacement using Pixels

Manufacturing processes demand tight dimensional tolerances. This paper concerns a transducer for the precise measurement of displacement, based on a camera containing a line-scan chip. In tests using a track of black and white stripes with a 2 mm pitch, errors in measuring an individual cycle amounted to 1.75%, suggesting that a precision of 35 microns (1.75% of the 2 mm pitch) is achievable.

Optical Coherence Tomography Combined with the Confocal Microscopy Method and Fluorescence for Class V Cavities Investigations

The purpose of this study is to present a non-invasive method for evaluating marginal adaptation in Class V composite restorations. Standardized Class V cavities, prepared in extracted human teeth, were filled with Premise (Kerr) composite. The specimens were thermocycled. The interfaces were examined by optical coherence tomography (OCT) combined with confocal microscopy and fluorescence. The optical configuration uses two single-mode directional couplers with a superluminescent diode as the source at 1300 nm. The scanning procedure is similar to that used in any confocal microscope: the fast scan is en-face (at the line rate) and the depth scan is much slower (at the frame rate). Gaps at the interfaces, as well as inside the composite resin material, were identified. OCT has numerous advantages that justify its use both in vivo and in vitro in comparison with conventional techniques.

Corporate Credit Rating using Multiclass Classification Models with Order Information

Corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has been one of the most attractive research topics in the literature. In recent years, multiclass classification models such as the artificial neural network (ANN) and the multiclass support vector machine (MSVM) have become very appealing machine learning approaches due to their good performance. However, most of them focus on classifying samples into nominal categories, so the unique characteristic of credit ratings, their ordinality, has seldom been considered. This study proposes new types of ANN and MSVM classifiers, named OMANN and OMSVM respectively, which extend binary ANN and SVM classifiers by applying an ordinal pairwise partitioning (OPP) strategy. These models can handle ordinal multiple classes efficiently and effectively. To validate their usefulness, we applied them to a real-world bond rating case and compared the results with those of conventional approaches. The experimental results showed that our proposed models improve classification accuracy over typical multiclass classification techniques while requiring fewer computational resources.
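To illustrate the general idea of decomposing an ordinal problem into binary ones, the sketch below trains K-1 binary "rating above k?" classifiers and counts exceeded thresholds at prediction time. This is one common ordinal decomposition in the spirit of OPP; the paper's exact partitioning strategy and models may differ, and the data here are synthetic.

```python
# Ordinal decomposition sketch: K ordered rating classes handled by
# K-1 binary threshold classifiers (illustrative, not the paper's OMSVM).
import numpy as np
from sklearn.svm import SVC

class OrdinalBinaryEnsemble:
    def __init__(self, n_classes):
        self.n_classes = n_classes
        self.models = [SVC(probability=True) for _ in range(n_classes - 1)]

    def fit(self, X, y):                       # y in {0, ..., K-1}, ordered
        for k, m in enumerate(self.models):
            m.fit(X, (y > k).astype(int))      # binary task: "rating above k?"
        return self

    def predict(self, X):
        # P(y > k) for each threshold; class = number of thresholds exceeded.
        over = np.stack([m.predict_proba(X)[:, 1] for m in self.models])
        return (over > 0.5).sum(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.clip((X[:, 0] * 2 + 2).astype(int), 0, 3)   # 4 synthetic ordered classes
print(OrdinalBinaryEnsemble(4).fit(X, y).predict(X[:5]))
```

Because each binary task sees all the training data and respects the class order, this style of decomposition typically needs fewer models than one-vs-one multiclass schemes, which is consistent with the reduced computation the abstract reports.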

Qualification and Provisioning of xDSL Broadband Lines using a GIS Approach

This paper presents a Geographic Information System (GIS) approach for qualifying and monitoring broadband lines in an efficient way. The methodology used for interpolation is the Delaunay Triangular Irregular Network (TIN). The method is applied to a case study at a Greek ISP monitoring 120,000 broadband lines.
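A TIN interpolation of line-quality measurements can be sketched in a few lines: SciPy's LinearNDInterpolator triangulates the sample points (Delaunay) and interpolates linearly within each triangle, which is exactly the TIN idea. The coordinates and attenuation values below are made up for illustration.

```python
# Sketch: Delaunay/TIN interpolation of a line-quality metric over geography.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

points = np.array([[23.70, 37.98], [23.75, 38.00], [23.72, 38.05],
                   [23.80, 37.95], [23.78, 38.03]])        # lon, lat of measured lines
attenuation_db = np.array([18.0, 25.0, 31.0, 22.0, 27.0])  # measured attenuation

tin = LinearNDInterpolator(points, attenuation_db)  # builds the Delaunay TIN
print(tin(23.74, 38.00))  # estimated attenuation at an unmeasured location
```

Estimates like this let an operator pre-qualify an xDSL line at an address before any physical test is run there.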

Effect of Size of the Step in the Response Surface Methodology using Nonlinear Test Functions

Response surface methodology (RSM) is a collection of mathematical and statistical techniques useful for modeling and analyzing problems in which the dependent variable is influenced by several independent variables, with the aim of determining the conditions under which these variables should operate to optimize a production process. RSM fits a first-order regression model and sets the search direction using the method of maximum/minimum slope up/down (steepest ascent/descent). However, this method selects the step size intuitively, which can affect the efficiency of RSM. This paper assesses how the step size affects the efficiency of the methodology. The numerical examples are carried out through Monte Carlo experiments evaluating three response variables: the efficiency of the gain function, the distance to the optimum, and the number of iterations. The simulation experiments showed that the gain function efficiency and the distance to the optimum were not affected by the step size, whereas the number of iterations was affected by both the step size and the type of test function used.
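The role of the step size is easiest to see in a single steepest-ascent move: fit a first-order model around the current point, normalize the coefficient vector into a direction, and advance by the chosen step. The sketch below is an illustrative helper, not the paper's simulation code.

```python
# One steepest-ascent step in RSM (illustrative sketch).
import numpy as np

def steepest_ascent_step(X, y, x0, step_size):
    """X: design points near x0, y: responses; returns the next center point."""
    A = np.column_stack([np.ones(len(X)), X])        # first-order model y = b0 + b.x
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    direction = coef[1:] / np.linalg.norm(coef[1:])  # steepest-ascent direction
    return x0 + step_size * direction                # step size is the free choice

rng = np.random.default_rng(1)
x0 = np.array([0.0, 0.0])
X = x0 + rng.uniform(-1, 1, size=(8, 2))             # small design around x0
y = 5 + 2 * X[:, 0] + 1 * X[:, 1] + rng.normal(0, 0.1, 8)
print(steepest_ascent_step(X, y, x0, step_size=0.5))
```

A larger step covers ground in fewer iterations but risks overshooting the region where the first-order model is valid, which is precisely the trade-off the Monte Carlo experiments quantify.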

Performance Improvements of DSP Applications on a Generic Reconfigurable Platform

Speedups from mapping four real-life DSP applications onto an embedded system-on-chip that couples coarse-grained reconfigurable logic with an instruction-set processor are presented. The reconfigurable logic is realized by a 2-dimensional array of processing elements. A design flow for improving application performance is proposed. Critical software parts, called kernels, are accelerated on the coarse-grained reconfigurable array. The kernels are detected by profiling the source code, and a priority-based mapping algorithm has been developed for mapping the detected kernels onto the reconfigurable logic. Two 4x4 array architectures, which differ in the interconnection structure among the processing elements, are considered. Experiments with eight different instances of the generic system show significant overall application speedups for the four applications: the performance improvements range from 1.86 to 3.67, with an average of 2.53, compared with all-software execution. These speedups are quite close to the maximum theoretical speedups imposed by Amdahl's law.
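The theoretical ceiling referred to is the usual Amdahl bound. Writing f for the fraction of execution time spent in the accelerated kernels and s for the kernel speedup (our notation, not the paper's):

```latex
S_{\text{overall}} = \frac{1}{(1 - f) + f/s}, \qquad
S_{\max} = \lim_{s \to \infty} S_{\text{overall}} = \frac{1}{1 - f}.
```

For instance, kernels covering f = 0.75 of the run time cap the overall speedup at 4 no matter how fast the array executes them, which is why the reported 1.86 to 3.67 range can sit close to the theoretical maxima.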

Development of UiTM Robotic Prosthetic Hand

The study of human hand morphology reveals that developing an artificial hand with the capabilities of the human hand is an extremely challenging task. This paper presents the development of a robotic prosthetic hand focusing on the improvement of a tendon-driven mechanism towards a biomimetic prosthetic hand. The design is geared towards achieving a high level of dexterity and anthropomorphism by means of a new hybrid mechanism that integrates a miniature motor-driven actuation mechanism, a shape memory alloy-actuated mechanism, and a passive mechanical linkage. The synergy of these actuators enables flexion-extension movement at each of the finger joints within limited size, shape, and weight constraints. Tactile sensors are integrated on the fingertips and the finger phalanges. The prosthetic hand is built to the size ratio of a biological hand, and its behavior resembles the human counterpart in working envelope, speed, and torque; it thus reproduces both the key physical features and the grasping functionality of an adult hand.

Optimal Facility Layout Problem Solution Using Genetic Algorithm

The facility layout problem (FLP) is one of the essential problems in many types of manufacturing and service sectors. It is an optimization problem whose main objective is to obtain efficient locations, arrangements, and orderings of facilities. Numerous facility layout studies in the literature have used meta-heuristic approaches to achieve optimal facility layout designs. This paper presents a genetic algorithm for solving the facility layout problem by minimizing a total cost function. The performance of the proposed approach is verified and compared using problems from the literature.
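A typical GA formulation for FLP encodes a layout as a permutation of facilities and evolves it with order-preserving operators. The sketch below uses a simple line layout and a flow-times-distance cost; the cost matrix, operators, and parameters are placeholders, not the paper's benchmark problems.

```python
# Permutation-encoded GA for a toy facility layout cost (illustrative sketch).
import random

random.seed(0)
N = 6
flow = [[0 if i == j else random.randint(1, 9) for j in range(N)] for i in range(N)]

def cost(layout):
    # Material-handling cost: flow between facilities times their distance
    # when placed on a line of equally spaced locations.
    pos = {f: i for i, f in enumerate(layout)}
    return sum(flow[a][b] * abs(pos[a] - pos[b]) for a in range(N) for b in range(N))

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = p1[i:j]                      # copy a slice from parent 1 ...
    rest = [g for g in p2 if g not in child]  # ... fill the rest in parent-2 order
    for k in range(N):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(layout, rate=0.2):
    if random.random() < rate:                # swap mutation keeps a valid permutation
        a, b = random.sample(range(N), 2)
        layout[a], layout[b] = layout[b], layout[a]
    return layout

pop = [random.sample(range(N), N) for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)
    parents = pop[:10]                        # truncation selection
    pop = parents + [mutate(order_crossover(*random.sample(parents, 2)))
                     for _ in range(20)]
best = min(pop, key=cost)
print(best, cost(best))
```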

An Effective Framework for Chinese Syntactic Parsing

This paper presents an effective framework for Chinese syntactic parsing, which comprises two parts. The first is a parsing framework based on an improved bottom-up chart parsing algorithm, which integrates the beam search strategy of the N-best algorithm and the heuristic function of the A* algorithm for pruning, and produces multiple parse trees. The second is a novel evaluation model, which integrates contextual and partial lexical information into the traditional PCFG model and defines a new score function. Using this model, the tree with the highest score is selected as the best parse tree. Finally, comparative experimental results are given. Keywords: syntactic parsing, PCFG, pruning, evaluation model.
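The interplay of bottom-up chart filling and beam pruning can be sketched as follows over a toy English PCFG. The grammar, scores, and beam width are illustrative stand-ins; the paper's framework additionally uses an A*-style heuristic and a richer evaluation model, which are not reproduced here.

```python
# Bottom-up chart (CKY-style) parsing with per-cell beam pruning (sketch).
from collections import defaultdict

# Binary rules as (parent, left, right, probability); toy grammar.
rules = [("S", "NP", "VP", 0.9), ("VP", "V", "NP", 0.8), ("NP", "Det", "N", 0.7)]
lexicon = {"the": [("Det", 1.0)], "dog": [("N", 0.9)],
           "saw": [("V", 0.8)], "cat": [("N", 0.9)]}
BEAM = 3  # keep only the top-3 scored labels per chart cell

def parse(words):
    n = len(words)
    chart = defaultdict(dict)                  # (i, j) -> {label: best score}
    for i, w in enumerate(words):              # fill the diagonal from the lexicon
        for label, p in lexicon[w]:
            chart[(i, i + 1)][label] = p
    for span in range(2, n + 1):               # grow spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            cell = {}
            for k in range(i + 1, j):          # try every split point
                for parent, left, right, p in rules:
                    if left in chart[(i, k)] and right in chart[(k, j)]:
                        s = p * chart[(i, k)][left] * chart[(k, j)][right]
                        cell[parent] = max(cell.get(parent, 0.0), s)
            # beam pruning: keep only the highest-scoring labels in this cell
            chart[(i, j)] = dict(sorted(cell.items(),
                                        key=lambda kv: -kv[1])[:BEAM])
    return chart[(0, n)]

print(parse("the dog saw the cat".split()))  # {'S': score} if a parse exists
```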

Evaluation of Shear Strength Parameters of Amended Loess through Using Common Admixtures in Gorgan, Iran

Unsaturated soils that greatly decrease in volume upon saturation, undergoing sudden settlement with increasing moisture, fracturing, and structural cracking, are called loess soils. Given the importance of civil projects, including dams, canals, and structures founded on this type of soil, and the problems it poses, more research and study of loess soils is required. This research determines shear strength parameters using grading, Atterberg limit, compaction, direct shear, and consolidation tests, and then studies the effect of cement and lime additives on the stability of loess soils. In the tests, lime and cement are separately added to the soil at different mix percentages, the stabilized samples are cured for different periods, and the effect of the additives on the shear strength parameters of the soil is studied. Results show that with curing time the additives greatly reduce the collapse potential, and that with increasing cement and lime content the maximum dry density decreases while the optimum moisture content increases. In addition, the liquid limit and plasticity index decrease, while the plastic limit increases. Notably, the results of the direct shear tests reveal increased shear strength owing to increases in the cohesion parameter and the soil friction angle.