Project Management Success for Contractors

The aim of this paper is to provide a better understanding of the implementation of project management practices by UiTM contractors to ensure project success. A questionnaire survey was administered to 120 UiTM contractors in Malaysia to gather information on the contractors' project backgrounds and project management skills. It was found that all of the contractors had a basic knowledge and understanding of project management skills. The results suggest that a reasonable project plan and an appropriate organizational structure are influential factors for project success. It is recommended that contractors maintain an effective programme of work and an up-to-date information system.

Comparison of Response Surface Designs in a Spherical Region

The objective of this research is to study and compare response surface designs: Central composite designs (CCD), Box-Behnken designs (BBD), Small composite designs (SCD), Hybrid designs, and Uniform shell designs (USD) over sets of reduced models when the design region is spherical, for 3 and 4 design variables. Two optimality criteria (D and G) are considered, for which larger values imply a better design. The design optimality criteria of the response surface designs are compared across the full second-order model and sets of reduced models for 3 and 4 factors based on these two criteria.
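
For reference, a standard way of stating the two criteria (the paper's exact scaling may differ) is

$$D_{\mathrm{eff}} = 100\,\frac{|X'X|^{1/p}}{N}, \qquad G_{\mathrm{eff}} = 100\,\frac{p}{\max_{\mathbf{x}\in R}\, N\, f(\mathbf{x})'(X'X)^{-1} f(\mathbf{x})},$$

where $X$ is the model matrix of the $N$ runs, $p$ is the number of parameters in the (full or reduced) second-order model, $f(\mathbf{x})$ is the model expansion of a point $\mathbf{x}$, and $R$ is the spherical design region; both efficiencies grow as the design improves, matching the convention that larger values are better.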

Estimation of the Renewal Function with Heavy-tailed Claims

We develop a new estimator of the renewal function for heavy-tailed claim amounts. Our approach is based on the peaks-over-threshold method, estimating the tail of the distribution with a generalized Pareto distribution. The asymptotic normality of an appropriately centered and normalized estimator is established, and its performance is illustrated in a simulation study.
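
In outline, the standard peaks-over-threshold ingredients that such an estimator builds on are (the notation here is generic, not necessarily the paper's)

$$R(t)=\sum_{n\ge 1}F^{*n}(t), \qquad \widehat{\overline F}(x)=\frac{N_u}{n}\Bigl(1+\hat\gamma\,\frac{x-u}{\hat\sigma}\Bigr)^{-1/\hat\gamma},\quad x>u,$$

where $F$ is the claim-size distribution with $n$-fold convolution $F^{*n}$, $u$ is a high threshold, $N_u$ is the number of exceedances among $n$ observations, and $(\hat\gamma,\hat\sigma)$ are the fitted generalized Pareto parameters; an estimator of the renewal function $R(t)$ is then obtained by plugging the GPD tail fit into the distribution.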

School Design and Energy Efficiency

Auckland has a temperate climate with comfortably warm, dry summers and mild, wet winters. An Auckland school normally does not need air conditioning for cooling during the summer and only needs heating during the winter. Space heating energy is the major portion of winter school energy consumption, and winter energy consumption is the major portion of annual school energy consumption. School building thermal design should therefore focus on winter thermal performance to reduce space heating energy. Design data and energy consumption data from a number of Auckland schools are used for this study. This pilot study investigates the relationships between the schools' energy consumption data and building design data to improve future school design for energy efficiency.

Computational Algorithm for Obtaining Abelian Subalgebras in Lie Algebras

The set of all abelian subalgebras is computationally obtained for any given finite-dimensional Lie algebra, starting from the nonzero brackets in its law. More concretely, an algorithm is described and implemented to compute a basis for each nontrivial abelian subalgebra with the help of the symbolic computation package MAPLE. Finally, a brief computational study of this implementation is presented, considering both computing time and memory usage.
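
The paper's MAPLE implementation is not reproduced here; as orientation, the following is a minimal Python sketch of the underlying abelian check, assuming the structure constants c^k_ij of the law are given as an array:

```python
import numpy as np

def bracket(x, y, c):
    """Lie bracket [x, y] from structure constants c[i, j, k] = c^k_ij."""
    # z_k = sum_ij x_i y_j c^k_ij
    return np.einsum('i,j,ijk->k', x, y, c)

def is_abelian(basis, c, tol=1e-10):
    """Check whether the span of the given vectors is an abelian subalgebra.
    Pairwise vanishing brackets on a basis imply the whole span is abelian."""
    for a in range(len(basis)):
        for b in range(a + 1, len(basis)):
            if np.linalg.norm(bracket(basis[a], basis[b], c)) > tol:
                return False
    return True

# Example: the 2-dimensional non-abelian algebra with law [e1, e2] = e2.
c = np.zeros((2, 2, 2))
c[0, 1, 1], c[1, 0, 1] = 1.0, -1.0
print(is_abelian(np.eye(2), c))               # False: the whole algebra
print(is_abelian(np.array([[0.0, 1.0]]), c))  # True: span{e2} is abelian
```

The full algorithm would enumerate candidate subspaces and keep those passing this check; the sketch shows only the core test applied to each candidate.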

An Intelligent Water Drop Algorithm for Solving Economic Load Dispatch Problem

Economic Load Dispatch (ELD) is a method of determining the most efficient, low-cost and reliable operation of a power system by dispatching the available electricity generation resources to supply the load on the system. The primary objective of economic dispatch is to minimize the total cost of generation while honoring the operational constraints of the available generation resources. In this paper an intelligent water drop (IWD) algorithm is proposed to solve the ELD problem with the objective of minimizing the total cost of generation. The intelligent water drop algorithm is a swarm-based, nature-inspired optimization algorithm modelled on natural rivers. A natural river often finds good paths among the many possible paths on its way from source to destination, finally converging on a near-optimal path. These ideas are embedded into the proposed algorithm for solving the economic load dispatch problem. The main advantages of the proposed technique are that it is easy to implement and capable of finding feasible, near-global-optimal solutions with little computational effort. To illustrate the effectiveness of the proposed method, it has been tested on 6-unit and 20-unit test systems with incremental fuel cost functions taking into account valve-point loading effects. Numerical results show that the proposed method has good convergence properties and yields better solution quality than other algorithms reported in the recent literature.
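
The IWD metaheuristic itself is not reproduced here; the sketch below shows only the ELD objective such an algorithm would minimize, with valve-point loading and a simple power-balance penalty. Coefficient names follow the usual ELD formulation, and the example values are illustrative, not the paper's test-system data.

```python
import numpy as np

def dispatch_cost(P, a, b, c, e, f, Pmin, demand, penalty=1e6):
    """Total generation cost with valve-point loading:
    F_i(P) = a_i P^2 + b_i P + c_i + |e_i sin(f_i (Pmin_i - P))|,
    plus a penalty for violating the power-balance constraint."""
    fuel = a * P**2 + b * P + c + np.abs(e * np.sin(f * (Pmin - P)))
    balance = abs(np.sum(P) - demand)   # transmission losses ignored here
    return np.sum(fuel) + penalty * balance

# Illustrative 3-unit example (made-up coefficients):
a = np.array([0.008, 0.009, 0.007]); b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0]); e = np.array([300.0, 200.0, 150.0])
f = np.array([0.035, 0.042, 0.063]); Pmin = np.array([10.0, 10.0, 10.0])
print(dispatch_cost(np.array([100.0, 120.0, 80.0]),
                    a, b, c, e, f, Pmin, demand=300.0))
```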

The Effect of Different Compression Schemes on Speech Signals

This paper studies the effect of different compression constraints and schemes, presented in a new and flexible paradigm, to achieve high compression ratios and acceptable signal-to-noise ratios for Arabic speech signals. Compression parameters are computed for variable frame sizes of a level 5 to 7 Discrete Wavelet Transform (DWT) representation of the signals, for different analyzing mother wavelet functions. Results are obtained and compared for global and level-dependent thresholding techniques. The results are also compared in terms of signal-to-noise ratio, peak signal-to-noise ratio and normalized root mean square error.
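
A minimal sketch of global-threshold DWT compression of one frame, using the PyWavelets library; the wavelet, frame size and fraction of coefficients kept are illustrative choices, not the paper's settings:

```python
import numpy as np
import pywt

def compress_frame(frame, wavelet='db8', level=5, keep=0.1):
    """DWT-compress one speech frame by keeping only the largest
    coefficients (a simple global threshold), then reconstruct."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    flat = np.concatenate([np.abs(c) for c in coeffs])
    thresh = np.quantile(flat, 1.0 - keep)            # global threshold
    coeffs = [pywt.threshold(c, thresh, mode='hard') for c in coeffs]
    return pywt.waverec(coeffs, wavelet)

def snr_db(x, y):
    """Signal-to-noise ratio of reconstruction y against original x."""
    y = y[:len(x)]
    return 10 * np.log10(np.sum(x**2) / np.sum((x - y)**2))

frame = np.random.randn(2048)                 # stand-in for a speech frame
print(snr_db(frame, compress_frame(frame)))
```

A level-dependent variant would compute a separate threshold per decomposition level instead of one global quantile.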

Toward An Agreement on Semantic Web Architecture

There are many problems associated with the World Wide Web: getting lost in hyperspace, web content that is still accessible only to humans, and difficulties of web administration. The proposed solution to these problems is the Semantic Web, considered to be an extension of the current web that presents information in both human-readable and machine-processable form. The aim of this study is to reach a new generic foundation architecture for the Semantic Web, because there is no clear architecture for it: there are four versions, but up to now there is no agreement on any one of them, nor is there a clear picture of the relations between the different layers and technologies inside this architecture. This can be done building on the ideas of the previous versions as well as Gerber's evaluation method, as a step toward agreement on one Semantic Web architecture.

Decision Making using Maximization of Negret

We analyze the problem of decision making under ignorance with regrets. Recently, Yager has developed a new method for decision making in which, instead of regrets, another type of transformation called negrets is used. Basically, the negret is considered to be the dual of the regret. We study this problem in detail and suggest the use of geometric aggregation operators in this method. To do so, we develop a different method for constructing the negret matrix in which all the values are positive. The main result obtained is that the model is now able to deal with negative numbers because of the transformation applied in the negret matrix. We further extend these results to another model, also developed by Yager, that mixes valuations and negrets. Unfortunately, in this case we are not able to deal with negative numbers, because the valuations can be either positive or negative.
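
The abstract does not spell out the exact transformation, so the sketch below is only an illustration of the idea: taking the negret as the dual of the regret (gain over the worst payoff in each state rather than loss against the best), shifted so all entries are strictly positive and a geometric aggregation is well defined. Both the column-minimum baseline and the "+1" shift are assumptions for illustration.

```python
import numpy as np

def negret_matrix(payoffs):
    """Dual-of-regret transform: gain over the worst payoff in each
    state (column), shifted by 1 so every entry is strictly positive."""
    return payoffs - payoffs.min(axis=0) + 1.0

def best_alternative(payoffs):
    """Pick the row (alternative) with the largest geometric-mean negret."""
    n = negret_matrix(np.asarray(payoffs, dtype=float))
    scores = np.exp(np.mean(np.log(n), axis=1))   # geometric mean per row
    return int(np.argmax(scores)), scores

# Works even when some payoffs are negative, the point the abstract makes.
choice, scores = best_alternative([[60, -20, 10], [40, 30, -5], [-10, 50, 20]])
print(choice, scores)
```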

Seat Assignment Problem Optimization

In this paper, the optimality of the solution of an existing real-world assignment problem, known as the seat assignment problem, using the Seat Assignment Method (SAM) is discussed. SAM is a newly derived method drawing on three existing methods, the Hungarian Method, the Northwest Corner Method and the Least Cost Method, combined in a special way that achieves both simplicity and fairness among all methods that solve the seat assignment problem.
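
SAM itself is not specified in the abstract; for orientation, the sketch below solves an assignment instance optimally with the Hungarian method (one of the three methods SAM draws on) via SciPy. The cost matrix is illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: dissatisfaction of person i assigned to seat j (illustrative)
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)     # Hungarian method
print(list(zip(rows, cols)), cost[rows, cols].sum())
```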

DWT-Based Robust Watermark Embedding Using CRC-32 Techniques

As far as the latest technological improvements are concerned, digital systems have become more popular than in the past. Despite this growing demand for digital systems, content copying and attacks against digital cinema content have become a serious problem. To solve this security problem, we propose traceable watermarking using hash functions for a digital cinema system. Digital cinema is a natural application for traceable watermarking, since it can use watermarking technology during content play as well as content transmission. The watermark is embedded into randomly selected movie frames using CRC-32 techniques. CRC-32 is used as a hash function: the embedding positions are distributed by the hash function so that no party can remove or alter the watermark. Finally, our experimental results show that the proposed DWT watermarking method using CRC-32 performs much better than conventional watermarking techniques in terms of robustness, image quality, and its simple but unbreakable algorithm.
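
The paper's exact embedding procedure is not reproduced here; the sketch below only illustrates how CRC-32 can spread embedding positions in a key-dependent way, using Python's standard zlib.crc32. The key and parameters are hypothetical.

```python
import zlib

def embed_positions(key: bytes, n_frames: int, n_bits: int):
    """Derive pseudo-random, key-dependent frame indices from CRC-32,
    so the positions cannot be predicted without the key."""
    positions, crc = [], zlib.crc32(key)
    for i in range(n_bits):
        crc = zlib.crc32(i.to_bytes(4, 'big'), crc)   # chain the hash
        positions.append(crc % n_frames)
    return positions

print(embed_positions(b'session-42', n_frames=100000, n_bits=8))
```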

A Novel Method for Live Debugging of Production Web Applications by Dynamic Resource Replacement

This paper proposes a novel methodology for enabling debugging and tracing of production web applications without affecting their normal flow and functionality. This method of debugging enables developers and maintenance engineers to replace a set of existing resources, such as images, server-side scripts and cascading style sheets, with another set of resources per web session. The new resources are only active in the debug session; other sessions are not affected. This methodology helps developers trace defects, especially those that appear only in production environments, and explore the behaviour of the system. A realization of the proposed methodology has been implemented in Java.
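
The paper's Java realization is not reproduced here; a minimal Python sketch of the per-session resolution idea follows, with hypothetical session ids and resource paths.

```python
# Per-session overrides: session id -> {requested path: replacement path}.
# Only sessions registered here see the debug resources; all other
# sessions are served the production files unchanged.
debug_overrides = {
    'debug-session-7': {'/js/app.js': '/debug/js/app-traced.js'},
}

def resolve_resource(session_id: str, path: str) -> str:
    """Return the replacement resource for a debug session, or the
    original path for every other session."""
    return debug_overrides.get(session_id, {}).get(path, path)

assert resolve_resource('debug-session-7', '/js/app.js') == '/debug/js/app-traced.js'
assert resolve_resource('normal-user', '/js/app.js') == '/js/app.js'
```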

Culture of Oleaginous Yeasts in Dairy Industry Wastewaters to Obtain Lipids Suitable for the Production of Second-Generation Biodiesel

The oleaginous yeast Lipomyces starkeyi was grown in the presence of dairy industry wastewaters (DIW). The yeast was able to degrade the organic components of DIW and to produce a significant fraction of its biomass as triglycerides. When using DIW from Ricotta cheese production or residual whey as the growth medium, L. starkeyi could be cultured without dilution or external organic supplementation. In contrast, the yeast could only partially degrade the DIW from Mozzarella cheese production, owing to the accumulation of a metabolic product beyond the threshold of toxicity. In this case, a dilution of the DIW was required to obtain a more efficient degradation of the carbon compounds and a higher yield of oleaginous biomass. The fatty acid distribution of the microbial oils obtained showed a prevalence of oleic acid and is compatible with the production of a second-generation biodiesel offering good resistance to oxidation as well as excellent cold performance.

Modification of Palm Oil Structure to Cocoa Butter Equivalent by Carica papaya Lipase-Catalyzed Interesterification

Palm oil can be converted to a cocoa butter equivalent by lipase-catalyzed interesterification. The objective of this research was to investigate the structural modification of palm oil to a cocoa butter equivalent using Carica papaya lipase-catalyzed interesterification. The study showed that the composition of the cocoa butter equivalent was affected by the acyl donor source, substrate ratio, initial water content of the enzyme, reaction time, reaction temperature and the amount of enzyme. Among the three acyl donors tested (methyl stearate, ethyl stearate and stearic acid), methyl stearate appeared to be the best acyl donor for incorporation into the palm oil structure. The best reaction conditions for cocoa butter equivalent production were: a substrate ratio (palm oil : methyl stearate, mol/mol) of 1:4, a water activity of the enzyme of 0.11, a reaction time of 4 h, a reaction temperature of 45 °C and 18% enzyme by weight. The chemical and physical properties of the cocoa butter equivalent were 9.75 ± 0.41% free fatty acid, an iodine number of 44.89 ± 0.84, a saponification value of 193.19 ± 0.78 and a melting point of 37-39 °C.

Suggestion of Ultrasonic System for Diagnosis of Functional Gastrointestinal Disorders: Finite Difference Analysis, Development and Clinical Trials

Functional gastrointestinal disorders have a detrimental impact on the quality of life of the affected population and impose a tremendous social and economic burden. There are, however, few diagnostic methods for functional gastrointestinal disorders. Our research group recently identified that the gastrointestinal tract wall in patients with functional gastrointestinal disorders becomes more rigid than that of healthy people when palpating the abdominal regions overlying the gastrointestinal tract. The objective of the current study is, therefore, to assess the feasibility of a diagnostic system for functional gastrointestinal disorders based on an ultrasound technique that can quantify the characteristics above. Two-dimensional finite difference (FD) models (one normal and two rigid models) were developed to analyze the reflective characteristics (displacement) of each soft-tissue layer in response to applied ultrasound signals. The FD analysis was based on elastic ultrasound theory. Validation of the models was performed by comparing the characteristics of the ultrasonic responses predicted by the FD analysis with those determined from actual specimens for the normal and rigid conditions. Based on the results of the FD analysis, an ultrasound system for diagnosis of functional gastrointestinal disorders was developed and clinically tested by applying it to 40 human subjects with and without functional gastrointestinal disorders, assigned to Patient and Normal Groups. The FD models were favorably validated. The results of the FD analysis showed that the maximum displacement amplitude in the rigid models (0.12 and 0.16) at the interface between the fat and muscle layers was clearly less than that in the normal model (0.29). The results from actual specimens showed that the maximum amplitude of the ultrasonic reflective signal in the rigid models (0.2 ± 0.1 Vp-p) at the interface between the fat and muscle layers was clearly higher than that in the normal model (0.1 ± 0.2 Vp-p). Clinical tests using our customized ultrasound system showed that the maximum amplitudes of the ultrasonic reflective signals near the gastrointestinal tract wall in the Patient Group (2.6 ± 0.3 Vp-p) were generally higher than those in the Normal Group (0.1 ± 0.2 Vp-p). The maximum reflective signal appeared at a depth of approximately 20 mm from the abdominal skin for all human subjects, corresponding to the location of the boundary layer close to the gastrointestinal tract wall. These findings suggest that our customized ultrasound system using the ultrasonic reflective signal may be helpful for the diagnosis of functional gastrointestinal disorders.
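
The paper's 2D elastic FD models are not reproduced here; the minimal 1D acoustic finite-difference sketch below only illustrates the principle they rely on: a stiffer (faster) deep layer produces an echo at the layer interface that can be recorded back at the source. All material and grid values are illustrative.

```python
import numpy as np

# 1-D acoustic finite-difference sketch: a pulse launched near the "skin"
# travels through two tissue layers; the stiffer deep layer reflects part
# of it at the interface, and the echo is recorded at the source cell.
nx, nt, dx, dt = 400, 1600, 1e-4, 2e-8      # grid cells, steps, m, s
c = np.full(nx, 1450.0)                     # layer 1 sound speed (m/s)
c[nx // 2:] = 1580.0                        # layer 2: stiffer, faster
p = np.zeros((3, nx))                       # pressure at t-1, t, t+1
trace = np.zeros(nt)                        # "transducer" recording

for n in range(nt):
    lap = p[1, 2:] - 2 * p[1, 1:-1] + p[1, :-2]
    p[2, 1:-1] = 2 * p[1, 1:-1] - p[0, 1:-1] + (c[1:-1] * dt / dx) ** 2 * lap
    p[2, 10] += np.exp(-((n - 60) / 20.0) ** 2)   # short source pulse
    trace[n] = p[2, 10]
    p[0], p[1] = p[1].copy(), p[2].copy()

print('direct pulse:', trace[:400].max(), 'interface echo:', trace[800:].max())
```

Raising the second layer's speed (rigidity) increases the recorded echo, which is the qualitative behavior the study exploits diagnostically.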

Multiscale Blind Image Restoration with a New Method

A new method, based on NormalShrink and a modified version of the approach of Katsaggelos and Lay, is proposed for multiscale blind image restoration. The method deals with both the noise and the blur in the images. It is shown that NormalShrink gives the highest S/N (signal-to-noise ratio) in the image denoising process. The multiscale blind image restoration is divided into two stages: the first part of this paper proposes NormalShrink for image denoising, and the second part proposes a modified version of the Katsaggelos and Lay method for blur estimation; the combination of the two yields a multiscale blind image restoration.
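
A minimal sketch of the NormalShrink denoising stage using PyWavelets, with the threshold rule as commonly stated in the literature (the paper's variant may differ in details); wavelet and level choices are illustrative.

```python
import numpy as np
import pywt

def normalshrink_denoise(img, wavelet='db4', levels=3):
    """Soft-threshold each detail subband with the NormalShrink rule
    T = beta * sigma_noise^2 / sigma_subband, where
    beta = sqrt(log(subband_size / levels)) and the noise level is
    estimated from the finest diagonal subband (median / 0.6745)."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]                        # approximation kept as is
    for (cH, cV, cD) in coeffs[1:]:
        beta = np.sqrt(np.log(cH.size / levels))
        bands = []
        for c in (cH, cV, cD):
            T = beta * sigma_n**2 / max(c.std(), 1e-12)
            bands.append(pywt.threshold(c, T, mode='soft'))
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```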

Crystalline Graphene Nanoribbons with Atomically Smooth Edges via a Novel Physico-Chemical Route

A novel physico-chemical route to produce few-layer graphene nanoribbons with atomically smooth edges is reported, via acid treatment (H2SO4:HNO3) followed by characteristic thermal shock processes involving extremely cold substances. Samples were studied by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Raman spectroscopy and X-ray photoelectron spectroscopy. This method demonstrates the importance of having open-ended nanotubes for efficient, uniform unzipping along the nanotube axis. The nanoribbons are approximately 210 nm wide and consist of few layers, as observed by transmission electron microscopy. The produced nanoribbons exhibit different chiralities, as observed by high-resolution transmission electron microscopy. This method is able to provide graphene nanoribbons with atomically smooth edges which could be used in various applications including sensors, gas adsorption materials and composite fillers, among others.

Automated Detection of Alzheimer's Disease Using a Region Growing Technique and an Artificial Neural Network

Alzheimer's disease is known as the loss of mental functions, such as thinking, memory and reasoning, that is severe enough to interfere with a person's daily functioning. The symptoms of Alzheimer's disease (AD) depend on which part of the brain is infected or damaged. In this context, MRI is the best biomedical instrument for discovering the presence of AD. This paper therefore proposes a fusion method to distinguish between normal and AD MRIs. In this combined method, 27 MRIs collected from Jordanian hospitals are analyzed using low-pass and morphological filters to extract statistical features from the intensity histogram, which are then summarized in a descriptive box plot. An artificial neural network (ANN) is also applied to test the performance of this approach. Finally, the result of a t-test at the 95% confidence level is compared with the classification accuracy of the ANN (100%). The robustness of the developed method makes it effective for diagnosing and determining the type of AD image.
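
A minimal sketch of the filtering-plus-histogram-statistics step using SciPy; the filter sizes and the feature set are illustrative stand-ins, not the paper's exact pipeline.

```python
import numpy as np
from scipy import ndimage

def slice_features(mri_slice):
    """Low-pass then morphological filtering, followed by summary
    statistics of the intensity histogram (the kind of features that
    could feed a box plot or an ANN classifier)."""
    smoothed = ndimage.gaussian_filter(mri_slice, sigma=2)   # low-pass
    opened = ndimage.grey_opening(smoothed, size=(3, 3))     # morphological
    vals = opened.ravel()
    q1, med, q3 = np.percentile(vals, [25, 50, 75])
    return {'mean': vals.mean(), 'std': vals.std(),
            'median': med, 'iqr': q3 - q1}
```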

A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Neural processors have shown good results for detecting a given character in an input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and each one is then tested separately using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time, using the same number of faster neural networks. In contrast to using only faster neural processors, the speed-up ratio increases with the size of the input image when faster neural processors and image decomposition are combined. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain, and the overall speed-up ratio of the detection process increases as the normalization of weights is done offline.
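
A minimal sketch of the two core ingredients, frequency-domain cross correlation and image decomposition; the tile size is an illustrative parameter, and the neural thresholding that would follow the correlation peak is omitted.

```python
import numpy as np

def fft_cross_correlate(image, kernel):
    """Cross correlation in the frequency domain: one forward FFT of the
    (sub-)image, a conjugate multiply with the zero-padded kernel FFT,
    and one inverse FFT: the core speed-up behind fast neural detection.
    Assumes the kernel is no larger than the image."""
    H, W = image.shape
    K = np.fft.fft2(kernel, s=(H, W))        # weights, zero-padded
    X = np.fft.fft2(image)
    return np.real(np.fft.ifft2(X * np.conj(K)))

def detect_in_subimages(image, kernel, tile=64):
    """Decompose a large image into sub-images and test each independently;
    the tiles could be dispatched to parallel workers, as described above."""
    peaks = []
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            sub = image[r:r + tile, c:c + tile]
            peaks.append(fft_cross_correlate(sub, kernel).max())
    return peaks
```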