Using Heuristic Rules from Sentence Decomposition of Experts' Summaries to Detect Students' Summarizing Strategies

Summarizing skills have been introduced into the English syllabus of Malaysian secondary schools to evaluate students' comprehension of a given text; producing a summary requires students to employ several strategies. This paper reports on our effort to develop a computer-based summarization assessment system that detects the strategies students use in producing their summaries. Sentence decomposition of expert-written summaries is used to analyze how experts produce their summary sentences. From this analysis, we identified seven summarizing strategies and their rules, which were then transformed into a set of heuristic rules for determining the summarizing strategies. We developed an algorithm based on these heuristic rules and performed experiments to evaluate and support the proposed technique.
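As an illustration of how a heuristic rule of this kind might operate, the sketch below classifies a summary sentence by its word overlap with a source sentence. The strategy labels and thresholds here are hypothetical examples, not the paper's actual seven rules.

```python
def detect_strategy(summary_sent, source_sent):
    # Hypothetical heuristic: classify the summarizing strategy by the
    # fraction of summary words that also appear in the source sentence.
    s = set(summary_sent.lower().split())
    src = set(source_sent.lower().split())
    overlap = len(s & src) / len(s) if s else 0.0
    if overlap >= 0.9:
        return "copy-verbatim"   # nearly all words taken from the source
    if overlap >= 0.5:
        return "deletion"        # copied with some words removed/replaced
    return "paraphrase"          # mostly reworded
```

A real system would apply many such rules in sequence, each keyed to a different decomposition pattern of the expert summaries.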

Experimental Study on Machinability of Laser-Sintered Material in Ball End Milling

This paper presents an experimental investigation of the machinability of laser-sintered material using a small ball end mill, focusing on wear mechanisms. The laser-sintered material was produced by irradiating a laser beam on a layer of loose fine SCM-Ni-Cu powder. Bulk carbon steel JIS S55C was selected as a reference steel. The effects of powder consolidation mechanisms and unsintered powder on tool life and wear mechanisms were investigated. Results indicated that tool life in cutting laser-sintered material is lower than in cutting JIS S55C. Adhesion of the work material and chipping were the main wear mechanisms of the ball end mill in cutting laser-sintered material. Cutting with unsintered powder surrounding the tool and the laser-sintered material caused major fracture of the cutting edge.

Thermogravimetry Study on Pyrolysis of Various Lignocellulosic Biomass for Potential Hydrogen Production

This paper studies the decomposition behavior in a pyrolytic environment of four lignocellulosic biomasses (oil palm shell, oil palm frond, rice husk and paddy straw) and two commercial components of biomass (pure cellulose and lignin), using a thermogravimetry analyzer (TGA). The unit consists of a microbalance and a furnace purged with 100 cc (STP) min-1 of nitrogen (N2) as inert gas. The heating rate was set at 20°C min-1 and the temperature was ramped from 50 to 900°C. Hydrogen gas production during pyrolysis was monitored using an Agilent 7890A gas chromatography analyzer. Oil palm shell, oil palm frond, paddy straw and rice husk were found to be reactive enough in a pyrolytic environment of up to 900°C, since pyrolysis of these biomasses starts at temperatures as low as 200°C and the maximum weight loss is reached at about 500°C. Since there was not much difference in the cellulose, hemicellulose and lignin fractions between oil palm shell, oil palm frond, paddy straw and rice husk, the T-50 and R-50 values obtained are almost similar. H2 production also started rapidly at this temperature due to the decomposition of the biomass inside the TGA. Biomass with higher lignin content, such as oil palm shell, was found to sustain H2 production for longer than materials of high cellulose and hemicellulose content.

Entrepreneurial Challenges Confronting Micro Enterprise of Malaysian Malays

This research focuses on micro-enterprises of Malaysian Malays involved in very small-scale business activities, including food stall and burger stall operators, night market hawkers, grocery store operators, as well as small construction and service works. The study explores why some micro-entrepreneurs still lag in entrepreneurship and what needs to be rectified. This quantitative study was conducted on 173 Malay micro-enterprise owners (MEOs) and 58 Malay failed micro-enterprise owners (FMEOs) involved in a full range of businesses throughout the state of Perak, Malaysia. The main aims are to identify the gaps between the failed micro-enterprise owners (FMEOs) and existing micro-enterprise owners (MEOs) and the problems faced by FMEOs. The results reveal that the MEOs had stronger motivations and better marketing approaches than the FMEOs. Furthermore, the FMEOs failed in their business ventures mainly due to a lack of management, sales and marketing skills and a poor ability to compete with rivals.

Fuzzy Approach for Ranking of Motor Vehicles Involved in Road Accidents

An increasing number of vehicles and a lack of awareness among road users may lead to road accidents. However, no specific literature was found that ranks vehicles involved in accidents based on fuzzy variables of road users. This paper proposes a ranking of four selected motor vehicle types involved in road accidents. Human and non-human factors normally linked with road accidents are considered for the ranking. The imprecision or vagueness inherent in the subjective assessments of experts led to the application of fuzzy set theory to the ranking problem. Data in the form of linguistic variables were collected from three authorised personnel of three Malaysian Government agencies. The multi-criteria decision-making method fuzzy TOPSIS was applied in the computational procedure. The analysis shows that motorcycles yielded the highest closeness coefficient at 0.6225; a ranking can be drawn from the magnitude of the closeness coefficients, with motorcycles ranked first.
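For readers unfamiliar with TOPSIS, the computation behind the closeness coefficient can be sketched as follows. This is the crisp (non-fuzzy) variant on an illustrative decision matrix, not the paper's actual fuzzy linguistic data.

```python
import math

def topsis_closeness(matrix, weights, benefit):
    # matrix: rows = alternatives, columns = criteria
    # benefit[j]: True if criterion j is "larger is better"
    m = len(weights)
    # vector-normalize each column, then apply criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    v = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    # positive and negative ideal solutions per criterion
    pis = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    nis = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    cc = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, pis)))
        d_neg = math.sqrt(sum((x - n) ** 2 for x, n in zip(row, nis)))
        # closeness coefficient: 1 = at the positive ideal, 0 = at the negative
        cc.append(d_neg / (d_pos + d_neg))
    return cc
```

Alternatives are then ranked by descending closeness coefficient, as done for the four vehicle types in the study.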

Generating Qualitative Causal Graph using Modeling Constructs of Qualitative Process Theory for Explaining Organic Chemistry Reactions

This paper discusses the causal explanation capability of QRIOM, a tool aimed at supporting the learning of organic chemistry reactions. The tool is built on the hybrid use of the Qualitative Reasoning (QR) technique and the Qualitative Process Theory (QPT) ontology. Our simulation combines symbolic, qualitative descriptions of relations with quantity analysis to generate causal graphs. The pedagogy embedded in the simulator is to both simulate and explain organic reactions. Qualitative reasoning through a causal chain is presented to explain the overall changes made to the substrate, from the initial substrate to the production of the final outputs. Several uses of the QPT modeling constructs in supporting behavioral and causal explanation at run-time are also demonstrated. Explaining organic reactions through a causal graph trace can help improve learners' reasoning ability by nurturing their conceptual understanding of the subject.

Selecting Negative Examples for Protein-Protein Interaction

Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity, and at the same time many challenges, for the identification of these interactions. Many methods have already been proposed in this regard. For in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial to the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but generating negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy; if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simple and more accurate than existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the less likely they are to interact. A well-established PPI detection algorithm was employed with our negative examples, and in most cases it increased the accuracy by more than 10% compared with the negative pair selection method of the original work.
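The shortest-path heuristic above can be sketched as follows: build the interaction graph from known positive pairs, then take pairs whose shortest-path distance meets a threshold as negative examples. The toy graph and threshold are illustrative; the paper's actual distance cutoff is not stated in the abstract.

```python
from collections import deque

def negative_pairs(edges, min_dist):
    # Build an undirected interaction graph from known (positive) pairs.
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    nodes = sorted(adj)

    def bfs_dist(src):
        # shortest-path (hop) distance from src to every reachable node
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    negs = []
    for i, a in enumerate(nodes):
        dist = bfs_dist(a)
        for b in nodes[i + 1:]:
            # distant (or unreachable) pairs are assumed non-interacting
            if dist.get(b, float("inf")) >= min_dist:
                negs.append((a, b))
    return negs
```

On a chain A-B-C-D-E with a threshold of 3, this yields (A,D), (A,E) and (B,E) as candidate negatives.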

Forecasting the Istanbul Stock Exchange National 100 Index Using an Artificial Neural Network

Many studies have shown that Artificial Neural Networks (ANNs) are widely used for forecasting financial markets, because many financial and economic variables are nonlinear and an ANN can flexibly model linear or nonlinear relationships among variables. The purpose of this study was to employ an ANN model to predict the direction of the Istanbul Stock Exchange National 100 Index (ISE National-100). The model forecast the direction of the ISE National-100 with an accuracy of 74.51%.
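The idea of training a neural model to classify market direction can be sketched with a single logistic neuron on a toy linearly separable pattern. This is only a minimal illustration of the training loop; the study's actual network architecture, inputs and data are not reproduced here.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    # single logistic neuron trained by stochastic gradient descent
    w = [0.0] * (len(X[0]) + 1)  # feature weights + bias (last slot)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[-1] + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - yi                        # gradient of log-loss w.r.t. z
            for j in range(len(xi)):
                w[j] -= lr * g * xi[j]
            w[-1] -= lr * g
    return w

def predict(w, xi):
    # 1 = "up" direction, 0 = "down"
    z = w[-1] + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 if z >= 0 else 0
```

A real direction-forecasting model would use lagged returns or technical indicators as inputs and hold out a test period to measure accuracy.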

Development of EN338 (2009) Strength Classes for Some Common Nigerian Timber Species Using Three Point Bending Test

The work presents a development of EN338 strength classes for the Nigerian timber species Strombosia pustulata, Pterygota macrocarpa, Nauclea diderrichii and Entandrophragma cylindricum. The specimens for experimental measurements were obtained from the timber shed at the famous Panteka market in Kaduna in the northern part of Nigeria. Laboratory experiments were conducted to determine the physical and mechanical properties of the selected timber species in accordance with EN 13183-1 and ASTM D193. The mechanical properties were determined using the three point bending test. The generated properties were used to obtain the characteristic values of the material properties in accordance with EN 384. The selected timber species were then classified according to EN 338: Strombosia pustulata, Pterygota macrocarpa, Nauclea diderrichii and Entandrophragma cylindricum were assigned to strength classes D40, C14, D40 and D24 respectively. Other properties such as tensile and compressive strengths parallel and perpendicular to the grain, shear strength and shear modulus were obtained in accordance with EN 338.

Effect of Co3O4 Nanoparticles Addition on (Bi,Pb)-2223 Superconductor

The effect of nano-Co3O4 addition on the superconducting properties of the (Bi,Pb)-2223 system was studied. The samples were prepared by the acetate coprecipitation method. Co3O4 of different particle sizes (10-30 nm and 30-50 nm) was added to Bi1.6Pb0.4Sr2Ca2Cu3Oy(Co3O4)x from x=0.00 to 0.05. Phase analysis by XRD, microstructural examination by SEM and dc electrical resistivity measurement by the four point probe method were performed to characterize the samples. The X-ray diffraction patterns of all samples indicated a majority Bi-2223 phase along with minor Bi-2212 and Bi-2201 phases. The volume fraction was estimated from the intensities of the Bi-2223, Bi-2212 and Bi-2201 phases. The sample with x=0.01 wt% of added Co3O4 (10-30 nm size) showed the highest volume fraction of the Bi-2223 phase (72%) and the highest superconducting transition temperature, Tc (~102 K). Among the samples with nano-Co3O4 of 30-50 nm size, the non-added sample showed the highest Tc (~103 K). The onset critical temperature Tc(onset) and the zero-resistivity temperature Tc(R=0) were in the ranges 103-115 ±1 K and 91-103 ±1 K respectively for samples with added Co3O4 (10-30 nm and 30-50 nm).

Characterization of the Energy Band Diagram of Fabricated SnO2/CdS/CdTe Thin Film Solar Cells

A SnO2/CdS/CdTe heterojunction was fabricated by the thermal evaporation technique. The fabricated cells were annealed at 573 K for periods of 60, 120 and 180 minutes. The structural properties of the solar cells were studied using X-ray diffraction. Capacitance-voltage measurements were performed on the as-prepared and annealed cells at a frequency of 10² Hz. The capacitance-voltage measurements indicated that these cells are abrupt. The capacitance decreases with increasing annealing time, while the zero-bias depletion region width and the carrier concentration increase with increasing annealing time. The carrier transport mechanism for the CdS/CdTe heterojunction in the dark is tunneling recombination. The ideality factor is 1.56 and the reverse-bias saturation current is 9.6×10⁻¹⁰ A. The energy band lineup for the n-CdS/p-CdTe heterojunction was investigated using current-voltage and capacitance-voltage characteristics.

M-Learning Curriculum Design for Secondary School: A Needs Analysis

The learning society has transformed from a 'wired society' into a 'mobile society' facilitated by wireless networks. To suit this new paradigm, m-learning was born and is rapidly building its prospects of inclusion in future curricula. Research and studies on m-learning have sprung up in numerous aspects, but studies on the curriculum design of m-learning remain scarce. This study is part of a larger ongoing study probing into the m-learning curriculum for secondary schools. The paper reports on the first phase of the study, which aims to probe the needs of curriculum design for m-learning at the secondary school level; the researcher adopted the needs analysis method. Data accrued from responses to survey questionnaires based on a Likert scale were analyzed statistically. The findings from this preliminary study serve as a basis for m-learning curriculum development for secondary schools.

Earthquake Analysis of Reinforced Concrete Framed Structures with Added Viscous Dampers

This paper describes the development of a numerical finite element algorithm for the analysis of reinforced concrete structures equipped with seismic energy-absorbing devices and subjected to earthquake excitation. For this purpose, a finite element program code for the analysis of reinforced concrete frame buildings was developed. The performance of the developed program code was evaluated by analyzing a reinforced concrete frame building model. The results show that using a damper device as a seismic energy dissipation system can effectively reduce the structural response of a framed structure during an earthquake.

Behavior Model Mapping and Transformation using Model-Driven Architecture

Model mapping and transformation are important processes in high-level system abstraction, and form the cornerstone of model-driven architecture (MDA) techniques. Considerable research in this field has devoted attention to static system abstraction, despite the fact that most systems are dynamic, with high-frequency changes in behavior. In this paper, we provide an overview of work that has been done on behavior model mapping and transformation, based on: (1) the completeness of the platform independent model (PIM); (2) the semantics of behavioral models; (3) languages supporting behavior model transformation processes; and (4) an evaluation of model composition to determine the best approach to describing large, highly complex systems.

Students' Perception of the Evaluation System in Architecture Studios

Architecture education is based on apprenticeship models, and its nature has not changed much over a long period; the main source of change has been its evaluation process and system. It is undeniable that art and architecture education is based on transmitting knowledge from instructor to students. In contrast to other majors, this transmission happens through iteration and practice, with studio masters guiding the design process and improving skills through supervision and critique; the evaluation then ends with marks given to students' achievements. The importance of the role of evaluation and assessment is therefore obvious, and it is fair to say that to understand an architecture education system, we must first study its assessment procedures. The evolution of these changes in Western countries has been well documented. However, this procedure seems to have been disregarded in Malaysia, and there is a severe lack of research and documentation in this area. Malaysia, a developing and multicultural country encompassing different races and cultures, is an apt setting for scrutinizing evaluation systems and the degree of acceptance of the currently implemented models, so as to keep evaluation and assessment procedures abreast of the needs of different generations, cultures and even genders. This paper attempts to answer the questions of how evaluation and assessment are performed and how students perceive this evaluation system in the Malaysian context. The main contribution of this work is to the international debate on evaluation models.

Dynamic Traffic Simulation for Traffic Congestion Problem Using an Enhanced Algorithm

Traffic congestion has become a major problem in many countries, and one of its main causes is road merges: vehicles tend to move more slowly when they reach a merging point. In this paper, an enhanced algorithm for traffic simulation based on the fluid-dynamic algorithm and kinematic wave theory is proposed and used to study traffic congestion at a road merge. The paper also describes the development of a dynamic traffic simulation tool used for scenario planning and for forecasting the level of traffic congestion at a given time based on defined parameter values. The tool incorporates the enhanced algorithm as well as the two original algorithms. Output from the three algorithms is measured in terms of traffic queue length, travel time and the total number of vehicles passing through the merging point. The paper also suggests an efficient way of reducing traffic congestion at a road merge by analyzing traffic queue length and travel time.
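A much-simplified point-queue model illustrates why a merge creates congestion when the combined demand of two inflows exceeds the merge capacity; the paper's fluid-dynamic and kinematic wave formulations are considerably more elaborate, so the flow rates and capacity below are purely illustrative.

```python
def simulate_merge(inflow_a, inflow_b, capacity, steps):
    # Point-queue model of a two-road merge: per time step, demand is the
    # two inflows plus the vehicles already queued; only `capacity`
    # vehicles can pass the merge, and the rest carry over as queue.
    queue = 0
    history = []
    for _ in range(steps):
        demand = inflow_a + inflow_b + queue
        served = min(demand, capacity)
        queue = demand - served
        history.append(queue)  # queue length after this time step
    return history
```

With a combined inflow of 9 vehicles per step against a capacity of 7, the queue grows by 2 each step; when demand stays below capacity, no queue forms.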

An Optimal Algorithm for HTML Page Building Process

Demand for web services grows with the increasing number of Web users. Web services are delivered through Web applications, whose size is driven by users' requirements and interests; differences in requirements and interests cause Web application size to grow. An efficient way to free storage space for more data and information is to compress the contents of Web application documents. This paper introduces an algorithm that reduces Web application size by reducing the contents of HTML files. It removes unimportant content regardless of HTML file size, without discarding any character that is required in the HTML building process.
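A minimal sketch of content reduction in this spirit removes HTML comments and collapses whitespace between tags using regular expressions. The abstract does not specify which contents the algorithm treats as unimportant, so these two rules are illustrative assumptions only.

```python
import re

def shrink_html(html):
    # drop HTML comments (re.S lets '.' span newlines)
    html = re.sub(r"<!--.*?-->", "", html, flags=re.S)
    # collapse whitespace runs between tags; text inside tags is untouched
    html = re.sub(r">\s+<", "><", html)
    return html.strip()
```

Both rules remove bytes that a browser does not need to render the page, which is the kind of saving the paper targets.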

A New Approach to ECG Biometric Systems: A Comparative Study between LPC and WPD Systems

In this paper, a novel method for a biometric system based on the ECG signal is proposed, using spectral coefficients computed through linear predictive coding (LPC). ECG biometric systems have traditionally used characteristics of fiducial points of the ECG signal as the feature set; such systems have been shown to contain loopholes, so a non-fiducial system allows for tighter security. In the proposed system, incorporating non-fiducial features from the LPC spectrum produced segment and subject recognition rates of 99.52% and 100% respectively. The proposed system outperformed a biometric system based on the wavelet packet decomposition (WPD) algorithm in both recognition rate and computation time, allowing LPC to be used in a practical ECG biometric system that requires fast, stringent and accurate recognition.
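LPC coefficients of the kind used as features here are commonly computed from the signal's autocorrelation via the Levinson-Durbin recursion. The sketch below is a generic textbook implementation, not the authors' code, and omits the windowing and model-order choices a real ECG front end would need.

```python
def lpc(signal, order):
    # Autocorrelation method + Levinson-Durbin recursion.
    n = len(signal)
    # biased autocorrelation estimates r[0..order]
    r = [sum(signal[i] * signal[i + k] for i in range(n - k))
         for k in range(order + 1)]
    a = [1.0] + [0.0] * order   # prediction polynomial, a[0] fixed at 1
    e = r[0]                    # prediction error energy
    for k in range(1, order + 1):
        acc = sum(a[j] * r[k - j] for j in range(k))
        refl = -acc / e          # reflection (PARCOR) coefficient
        new_a = a[:]
        for j in range(1, k + 1):
            new_a[j] = a[j] + refl * a[k - j]
        a = new_a
        e *= (1.0 - refl * refl)
    return a
```

For a decaying exponential x[n] = 0.9^n, a first-order fit recovers a coefficient of about -0.9, since each sample is 0.9 times its predecessor.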

A Genetic-Algorithm-Based Approach for Audio Steganography

In this paper, we present a novel, principled approach to resolving the remaining problems of the substitution technique of audio steganography. Using the proposed genetic algorithm, message bits are embedded into multiple, vague and higher LSB layers, resulting in increased robustness. The robustness is increased especially against intentional attacks that try to reveal the hidden message, and also against unintentional attacks such as noise addition.
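Setting aside the genetic-algorithm search itself, the underlying substitution step (writing message bits into a chosen LSB layer of the audio samples) can be sketched as follows. Here the layer is fixed by hand; in the paper it is the GA that selects which layers to use.

```python
def embed_bits(samples, bits, layer):
    # Write each message bit into bit position `layer` of a sample
    # (layer 0 = least significant bit; higher layers are more robust
    # to noise but distort the audio more).
    mask = 1 << layer
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~mask) | (bit << layer)
    return out

def extract_bits(samples, n, layer):
    # Recover the first n message bits from the same bit layer.
    mask = 1 << layer
    return [(s & mask) >> layer for s in samples[:n]]
```

Embedding at layer 3 changes each sample by at most 2³ = 8, which bounds the audible distortion while surviving small amounts of added noise.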

Comparison of MFCC and Cepstral Coefficients as a Feature Set for PCG Biometric Systems

Heart sound is an acoustic signal, and many techniques used nowadays for human recognition tasks borrow from speech recognition. One popular choice for feature extraction from acoustic signals is the Mel Frequency Cepstral Coefficients (MFCC), which map the signal onto a non-linear Mel scale that mimics human hearing. However, the Mel scale is almost linear in the frequency region of heart sounds and should thus produce results similar to the standard cepstral coefficients (CC). In this paper, MFCC is investigated to see whether it produces superior results for a PCG-based human identification system compared to CC. Results show that the MFCC system is still superior to CC despite the near-linear filter-banks in the lower frequency range, giving up to 95% correct recognition for MFCC and 90% for CC. Further experiments show that the high recognition rate is due to the implementation of filter-banks and not to Mel scaling.
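The Mel scale referred to above is the standard mapping mel(f) = 2595·log10(1 + f/700). The snippet below shows numerically what the abstract argues: the mapping is nearly linear in the low-frequency band where heart sounds lie, and strongly compressive only at speech-range frequencies.

```python
import math

def hz_to_mel(f):
    # standard HTK-style Mel mapping
    return 2595.0 * math.log10(1.0 + f / 700.0)
```

Doubling the frequency from 100 Hz to 200 Hz nearly doubles the Mel value (deviation under 10%), whereas doubling 2 kHz to 4 kHz falls far short of doubling it, which is why Mel-spaced and linearly spaced filter-banks almost coincide over the heart-sound range.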