Effect of Oxygen on Biochar Yield and Properties

Air infiltration is inevitable in large-scale industrial biochar production, and the presence of oxygen during carbonization is detrimental to biochar yield and properties. Experiments were carried out on several wood species in a fixed-bed pyrolyser under oxygen fractions ranging from 0% to 11%, obtained by adjusting the nitrogen and oxygen composition of the pyrolysing gas mixture. The bed temperature and holding time were also varied. Process optimization was carried out by Response Surface Methodology (RSM) employing a Central Composite Design (CCD) in Design Expert 6.0 software. Within the range studied, the effects of oxygen fraction and holding time on biochar yield were statistically significant. From the analysis, an optimum biochar yield of 15.2% for mangrove wood was predicted at a pyrolysis temperature of 403 °C, an oxygen fraction of 2.3% and a holding time of two hours. This prediction agreed well with the experimental yield of 15.1%.
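
The optimization step lends itself to a short illustration. The sketch below fits a full second-order response surface over a rotatable central composite design for the three factors named in the abstract; the factor ranges, centre point and yields are synthetic placeholders, not the paper's data, and Design Expert itself is not reproduced.

```python
import itertools
import numpy as np

# Coded central composite design for three factors: 8 factorial points,
# 6 axial points at +/- alpha and 6 centre replicates (20 runs in total).
alpha = 1.682                                   # rotatable CCD axial distance
factorial = np.array(list(itertools.product([-1, 1], repeat=3)), float)
axial = np.vstack([alpha * np.eye(3), -alpha * np.eye(3)])
coded = np.vstack([factorial, axial, np.zeros((6, 3))])

# Map coded levels to natural factors: temperature (°C), oxygen (%), time (h)
T = 400 + 30 * coded[:, 0]
O = 5.5 + 3.0 * coded[:, 1]
H = 2.0 + 0.6 * coded[:, 2]

rng = np.random.default_rng(0)
# Synthetic yield response, for illustration only (not the paper's data)
y = 15.2 - 0.002 * (T - 403) ** 2 - 0.4 * (O - 2.3) ** 2 \
    - 0.5 * (H - 2.0) ** 2 + rng.normal(0, 0.1, len(T))

def model_terms(T, O, H):
    """Full second-order RSM model: intercept, linear, interaction, quadratic."""
    return np.column_stack([np.ones_like(T), T, O, H, T * O, T * H, O * H,
                            T ** 2, O ** 2, H ** 2])

# Ordinary least squares fit of the quadratic response surface
coeffs, *_ = np.linalg.lstsq(model_terms(T, O, H), y, rcond=None)

# Predicted yield at the reported optimum (403 °C, 2.3 % O2, 2 h)
print(model_terms(np.array([403.0]), np.array([2.3]), np.array([2.0])) @ coeffs)
```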

Discrimination of Seismic Signals Using Artificial Neural Networks

The automatic discrimination of seismic signals is an important practical goal for earth-science observatories because of the large amount of information they receive continuously. An essential discrimination task is to assign an incoming signal to the group associated with the kind of physical phenomenon producing it. In this paper, two classes of seismic signals recorded routinely at the geophysical laboratory of the National Center for Scientific and Technical Research in Morocco are considered: signals associated with local earthquakes and with chemical explosions. The automatic discrimination system adopted here is modular, composed of three blocks: 1) representation, 2) dimensionality reduction and 3) classification. The originality of our work lies in the use of a new wavelet, called the "modified Mexican hat wavelet", in the representation stage. For dimensionality reduction, we propose a new algorithm based on random projection and principal component analysis.
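
The dimensionality-reduction stage can be illustrated with a short sketch. The abstract only states that the new algorithm is "based on random projection and principal component analysis", so the composition below (random projection first, PCA second) is one plausible reading; the feature matrix is a synthetic stand-in for the wavelet-domain representations.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import PCA

# Synthetic stand-in for wavelet-domain feature vectors of seismic records
# (n_signals x n_coefficients); the "modified Mexican hat" features are
# not reproduced here.
rng = np.random.default_rng(42)
features = rng.normal(size=(200, 4096))

# Stage 1: random projection cheaply reduces 4096 -> 256 dimensions while
# approximately preserving pairwise distances (Johnson-Lindenstrauss).
rp = GaussianRandomProjection(n_components=256, random_state=0)
reduced = rp.fit_transform(features)

# Stage 2: PCA extracts the directions of maximal variance from the
# projected data, giving a compact input for the classification stage.
pca = PCA(n_components=10)
compact = pca.fit_transform(reduced)
print(compact.shape)  # (200, 10)
```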

Hydrothermal Alteration Zones Identification Based on Remote Sensing Data in the Mahin Area, West of Qazvin Province, Iran

The Mahin area is part of the Tarom-Hashtjin zone, located west of Qazvin province in northwestern Iran. This zone hosts many copper and base-metal ore deposits, so identifying high-potential localities in the area is essential. The objective of this research is to locate hydrothermal alteration zones by remote sensing methods and to determine the best processing technique for Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data. Different methods, such as band ratios, Principal Component Analysis (PCA), Minimum Noise Fraction (MNF) and Least Square Fit (LS-Fit), were used for mapping the hydrothermal alteration zones.
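
Of the listed methods, the band-ratio technique is the simplest to sketch. The fragment below computes one ratio commonly used for Al-OH (argillic) alteration on ASTER SWIR bands; the arrays, the particular ratio and the percentile threshold are illustrative assumptions, since the abstract does not state which ratios were used.

```python
import numpy as np

# Illustrative ASTER SWIR bands as reflectance arrays (rows x cols);
# real data would come from a reader such as GDAL.
rng = np.random.default_rng(1)
band4 = rng.uniform(0.1, 0.6, (512, 512))
band6 = rng.uniform(0.1, 0.6, (512, 512))

# A band ratio enhances the spectral absorption features of alteration
# minerals; band4/band6 is one ratio commonly used for Al-OH alteration.
ratio_46 = band4 / np.maximum(band6, 1e-6)  # guard against zero division

# Simple threshold to flag candidate alteration pixels (the threshold is
# an illustrative choice; it is normally set from histogram analysis).
alteration_mask = ratio_46 > np.percentile(ratio_46, 95)
print(alteration_mask.sum(), "candidate alteration pixels")
```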

Development of Manufacturing Simulation Model for Semiconductor Fabrication

This research presents the development of a simulation model for WIP (work-in-progress) management in semiconductor fabrication. Manufacturing simulation modeling is needed for productivity optimization analysis because of the complex process flows involved: more than 35 percent of the processing steps are re-entrant, revisiting the same equipment more than 15 times. Furthermore, semiconductor fabrication must produce a high product mix, with total processing steps varying from 300 to 800 and cycle times between 30 and 70 days. Besides this complexity, the high wafer cost, which can hurt the company's profit margin whenever a due date is missed, is another motivation to explore simulation modeling for such analyses. In this paper, the simulation model is developed on the existing commercial software platform AutoSched AP, with customized integration with the Manufacturing Execution System (MES) and Advanced Productivity Family (APF) for the data collection used to configure the model parameters and data source. Model parameters such as processing step cycle times, equipment performance, handling time and operator efficiency are collected through this customization. Once the parameters are validated, a few further customizations are made before the model is executed. The accuracy of the simulation model is validated against the actual daily output of all equipment; a comparison of simulated and actual results over 30 days achieved 95 percent accuracy. The model was then used to perform various what-if analyses to understand impacts on cycle time and overall output. With this simulation model, a complex manufacturing environment like a semiconductor fabrication plant (fab) has an alternative source of validation for the impact analysis of any new requirement.
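
AutoSched AP is a commercial platform and its models are not reproduced here, but the re-entrant flow that motivates simulation can be sketched with the open-source simpy library: each lot queues repeatedly for the same tool group, and lot cycle times emerge from the contention. All parameter values below are illustrative.

```python
import random
import simpy

REVISITS = 5          # illustrative; real flows revisit a tool >15 times
PROCESS_TIME = 2.0    # mean hours per step, illustrative
N_LOTS = 10

def lot(env, name, tool, done):
    """A lot that re-enters the same tool group several times."""
    for step in range(REVISITS):
        with tool.request() as req:
            yield req                               # queue for the tool
            yield env.timeout(random.expovariate(1 / PROCESS_TIME))
    done.append((name, env.now))                    # record completion time

env = simpy.Environment()
tool = simpy.Resource(env, capacity=2)              # a 2-chamber tool group
completed = []
for i in range(N_LOTS):
    env.process(lot(env, f"lot{i}", tool, completed))
env.run()
for name, t in completed:
    print(f"{name} finished at t={t:.1f} h")
```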

Performance Assessment and Optimization of After-Sales Networks

After-sales activities are nowadays acknowledged as a relevant source of revenue, profit and competitive advantage in most manufacturing industries. Top and middle management, therefore, should focus on the definition of a structured business performance measurement system for the after-sales business. This paper aims at filling this gap: it presents an integrated methodology for after-sales network performance measurement and provides an empirical application to automotive case companies and their official service networks. This is the first study that presents an integrated multivariate approach for the total assessment and improvement of after-sales services.

An Efficient Hamiltonian for Discrete Fractional Fourier Transform

The Fractional Fourier Transform, a generalization of the classical Fourier Transform, is a powerful tool for the analysis of transient signals. Hamiltonians for the discrete Fractional Fourier Transform have been proposed in the past with varying degrees of correlation between their eigenvectors and the Hermite-Gaussian functions. In this paper, we propose a new Hamiltonian for the discrete Fractional Fourier Transform and show that the eigenvectors of the proposed matrix have a higher degree of correlation with the Hermite-Gaussian functions. The proposed matrix is also shown to give better fractional Fourier responses at various transform orders for different signals.
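
The proposed Hamiltonian is not reproduced here, but the idea it improves on can be shown with the classical commuting matrix of Dickinson and Steiglitz, the usual baseline: a tridiagonal-plus-corners matrix that commutes with the DFT and whose eigenvectors approximate sampled Hermite-Gaussians. The comparison below uses a simple sampled (not periodized) Gaussian, so the correlation is close to, but not exactly, unity.

```python
import numpy as np

N = 64
n = np.arange(N)

# Dickinson-Steiglitz commuting matrix: diagonal 2cos(2*pi*n/N), ones on
# the off-diagonals, and cyclic corner terms.
S = np.diag(2.0 * np.cos(2 * np.pi * n / N))
S += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
S[0, -1] = S[-1, 0] = 1.0

# Unitary DFT matrix; S commutes with it, so they share eigenvectors.
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
print("commutes with DFT:", np.allclose(S @ F, F @ S))  # True

# Eigenvectors of S approximate sampled Hermite-Gaussians and serve as
# the eigenvector basis of a discrete fractional Fourier transform.
eigvals, eigvecs = np.linalg.eigh(S)

# Compare the nodeless top eigenvector with a sampled Gaussian centred
# at index 0 (with wraparound), the 0th Hermite-Gaussian analogue.
d = ((n + N // 2) % N) - N // 2
g0 = np.exp(-np.pi * d.astype(float) ** 2 / N)
g0 /= np.linalg.norm(g0)
v = eigvecs[:, -1]
print("correlation with sampled Gaussian:", round(abs(v @ g0), 4))
```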

Photodegradation of Phenol Red in the Presence of ZnO Nanoparticles

In our recent study, we used ZnO nanoparticles assisted by UV light irradiation to investigate the photocatalytic degradation of Phenol Red (PR). The ZnO photocatalyst was characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), specific surface area analysis (BET) and UV-visible spectroscopy. The X-ray diffraction results for the ZnO nanoparticles exhibit normal crystalline-phase features: all observed peaks can be indexed to the pure hexagonal wurtzite crystal structure with space group P63mc, and no impurity peaks appear in the diffraction pattern. In addition, TEM measurements show that most of the nanoparticles are rod-like or spherical in shape and fairly monodisperse. Significant degradation of the PR was observed when the catalyst was added to the solution even without UV light exposure, and the photodegradation increases with photocatalyst loading. The surface area of the ZnO nanomaterial from the BET measurement was 11.9 m²/g. Besides the photocatalyst loading, the effects of other parameters on the photodegradation efficiency, such as initial PR concentration and pH, were also studied.

Performance of Block Codes Using the Eigenstructure of the Code Correlation Matrix and Soft-Decision Decoding of BPSK

A method is presented for obtaining the error probability of block codes. The method is based on the eigenvalue-eigenvector properties of the code correlation matrix. It is found that, under a unitary transformation and for an additive white Gaussian noise environment, the performance evaluation of a block code becomes a one-dimensional problem in which only one eigenvalue and its corresponding eigenvector are needed in the computation. The obtained error rate results show remarkable agreement between simulations and analysis.
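
The correlation-matrix construction can be made concrete on a small code. The sketch below builds all BPSK-mapped codewords of the (7,4) Hamming code, forms their normalized pairwise correlation matrix (one plausible reading of the abstract's definition) and diagonalizes it; the error-probability evaluation itself is not reproduced.

```python
from itertools import product
import numpy as np

# Systematic generator matrix of the (7,4) Hamming code
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# All 16 codewords, BPSK-mapped (bit 0 -> +1, bit 1 -> -1)
msgs = np.array(list(product([0, 1], repeat=4)))
codewords = msgs @ G % 2
bpsk = 1.0 - 2.0 * codewords

# Code correlation matrix: normalized pairwise correlations of the
# BPSK codewords.
R = bpsk @ bpsk.T / bpsk.shape[1]

# R is symmetric, so it is diagonalized by an orthogonal (unitary)
# transformation; only a few eigenvalues are nonzero.
eigvals, eigvecs = np.linalg.eigh(R)
print("eigenvalues:", np.round(eigvals, 3))
```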

Word Stemming Algorithms and Retrieval Effectiveness in Malay and Arabic Documents Retrieval Systems

Document retrieval in Information Retrieval Systems (IRS) is generally about understanding the information in the documents concerned. The better the system understands the contents of the documents, the more effective the retrieval outcomes will be; however, understanding content is a very complex task. Conventional IRS apply algorithms that can only approximate the meaning of document contents through a keyword approach using the vector space model. Keywords may be unstemmed or stemmed. When keywords are stemmed and conflated in the retrieval process, we are a step forward in applying semantic technology in IRS. Word stemming is a process in morphological analysis under natural language processing, carried out before syntactic and semantic analysis. We have developed stemming algorithms for Malay and Arabic and incorporated them in our experimental systems in order to measure retrieval effectiveness. The results show that retrieval effectiveness increases when stemming is used in the systems.
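
A toy affix-stripping stemmer conveys the flavour of the Malay case. The prefix and suffix lists and the minimum-stem-length rule below are illustrative simplifications, not the authors' actual algorithms.

```python
# Toy affix-stripping stemmer for Malay, for illustration only; real
# stemmers use larger rule sets and spelling-variation handling.
PREFIXES = ["meng", "men", "mem", "me", "peng", "pen", "pem", "pe",
            "ber", "ter", "di", "ke", "se"]
SUFFIXES = ["kan", "an", "i", "lah", "kah", "nya"]

def stem(word: str) -> str:
    for p in PREFIXES:                 # strip one prefix, longest match first
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in SUFFIXES:                 # then strip one suffix
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    return word

for w in ["pembelajaran", "mengajar", "makanan"]:
    print(w, "->", stem(w))   # pembelajaran -> belajar, etc.
```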

Detection of Bias in GPS Satellites' Measurements for Enhanced Measurement Integrity

In this paper, the detection of a fault in Global Positioning System (GPS) measurements is addressed. The class of faults considered is a bias in the GPS pseudorange measurements, modeled as an unknown constant. The fault could be the result of a receiver fault or a signal fault such as multipath error. A bias bank is constructed based on a set of possible fault hypotheses. Initially, every bias in the bank is assigned an equal probability of occurrence. Subsequently, as the measurements are processed, the probability of occurrence of each bias is sequentially updated, and the fault whose probability approaches unity is declared the current fault in the GPS measurement. The residual formed from the GPS and Inertial Measurement Unit (IMU) measurements is used to update the probability of each fault. Results are presented to show the performance of the proposed algorithm.
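
The sequential probability update is straightforward to sketch. Below, each bias hypothesis is scored by the Gaussian likelihood of the GPS/IMU residual and the posteriors are renormalized at every step; the bias values, noise level and residual model are synthetic assumptions.

```python
import numpy as np

# Bank of candidate pseudorange biases (metres); values are illustrative.
bias_bank = np.array([0.0, 5.0, 10.0, 20.0])
probs = np.full(len(bias_bank), 1.0 / len(bias_bank))  # equal priors

sigma = 2.0                 # residual noise std-dev, illustrative
true_bias = 10.0            # the fault we simulate
rng = np.random.default_rng(7)

for _ in range(50):
    # GPS/IMU residual: the true bias plus measurement noise (synthetic)
    residual = true_bias + rng.normal(0.0, sigma)
    # Gaussian likelihood of the residual under each bias hypothesis
    lik = np.exp(-0.5 * ((residual - bias_bank) / sigma) ** 2)
    probs *= lik
    probs /= probs.sum()    # sequential Bayesian update

# The hypothesis whose probability approaches one is declared the fault.
print(dict(zip(bias_bank, np.round(probs, 3))))
```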

Prediction of Reusability of Object Oriented Software Systems using Clustering Approach

The literature offers metrics for identifying the quality of reusable components, but a framework that uses these metrics to precisely predict the reusability of software components still needs to be worked out. If these reusability metrics are identified in the design phase, or even in the coding phase, they can help reduce rework by improving the quality of reuse of software components and hence improve productivity through a probable increase in the reuse level. Since the CK metrics suite is the most widely used set of metrics for extracting structural features of object-oriented (OO) software, this study uses a tuned CK metrics suite, i.e. WMC, DIT, NOC, CBO and LCOM, to obtain a structural analysis of OO software components. An algorithm is proposed in which the tuned metric values of an OO software component are given as inputs to a K-means clustering system, and a decision tree is built with 10-fold cross-validation of the data to evaluate the component in terms of a linguistic reusability value. The developed reusability model produced high-precision results, as desired.
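
The clustering step can be sketched directly. The fragment below feeds CK metric vectors to K-means after standardization; the metric values, the number of clusters and the labeling heuristic are illustrative assumptions, and the decision-tree/cross-validation stage is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative CK metric vectors [WMC, DIT, NOC, CBO, LCOM] for a handful
# of classes; real values would be extracted from the software under study.
metrics = np.array([[12, 2, 1,  4, 10],
                    [45, 5, 0, 18, 80],
                    [ 8, 1, 3,  2,  5],
                    [30, 4, 1, 12, 55],
                    [10, 2, 2,  3,  8],
                    [50, 6, 0, 20, 90]], dtype=float)

# Standardize the metrics, then cluster into three groups that can be
# mapped to linguistic reusability levels.
X = StandardScaler().fit_transform(metrics)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Each cluster can then be given a linguistic label (e.g. low CBO/LCOM
# suggests higher reusability); here we just show the assignments.
print(km.labels_)
```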

Effect of Peak-to-Average Power Ratio Reduction on the Multicarrier Communication System Performance Parameters

Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high-bit-rate transmission in wireless communication systems. OFDM is a spectrally efficient modulation technique that can achieve high-speed data transmission over multipath fading channels without the need for powerful equalization techniques. However, the price paid for this high spectral efficiency and less intensive equalization is low power efficiency: OFDM signals are very sensitive to nonlinear effects due to their high Peak-to-Average Power Ratio (PAPR), which leads to power inefficiency in the RF section of the transmitter. This paper investigates the effect of PAPR reduction on the performance parameters of a multicarrier communication system. The performance parameters considered are the power consumption of the Power Amplifier (PA) and the Digital-to-Analog Converter (DAC), the power amplifier efficiency, the SNR of the DAC and the BER performance of the system. Our analysis shows that, irrespective of the PAPR reduction technique employed, the power consumption of the PA and DAC decreases and the power amplifier efficiency increases with reduced PAPR. Moreover, it is shown that for a given BER performance the required Input Back-Off (IBO) decreases with reduced PAPR.
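
PAPR itself is easy to compute, and the effect of a reduction technique can be demonstrated with simple amplitude clipping, used here only as one representative technique since the abstract is technique-agnostic. The subcarrier count, oversampling factor and clipping level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64                                  # subcarriers
L = 4                                   # oversampling factor

# Random QPSK symbols on N subcarriers; zero-padded (oversampled) IFFT
# gives the continuous-like time-domain OFDM signal.
sym = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
spectrum = np.concatenate([sym[:N // 2], np.zeros((L - 1) * N), sym[N // 2:]])
x = np.fft.ifft(spectrum) * np.sqrt(L * N)

def papr_db(sig):
    """Peak-to-average power ratio in dB."""
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

print("PAPR before:", round(papr_db(x), 2), "dB")

# Amplitude clipping at 1.6x the RMS level (illustrative clipping ratio)
clip = 1.6 * np.sqrt(np.mean(np.abs(x) ** 2))
mag = np.maximum(np.abs(x), 1e-12)      # guard against zero division
xc = np.where(np.abs(x) > clip, clip * x / mag, x)
print("PAPR after clipping:", round(papr_db(xc), 2), "dB")
```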

Islamic Corporate Social Responsibility, Corporate Reputation and Performance

This study examines the effect of Islamic Corporate Social Responsibility (CSR) disclosure on corporate reputation as well as performance. These relationships are examined through content analysis of the annual reports of 17 Islamic banks in Malaysia for 2008, 2009 and 2010. The results provide evidence that CSR activities communicated in corporate annual reports are significantly positively related to corporate reputation as well as firm performance. These results indicate that CSR activities and disclosure from an Islamic perspective are equally important business strategies in creating continuous superior performance for organisations. The results also highlight that organisations need to develop a stakeholder orientation, particularly in an environment of increasing pressure from jurisdictions dominated by Islamic stakeholders on organisations offering Islamic products to increase their social responsibilities from an Islamic perspective.

Input Textural Feature Selection By Mutual Information For Multispectral Image Classification

Texture information plays an increasingly important role in remotely sensed imagery classification and in many pattern recognition applications. However, the selection of relevant textural features to improve classification accuracy is not a straightforward task. This work investigates the effectiveness of two Mutual Information Feature Selector (MIFS) algorithms at selecting salient textural features that carry highly discriminatory information for multispectral imagery classification. The candidate input features are extracted from a SPOT High Resolution Visible (HRV) image using the Wavelet Transform (WT) at levels l = 1, 2. The experimental results show that the textural features selected by the MIFS algorithms improve the classification accuracy more than classical approaches such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA).
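
Battiti's MIFS criterion (relevance minus weighted redundancy) can be sketched with scikit-learn's mutual information estimators; the synthetic data below stands in for the wavelet texture features, and beta = 0.5 is an illustrative choice.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

# Synthetic stand-in for wavelet texture features and class labels
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def mifs(X, y, k, beta=0.5):
    """Greedy MIFS: pick features maximizing relevance - beta * redundancy."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = []
        for f in remaining:
            # Redundancy: mutual information with already selected features
            redundancy = sum(
                mutual_info_regression(X[:, [f]], X[:, s], random_state=0)[0]
                for s in selected)
            scores.append(relevance[f] - beta * redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

print("selected features:", mifs(X, y, k=5))
```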

Sparse Networks-Based Speedup Technique for Proteins Betweenness Centrality Computation

The study of proteomics has reached unexpected levels of interest as a direct consequence of its discovered influence over some complex biological phenomena, such as problematic diseases like cancer. This paper presents the authors' latest achievements in the analysis of networks of proteins (interactome networks) by computing the betweenness centrality measure more efficiently. The paper introduces the concept of betweenness centrality and then describes how betweenness computation can help interactome network analysis. Current sequential implementations of betweenness computation do not perform satisfactorily in terms of execution time. The paper's main contribution is a speedup technique for betweenness computation based on modified shortest path algorithms for sparse graphs. Three optimized generic algorithms for betweenness computation are described and implemented, and their performance is tested against real biological data from the IntAct dataset.
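
The standard starting point for such speedups is Brandes' algorithm, which computes exact betweenness on unweighted sparse graphs in O(nm) time via one BFS plus a dependency back-propagation per source. A minimal version is sketched below on a toy graph; the authors' optimized variants build on this scheme but are not reproduced here.

```python
from collections import deque

def brandes_betweenness(adj):
    """Brandes' algorithm for betweenness centrality on an unweighted,
    undirected graph given as an adjacency list {node: [neighbours]}."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = dict.fromkeys(adj, 0); sigma[s] = 1
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        queue = deque([s])
        while queue:                       # BFS counts shortest paths
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                       # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}   # undirected: halve counts

# Tiny toy "interactome": node c bridges two parts of the graph
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
       "d": ["c", "e"], "e": ["d"]}
print(brandes_betweenness(adj))
```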

Data Mining Applied to the Predictive Model of Triage System in Emergency Department

The Emergency Department of a medical center in Taiwan cooperated in conducting this research. A predictive model of the triage system is constructed, covering the modeling procedure, the selection of parameters, and sample screening. 2,000 patient records are chosen randomly by computer. After applying three data mining classification methods (multi-group discriminant analysis, multinomial logistic regression and back-propagation neural networks), it is found that back-propagation neural networks best distinguish the patients' degree of emergency, with an accuracy rate as high as 95.1%. The back-propagation neural network with the highest accuracy rate is built into the triage acuity expert system in this research. The predictive model underlying the triage acuity expert system can be updated regularly, both to improve the system and for education and training, and it is not affected by subjective factors.
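
A back-propagation network of this kind can be sketched with scikit-learn's MLPClassifier. The data below are synthetic stand-ins for the triage records; the features, class construction and network size are all illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for 2,000 triage records: 8 features (e.g. vital
# signs) mapped to three acuity classes by a hidden rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
score = X[:, 0] + 0.5 * X[:, 1]
y = (score > 0).astype(int) + (score > 1).astype(int)   # classes 0, 1, 2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small back-propagation neural network, trained and evaluated
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", round(clf.score(X_te, y_te), 3))
```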

Dynamic Meshing for Material Point Method Computations

This paper presents strategies for dynamically creating, managing and removing mesh cells during computations in the context of the Material Point Method (MPM). The dynamic meshing approach has been developed to help address problems involving motion of a finite size body in unbounded domains in which the extent of material travel and deformation is unknown a priori, such as in the case of landslides and debris flows. The key idea is to efficiently instantiate and search only cells that contain material points, thereby avoiding unneeded storage and computation. Mechanisms for doing this efficiently are presented, and example problems are used to demonstrate the effectiveness of dynamic mesh management relative to alternative approaches.
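
The key mechanism is simple to sketch: a hash map keyed by integer cell index holds only the cells that currently contain material points. The fragment below shows this in two dimensions; the names and cell size are illustrative.

```python
from collections import defaultdict

CELL = 1.0  # cell edge length, illustrative

def cell_index(x, y):
    """Map a material point position to its integer cell index."""
    return (int(x // CELL), int(y // CELL))

def build_active_cells(points):
    """Instantiate only the cells that currently contain material points;
    the hash map replaces storage for the (unbounded) background grid."""
    cells = defaultdict(list)
    for p in points:
        cells[cell_index(*p)].append(p)
    return cells

# Widely separated material points still cost only a few cells
points = [(0.3, 0.7), (0.9, 0.1), (15.2, 42.8)]
cells = build_active_cells(points)
print(len(cells), "active cells of an effectively unbounded grid")
# Cells whose point lists empty out after an update step are simply
# dropped, so storage tracks the moving body rather than the whole domain.
```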

A Graphical Environment for the Petri Net Tool INA Based on Meta-Modelling and Graph Grammars

The Petri net tool INA is well known in the Petri net community. However, it lacks a graphical environment for creating and analysing INA models, and building a modelling and analysis tool from scratch (for the INA tool, for example) is generally a prohibitive task. The meta-modelling approach is useful for such problems since it allows the modelling of the formalisms themselves. In this paper, we propose an approach based on the combined use of meta-modelling and graph grammars to automatically generate a visual modelling tool for INA for analysis purposes. In our approach, the UML class diagram formalism is used to define a meta-model of INA models. The meta-modelling tool ATOM3 is used to generate a visual modelling tool according to the proposed INA meta-model. We also propose a graph grammar that automatically generates the INA description of the graphically specified Petri net models, which spares the user the errors that arise when this description is written manually. The INA tool is then used to perform the simulation and analysis of the resulting INA description. Our environment is illustrated through an example.
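
The generation step can be suggested by a small sketch that walks a Petri net structure and emits a textual description. The output format below is generic and deliberately not INA's actual input syntax, which is not reproduced here; it only illustrates deriving the description automatically instead of writing it by hand.

```python
# Minimal Petri net structure: places with initial markings, and
# transitions with their input and output places.
places = {"p1": 1, "p2": 0}                     # place -> initial marking
transitions = {"t1": (["p1"], ["p2"])}          # transition -> (inputs, outputs)

def emit(places, transitions):
    """Emit a generic textual description of the net (not INA syntax)."""
    lines = ["PLACES"]
    for p, m in places.items():
        lines.append(f"  {p} marking={m}")
    lines.append("TRANSITIONS")
    for t, (ins, outs) in transitions.items():
        lines.append(f"  {t}: {','.join(ins)} -> {','.join(outs)}")
    return "\n".join(lines)

print(emit(places, transitions))
```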

Research on Simulation Model of Collision Force between Floating Ice and Pier

Adopting the measured stress-strain constitutive relationship of river ice, a finite element model of the collision force between river ice and a bridge pier is established in the explicit dynamic analysis software package LS-DYNA. The effects on the collision force of the element types, the contact method and algorithm between ice and pier, the coupling modes between different elements, the mesh density of the pier, and the ice sheet in the contact area are studied. The following measures are proposed for the collision force analysis of river ice and piers: the bridge girder can adopt the 3-node beam161 element; the pier below the line 1.30 m above the ice surface, and the ice sheet, use the 8-node solid164 element; to connect the different element types, a rigid body 0.01-0.05 m thick is defined between the solid164 and beam161 elements; the contact between ice and pier adopts AUTOMATIC_SURFACE_TO_SURFACE with the symmetric penalty function algorithm; and the mesh size of the pier below the line 1.30 m above the ice surface should be no less than 0.25 m × 0.25 m × 0.5 m. Comparison between measured and computed data shows that the simulation results have high precision. The research results can serve as a reference for studies of the collision force between river ice and piers.

Computer Aided Detection on Mammography

A typical definition of Computer Aided Diagnosis (CAD) found in the literature is: a diagnosis made by a radiologist using the output of a computerized scheme for automated image analysis as a diagnostic aid. The expression Computer Aided Detection (CAD or CADe) is also common: this definition emphasizes the intent of CAD to support, rather than substitute for, the human observer in the analysis of radiographic images. In this article we illustrate the application of CAD systems and the aim of these definitions. Commercially available CAD systems use computerized algorithms to identify suspicious regions of interest. This paper describes general CAD systems as expert systems constituted of the following components: segmentation/detection, feature extraction, and classification/decision making. As an example, we show the realization of a Computer-Aided Detection system able to assist the radiologist in identifying types of mammary tumor lesions. Furthermore, this prototype station uses a GRID configuration to work on a large distributed database of digitized mammographic images.
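
The three-stage structure can be sketched end to end on synthetic patches: thresholding as a crude segmentation/detection stage, simple intensity and area descriptors as feature extraction, and a random forest for classification/decision making. Everything here (patch generator, threshold, features, classifier) is an illustrative assumption; a real CAD system for mammograms is far richer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def segment(img, thresh=0.6):
    """Stage 1: crude detection of a suspicious region by thresholding."""
    return img > thresh

def extract_features(img, mask):
    """Stage 2: simple intensity and area descriptors of the region."""
    region = img[mask]
    if region.size == 0:
        return np.zeros(3)
    return np.array([region.mean(), region.std(), mask.sum() / mask.size])

def make_patch(lesion):
    """Synthetic patch: 'lesions' are brighter blobs on uniform noise."""
    img = rng.uniform(0, 0.7, (32, 32))
    if lesion:
        img[12:20, 12:20] += 0.5
    return np.clip(img, 0, 1)

labels = rng.integers(0, 2, 200)
feats = np.array([extract_features(p, segment(p))
                  for p in (make_patch(l) for l in labels)])

# Stage 3: classification / decision making on a train/holdout split
clf = RandomForestClassifier(random_state=0).fit(feats[:150], labels[:150])
print("holdout accuracy:", clf.score(feats[150:], labels[150:]))
```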