Influence of Raw Materials Ratio and Sintering Temperature on the Properties of the Refractory Mullite-Corundum Ceramics

Aluminosilicate ceramics with a mullite crystalline phase are used in various branches of science and technology. Mullite refractory ceramics with high porosity serve as heat insulators and as constructional materials [1], [2]. The purpose of this work was to sinter high-porosity ceramics and to increase the quantity of mullite phase in these mullite and mullite-corundum ceramics. Two types of compositions were prepared during the experiment. The first type consists of compositions with commercial alumina and silica oxides. The second type was obtained by mixing these oxides with 10, 20 and 30 wt.% of kaolin. In all samples Al2O3 and SiO2 were in a 2.57:1 ratio, which corresponds to the stoichiometric mullite composition (3Al2O3·2SiO2). The alumina types were α-Al2O3 (d50 = 4 µm) and γ-Al2O3 (d50 = 80 µm). The α-:γ-Al2O3 ratios were 1:1 or 1:3. The porous materials were prepared by slip casting of a suspension of the raw materials. Aluminium paste (0.18 wt.%) was used as a pore former. The water content of the suspensions was 26-47 wt.%. Pore formation occurred as a result of hydrogen generation in the chemical reaction between the aluminium paste and water [2]. The samples were sintered at 1650°C and 1750°C for one hour. Increasing the amount of kaolin, using the 1:3 α-:γ-Al2O3 ratio and sintering at the higher temperature raised the quantity of mullite phase, which began to dominate over the corundum phase.
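
As a quick arithmetic check of the oxide proportions, the Al2O3:SiO2 mass ratio implied by the stoichiometric mullite formula can be recomputed from standard molar masses; the short Python sketch below (the molar-mass values are standard reference figures, not data from the paper) gives a value close to the 2.57:1 ratio used.

# Rough check of the Al2O3:SiO2 mass ratio for stoichiometric mullite (3Al2O3.2SiO2).
# Molar masses (g/mol) are standard reference values, not taken from the paper.
M_AL2O3 = 101.96
M_SIO2 = 60.08

mass_ratio = (3 * M_AL2O3) / (2 * M_SIO2)
print(f"Al2O3 : SiO2 mass ratio = {mass_ratio:.2f} : 1")   # ~2.55, close to the quoted 2.57:1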

Flux Cored Arc Welding Parameter Optimization of AISI 316L (N) Austenitic Stainless Steel

Bead-on-plate welds were carried out on AISI 316L (N) austenitic stainless steel (ASS) using the flux cored arc welding (FCAW) process. The bead-on-plate welds were conducted as per an L25 orthogonal array. In this paper, weld bead geometry parameters such as depth of penetration (DOP), bead width (BW) and weld reinforcement (R) of AISI 316L (N) ASS are investigated. The Taguchi approach is used as the statistical design of experiments (DOE) technique for optimizing the selected welding input parameters. Grey relational analysis and the desirability approach are applied to optimize the input parameters considering multiple output variables simultaneously. A confirmation experiment has also been conducted to validate the optimized parameters.
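
As an illustration of how grey relational analysis combines the three bead-geometry responses into a single grade, the Python sketch below normalizes each response (larger-the-better for DOP, smaller-the-better for BW and R), computes grey relational coefficients and averages them per trial; the response values are made-up placeholders, not the paper's measured data.

import numpy as np

# Illustrative grey relational analysis for three weld-bead responses.
# The numbers below are placeholders, not the paper's experimental results.
dop = np.array([4.1, 5.3, 4.8, 5.9])   # depth of penetration (mm): larger is better
bw  = np.array([9.2, 8.1, 8.8, 7.5])   # bead width (mm): smaller is better
r   = np.array([2.4, 2.9, 2.1, 2.6])   # reinforcement (mm): smaller is better

def normalize(x, larger_is_better):
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_coefficient(norm, zeta=0.5):
    delta = 1.0 - norm                              # deviation from the ideal sequence
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

coeffs = np.vstack([grey_coefficient(normalize(dop, True)),
                    grey_coefficient(normalize(bw, False)),
                    grey_coefficient(normalize(r, False))])
grade = coeffs.mean(axis=0)                         # grey relational grade per trial
print("best trial:", int(grade.argmax()) + 1, "grades:", np.round(grade, 3))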

Policy Management Framework for Managing Enterprise Policies

Policy management in organizations has become a rising issue in the last decade, largely because of today's regulatory requirements. Managing policies in large organizations is imperative work. A major challenge facing organizations over the last decade has been managing all the policies in the organization and making them active documents rather than simple (inactive) documents stored on a computer hard drive or on a shelf. Because of this challenge, organizations need a policy management program, which can be either manual or automated. This paper presents suggestions for managing policies in organizations, as well as possible policy management solutions or programs to be utilized, whether manual or automated. The research first examines the models and frameworks used for managing policies from various perspectives in the literature of the domain. At the end of this paper, a policy management framework is proposed for managing enterprise policies effectively and in a simplified manner.

Construction of Intersection of Nondeterministic Finite Automata using Z Notation

Functionalities and control behavior are both primary requirements in the design of a complex system. Automata theory plays an important role in modeling the behavior of a system. Z is an ideal notation for describing the state space of a system and then defining operations over it. Consequently, an integration of automata and Z is an effective tool for increasing the modeling power for a complex system. Further, a nondeterministic finite automaton (NFA) may have different implementations, and therefore the transformation from diagrams to code needs to be verified. If we describe the formal specification of an NFA before implementing it, confidence in the transformation can be increased. In this paper, we give a procedure for integrating NFAs and Z. The complement of a special type of NFA is defined. Then the union of two NFAs is formalized after defining their complements. Finally, the formal construction of the intersection of NFAs is described. The specification of this relationship is analyzed and validated using the Z/EVES tool.
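
The Z schemas themselves are not reproduced in the abstract, but the product construction that the formal intersection captures can be illustrated with a small Python sketch; the two example automata below are hypothetical and are not taken from the paper.

from itertools import product

# Product construction for the intersection of two NFAs.
# An NFA is (states, alphabet, delta, start_states, accepting_states), with
# delta mapping (state, symbol) -> set of successor states.
def nfa_intersection(n1, n2):
    states1, sigma, d1, s1, f1 = n1
    states2, _,     d2, s2, f2 = n2
    states = set(product(states1, states2))
    delta = {}
    for (p, q) in states:
        for a in sigma:
            delta[((p, q), a)] = {(p2, q2)
                                  for p2 in d1.get((p, a), set())
                                  for q2 in d2.get((q, a), set())}
    return states, sigma, delta, set(product(s1, s2)), set(product(f1, f2))

def accepts(nfa, word):
    _, _, delta, start, accept = nfa
    current = set(start)
    for a in word:
        current = set().union(*(delta.get((s, a), set()) for s in current))
    return bool(current & accept)

# Hypothetical NFAs over {a, b}: A1 accepts words ending in 'a', A2 words containing 'b'.
A1 = ({0, 1}, {'a', 'b'}, {(0, 'a'): {0, 1}, (0, 'b'): {0}}, {0}, {1})
A2 = ({0, 1}, {'a', 'b'}, {(0, 'a'): {0}, (0, 'b'): {1}, (1, 'a'): {1}, (1, 'b'): {1}}, {0}, {1})
A = nfa_intersection(A1, A2)
print(accepts(A, "ba"), accepts(A, "ab"))   # True, False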

A Method for Iris Recognition Based on 1D Coiflet Wavelet

There have been numerous implementations of security systems using biometrics, especially for identification and verification. An example of a pattern used in biometrics is the iris pattern of the human eye, which is considered unique for each person. The use of the iris pattern poses the problem of encoding the human iris. In this research, an efficient iris recognition method is proposed. In the proposed method, iris segmentation is based on the observation that the pupil has lower intensity than the iris, and the iris has lower intensity than the sclera. By detecting the boundary between the pupil and the iris and the boundary between the iris and the sclera, the iris area can be separated from the pupil and the sclera. A step is taken to reduce the effect of eyelashes and specular reflection from the pupil. Then a four-level Coiflet wavelet transform is applied to the extracted iris image. A modified Hamming distance is employed to measure the similarity between two irises. This research yields an identification success rate of 84.25% for the CASIA version 1.0 database. The method gives an accuracy of 77.78% for the left eyes of the MMU 1 database and 86.67% for the right eyes. The time required for the encoding process, from segmentation until the iris code is generated, is 0.7096 seconds. These results show that the accuracy and speed of the method are better than those of many other methods.
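
The abstract does not spell out the modified Hamming distance, so the sketch below shows a masked Hamming distance of the kind commonly used for comparing iris codes; the bit length, noise level and mask handling are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

# Masked Hamming distance between two binary iris codes.
# Bits covered by eyelashes or specular reflections are excluded via the masks.
def hamming_distance(code_a, code_b, mask_a, mask_b):
    valid = mask_a & mask_b                         # bits usable in both codes
    n = valid.sum()
    if n == 0:
        return 1.0                                  # nothing comparable: treat as maximally distant
    return float(((code_a ^ code_b) & valid).sum()) / n

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048, dtype=np.uint8)        # hypothetical 2048-bit iris code
noise = (rng.random(2048) < 0.05).astype(np.uint8)  # 5% of bits flipped: same iris, noisy capture
b = a ^ noise
mask = np.ones(2048, dtype=np.uint8)
print(round(hamming_distance(a, b, mask, mask), 3)) # small distance -> likely a match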

Comparing Spontaneous Hydrolysis Rates of Activated Models of DNA and RNA

This research project aims to investigate the difference in relative rates of phosphoryl transfer relevant to the biological catalysis of DNA and RNA in pH-independent reactions. Activated models of DNA and RNA, alkyl-aryl phosphate diesters (with 4-nitrophenyl as a good leaving group), have successfully been prepared to gather kinetic parameters. Eyring plots for the pH-independent hydrolysis of 1 and 2 were established at different temperatures in the range 100–160 °C. These measurements have been used to provide a better estimate of the difference in relative rates between DNA and RNA cleavage. The Eyring plots gave an extrapolated rate of kH2O = 1 × 10^-10 s^-1 for 1 (RNA model) and 2 (DNA model) at 25 °C. Comparing the reactivity of the RNA and DNA models shows that, surprisingly, their rates in the pH-independent reactions are very similar at 25 °C. This allows us to obtain chemical insights into how biological catalysts such as enzymes may have evolved to perform their current functions.
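
The extrapolation behind such numbers follows the standard Eyring treatment: ln(k/T) is fitted linearly against 1/T at the high temperatures, and the resulting activation parameters are used to predict k at 25 °C. The Python sketch below illustrates the procedure with invented rate constants, not the measured data.

import numpy as np

# Eyring-plot extrapolation: fit ln(k/T) versus 1/T at high temperatures, then predict k at 25 C.
kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314462618

T = np.array([373.15, 393.15, 413.15, 433.15])        # 100-160 C
k = np.array([2.0e-7, 1.1e-6, 5.0e-6, 2.0e-5])        # hypothetical rate constants, s^-1

slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)
dH = -slope * R                                       # activation enthalpy, J/mol
dS = (intercept - np.log(kB / h)) * R                 # activation entropy, J/(mol K)

T25 = 298.15
k25 = (kB * T25 / h) * np.exp(dS / R - dH / (R * T25))
print(f"dH = {dH/1000:.0f} kJ/mol, extrapolated k(25 C) = {k25:.2e} s^-1")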

Classification Control for Discrimination between Interictal Epileptic and Non-Epileptic Pathological EEG Events

In this study, the problem of discriminating between interictal epileptic and non-epileptic pathological EEG cases, which present episodic loss of consciousness, is investigated. We verify the accuracy of the feature extraction method of auto-cross-correlated coefficients, which were extracted and studied in a previous study. For this purpose we used, on the one hand, a suitably constructed supervised artificial LVQ1 neural network and, on the other, a cross-correlation technique. To reinforce the above verification we used a statistical procedure based on a chi-square control. The classification and statistical results showed that the proposed feature extraction is a significantly accurate method for the diagnostic discrimination of interictal from non-interictal EEG events; specifically, the classification procedure showed that the LVQ neural method is superior to the cross-correlation one.
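
As a sketch of the supervised LVQ1 scheme used for the classification step, the Python code below implements the standard LVQ1 prototype update (winning prototypes are attracted to correctly labelled samples and repelled otherwise); the feature vectors are random placeholders standing in for the correlation coefficients.

import numpy as np

# LVQ1 training sketch: each class is represented by prototype vectors that are pulled
# toward correctly classified samples and pushed away from misclassified ones.
def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=50):
    W = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = np.argmin(np.linalg.norm(W - x, axis=1))    # best matching prototype
            if proto_labels[j] == label:
                W[j] += lr * (x - W[j])
            else:
                W[j] -= lr * (x - W[j])
        lr *= 0.95                                          # decay the learning rate
    return W

def lvq1_predict(X, W, proto_labels):
    return np.array([proto_labels[np.argmin(np.linalg.norm(W - x, axis=1))] for x in X])

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 4)), rng.normal(1, 0.3, (30, 4))])
y = np.array([0] * 30 + [1] * 30)                           # 0: interictal, 1: non-epileptic
W = lvq1_train(X, y, X[[0, 30]].astype(float), np.array([0, 1]))
print("training accuracy:", (lvq1_predict(X, W, np.array([0, 1])) == y).mean())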

Effect of Information System Strategies on Supply Chain Strategies and Supply Chain Performance

In order to achieve competitive advantage and better firm performance, the supply chain management (SCM) strategy should support and drive the business strategy forward. This means that the supply chain should be aligned with the business strategy; at the same time, supply chain (SC) managers need to use appropriate information system (IS) solutions to support their strategy in order to stay competitive. There are different kinds of IS strategies, and managers can meet SC requirements by selecting the most suitable one. Therefore, it is important to align IS strategies and practices with SC strategies and practices, which could help in planning an IS application that supports and enhances an SCM strategy. In this study, the alignment of IS with SC at the strategy level is considered. The main aim of this paper is to align the various IS strategies with SCM strategies and demonstrate their impact on SC and firm performance.

The Performance Analysis of CSS-based Communication Systems in the Jamming Environment

Due to its capability to resist jamming signals, the chirp spread spectrum (CSS) technique has attracted much attention in the area of wireless communications. However, there has been little rigorous analysis of the performance of CSS communication systems in jamming environments. In this paper, we present analytic results on the performance of a CSS system by deriving symbol error rate (SER) expressions for a CSS M-ary phase shift keying (MPSK) system in the presence of broadband and tone jamming signals, respectively. The numerical results show that the empirical SER closely agrees with the analytic results.
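
The closed-form SER expressions are derived in the paper itself; as a rough illustration of the kind of result involved, the sketch below evaluates the standard MPSK SER approximation when broadband jamming is treated as additional Gaussian noise. That modeling choice and the parameter values are assumptions made for illustration, not the paper's derivation.

import math

# Approximate SER of M-ary PSK with broadband jamming modeled as extra Gaussian noise:
# SER ~= 2*Q(sqrt(2*Es/(N0+J0)) * sin(pi/M)).  Values below are illustrative only.
def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def mpsk_ser(es_n0_db, es_j0_db, M=4):
    es_n0 = 10 ** (es_n0_db / 10)              # Es/N0, linear
    es_j0 = 10 ** (es_j0_db / 10)              # Es/J0, linear (broadband jammer)
    sinr = 1.0 / (1.0 / es_n0 + 1.0 / es_j0)   # effective Es/(N0 + J0)
    return 2 * q_func(math.sqrt(2 * sinr) * math.sin(math.pi / M))

for sjr in (0, 5, 10):                          # signal-to-jammer ratios in dB
    print(f"SJR = {sjr:2d} dB -> SER = {mpsk_ser(10, sjr):.3e}")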

The Traditional Malay Textile (TMT) Knowledge Model: Transformation towards Automated Mapping

The growing interest in national heritage preservation has led to intensive efforts on digital documentation of cultural heritage knowledge. Encapsulated within this effort is a focus on ontology development to help facilitate the organization and retrieval of the knowledge. Ontologies in the cultural heritage domain are related to archive, museum and library information such as archaeology, artifacts, paintings, etc. The growth in the number and size of ontologies indicates the wide acceptance of semantic enrichment in many emerging applications. Nowadays, there are many heritage information systems available for access; among them is the community-based e-museum designed to support digital cultural heritage preservation. This work extends the previous effort of developing the Traditional Malay Textile (TMT) Knowledge Model, which was designed with the intention of auxiliary mapping with CIDOC CRM. Due to its internal constraints, the model needs to be transformed in advance. This paper addresses the issue by reviewing previous harmonization works with CIDOC CRM as exemplars for refining the facets in the model, particularly those involving the TMT-Artifact class. The result is an extensible model that could lead to a common view for automated mapping with CIDOC CRM. Hence, it promotes the integration and exchange of textile information, especially batik-related information, between communities in e-museum applications.

Noise Estimation for Speech Enhancement in Non-Stationary Environments: A New Method

This paper presents a new method for estimating the non-stationary noise power spectral density given a noisy signal. The method is based on averaging the noisy speech power spectrum using time- and frequency-dependent smoothing factors. These factors are adjusted based on the signal-presence probability in individual frequency bins. Signal presence is determined by computing the ratio of the noisy speech power spectrum to its local minimum, which is updated continuously by averaging past values of the noisy speech power spectra with a look-ahead factor. This method adapts very quickly to highly non-stationary noise environments. The proposed method achieves significant improvements over a system that uses a voice activity detector (VAD) for noise estimation.
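
A minimal sketch of this style of estimator is given below: the local minimum of the noisy power spectrum is tracked continuously, the ratio to that minimum drives a per-bin speech-presence decision, and that decision sets the smoothing factor used to update the noise estimate. The constants and the hard threshold are illustrative choices, not the paper's tuned values.

import numpy as np

# Per-frame update of a noise-PSD estimate with frequency-dependent smoothing factors.
def update_noise_estimate(noisy_psd, noise_est, p_min, alpha_d=0.85, ratio_thr=5.0):
    # Track the local minimum of the noisy PSD with a slow, look-ahead style update.
    beta, gamma = 0.8, 0.998
    p_min = np.where(noisy_psd < p_min,
                     noisy_psd,
                     gamma * p_min + ((1 - gamma) / (1 - beta)) * (noisy_psd - beta * p_min))
    ratio = noisy_psd / np.maximum(p_min, 1e-12)          # noisy power over its local minimum
    speech_prob = (ratio > ratio_thr).astype(float)       # crude per-bin speech-presence decision
    alpha = alpha_d + (1 - alpha_d) * speech_prob         # freeze the update where speech is likely
    noise_est = alpha * noise_est + (1 - alpha) * noisy_psd
    return noise_est, p_min

rng = np.random.default_rng(0)
noise_est = p_min = np.full(129, 1e-3)
for _ in range(200):                                      # iterate over STFT frames
    frame_psd = rng.exponential(1e-3, 129)                # synthetic noise-only power spectrum
    noise_est, p_min = update_noise_estimate(frame_psd, noise_est, p_min)
print(f"mean noise estimate: {noise_est.mean():.2e}")     # biased-low track of the 1e-3 floor
                                                          # (no bias compensation in this sketch)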

Impact of the Amendments of Malaysian Code of Corporate Governance (2007) on Governance of GLCs and Performance

The study aims to investigate the impact of the revision of the MCCG (2007) on the board and audit committee characteristics and the firm performance of GLCs over the period 2005-2010. We used Return on Assets (ROA) as a proxy for firm performance. The data consist of two groups: data collected before and data collected after the amendments of the MCCG (2007). Findings show that boards of directors with accounting/finance qualifications (BEXP) are significantly associated with performance for the period before the amendments. As for audit committee members with accounting or finance qualifications (ACEXP), correlation results indicate a negative and non-significant association for the years before the amendments. However, the years after the amendments show a positive relationship with highly significant correlations (at the 1% level) with ROA. This indicates that the MCCG 2007 amendments concerning audit committee members' accounting literacy have impacted the governance structures and performance of GLCs.

A New Empirical Expression of the Breakdown Voltage for Combined Variations of Temperature and Pressure

In aircraft applications, depending on the nature of the electrical equipment, it may be located in an unpressurized area or very close to the engine; thus, the environmental pressure may range from atmospheric down to less than 100 mbar, and the temperature may be higher than ambient, as in most real working conditions of electrical equipment. The classical Paschen curve therefore has to be replotted, since these parameters may affect the discharge ignition voltage. In this paper, we first investigate the domain of validity of two corrective expressions of Paschen's law found in the literature for a changing air environment, known as the Peek and Dunbar corrections. Results show that these corrections are no longer valid for combined variations of temperature and pressure. A new empirical expression for the breakdown voltage is then proposed and validated for combined variations of temperature and pressure.
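
For reference, the classical Paschen expression that such corrections modify can be evaluated as below; temperature is folded in through a simple gas-density correction (replacing p by p*T0/T), and the constants are textbook-style values for air assumed purely for illustration, not the new empirical expression proposed in the paper.

import math

# Classical Paschen expression for air with a simple density correction for temperature.
# A [1/(Torr*cm)], B [V/(Torr*cm)] and the secondary-emission coefficient are assumed values.
A, B, GAMMA = 15.0, 365.0, 0.01

def paschen_voltage(p_torr, d_cm, T_kelvin=293.15, T0=293.15):
    pd_eff = p_torr * (T0 / T_kelvin) * d_cm             # density-corrected p*d
    denom = math.log(A * pd_eff) - math.log(math.log(1 + 1 / GAMMA))
    if denom <= 0:
        return float("inf")                              # formula not valid at very small p*d
    return B * pd_eff / denom

for p, T in [(760, 293.15), (75, 293.15), (75, 453.15)]:  # sea level, ~100 mbar, hot + low pressure
    print(f"p = {p:4.0f} Torr, T = {T:6.1f} K -> Vb = {paschen_voltage(p, 0.1, T):7.0f} V")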

Performance Analysis of Evolutionary ANN for Output Prediction of a Grid-Connected Photovoltaic System

This paper presents a performance analysis of the Evolutionary Programming-Artificial Neural Network (EPANN) based technique to optimize the architecture and training parameters of a one-hidden-layer feedforward ANN model for the prediction of energy output from a grid-connected photovoltaic system. The ANN utilizes solar radiation and ambient temperature as its inputs, while the output is the total watt-hour energy produced by the grid-connected PV system. EP is used to optimize the regression performance of the ANN model by determining the optimum values for the number of nodes in the hidden layer as well as the optimal momentum rate and learning rate for the training. The EPANN model is tested using two types of transfer function for the hidden layer, namely the tangent sigmoid and the logarithmic sigmoid. The best transfer function, neural topology and learning parameters were selected based on the highest regression performance obtained during the ANN training and testing process. It is observed that the best transfer function configuration for the prediction model is [logarithmic sigmoid, purely linear].
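
A minimal sketch of the evolutionary-programming loop for tuning the hidden-layer size, learning rate and momentum is given below; the fitness function is a stand-in for training the one-hidden-layer ANN and returning its regression performance, and all ranges and mutation scales are assumptions.

import numpy as np

# Evolutionary-programming sketch for tuning (hidden nodes, learning rate, momentum).
rng = np.random.default_rng(42)

def fitness(hidden, lr, momentum):
    # Placeholder objective with an arbitrary optimum; replace with actual ANN training
    # that returns the regression performance (R) on the PV data.
    return -((hidden - 12) ** 2) / 100 - (lr - 0.1) ** 2 - (momentum - 0.8) ** 2

def evolve(pop_size=20, generations=40):
    pop = np.column_stack([rng.integers(2, 30, pop_size),      # hidden nodes
                           rng.uniform(0.01, 0.5, pop_size),   # learning rate
                           rng.uniform(0.1, 0.95, pop_size)])  # momentum
    for _ in range(generations):
        # Each parent produces one mutated offspring (Gaussian perturbation), then the
        # best pop_size individuals out of parents + offspring survive.
        children = pop + rng.normal(0, [1.0, 0.02, 0.05], pop.shape)
        children[:, 0] = np.clip(np.round(children[:, 0]), 2, 30)
        children[:, 1:] = np.clip(children[:, 1:], 0.001, 0.99)
        combined = np.vstack([pop, children])
        scores = np.array([fitness(int(h), lr, m) for h, lr, m in combined])
        pop = combined[np.argsort(scores)[-pop_size:]]
    return pop[-1]

best = evolve()
print("best: hidden =", int(best[0]), "lr =", round(best[1], 3), "momentum =", round(best[2], 3))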

An Energy Efficient Cluster Formation Protocol with Low Latency in Wireless Sensor Networks

Data gathering is an essential operation in wireless sensor network applications, so energy-efficient techniques are required to increase the lifetime of the network. Similarly, clustering is an effective technique to improve the energy efficiency and network lifetime of wireless sensor networks. In this paper, an energy-efficient cluster formation protocol is proposed with the objective of achieving low energy dissipation and latency without sacrificing application-specific quality. The objective is achieved by applying randomized, adaptive, self-configuring cluster formation and localized control for data transfers. It involves application-specific data processing, such as data aggregation or compression. The cluster formation algorithm allows each node to make independent decisions, so as to generate good clusters in the end. Simulation results show that the proposed protocol utilizes minimum energy and latency for cluster formation, thereby reducing the overhead of the protocol.
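
The abstract does not name the election rule, but randomized, adaptive, self-configuring cluster formation is commonly realized with a rotating probabilistic threshold in the style of LEACH; the sketch below illustrates that idea with assumed parameters rather than the paper's exact protocol.

import random

# LEACH-style randomized cluster-head election: every node elects itself cluster head
# with a rotating probability so the role (and its energy cost) is shared over rounds.
P = 0.05                                     # desired fraction of cluster heads per round

def threshold(round_no, was_head_recently):
    if was_head_recently:
        return 0.0                           # nodes that served recently sit out
    return P / (1 - P * (round_no % int(1 / P)))

def elect_cluster_heads(nodes, round_no, recent_heads):
    return [n for n in nodes
            if random.random() < threshold(round_no, n in recent_heads)]

nodes = list(range(100))
recent = set()
for r in range(3):
    heads = elect_cluster_heads(nodes, r, recent)
    recent |= set(heads)                     # in a full protocol this set is cleared every 1/P rounds
    print(f"round {r}: {len(heads)} cluster heads")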

The Number of Rational Points on Elliptic Curves and Circles over Finite Fields

In elliptic curve theory, the number of rational points on elliptic curves and the determination of these points is a fairly important problem. Let p be a prime, Fp be a finite field and k ∈ Fp. It is well known which points the curve y^2 = x^3 + kx has over Fp and what the number of these rational points is. Consider the circle family x^2 + y^2 = r^2. It is interesting to determine the common points of these two curve families and to find the number of these common points. In this work we study this problem.
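
For small parameters the common points can simply be counted by brute force, which is a convenient way to check theoretical counts; the sketch below enumerates the affine points of both families over Fp (the values of p, k and r are arbitrary examples).

# Brute-force count of the affine points shared by the elliptic curve y^2 = x^3 + k*x
# and the circle x^2 + y^2 = r^2 over F_p (small parameters chosen purely for illustration).
def common_points(p, k, r):
    curve = {(x, y) for x in range(p) for y in range(p)
             if (y * y - (x ** 3 + k * x)) % p == 0}
    circle = {(x, y) for x in range(p) for y in range(p)
              if (x * x + y * y - r * r) % p == 0}
    return curve & circle

p, k, r = 13, 3, 5
pts = common_points(p, k, r)
print(len(pts), sorted(pts))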

Analysis of Meteorological Drought in the Ruhr Basin by Using the Standardized Precipitation Index

Drought is one of the most damaging climate-related hazards; it is generally considered a prolonged absence of precipitation. This normal and recurring climate phenomenon has plagued civilization throughout history because of its negative impacts on the economic, environmental and social sectors. Drought characteristics are thus recognized as important factors in water resources planning and management. The purpose of this study is to detect changes in drought frequency, persistence and severity in the Ruhr river basin. The frequency of drought events was calculated using the Standardized Precipitation Index (SPI). The data used are daily precipitation records from seven meteorological stations covering the period 1961-2007. The main benefit of this index is its versatility: only rainfall data are required, and it delivers five major dimensions of a drought: duration, intensity, severity, magnitude and frequency. Furthermore, the index can be calculated for different time steps. In this study the SPI was calculated for 1, 3, 6, 9, 12 and 24 months. Several drought events were detected in the covered period, including mild, moderate and severe droughts. Both positive and negative trends in the SPI values were also observed.
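
A condensed sketch of the SPI computation is given below: precipitation is aggregated over the chosen time scale, a gamma distribution is fitted to the totals, and the cumulative probabilities are mapped to standard normal quantiles. The rainfall series is synthetic, and the single gamma fit (rather than one per calendar month) is a simplification.

import numpy as np
from scipy import stats

# SPI sketch: aggregate monthly precipitation, fit a gamma distribution to the non-zero
# totals and map cumulative probabilities to standard normal quantiles.
def spi(monthly_precip, scale=3):
    totals = np.convolve(monthly_precip, np.ones(scale), mode="valid")  # rolling sums
    zero_prob = np.mean(totals == 0)
    a, loc, b = stats.gamma.fit(totals[totals > 0], floc=0)             # fit gamma to wet totals
    cdf = zero_prob + (1 - zero_prob) * stats.gamma.cdf(totals, a, loc=loc, scale=b)
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)
    return stats.norm.ppf(cdf)                                          # SPI values (z-scores)

rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=30.0, size=12 * 47)                 # synthetic monthly series
spi3 = spi(precip, scale=3)
print("moderate-or-worse drought months:", int(np.sum(spi3 <= -1.0)))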

Characterization of Antioxidant Peptides of Soybean Protein Hydrolysate

In order to characterize the soy protein hydrolysate obtained in this study, gel chromatography on Sephadex G-25 was used to separate the peptide mixture, and SDS-polyacrylamide gel electrophoresis was employed. The protein hydrolysate gave high antioxidant activity but no antimicrobial activity. The antioxidant activity of the protein hydrolysate followed the same trend as the peptide content, with both being high between fractions 15 and 50. With increasing peptide concentration, the scavenging effect on the DPPH radical increased to about 70%, thereafter reaching a plateau. Compared with different concentrations of BHA, which exhibited higher activity (90%), the soybean protein hydrolysate exhibited high antioxidant activity (70%) at a concentration of 1.45 mg/ml for fraction 25. Electrophoresis analysis indicated that low-MW hydrolysate fractions (F1) appeared, on average, to have higher DPPH scavenging activities than high-MW fractions. These results reveal that soybean peptides probably contain substances that are proton donors and can react with free radicals to convert them into stable diamagnetic molecules.

A Feasible Path Selection QoS Routing Algorithm with two Constraints in Packet Switched Networks

Over the past several years, there has been a considerable amount of research within the field of Quality of Service (QoS) support for distributed multimedia systems. One of the key issues in providing end-to-end QoS guarantees in packet networks is determining a feasible path that satisfies a number of QoS constraints. The problem of finding a feasible path is NP-complete if the number of constraints is more than two, and it cannot be solved exactly in polynomial time. We propose a Feasible Path Selection Algorithm (FPSA) that addresses issues pertaining to finding a feasible path subject to delay and cost constraints, and it offers a higher success rate in finding feasible paths.
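
To make the two-constraint feasibility problem concrete, the sketch below performs an exhaustive depth-first search that prunes partial paths violating either the delay or the cost bound; this brute-force search is only an illustration of the problem, not the FPSA heuristic itself, and the topology is a made-up example.

# Feasible-path search under two additive constraints (delay and cost) by pruned DFS.
def find_feasible_path(graph, src, dst, max_delay, max_cost):
    stack = [(src, [src], 0, 0)]
    while stack:
        node, path, delay, cost = stack.pop()
        if node == dst:
            return path, delay, cost
        for nxt, (d, c) in graph.get(node, {}).items():
            if nxt not in path and delay + d <= max_delay and cost + c <= max_cost:
                stack.append((nxt, path + [nxt], delay + d, cost + c))
    return None                                   # no path satisfies both constraints

# graph[u][v] = (delay, cost) for the link u -> v; values are arbitrary examples.
graph = {
    "A": {"B": (2, 1), "C": (1, 4)},
    "B": {"D": (2, 1)},
    "C": {"D": (1, 1)},
}
print(find_feasible_path(graph, "A", "D", max_delay=5, max_cost=3))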

K-Means for Spherical Clusters with Large Variance in Sizes

Data clustering is an important data exploration technique with many applications in data mining. The k-means algorithm is well known for its efficiency in clustering large data sets. However, this algorithm is only suitable for spherically shaped clusters of similar sizes and densities. The quality of the resulting clusters decreases when the data set contains spherically shaped clusters with large variance in sizes. In this paper, we introduce a competent procedure to overcome this problem. The proposed method is based on shifting the center of the large cluster toward the small cluster and recomputing the membership of small-cluster points. The experimental results reveal that the proposed algorithm produces satisfactory results.
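
An illustrative rendering of this idea is sketched below: standard k-means is run first, then the center of the larger cluster is nudged toward the smaller one and the memberships of the small cluster's points are recomputed. The shift fraction and the synthetic data are assumptions, not the paper's exact procedure.

import numpy as np

# k-means followed by a center shift toward the small cluster and local reassignment.
def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels

def shift_large_center(X, centers, labels, frac=0.3):
    sizes = np.bincount(labels, minlength=len(centers))
    big, small = np.argmax(sizes), np.argmin(sizes)
    centers = centers.copy()
    centers[big] += frac * (centers[small] - centers[big])       # move toward the small cluster
    small_pts = np.where(labels == small)[0]                     # reassign only these points
    labels = labels.copy()
    labels[small_pts] = np.argmin(((X[small_pts, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 2.5, (500, 2)), rng.normal((8, 0), 0.4, (40, 2))])  # big + small cluster
centers, labels = kmeans(X, 2)
centers, labels = shift_large_center(X, centers, labels)
print(np.bincount(labels))                                       # cluster sizes after the shift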