The Energy Impacts of Using Top-Light Daylighting Systems for Academic Buildings in a Tropical Climate

Careful design and selection of daylighting systems can greatly help not only in reducing artificial lighting use but also in decreasing cooling energy consumption, thereby creating the potential for downsizing air-conditioning systems. This paper aims to evaluate the energy performance of two types of top-light daylighting systems, with daylight integrated with artificial lighting, in an existing examination hall at Universiti Kebangsaan Malaysia, located in a hot and humid climate. Computer simulation models were created for the case-study building (base case) and for the two top-light daylighting designs, and building energy performance was evaluated using the VisualDOE 4.0 building energy simulation program. The findings reveal that daylighting through top-light systems is a very beneficial design strategy for reducing both annual lighting energy consumption and overall total annual energy consumption.

Trajectory-Based Modified Policy Iteration

This paper presents a new problem-solving approach that is able to generate optimal-policy solutions for finite-state stochastic sequential decision-making problems with high data efficiency. The proposed algorithm iteratively builds and improves an approximate Markov Decision Process (MDP) model along with approximate cost-to-go values by generating finite-length trajectories through the state space. The approach creates a synergy between the evolving approximate model and the approximate cost-to-go values to produce a sequence of improving policies that finally converges to the optimal policy through an intelligent and structured search of the policy space. The approach modifies the policy-update step of policy iteration so as to achieve speedy and stable convergence to the optimal policy. We apply the algorithm to a non-holonomic mobile robot control problem and compare its performance with other Reinforcement Learning (RL) approaches, namely (a) Q-learning, (b) Watkins's Q(λ), and (c) SARSA(λ).
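As an illustration of the general idea, the sketch below couples an empirical tabular MDP model, estimated from sampled trajectories, with a truncated ("modified") policy-evaluation step. The environment interface, discount factor, and number of evaluation sweeps K are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def run_episode(env, policy, max_len=100):
    """Collect one trajectory (s, a, cost, s') following the current policy."""
    s, traj = env.reset(), []
    for _ in range(max_len):
        a = policy[s]
        s2, cost, done = env.step(a)   # assumed environment interface
        traj.append((s, a, cost, s2))
        s = s2
        if done:
            break
    return traj

def modified_policy_iteration(env, n_states, n_actions, gamma=0.95,
                              episodes=200, K=5):
    counts = np.zeros((n_states, n_actions, n_states))
    costs = np.zeros((n_states, n_actions))
    V = np.zeros(n_states)
    policy = np.zeros(n_states, dtype=int)
    for _ in range(episodes):
        # refine the approximate model with a fresh trajectory
        for s, a, c, s2 in run_episode(env, policy):
            counts[s, a, s2] += 1
            costs[s, a] += c
        n = counts.sum(axis=2, keepdims=True)
        P = counts / np.maximum(n, 1)              # empirical P(s'|s,a)
        c_bar = costs / np.maximum(n[:, :, 0], 1)  # empirical mean cost
        Q = c_bar + gamma * P @ V
        for _ in range(K):                         # truncated evaluation
            V = Q[np.arange(n_states), policy]
            Q = c_bar + gamma * P @ V
        policy = Q.argmin(axis=1)                  # greedy improvement
    return policy, V
```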

Multistage Condition Monitoring System of Aircraft Gas Turbine Engine

Research shows that the application of probabilistic-statistical methods, especially at the early stage of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), when the flight information is fuzzy, limited, and uncertain, is unfounded. Hence, the efficiency of applying Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. To this end, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Investigation of the changes in the skewness and kurtosis coefficient values shows that the distributions of the GTE operating parameters have a fuzzy character, so the consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of the GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for the preliminary identification of the engine's technical condition. Studies of the changes in correlation coefficient values also reveal their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is proposed for model selection. When sufficient information is available, a recurrent algorithm for identifying the aviation GTE technical condition (using Hard Computing technology) is proposed, based on measurements of the input and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the proposed technique, the technical condition of a new operating aviation engine was estimated.
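The recursive Least Squares Method used at the Hard Computing identification stage can be sketched as follows; the regressor layout, forgetting factor, and initialisation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal recursive least-squares (RLS) sketch for stage-by-stage
# identification of a linear engine model y = x^T w + noise.
class RecursiveLSM:
    def __init__(self, n_params, lam=0.99, delta=1e3):
        self.w = np.zeros(n_params)        # parameter estimate
        self.P = delta * np.eye(n_params)  # inverse-covariance estimate
        self.lam = lam                     # forgetting factor (assumed)

    def update(self, x, y):
        """One RLS step with a new regressor x and measurement y."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)       # gain vector
        self.w += k * (y - x @ self.w)     # correct estimate by innovation
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.w
```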

Analytical Investigation of Sediment Formation and Transport in the Vicinity of the Water Intake Structures - A Case Study of the Dez Diversion Weir in Greater Dezful

Soil erosion in a water basin, especially in arid and semi-arid regions where vegetation cover on the upstream mountain slopes is poor, contributes to sediment formation. Sedimentation not only considerably changes the morphology and hydraulic characteristics of the river but also poses a major challenge for the operation and maintenance of the canal networks that depend on the river flow to meet stakeholders' requirements. For this reason, mathematical modelling can be used to simulate the factors affecting scouring, sediment transport, and settling along waterways. This is particularly important behind reservoirs, as it enables operators to estimate the useful life of these hydraulic structures. The aim of this paper is to simulate sedimentation and erosion at the eastern and western water intake structures of the Dez Diversion weir using the GSTARS-3 software, in order to estimate the sedimentation, investigate ways to optimize the process, and minimize the operational problems. Results indicated that at the furthest point upstream of the diversion weir, the coarser sediment grains tended to settle; this is attributed to the construction of the phantom bridge and the rock outcrops just upstream of the structure. These obstacles along the river course reduce the momentum needed to carry the sediment load, allowing it to settle wherever the river regime permits. Results further indicated a trend in sediment size: the further downstream the focus of study shifts, the smaller the grains become, and vice versa. It was also found that the GSTARS-3 predictions agreed closely with the observed data, suggesting that the software is a powerful analytical tool that can be applied to river engineering projects at minimal cost and with relatively accurate results.

Olive Leaves Extract Restored the Antioxidant Perturbations in Red Blood Cell Hemolysate in Streptozotocin-Induced Diabetic Rats

Oxidative stress and the overwhelming free radicals associated with diabetes mellitus are likely linked with the development of certain complications such as retinopathy, nephropathy, and neuropathy. Treatment of diabetic subjects with antioxidants may be advantageous in attenuating these complications. Olive leaf (Olea europaea) has been endowed with many beneficial and health-promoting properties, mostly linked to its antioxidant activity. This study aimed to evaluate the significance of supplementation with olive leaves extract (OLE) in reducing oxidative stress, hyperglycemia, and hyperlipidemia in streptozotocin (STZ)-induced diabetic rats. After induction of diabetes, a significant rise in plasma glucose, lipid profiles except high-density lipoprotein cholesterol (HDLc), and malondialdehyde (MDA), together with a significant decrease in plasma insulin, HDLc, and plasma reduced glutathione (GSH), as well as alterations in enzymatic antioxidants, was observed in all diabetic animals. During treatment of diabetic rats with 0.5 g/kg body weight of OLE, the levels of plasma MDA, GSH, insulin, and lipid profiles, along with blood glucose and erythrocyte antioxidant enzymes, were significantly restored to values not different from those of normal control rats. Untreated diabetic rats, on the other hand, demonstrated persistent alterations in the oxidative stress marker MDA, blood glucose, insulin, lipid profiles, and the antioxidant parameters. These results demonstrate that OLE may be of advantage in inhibiting the hyperglycemia, hyperlipidemia, and oxidative stress induced by diabetes, and suggest that administration of OLE may help prevent, or at least reduce, the diabetic complications associated with oxidative stress.

STLF Based on Optimized Neural Network Using PSO

The quality of short-term load forecasting (STLF) can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to a trial-and-error approach. This paper describes the process of developing three-layer feed-forward neural networks for short-term load forecasting and then presents a heuristic search algorithm for an important task in this process, namely optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimal network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a stochastic optimization method based on swarm intelligence with a powerful global optimization capability, and employing PSO in the design and training of ANNs allows the ANN architecture and parameters to be optimized easily. The proposed method is applied to STLF for the local utility. Data are clustered according to differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends, and special days. The experimental results show that the proposed method optimized by PSO can speed up the learning of the network and improve the forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is not only simple to compute but also practical and effective. It also provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem than the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
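A minimal sketch of the PSO training step is given below, assuming a fixed one-hidden-layer architecture for brevity (the paper also searches the structure itself); swarm size, inertia, and acceleration constants are common textbook defaults, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(weights, X, y, n_in, n_hid):
    """Unpack a flat particle into W1, b1, W2, b2 and return forecast MSE."""
    i = 0
    W1 = weights[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = weights[i:i + n_hid]; i += n_hid
    W2 = weights[i:i + n_hid]; i += n_hid
    b2 = weights[i]
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

def pso_train(X, y, n_hid=8, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    n_in = X.shape[1]
    dim = n_in * n_hid + 2 * n_hid + 1
    pos = rng.normal(scale=0.5, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([mse(p, X, y, n_in, n_hid) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + cognitive + social components
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = np.array([mse(p, X, y, n_in, n_hid) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```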

A Study on Reducing Malicious Replies on the Internet: An Approach by Game Theory

Since the advent of the information era, the Internet has brought various positive effects to everyday life. Nevertheless, problems and side-effects have recently been noted; Internet witch-hunts and the spread of pornography are only a few of them. In this study, the problems and causes of malicious replies on Internet boards were analyzed using the key ideas of game theory. The study provides a mathematical model of the Internet reply game and devises three possible plans that could efficiently counteract malicious replies. Furthermore, seven specific measures complying with one of the three plans were proposed and evaluated according to the importance and utility of each measure, using an orthogonal-array survey and SPSS conjoint analysis. The conclusion was that the most effective measure would be to forbid unsigned (anonymous) user access to replies. Also notable was that some of the analytically proposed measures, if implemented, could backfire and encourage malicious replies.

On Discretization of Second-order Derivatives in Smoothed Particle Hydrodynamics

Discretization of spatial derivatives is an important issue in meshfree methods, especially when the derivative terms contain non-linear coefficients. In this paper, various methods used for the discretization of second-order spatial derivatives are investigated in the context of Smoothed Particle Hydrodynamics. Three popular forms (i.e., "double summation", "second-order kernel derivation", and "difference scheme") are studied using the one-dimensional unsteady heat conduction equation. To assess these schemes, the transient response to a step-function initial condition is considered. Owing to the parabolic nature of the heat equation, one can expect smooth and monotone solutions. It is shown in this paper, however, that regardless of the type of kernel function used and the size of the smoothing radius, the double-summation discretization form leads to non-physical oscillations that persist in the solution. Results also show that when a second-order kernel derivative is used, a high-order kernel function should be employed such that the distance of the kernel's inflection point from the origin is less than the nearest-neighbour particle distance; otherwise, solutions may exhibit oscillations near discontinuities, unlike the "difference scheme", which unconditionally produces monotone results.
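The "difference scheme" form (a Brookshaw-type SPH Laplacian) applied to the 1-D heat equation can be sketched as follows; the Gaussian kernel, uniform particle spacing, and explicit Euler stepping are illustrative choices, not necessarily the paper's exact setup.

```python
import numpy as np

def sph_laplacian(x, T, m, rho, h):
    """Difference-scheme SPH estimate of d2T/dx2 at every particle:
    lap_i = sum_j 2 (m/rho) (T_i - T_j) (dW/dr) / |x_i - x_j|."""
    lap = np.zeros_like(T)
    for i in range(len(x)):
        r = np.abs(x[i] - x)
        q = r / h
        mask = (q > 0) & (q < 3.0)          # skip self, truncate support
        # dW/dr for the 1-D Gaussian kernel W = exp(-q^2) / (h sqrt(pi))
        dWdr = -2 * r[mask] / h**2 * np.exp(-q[mask]**2) / (h * np.sqrt(np.pi))
        lap[i] = np.sum(2 * m / rho * (T[i] - T[mask]) * dWdr / r[mask])
    return lap

# step-function initial condition on a uniform particle distribution
n, alpha = 200, 1.0
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
h, m, rho = 1.2 * dx, dx, 1.0
T = np.where(x < 0.5, 1.0, 0.0)

dt = 0.1 * dx**2 / alpha                     # stable explicit time step
for _ in range(500):
    T += dt * alpha * sph_laplacian(x, T, m, rho, h)
# the difference scheme keeps the diffusing front smooth and monotone
```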

Machining Parameters Optimization of Developed Yttria Stabilized Zirconia Toughened Alumina Ceramic Inserts While Machining AISI 4340 Steel

An attempt has been made to investigate the machinability of zirconia-toughened alumina (ZTA) inserts while turning AISI 4340 steel. The inserts were prepared by the powder metallurgy route, and the machining experiments were performed based on a Response Surface Methodology (RSM) design called the Central Composite Design (CCD). Mathematical models of flank wear, cutting force, and surface roughness were developed using second-order regression analysis, and the adequacy of the models was checked using analysis of variance (ANOVA). It can be concluded that cutting speed and feed rate are the two most influential factors for flank wear and cutting force prediction, while for surface roughness both cutting speed and depth of cut contribute significantly. The effect of the key parameters on each response is also presented as graphical contours to aid precise selection of the operating parameters. A desirability level of 83% was achieved at the optimized condition.
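The second-order regression step can be sketched as below, with cutting speed v, feed f, and depth of cut d as factors; the design points and response values would come from the CCD experiments, which are not reproduced here.

```python
import numpy as np

def quadratic_design(X):
    """Expand [v, f, d] rows into a full second-order RSM model matrix."""
    v, f, d = X.T
    return np.column_stack([
        np.ones(len(X)), v, f, d,            # linear terms
        v * f, v * d, f * d,                 # two-factor interactions
        v**2, f**2, d**2,                    # pure quadratic terms
    ])

def fit_rsm(X, y):
    """Least-squares fit of the second-order model; returns beta and R^2."""
    A = quadratic_design(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2
```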

PoPCoRN: A Power-Aware Periodic Surveillance Scheme in a Convex Region Using Wireless Mobile Sensor Networks

In this paper, a periodic surveillance scheme is proposed for any convex region using mobile wireless sensor nodes. A sensor network typically consists of a fixed number of sensor nodes that report measurements of sensed data such as temperature, pressure, and humidity in their immediate proximity (the area within sensing range). To sense an area of interest, an adequate number of fixed sensor nodes is required to cover the entire region, so the number of fixed nodes needed depends on the sensing range of the sensors as well as the deployment strategy employed. Here, the sensors are assumed to be mobile within the region of surveillance, e.g., mounted on moving bodies such as robots or vehicles. Therefore, in our scheme, the surveillance time period determines the number of sensor nodes required to be deployed in the region of interest. The proposed scheme comprises three algorithms, namely Hexagonalization, Clustering, and Scheduling. The first partitions the coverage area into fixed-size hexagons that approximate the sensing range (cell) of an individual sensor node; the clustering algorithm groups the cells into clusters, each of which is covered by a single sensor node; and the last determines a schedule for each sensor to serve its respective cluster. Each sensor node traverses all the cells belonging to its assigned cluster, oscillating between the first and the last cell for the duration of its lifetime. Simulation results show that our scheme provides full coverage within a given period of time using few sensors, with minimum movement, low power consumption, and relatively low infrastructure cost.
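A minimal sketch of the Hexagonalization step is given below: the bounding box of a convex region is tiled with hexagonal cells whose circumradius equals the sensing range, keeping the cells whose centres lie inside the region. The point-in-polygon test and row/column spacing are standard hexagonal-grid geometry; the paper's exact construction may differ.

```python
import math

def inside_convex(poly, p):
    """True if point p lies inside the convex polygon (CCW vertex list)."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
            return False
    return True

def hexagonalize(poly, r):
    """Centres of flat-top hexagonal cells (circumradius r) over the region."""
    xs, ys = zip(*poly)
    dx = 1.5 * r                   # horizontal spacing between columns
    dy = math.sqrt(3) * r          # vertical spacing between rows
    centres, col = [], 0
    x = min(xs)
    while x <= max(xs) + dx:
        y = min(ys) + (dy / 2 if col % 2 else 0)   # stagger odd columns
        while y <= max(ys) + dy:
            if inside_convex(poly, (x, y)):
                centres.append((x, y))
            y += dy
        x += dx
        col += 1
    return centres
```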

Design and Implementation of Real-Time Automatic Censoring System on Chip for Radar Detection

The design and implementation of a novel B-ACOSD CFAR algorithm, proposed for detecting radar targets in a log-normally distributed clutter environment, is presented in this paper. The B-ACOSD detector is capable of automatically detecting the number of interfering targets in the reference cells and of detecting the real target with an adaptive threshold. The detector is implemented as a System on Chip on an Altera Stratix II FPGA using parallelism and pipelining techniques. For a reference window of 16 cells, the experimental results showed that the processor works properly at a processing speed of up to 115.13 MHz and a processing time of 0.29 µs, thus meeting the real-time requirements of a typical radar system.
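For orientation, the sketch below shows a generic ordered-statistics CFAR with automatic censoring, in the spirit of (but not identical to) the B-ACOSD detector: the sorted reference window is scanned to censor cells that look like interfering targets before the adaptive threshold is formed. The censoring rule and scale factors are illustrative assumptions.

```python
import numpy as np

def censoring_cfar(cells, cut_index, n_ref=16, guard=2, alpha=4.0, beta=1.8):
    """Return True if the cell under test (CUT) exceeds the adaptive threshold."""
    half = n_ref // 2
    # reference window: half cells on each side, excluding guard cells
    lead = cells[cut_index - guard - half:cut_index - guard]
    lag = cells[cut_index + guard + 1:cut_index + guard + 1 + half]
    s = np.sort(np.concatenate([lead, lag]))
    k = len(s) - 1
    # censor top-ranked cells whose jump over the next one suggests
    # an interfering target rather than clutter (assumed rule)
    while k > len(s) // 2 and s[k] > beta * s[k - 1]:
        k -= 1
    noise_level = s[:k + 1].mean()       # clean-sample noise estimate
    return cells[cut_index] > alpha * noise_level
```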

Autonomous Virtual Agent Navigation in Virtual Environments

This paper presents a solution for the behavioural animation of autonomous virtual agent navigation in virtual environments. We focus on using the Dempster-Shafer theory of evidence to develop a visual sensor for the virtual agent. The role of the visual sensor is to capture information about the virtual environment and to identify which parts of an obstacle can be seen from the virtual agent's position; this information is required for the virtual agent to coordinate its navigation through the virtual environment. The virtual agent uses a fuzzy controller as its navigation system and the fuzzy α-level method for action selection. The results clearly demonstrate that the path produced is reasonably smooth, despite some sharp turns, and does not deviate far from the potential shortest path. This indicates the benefit of our method: more reliable and accurate paths are produced during the navigation task.
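The evidence-fusion step behind such a visual sensor is Dempster's rule of combination, sketched below; the hypothesis names ("visible", "hidden") are illustrative, not taken from the paper.

```python
# Mass functions are dicts mapping frozenset hypotheses to belief mass.
def dempster_combine(m1, m2):
    """Fuse two mass functions over the same frame of discernment."""
    fused, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass landing on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # normalize by 1 - conflict (Dempster's rule)
    return {h: w / (1.0 - conflict) for h, w in fused.items()}

# two sensor readings about an obstacle edge
m1 = {frozenset(["visible"]): 0.6, frozenset(["visible", "hidden"]): 0.4}
m2 = {frozenset(["visible"]): 0.5, frozenset(["hidden"]): 0.2,
      frozenset(["visible", "hidden"]): 0.3}
print(dempster_combine(m1, m2))
```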

Analysis of the Elastic Scattering of 12C on 11B at Energies near the Coulomb Barrier Using Different Optical Potential Codes

The aim of this work is to study the proton-transfer phenomenon that takes place in the elastic scattering of 12C on 11B at energies near the Coulomb barrier. The reaction was studied at four different energies: 16, 18, 22, and 24 MeV. The experimental angular-distribution data at these energies were compared with the predictions calculated using the optical potential codes ECIS88 and SPIVAL. For the rise in the cross section at backward angles due to the transfer process, the Distorted Wave Born Approximation (DWUCK5) was used. Our analysis showed that the SPIVAL code with an l-dependent imaginary potential could be used effectively.

E-health in Rural Areas: Case of Developing Countries

The application of e-health solutions has brought remarkable advancements to the health care industry, and such solutions have already been embraced in the industrialized countries. In an effort to catch up, developing countries have striven to revolutionize their healthcare industries through the use of information technology in various ways. Based on a technology assessment carried out in Kenya, one of the developing countries, and using multiple case studies in Nyanza Province, this work investigates how five rural hospitals are adapting to the technology shift. The issues examined include the ICT infrastructure and e-health technologies in place, the participants' knowledge of the benefits gained through the use of ICT, and the challenges posing barriers to the use of ICT in these hospitals. The results reveal that the ICT infrastructure in place is inadequate for e-health implementations owing to various existing challenges. Consequently, suggestions on how to tackle these challenges are also presented in this paper.

Non-Parametric Histogram-Based Thresholding Methods for Weld Defect Detection in Radiography

In non-destructive testing by radiography, perfect knowledge of the weld defect shape is an essential step in assessing the quality of the weld and deciding on its acceptance or rejection. Because of the complex nature of the images considered, and so that the detected defect region represents the real defect as accurately as possible, thresholding methods must be chosen judiciously. In this paper, performance criteria are used to conduct a comparative study of four non-parametric histogram-based thresholding methods for the automatic extraction of weld defects in radiographic images.
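The abstract does not name the four methods compared, but one classic member of this family, Otsu's between-class variance criterion, is sketched below to illustrate how a non-parametric histogram threshold is chosen.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the gray level maximizing the between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist / hist.sum()                       # gray-level probabilities
    omega = np.cumsum(p)                        # class-0 probability w(k)
    mu = np.cumsum(p * np.arange(bins))         # cumulative mean mu(k)
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        # sigma_B^2(k) = [mu_t w(k) - mu(k)]^2 / [w(k)(1 - w(k))]
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.nanargmax(sigma_b2)                  # optimal bin index
    return edges[k + 1]                         # threshold gray level
```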

Comparison of Different Types of Sources of Traffic Using SFQ Scheduling Discipline

In this paper, the SFQ (Start-time Fair Queuing) algorithm is analyzed as applied in computer networks, in order to observe how the traffic behaves when different data sources are managed by the scheduler. The computer networks were simulated using the NS2 software to obtain graphs showing the scheduler's performance, and different traffic sources were introduced in the simulation scripts in an attempt to reproduce a realistic scenario. The results show that the traffic can be affected to different degrees depending on the data source: when a Constant Bit Rate source is applied, the scheduler ensures a constant level of data sent and received, although in real life it is impossible to guarantee a level that withstands changes in workload.
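The tag computation at the core of SFQ can be sketched as follows: each arriving packet receives a start tag (the maximum of the system virtual time and the flow's last finish tag) and a finish tag, and packets are served in increasing start-tag order. The flow weights and packet sizes below are illustrative.

```python
import heapq

class SFQ:
    def __init__(self, weights):
        self.w = weights               # weight per flow id
        self.finish = {f: 0.0 for f in weights}
        self.v = 0.0                   # system virtual time
        self.heap = []                 # (start_tag, seq, flow, size)
        self.seq = 0                   # tie-breaker for equal tags

    def enqueue(self, flow, size):
        start = max(self.v, self.finish[flow])
        self.finish[flow] = start + size / self.w[flow]
        heapq.heappush(self.heap, (start, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        start, _, flow, size = heapq.heappop(self.heap)
        self.v = start                 # virtual time = start tag in service
        return flow, size

sched = SFQ({"cbr": 2.0, "ftp": 1.0})
sched.enqueue("cbr", 500); sched.enqueue("ftp", 1500); sched.enqueue("cbr", 500)
while sched.heap:
    print(sched.dequeue())             # served in start-tag order
```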

Diagnosing Dangerous Arrhythmias of Patients by Automatic Detection of QRS Complexes in ECG

In this paper, an automatic QRS-complex detection algorithm was applied to analyze ECG recordings, and five criteria for diagnosing dangerous arrhythmias were applied in a protocol-type automatic arrhythmia diagnosing system. The detection algorithm identified the distribution of QRS complexes in the ECG recordings and related information, such as heart rate and RR interval. In this investigation, twenty sampled ECG recordings of patients with different pathological conditions were collected for off-line analysis. A combined application of four digital filters was proposed as pre-processing to improve the ECG signals and the QRS detection rate, and both hardware filters and digital filters were applied to eliminate the different types of noise mixed with the ECG recordings. The automatic QRS detection algorithm was then applied to verify the distribution of QRS complexes. Finally, quantitative clinical criteria for diagnosing arrhythmia were programmed as a post-processor for practical automatic arrhythmia diagnosis. The results of the automatic diagnoses of dangerous arrhythmias were compared with off-line diagnoses by experienced clinical physicians, and the comparison showed a matching rate of 95% with an experienced physician's diagnoses.
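The paper's four specific digital filters are not named in the abstract; the sketch below shows a generic Pan-Tompkins-style QRS detection pipeline (band-pass filtering, differentiation, squaring, moving-window integration, and thresholding) as a stand-in for the general approach.

```python
import numpy as np

def detect_qrs(ecg, fs=360):
    """Return sample indices of detected QRS complexes."""
    def movavg(x, n):
        return np.convolve(x, np.ones(n) / n, mode="same")
    # crude band-pass: difference of short and long moving averages
    band = movavg(ecg, int(0.025 * fs)) - movavg(ecg, int(0.1 * fs))
    deriv = np.gradient(band)                    # emphasize steep QRS slopes
    energy = movavg(deriv ** 2, int(0.15 * fs))  # 150 ms integration window
    thresh = 0.4 * energy.max()                  # simple fixed-fraction threshold
    peaks, last = [], -fs
    for i in range(1, len(energy) - 1):
        if (energy[i] > thresh and energy[i] >= energy[i - 1]
                and energy[i] > energy[i + 1] and i - last > 0.2 * fs):
            peaks.append(i)                      # 200 ms refractory period
            last = i
    return np.array(peaks)

# heart rate follows from the RR intervals: HR = 60 * fs / mean(diff(peaks))
```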

The Effect of Transformer’s Vector Group on Retained Voltage Magnitude and Sag Frequency at Industrial Sites Due to Faults

This paper deals with the effect of a power transformer's vector group on the basic voltage-sag characteristics during unbalanced faults in a meshed or radial power network. Specifically, the propagation of voltage sags through a power transformer is studied with advanced short-circuit analysis, and a smart method to incorporate this effect into analytical mathematical expressions is proposed. Based on this methodology, the positive effect of transformers of certain vector groups on mitigating the expected number of voltage sags per year (sag frequency) at the terminals of critical industrial customers can be estimated.
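As a worked illustration of the mechanism (standard symmetrical-component reasoning, not the paper's own derivation), consider sag propagation through a Dy1 transformer:

```latex
% The delta winding blocks the zero-sequence component, while the
% positive- and negative-sequence voltages are rotated in opposite
% directions:
\[
  V_0' = 0, \qquad
  V_1' = V_1 \, e^{-j30^{\circ}}, \qquad
  V_2' = V_2 \, e^{+j30^{\circ}} .
\]
% Hence a single-phase-fault sag on the primary loses its zero-sequence
% content and emerges on the secondary as a differently shaped sag,
% generally with a higher retained voltage in the worst phase, which is
% why certain vector groups reduce the sag frequency seen by sensitive
% industrial customers.
```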

A New Pattern for Handwritten Persian/Arabic Digit Recognition

The main problem in recognizing handwritten Persian digits using a neural network is extracting an appropriate feature vector from the image matrix. In this research, an asymmetrical segmentation pattern is proposed to obtain the feature vector. This pattern can be adjusted to an optimal model thanks to its one degree of freedom, used as a control point. Since any chosen segmentation algorithm depends on the digit's identity, a neural network is used to overcome this dependence. The inputs of this network are the moment of inertia and the center of gravity, which do not depend on the digit's identity. The digit itself is then recognized by another neural network. Simulation results indicate a high recognition rate of 97.6% for the newly introduced pattern, compared with previous models for digit recognition.
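The two identity-independent features named in the abstract can be computed from a binary digit image as sketched below; the normalisation choices are illustrative assumptions.

```python
import numpy as np

def digit_features(img):
    """img: 2-D binary array (1 = ink). Returns (cx, cy, inertia)."""
    ys, xs = np.nonzero(img)
    mass = len(xs)
    cx, cy = xs.mean(), ys.mean()            # center of gravity
    # moment of inertia about the center of gravity, normalised by mass^2
    # so the feature is roughly scale-independent (assumed normalisation)
    inertia = ((xs - cx) ** 2 + (ys - cy) ** 2).sum() / mass**2
    h, w = img.shape
    return cx / w, cy / h, inertia           # centroid scaled to [0, 1]
```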

Monotonicity of Dependence Concepts from Independent Random Vector into Dependent Random Vector

When the failure function is monotone, monotonic reliability methods can be used to greatly simplify and facilitate the reliability computations. However, these methods often work in a transformed iso-probabilistic space; to this end, a monotonic simulator or transformation is needed so that the transformed failure function is still monotone. This note first proves that the output distribution of the failure function is invariant under the transformation. It then presents conditions under which the transformed function remains monotone in the newly obtained space; these conditions concern copulas and dependence concepts. In many engineering applications, Gaussian copulas are used to approximate the real-world copulas when the available information on the random variables is limited to the set of marginal distributions and the covariances. This note therefore focuses on the conditional monotonicity of the commonly used transformation from an independent random vector into a dependent random vector with a Gaussian copula.
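The transformation in question can be sketched as a Nataf-type construction: an independent standard-normal vector is correlated through a Cholesky factor and pushed through the marginal inverse CDFs. The exponential marginals and correlation matrix below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

R = np.array([[1.0, 0.6],
              [0.6, 1.0]])            # correlation of the Gaussian copula
L = np.linalg.cholesky(R)             # lower-triangular, positive diagonal

def to_dependent(u, marginal_ppfs):
    """u: independent N(0,1) samples, shape (n, d). Returns dependent X."""
    z = u @ L.T                        # correlated standard normals
    p = stats.norm.cdf(z)              # Gaussian-copula uniforms
    cols = [ppf(p[:, j]) for j, ppf in enumerate(marginal_ppfs)]
    return np.column_stack(cols)

u = np.random.default_rng(1).standard_normal((10_000, 2))
X = to_dependent(u, [stats.expon(scale=2.0).ppf, stats.expon(scale=0.5).ppf])
# Because L has a positive diagonal and each CDF/inverse-CDF step is
# increasing, X_j is non-decreasing in u_j with the earlier components
# held fixed: the kind of monotonicity property the note investigates.
```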