Solar Radiation Studies for Dubai and Sharjah, UAE

Global solar radiation (H) for Dubai (latitude 25.25°N, longitude 55°E) and Sharjah (latitude 25.29°N, longitude 55°E), UAE, has been studied using sunshine-hour data (n) for the areas and several estimation methods. The calculated global solar radiation values are then compared with measured values reported by NASA. Furthermore, the extraterrestrial (H0), diffuse (Hd) and beam (Hb) radiation are also calculated. The diffuse radiation is estimated using the methods proposed by Page and by Liu and Jordan (L-J); the Page method yields higher diffuse radiation than the L-J method. Moreover, the clearness index (KT) indicates a clear sky almost all year round. Rainy days number only a few per year and are limited to the months of December to March. The temperature ranges from about 25°C in winter to 44°C in summer, which is favourable for thermal applications of solar energy. The estimated results suggest that solar radiation can be utilized efficiently throughout the year for photovoltaic and thermal applications.
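
For concreteness, the sunshine-hour estimation idea can be sketched as below; the Angström-Prescott coefficients a and b are generic textbook values, not the regression constants fitted in this study:

```python
import math

GSC = 1367.0  # solar constant, W/m^2

def _solar_geometry(lat_deg, day):
    """Solar declination delta and sunset hour angle ws (both in radians)."""
    phi = math.radians(lat_deg)
    delta = math.radians(23.45 * math.sin(math.radians(360.0 * (284 + day) / 365.0)))
    ws = math.acos(-math.tan(phi) * math.tan(delta))
    return phi, delta, ws

def extraterrestrial_daily(lat_deg, day):
    """Daily extraterrestrial radiation H0 in MJ/m^2 (standard expression)."""
    phi, delta, ws = _solar_geometry(lat_deg, day)
    e0 = 1.0 + 0.033 * math.cos(math.radians(360.0 * day / 365.0))
    h0 = (24 * 3600 / math.pi) * GSC * e0 * (
        math.cos(phi) * math.cos(delta) * math.sin(ws)
        + ws * math.sin(phi) * math.sin(delta))
    return h0 / 1e6  # J/m^2 -> MJ/m^2

def angstrom_prescott(lat_deg, day, sunshine_hours, a=0.25, b=0.50):
    """Global radiation H = H0 * (a + b * n/N); a, b are illustrative values."""
    _, _, ws = _solar_geometry(lat_deg, day)
    day_length = 24.0 / math.pi * ws  # maximum possible sunshine hours N
    return extraterrestrial_daily(lat_deg, day) * (a + b * sunshine_hours / day_length)
```

For example, for Dubai (25.25°N) near the June solstice (day 172) with 11 sunshine hours, H0 is roughly 40 MJ/m^2/day and the estimated H is a fraction of that set by the clearness of the sky.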

Splitting Modified Donor-Cell Schemes for Spectral Action Balance Equation

The spectral action balance equation is used to simulate short-crested, wind-generated waves in shallow-water areas such as coastal regions and inland waters. The equation involves two spatial dimensions, wave direction, and wave frequency, and can be solved by the finite difference method. When the equation is discretized using central differences and the propagation velocity terms dominate, stability problems occur if the grid spacing is chosen too coarse. In this paper, we introduce the splitting modified donor-cell scheme to avoid these stability problems and prove that it is consistent with the modified donor-cell scheme and of the same accuracy. The splitting modified donor-cell scheme splits the wave spectral action balance equation into four one-dimensional problems, each of which yields an independent tridiagonal linear system. The smaller systems can be solved simultaneously by direct or iterative methods, which is very fast on a multi-core computer.
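
Each of the four one-dimensional problems produces a tridiagonal linear system, which is solved efficiently by the Thomas algorithm; the following is a generic O(n) solver sketch, independent of the paper's particular discretization:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d, where a is the sub-diagonal,
    b the main diagonal and c the super-diagonal (a[0] and c[-1] unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because the four sub-problems are independent, one such solve per sweep can run on each core in parallel.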

Anodic Growth of Highly Ordered Titanium Oxide Nanotube Arrays: Effects of Critical Anodization Factors on their Photocatalytic Activity

Highly ordered arrays of TiO2 nanotubes (TiNTs) were grown vertically on Ti foil by electrochemical anodization. We controlled the lengths of these TiNTs from 2.4 to 26.8 μm by varying the water content (1, 3, and 6 wt%) of the ethylene glycol electrolyte containing 0.5 wt% NH4F, and by anodizing at various applied voltages (20–80 V), periods (10–240 min) and temperatures (10–30 °C). For vertically aligned TiNT arrays, not only the tube length but also the geometric structure (wall thickness and surface roughness) and crystalline structure significantly influence the photocatalytic activity. The optimal length for methylene blue (MB) photodegradation was 18 μm; extending the TiNT length further yielded lower photocatalytic activity, presumably because of limited MB diffusion and light-penetration depth into the TiNT arrays. The results indicate that the maximum MB photodegradation rate was obtained for discrete anatase TiO2 nanotubes with thick, rough walls.

Accelerated Microwave Extraction of Natural Products Using Cryogrinding

Steam distillation assisted by microwave extraction (SDAM), considered an accelerated extraction technique, combines microwave heating and steam distillation performed at atmospheric pressure. SDAM has been compared with the same technique coupled with cryogrinding of the seeds (SDAM-CG). Isolation and concentration of volatile compounds are performed in a single stage for the extraction of essential oil from Cuminum cyminum seeds. The essential oils extracted by these two methods for 5 min differed both quantitatively (yield) and qualitatively (aromatic profile). These methods yield an essential oil with higher amounts of the more valuable oxygenated compounds and allow substantial cost savings in terms of time, energy and plant material. SDAM and SDAM-CG are green technologies and appear to be good alternatives for the extraction of essential oils from aromatic plants.

Generalized Method for Estimating Best-Fit Vertical Alignments for Profile Data

When the profile information of an existing road is missing or out of date and the parameters of the vertical alignment are needed for engineering analysis, the engineer has to recreate the geometric design features of the road alignment from collected profile data. The profile data may be collected using traditional surveying methods, global positioning systems, or digital imagery. This paper develops a method that estimates the parameters of the geometric features that best characterize the existing vertical alignment in terms of tangents and curve equations, which may describe symmetrical, asymmetrical, reverse, and complex vertical curves. The method is implemented using an Excel-based optimization that minimizes the differences between the observed profile and the profiles computed from the vertical-curve equations. The method uses a 'wireframe' representation of the profile that makes it applicable to all types of vertical curves. A secondary contribution of this paper is to introduce the properties of the equal-arc asymmetrical curve, which has recently been developed in the highway geometric design field.
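
As a minimal sketch of the minimization step, a single symmetrical (parabolic) curve can be fitted to profile points by least squares; the paper's Excel-based method additionally handles tangents and asymmetrical, reverse, and complex curves:

```python
import numpy as np

def fit_vertical_curve(stations, elevations):
    """Least-squares fit of a parabolic vertical curve z = c*s^2 + b*s + a
    to observed profile points; returns the coefficients (highest power
    first, as in numpy.polyfit) and the RMS fitting error."""
    coeffs = np.polyfit(stations, elevations, 2)
    residuals = elevations - np.polyval(coeffs, stations)
    return coeffs, float(np.sqrt(np.mean(residuals ** 2)))
```

On noise-free data sampled from a parabola, the recovered coefficients reproduce the entry grade (b) and rate of grade change (2c) exactly; with surveyed data the RMS error quantifies how well the assumed curve type matches the observations.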

Color Image Segmentation and Multi-Level Thresholding by Maximization of Conditional Entropy

In this work, a novel approach to color image segmentation is discussed that uses higher-order entropy as a textural feature for determining thresholds over a two-dimensional image histogram. A similar approach is applied to achieve multi-level thresholding in both grayscale and color images. The paper discusses two methods of color image segmentation using RGB as the standard processing space. The segmentation threshold is determined by maximizing the conditional entropy in the two-dimensional histogram of the color image separated into its three grayscale components (R, G and B). The features are first developed independently for the three components and then combined to obtain different color-component segmentations. Considering local maxima instead of the global maximum of the conditional entropy yields multiple thresholds for the same image, which forms the basis for multi-level thresholding.
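
As an illustration of the entropy-maximization principle, the following sketch finds a single threshold on a one-dimensional histogram (a simplified, Kapur-style criterion); the paper itself maximizes conditional entropy over a two-dimensional histogram:

```python
import math

def max_entropy_threshold(hist):
    """Return the bin index t that maximizes the sum of the Shannon
    entropies of the two classes hist[:t] and hist[t:] (Kapur-style
    maximum-entropy thresholding on a grey-level histogram)."""
    total = float(sum(hist))
    p = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, len(p)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

Collecting the local maxima of the same criterion, rather than only its global maximum, gives the multiple thresholds used for multi-level segmentation.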

Rapid Determination of Biochemical Oxygen Demand

Biochemical oxygen demand (BOD) is a measure of the oxygen used in the bacteria-mediated oxidation of organic substances in water and wastewater. Theoretically, an infinite time is required for complete biochemical oxidation of organic matter, but the measurement is made over a 5-day test period at 20 °C or a 3-day period at 27 °C, with or without dilution. Researchers have worked to reduce the measurement time further. The objective of this paper is to review advances made in BOD measurement, primarily to shorten the test and overcome measurement difficulties. The literature on four such techniques is surveyed: BOD-BART™, biosensors, the ferricyanide-mediated approach, and the luminous bacterial cells-immobilized chip method. The basic principle, method of determination, data validation, and advantages and disadvantages of each method are presented. In the BOD-BART™ method, the time lag for the system to change from an oxidative to a reductive state is measured. Biosensors couple a biological sensing element with a transducer that produces a signal proportional to the analyte concentration; since any single microbial species has metabolic deficiencies, co-immobilization of bacteria in a sol-gel biosensor increases the substrate range. In the ferricyanide-mediated approach, ferricyanide is used as the electron acceptor instead of oxygen. In the luminous bacterial cells-immobilized chip method, bacterial bioluminescence governed by the lux genes is observed, and the physiological responses, measured through reduced or emitted light, are correlated to BOD. There remains scope for further work on the rapid estimation of BOD.
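
The 5-day convention rests on first-order BOD kinetics, which can be sketched as follows; the rate constant k here is an illustrative textbook value, not one drawn from the reviewed methods:

```python
import math

def bod_t(l0, k, t):
    """Oxygen demand exerted after t days under first-order kinetics:
    BOD_t = L0 * (1 - e^(-k t)), with ultimate demand L0 (mg/L)
    and deoxygenation rate k (1/day, base e)."""
    return l0 * (1.0 - math.exp(-k * t))

def ultimate_bod(bod5, k=0.23):
    """Invert the 5-day reading to estimate the ultimate BOD L0."""
    return bod5 / (1.0 - math.exp(-k * 5))
```

With k = 0.23/day, the 5-day test captures roughly two-thirds of the ultimate demand, which is why rapid methods that estimate L0 directly are attractive.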

Adaptive Helmholtz Resonator in a Hydraulic System

An adaptive Helmholtz resonator was designed and adapted to hydraulics. The resonator was controlled by open- and closed-loop controls so that 20 dB attenuation of the peak-to-peak value of the pulsating pressure was maintained. The closed-loop control performed better, albeit more slowly, under the pressure and temperature variations that cause the effective bulk modulus of the hydraulic system to vary: low-pressure hydraulics contains air, which affects the stiffness of the system, and temperature variation changes the viscosity of the oil. Thus, an open-loop control loses its efficiency if a condition such as temperature or the amount of air changes after calibration. The instability of the low-pressure hydraulic system reduced the operational frequency range of the Helmholtz resonator compared with the predictions of an analytical model. Different dampers for hydraulics are first presented; analytical models of a hydraulic pipe and of a hydraulic pipe with a Helmholtz resonator, based on the wave equation of sound pressure, are then given. Finally, the control methods and experimental results are presented.
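
The basic (unadapted) resonator tuning follows the classical Helmholtz relation; the numbers in the usage example below are illustrative, not the paper's rig parameters:

```python
import math

def helmholtz_frequency(c, neck_area, neck_length, volume):
    """Resonance frequency f = (c / 2*pi) * sqrt(A / (V * L)) of a
    Helmholtz resonator with neck area A, neck length L and cavity
    volume V; c is the speed of sound in the fluid, which depends on
    the effective bulk modulus (hence on pressure, temperature and
    entrained air) — the very quantities the adaptive control tracks."""
    return c / (2.0 * math.pi) * math.sqrt(neck_area / (volume * neck_length))
```

For instance, with c = 1400 m/s (typical for hydraulic oil), a 1 cm^2 neck of 5 cm length and a 1 L cavity, the resonance lands near 315 Hz; a drop in c from entrained air shifts it proportionally, which is why a fixed (open-loop) tuning detunes after conditions change.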

High Dynamic Range Resampling for Software Radio

The classic problem of recovering arbitrary values of a band-limited signal from its samples has an added complication in software radio applications: the resampling calculations inevitably fold aliases of the analog signal back into the original bandwidth. The phenomenon is quantified by the spur-free dynamic range (SFDR). We demonstrate how a novel application of the Remez (Parks-McClellan) algorithm permits optimal signal recovery and SFDR, far surpassing state-of-the-art resamplers.
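
A minimal sketch of a Remez-designed lowpass prototype, assuming SciPy's Parks-McClellan implementation; the tap count and band edges are illustrative, not the paper's optimized resampler design:

```python
import numpy as np
from scipy.signal import remez, freqz

# Equiripple lowpass prototype: passband up to 0.20*fs, stopband from
# 0.28*fs. The stopband attenuation bounds the alias energy that a
# resampler built on this filter folds back into the signal band.
taps = remez(numtaps=61, bands=[0.0, 0.20, 0.28, 0.5], desired=[1, 0], fs=1.0)

w, h = freqz(taps, worN=2048)            # w in rad/sample over [0, pi)
f = w / (2.0 * np.pi)                    # normalized frequency, cycles/sample
mag_db = 20.0 * np.log10(np.maximum(np.abs(h), 1e-12))
stop_db = mag_db[f >= 0.29].max()        # worst-case stopband leakage (dB)
pass_rip = np.abs(mag_db[f <= 0.19]).max()  # passband ripple (dB)
```

Because the Remez exchange minimizes the maximum deviation, the worst-case stopband level — and hence the SFDR floor it implies — is the quantity being optimized, which is the connection exploited in the paper.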

Parallel-computing Approach for FFT Implementation on Digital Signal Processor (DSP)

An efficient parallel formulation on a digital signal processor can improve an algorithm's performance. The butterfly structure plays an important role in the fast Fourier transform (FFT) because its symmetric form is well suited to hardware implementation. Although the structure is symmetric, performance is reduced by the data-dependent flow characteristic. Even though recent research on novel memory-reference reduction methods (NMRRM) for the FFT focuses on reducing memory references to the twiddle factors, the data-dependent property remains. In this paper, we propose a parallel-computing approach for FFT implementation on a digital signal processor (DSP) that is based on a data-independent property while retaining low memory reference. The proposed method combines the final two steps of the NMRRM FFT to obtain a novel data-independent structure, which is well suited to multi-operation-unit DSPs and dual-core systems. We have applied the proposed method to the radix-2 FFT algorithm with low memory reference on a TI TMS320C64x DSP. Experimental results show the method reduces clock cycles by 33.8% compared with the NMRRM FFT implementation while keeping the low-memory-reference property.
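
For reference, the radix-2 butterfly recursion underlying such implementations can be sketched in a few lines (a textbook Cooley-Tukey form, not the proposed DSP-parallel structure):

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Each stage applies the butterfly (even + w*odd, even - w*odd)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)  # twiddle factor W_n^k
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out
```

The data dependence between successive stages visible here (each stage consumes the previous stage's outputs) is exactly the property the proposed data-independent restructuring targets.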

Biochemical Characteristics of Sorghum Flour Fermented and/or Supplemented with Chickpea Flour

Sorghum flour was supplemented with 15 and 30% chickpea flour. Sorghum flour and the supplemented blends were fermented at 35 °C for 0, 8, 16, and 24 h. Changes in pH, titratable acidity, total soluble solids, protein content, in vitro protein digestibility and amino acid composition were investigated during fermentation and/or after supplementation of sorghum flour with chickpea. The pH of the fermenting material decreased sharply, with a concomitant increase in titratable acidity. The total soluble solids remained unchanged with progressive fermentation time. The protein content of the sorghum cultivar was found to be 9.27% and that of chickpea 22.47%. The protein content of the sorghum cultivar after supplementation with 15 and 30% chickpea increased significantly (P ≤ 0.05) to 11.78 and 14.55%, respectively. The protein digestibility also increased after fermentation, from 13.35 to 30.59 and 40.56% for the two supplements, respectively. A further increase in protein content and digestibility was observed when supplemented and unsupplemented samples were fermented for different periods of time. Cooking of fermented samples slightly increased the protein content but decreased digestibility for both supplements. The amino acid content of fermented, and fermented and cooked, supplements was determined. Supplementation increased the lysine and threonine content, whereas cooking following fermentation decreased lysine, isoleucine, valine and the sulfur-containing amino acids.

Numerical Study of Microscale Gas Flow-Separation Using Explicit Finite Volume Method

Pressure-driven microscale gas flow-separation has been investigated by solving the compressible Navier-Stokes (NS) system of equations. A two-dimensional explicit finite volume (FV) compressible flow solver has been developed using the modified advection upstream splitting method (AUSM+) with no-slip/first-order Maxwell velocity-slip conditions to predict flow-separation behavior at micro dimensions. The effects of the scale factor of the flow geometry and of the gas species on microscale gas flow-separation have been studied. The intensity of flow-separation is reduced as the scale of the flow geometry decreases; in reduced dimensions, flow-separation may be entirely absent under flow conditions that produce it in larger geometries. The flow-separation patterns depend strongly on the properties of the medium under similar flow conditions.

A Method to Annotate Programs with High-Level Knowledge of Computation

When programming in languages such as C or Java, it is difficult to reconstruct the programmer's ideas from the program code alone. This occurs mainly because much of the thinking behind the implementation is not recorded in the code. For example, physical aspects of the computation such as spatial structures, activities, and the meaning of variables are not required as instructions to the computer and are often omitted, which makes later reconstruction of the original ideas difficult. AIDA, a multimedia programming language based on the cyberFilm model, addresses these problems by allowing the ideas behind programs to be described with advanced annotation methods as a natural extension of programming. In this paper, a development environment implementing the AIDA language is presented, with a focus on the annotation methods. In particular, an actual scientific numerical computation code is created and the effects of the annotation methods are analyzed.

Using Automatic Ontology Learning Methods in Human Plausible Reasoning Based Systems

Knowledge discovery from text and ontology learning are relatively new fields, yet they are already used in many areas, such as information retrieval (IR) and its related domains. Human Plausible Reasoning based (HPR) IR systems, for example, need an underlying knowledge base, which is currently built by hand. In this paper we propose an architecture based on ontology learning methods to generate the needed HPR knowledge base automatically.

A Direct Probabilistic Optimization Method for Constrained Optimal Control Problem

A new stochastic algorithm called Probabilistic Global Search Johor (PGSJ) has recently been established for the global optimization of nonconvex real-valued problems on finite-dimensional Euclidean space. In this paper we present a convergence guarantee for this algorithm in the probabilistic sense without imposing any additional conditions. We then use the algorithm together with the control parameterization technique to solve constrained optimal control problems. Numerical simulations are included to illustrate the efficiency and effectiveness of the PGSJ algorithm in the solution of control problems.

A Dictionary Learning Method Based on EMD for Audio Sparse Representation

Sparse representation has long been studied and several dictionary learning methods have been proposed. Dictionary learning methods are widely used because they are adaptive. In this paper, a new dictionary learning method for audio is proposed. Signals are first decomposed into Intrinsic Mode Functions (IMFs) of different orders using the Empirical Mode Decomposition (EMD) technique; these IMFs then form a learned dictionary. To reduce the size of the dictionary, the K-means method is applied to generate a K-EMD dictionary. Compared to the K-SVD algorithm, the K-EMD dictionary decomposes audio signals into structured components: the sparsity of the representation is increased by 34.4% and the SNR of the recovered audio signals by 20.9%.
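
The dictionary-reduction step can be sketched as plain k-means over candidate atoms (e.g., fixed-length IMF segments); the EMD decomposition itself and the paper's exact clustering details are omitted here:

```python
import numpy as np

def kmeans_dictionary(atoms, k, iters=50):
    """Cluster candidate atoms (rows) into k centroids forming a reduced
    dictionary; returned atoms are unit-normalized. Farthest-point
    initialization keeps the initial centroids spread out."""
    atoms = np.asarray(atoms, dtype=float)
    centers = [atoms[0]]
    for _ in range(k - 1):
        d = np.min([((atoms - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(atoms[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((atoms[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = atoms[labels == j].mean(0)
    norms = np.linalg.norm(centers, axis=1, keepdims=True)
    return centers / np.maximum(norms, 1e-12)
```

Replacing many near-duplicate IMF segments with k representative centroids is what shrinks the learned dictionary to the K-EMD dictionary.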

On Preprocessing of Speech Signals

Preprocessing of speech signals is a crucial step in the development of a robust and efficient speech or speaker recognition system. In this paper, we present some popular statistical outlier-detection based strategies to segregate the silence/unvoiced part of a speech signal from the voiced portion. The proposed methods are based on the 3σ edit rule and the Hampel identifier, and are compared with conventional techniques: (i) short-time energy (STE) based methods, and (ii) distribution-based methods. The results obtained after applying the proposed strategies to some test voice signals are encouraging.
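
Both detection rules are simple to state; the sketch below applies them to a generic sample sequence (the threshold t and the MAD scale factor are the usual textbook choices, not tuned values from the paper):

```python
import statistics

def three_sigma_outliers(x, t=3.0):
    """3-sigma edit rule: flag samples more than t standard deviations
    from the mean. Both statistics are themselves inflated by outliers,
    so large spikes can mask themselves."""
    mu = statistics.mean(x)
    sd = statistics.pstdev(x)
    return [abs(v - mu) > t * sd for v in x]

def hampel_outliers(x, t=3.0):
    """Hampel identifier: replace mean/std with median and scaled MAD,
    which are far less sensitive to the outliers being detected."""
    med = statistics.median(x)
    mad = statistics.median([abs(v - med) for v in x])
    s = 1.4826 * mad  # makes MAD a consistent sigma estimate for Gaussian data
    return [abs(v - med) > t * s for v in x]
```

On a short window containing one large spike, the Hampel identifier flags it while the 3σ rule can miss it entirely because the spike inflates the standard deviation — the masking effect that motivates the robust variant.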

Improvement in Power Transformer Intelligent Dissolved Gas Analysis Method

Non-destructive evaluation of the condition of in-service power transformers is necessary to avoid catastrophic failures, and dissolved gas analysis (DGA) is one of the important methods. Traditional, statistical and intelligent DGA approaches have been adopted for accurate classification of incipient fault sources. Unfortunately, there are often not enough faulty patterns for sufficient training of intelligent systems. Bootstrapping is expected to alleviate this shortcoming and to yield algorithms with better classification success rates. In this paper, the performance of artificial neural network (ANN), K-nearest neighbour and support vector machine methods using bootstrapped data is detailed; it is shown that while the success rate of the ANN algorithms improves remarkably, the other methods do not benefit as much from the enlarged data space. For assessment, two databases are employed: IEC TC10 and a dataset collected from data reported in papers. The high average test success rate demonstrates the improvement.
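
The bootstrapping step can be sketched as sampling the (pattern, label) pairs with replacement to enlarge the training set; the draw count and seed below are illustrative:

```python
import random

def bootstrap(samples, labels, n_draws, seed=0):
    """Create an enlarged training set by drawing n_draws indices with
    replacement, preserving the pairing between each gas-concentration
    pattern and its fault label."""
    rng = random.Random(seed)
    idx = [rng.randrange(len(samples)) for _ in range(n_draws)]
    return [samples[i] for i in idx], [labels[i] for i in idx]
```

Each classifier is then trained on its own bootstrapped replicate, which is how scarce faulty patterns are stretched into a larger training space.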

Performance Analysis of MT Evaluation Measures and Test Suites

Many measures have been proposed for machine translation evaluation (MTE), while little research has been done on the performance of MTE methods themselves. This paper is an effort toward MTE performance analysis. A general framework is proposed for describing the MTE measure and the test suite, covering whether the automatic measure is consistent with human evaluation, whether different results from various measures or test suites are consistent, whether the content of the test suite is suitable for performance evaluation, the degree of difficulty of the test suite and its influence on the MTE, the relationship between the significance of MTE results and the size of the test suite, and so on. To clarify the framework, several experimental results are analyzed involving human evaluation, BLEU evaluation, and typological MTE, and a visualization method is introduced for better presentation of the results. The study aims to aid test-suite construction and method selection in MTE practice.

Two DEA Based Ant Algorithms for CMS Problems

This paper considers a multi-criteria cell formation problem in a cellular manufacturing system (CMS). The two proposed objective functions simultaneously minimize the number of voids and the number of exceptional elements in cells. According to the literature this problem is NP-hard, so the optimal solution cannot be found by an exact method. We developed two ant algorithms, Ant Colony Optimization (ACO) and Max-Min Ant System (MMAS), based on Data Envelopment Analysis (DEA); both try to find efficient solutions based on the efficiency concept in DEA. Each artificial ant is treated as a decision-making unit (DMU): for each DMU we considered two inputs, the values of the objective functions, and one output, a constant value of one for all DMUs. To evaluate the performance of the proposed methods, we carried out an experimental design with empirical problems of three sizes (small, medium and large) and defined three criteria to determine which algorithm performs best.