Anodic Growth of Highly Ordered Titanium Oxide Nanotube Arrays: Effects of Critical Anodization Factors on their Photocatalytic Activity

Highly ordered arrays of TiO2 nanotubes (TiNTs) were grown vertically on Ti foil by electrochemical anodization. We controlled the lengths of these TiNTs from 2.4 to 26.8 μm by varying the water content (1, 3, and 6 wt%) of the ethylene glycol electrolyte containing 0.5 wt% NH4F and by anodizing at various applied voltages (20–80 V), durations (10–240 min), and temperatures (10–30 °C). For vertically aligned TiNT arrays, not only the tube length but also the geometry (wall thickness and surface roughness) and crystalline structure have a significant influence on photocatalytic activity. The optimal length for methylene blue (MB) photodegradation was 18 μm. Extending the TiNT length further lowered the photocatalytic activity, presumably because of the limited MB diffusion and light-penetration depth into the TiNT arrays. The results indicate that the maximum MB photodegradation rate was obtained for discrete anatase TiO2 nanotubes with thick, rough walls.

Parallel Discrete Fourier Transform for Fast FIR Filtering Based on Overlapped-save Block Structure

To realize a fast FIR filter with FFT algorithms, overlapped-save algorithms can be used to lower the computational complexity and achieve the desired real-time processing. As the length of the input block increases to improve efficiency, the larger volume of zero padding greatly increases the FFT computation length. In this paper, we use overlapped block digital filtering to construct a parallel structure. As long as the down-sampling (or up-sampling) factor is an exact multiple of the length of the impulse response of the FIR filter, we can process the input block with a parallel structure and thus obtain a low-complexity fast FIR filter based on overlapped-save algorithms. With a long filter, the performance and throughput of the digital filtering system are also greatly enhanced.
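As background to the overlapped-save idea used above, the following minimal Python sketch (not the parallel structure proposed in the paper) shows FFT-based block filtering; the block length B and the filter taps are arbitrary placeholders.

```python
import numpy as np

def overlap_save_fir(x, h, B=1024):
    """Filter x with FIR h via overlapped-save FFT blocks.

    Each block yields B new output samples; the FFT length is
    B + len(h) - 1 rounded up to a power of two.
    """
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    M = len(h)
    N = 1 << int(np.ceil(np.log2(B + M - 1)))      # FFT length
    H = np.fft.rfft(h, N)                          # filter spectrum, computed once
    xp = np.concatenate([np.zeros(M - 1), x])      # M-1 samples of history for the first block
    y = np.empty(len(x))
    for start in range(0, len(x), B):
        block = xp[start:start + N]
        if len(block) < N:                         # zero-pad the final block
            block = np.pad(block, (0, N - len(block)))
        yb = np.fft.irfft(np.fft.rfft(block, N) * H, N)
        take = min(B, len(x) - start)
        y[start:start + take] = yb[M - 1:M - 1 + take]   # discard circularly wrapped samples
    return y

# Usage (equivalent to np.convolve(x, h)[:len(x)]):
# y = overlap_save_fir(np.random.randn(10000), np.ones(64) / 64, B=4096)
```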

Microstructure Parameters of a Super-Ionic Sample (CsAg2I3)

A sample of CsAg2I3 was prepared by solid-state reaction. The microstructure parameters of this sample were then determined using the wide-angle X-ray scattering (WAXS) method, and the cell parameters of the crystal structure were refined using the CHEKCELL program. The analysis shows that the intrinsic lattice strain of the sample is very small and that the crystallite size is on the order of 559 Å.
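The abstract does not state which line-profile analysis was used to obtain the crystallite size; for reference, the standard Scherrer relation reads

\[
D = \frac{K\lambda}{\beta\cos\theta},
\]

where \(D\) is the crystallite size, \(K \approx 0.9\) the shape factor, \(\lambda\) the X-ray wavelength, \(\beta\) the line broadening at half maximum (in radians) after correcting for instrumental broadening, and \(\theta\) the Bragg angle.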

A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the Circular Hough Transform (CHT), and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are exploited for circular object detection. A sparse matrix technique is used to perform the CHT; since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they save matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of the circumference pixels are identified using a raster scan algorithm that exploits the geometrical symmetry property. After detecting circular objects, the proposed method uses the texture on the surface of the coins, the texton, a property unique to each coin that refers to the fundamental micro-structure in generic natural images. The method has been tested on several real-world images, including coin and non-coin images, and its performance is also evaluated in terms of noise-withstanding capability.
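The eigenvalue test for circularity can be illustrated with a short, hedged Python sketch: for a circular boundary the two eigenvalues of the coordinate covariance matrix are nearly equal, while elongated shapes give a large ratio between them. The threshold and decision rule below are illustrative only, not the exact criterion of the paper.

```python
import numpy as np

def covariance_eigenvalues(edge_pixels):
    """Return the small and large eigenvalues of the 2x2 covariance
    matrix of a connected set of edge-pixel coordinates.

    edge_pixels: array of shape (n, 2) holding (row, col) positions.
    """
    pts = np.asarray(edge_pixels, dtype=float)
    cov = np.cov(pts, rowvar=False)                 # 2x2 covariance of the coordinates
    lam_small, lam_large = np.linalg.eigvalsh(cov)  # ascending order
    return lam_small, lam_large

def looks_circular(edge_pixels, ratio_threshold=0.9):
    """A circle's edge points spread equally in all directions, so the two
    eigenvalues are nearly equal; elongated shapes give a small ratio."""
    lam_small, lam_large = covariance_eigenvalues(edge_pixels)
    return lam_large > 0 and (lam_small / lam_large) >= ratio_threshold

# Example: points sampled on a circle of radius 50
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([50 * np.cos(theta), 50 * np.sin(theta)], axis=1)
print(looks_circular(circle))   # True
```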

Experimental and Numerical Study of A/C Outlets and Their Impact on Room Airflow Characteristics

This paper presents an experimental and numerical study of the airflow characteristics of vortex, round, and square ceiling diffusers and their effect on thermal comfort in a ventilated room. Three thermal comfort criteria, namely the Mean Age of the Air (MAA), the ventilation effectiveness (E), and the Effective Draft Temperature (EDT), are used to predict the thermal comfort zone inside the room. In the experimental work, a sub-scale room was set up to measure the temperature field. In the numerical analysis, unstructured grids were used to discretize the computational domain, and the conservation equations were solved using the FLUENT commercial flow solver. The code was validated by comparing the numerical results obtained from three different turbulence models with the available experimental data. The comparison between the various numerical models shows that the standard k-ε turbulence model can simulate these cases successfully. After validation of the code, the effect of supply air velocity on the flow and thermal fields, and hence on thermal comfort, was investigated. The results show that the pressure coefficient created by the square diffuser is 1.5 times greater than that created by the vortex diffuser, while the velocity decay coefficient is nearly the same for the square and round diffusers and is 2.6 times greater than that for the vortex diffuser.
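For reference, the Effective Draft Temperature used above is commonly defined (in the SI form of the ASHRAE formulation; the paper does not restate it) as

\[
\mathrm{EDT} = (t_x - t_c) - 8\,(V_x - 0.15),
\]

where \(t_x\) and \(V_x\) are the local air temperature (°C) and air speed (m/s), \(t_c\) is the mean room temperature (°C), and 0.15 m/s is the reference draft velocity; points with EDT between roughly −1.7 and +1.1 K at air speeds below 0.35 m/s are conventionally treated as comfortable.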

Synthesis and Thermoelectric Behavior in Nanoparticles of Doped Co Ferrites

Samples of CoFe2-xCrxO4, where x varies from 0.0 to 0.5, were prepared by the co-precipitation route. These samples were sintered at 750 °C for 2 hours and characterized by X-ray diffraction (XRD) at room temperature. The XRD patterns of the samples confirmed the FCC spinel structure. The crystallite sizes were calculated from the most intense peak using the Scherrer formula and lie in the range of 37–60 nm. The lattice parameter was found to decrease upon substitution of Cr. DC electrical resistivity was measured as a function of temperature, and the room-temperature thermoelectric power was measured for the prepared samples. The magnitude of the Seebeck coefficient depends on the composition and resistivity of the samples.

Genetic Algorithm Based Approach for Actuator Saturation Effect on Nonlinear Controllers

In real applications of active control systems to mitigate the response of structures subjected to severe external excitations, such as earthquake and wind-induced vibrations, the capacity of the actuators is limited and hence they saturate. Therefore, in designing controllers for linear and nonlinear structures under severe earthquakes, actuator saturation should be considered as a constraint. In this paper, the optimal design of active controllers for nonlinear structures considering actuator saturation is studied. To this end, a method is proposed based on an optimization problem that minimizes the maximum displacement of the structure as the objective while the limited actuator capacity is treated as a constraint. To evaluate the effectiveness of the proposed method, a single-degree-of-freedom (SDF) structure with bilinear hysteretic behavior has been simulated under white-noise ground accelerations of different amplitudes. An active tendon control mechanism, comprising pre-stressed tendons and an actuator, and an instantaneous optimal control algorithm based on the extended nonlinear Newmark method have been used as the control mechanism and algorithm. To enhance the efficiency of the controllers, the weights corresponding to displacement, velocity, acceleration, and control force in the performance index are found using a Distributed Genetic Algorithm (DGA). The results show that the proposed method is effective in accounting for actuator saturation when designing optimal controllers for nonlinear frames. They also show that the actuator capacity and the average required control force are two important factors in designing nonlinear controllers that account for actuator saturation.
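A much-simplified sketch of how an actuator-saturation constraint enters the simulation loop is given below (the linear SDOF model, explicit Euler integration, and state-feedback gains are simplifications; the paper uses a bilinear hysteretic model, the extended nonlinear Newmark method, and an instantaneous optimal control algorithm):

```python
import numpy as np

def saturate(u_desired, u_max):
    """Clip the commanded control force to the actuator capacity."""
    return np.clip(u_desired, -u_max, u_max)

def simulate_sdof_with_saturation(ground_acc, dt, m, c, k, gain, u_max):
    """Minimal linear SDOF simulation with a saturated state-feedback force.

    ground_acc: array of ground acceleration samples
    gain: feedback gains (g_x, g_v) acting on displacement and velocity
    """
    x, v = 0.0, 0.0
    peak_disp = 0.0
    for ag in ground_acc:
        u = saturate(-(gain[0] * x + gain[1] * v), u_max)   # control force after saturation
        a = (-c * v - k * x + u) / m - ag                    # equation of motion
        v += a * dt                                          # explicit Euler step (illustrative only)
        x += v * dt
        peak_disp = max(peak_disp, abs(x))
    return peak_disp

# A GA would search over the feedback gains (or performance-index weights)
# to minimize peak_disp subject to the fixed capacity u_max.
```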

Neuro-Fuzzy System for Equalization of Channel Distortion

In this paper, the application of a neuro-fuzzy system for the equalization of channel distortion is considered. The structure and operation algorithm of the neuro-fuzzy equalizer are described. Using a neuro-fuzzy equalizer in digital signal transmission reduces the parameter-training time and the complexity of the network. A simulation of the neuro-fuzzy equalizer is performed, and the results obtained confirm the efficiency of applying neuro-fuzzy technology to channel equalization.

Parallel-computing Approach for FFT Implementation on Digital Signal Processor (DSP)

An efficient parallel form on a digital signal processor can improve algorithm performance. The butterfly structure plays an important role in the fast Fourier transform (FFT) because its symmetric form is suitable for hardware implementation. Although the structure is symmetric, performance is reduced by the data-dependent flow characteristic. Even though recent research on novel memory reference reduction methods (NMRRM) for the FFT focuses on reducing memory references to the twiddle factors, the data-dependent property still exists. In this paper, we propose a parallel-computing approach to FFT implementation on a digital signal processor (DSP) that is based on a data-independent property and still retains the low-memory-reference property. The proposed method combines the final two steps of the NMRRM FFT into a novel data-independent structure and is well suited to multi-operation-unit digital signal processors and dual-core systems. We have applied the proposed low-memory-reference radix-2 FFT method on a TI TMS320C64x DSP. Experimental results show that the method reduces clock cycles by 33.8% compared with the NMRRM FFT implementation while keeping the low-memory-reference property.
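For reference, the radix-2 decimation-in-time butterfly that the abstract refers to can be sketched in a few lines of Python (this shows only the textbook recursion, not the proposed data-independent, low-memory-reference structure):

```python
import numpy as np

def radix2_dit_fft(x):
    """Recursive radix-2 decimation-in-time FFT (length must be a power of two).

    The final line is the butterfly: each pair of half-length results is
    combined with a single twiddle-factor multiplication.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = radix2_dit_fft(x[0::2])
    odd = radix2_dit_fft(x[1::2])
    twiddles = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    t = twiddles * odd
    return np.concatenate([even + t, even - t])   # butterfly: sum and difference

# Sanity check against NumPy's FFT
sig = np.random.randn(16)
assert np.allclose(radix2_dit_fft(sig), np.fft.fft(sig))
```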

A Method to Annotate Programs with High-Level Knowledge of Computation

When programming in languages such as C or Java, it is difficult to reconstruct the programmer's ideas from the program code alone. This occurs mainly because much of the programmer's thinking behind the implementation is not recorded in the code. For example, physical aspects of the computation, such as spatial structures, activities, and the meaning of variables, are not required as instructions to the computer and are often omitted, which makes the later reconstruction of the original ideas difficult. AIDA, a multimedia programming language based on the cyberFilm model, can address these problems by allowing the ideas behind programs to be described using advanced annotation methods as a natural extension of programming. In this paper, a development environment that implements the AIDA language is presented, with a focus on the annotation methods. In particular, an actual scientific numerical computation code is created and the effects of the annotation methods are analyzed.

A Competitiveness Analysis of the Convention Tourism of China's Macao Special Administrative Region

This paper explores the use of Importance-Performance Analysis in assessing the competitiveness of China's Macao Special Administrative Region as a city for international conventions. The determinants of destination choice for convention tourists are grouped under three factors, namely the convention factor, the city factor, and the tourism factor. Attributes of these three factors were studied through a survey of convention participants and exhibitors in Macao SAR. The results indicate that the city boasts strong traditional tourist attractions and infrastructure but is deficient in specialized convention experts and promotion mechanisms. A reflection on the findings suggests that an urban city such as Macao SAR can co-develop its convention and traditional tourism for a synergistic effect. With proper planning and co-ordination, both areas of the city's tourism industry will grow as they feed off each other.

A Dictionary Learning Method Based On EMD for Audio Sparse Representation

Sparse representation has long been studied, and several dictionary learning methods have been proposed. Dictionary learning methods are widely used because they are adaptive. In this paper, a new dictionary learning method for audio is proposed. Signals are first decomposed into Intrinsic Mode Functions (IMFs) of different orders using the Empirical Mode Decomposition (EMD) technique, and these IMFs form a learned dictionary. To reduce the size of the dictionary, the K-means method is applied to it to generate a K-EMD dictionary. Compared with the K-SVD algorithm, the K-EMD dictionary decomposes audio signals into structured components; the sparsity of the representation is thereby increased by 34.4% and the SNR of the recovered audio signals by 20.9%.
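A hedged sketch of the K-EMD idea, as it might be prototyped in Python, is shown below; it assumes the third-party PyEMD package and scikit-learn are available and uses placeholder frame and atom counts. The exact training pipeline of the paper may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from PyEMD import EMD   # assumed to be installed (PyEMD package)

def k_emd_dictionary(frames, n_atoms=64):
    """Build a reduced dictionary from EMD components of audio frames.

    frames: array of shape (n_frames, frame_len) of audio segments.
    Each frame is decomposed into IMFs; all IMFs are pooled and the
    K-means cluster centers are kept as the dictionary atoms.
    """
    emd = EMD()
    imfs = []
    for frame in frames:
        for imf in emd(frame):                  # IMFs of this frame
            norm = np.linalg.norm(imf)
            if norm > 0:
                imfs.append(imf / norm)         # unit-norm atoms
    imfs = np.vstack(imfs)
    km = KMeans(n_clusters=n_atoms, n_init=10, random_state=0).fit(imfs)
    atoms = km.cluster_centers_
    return atoms / np.linalg.norm(atoms, axis=1, keepdims=True)

# Usage: D = k_emd_dictionary(framed_audio, n_atoms=128)
# A sparse coder (e.g., OMP) can then represent new frames over D.
```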

Powerful Tool to Expand Business Intelligence: Text Mining

With the extensive inclusion of documents, especially text, in business systems, data mining does not cover the full scope of Business Intelligence: it cannot extract useful details from large collections of unstructured and semi-structured written material in natural language. The most pressing issue is to draw the potential business intelligence out of text. To gain competitive advantages for the business, it is necessary to develop a new, powerful tool, text mining, to expand the scope of business intelligence. In this paper, we work out the strong points of text mining in extracting business intelligence from the huge amount of textual information sources within business systems. We apply text mining to each stage of Business Intelligence systems to show that it is a powerful tool for expanding the scope of BI. After reviewing basic definitions and some related technologies, we discuss their relationship to, and the benefits they bring to, text mining. Some examples and applications of text mining are also given. The motivation behind this work is to develop a new approach to effective and efficient textual information analysis, so that the scope of Business Intelligence can be expanded with this powerful tool.

Pollution Induced Community Tolerance (PICT) of Microorganisms in Soil Incubated with Different Levels of Pb

Soil microbial activity is adversely affected by pollutants such as heavy metals, antibiotics, and pesticides. Organic amendments, including sewage sludge, municipal compost, and vermicompost, have recently been used to improve soil structure and fertility, but these materials contain heavy metals, including Pb, Cd, Zn, Ni, and Cu, that are toxic to soil microorganisms and may lead to the emergence of more tolerant microbes. Among these metals, Pb is the most abundant and has the greatest negative effect on soil microbial ecology. In this study, Pb levels of 0, 100, 200, 300, 400, and 500 mg Pb [as Pb(NO3)2] per kg of soil were added to pots containing 2 kg of a loamy soil and incubated for 6 months at 25 °C at a soil moisture of −0.3 MPa. The dehydrogenase activity of the soil, as a measure of microbial activity, was determined 15, 30, 90, and 180 days after incubation, with triphenyl tetrazolium chloride (TTC) used as the electron acceptor in this assay. PICT (IC50 values) was calculated for each Pb level and incubation time. Soil microbial activity decreased with increasing Pb level during the first 30 days of incubation, but induced tolerance appeared on day 90 and thereafter. During 90 to 180 days of incubation, PICT developed gradually with increasing Pb level up to 200 mg kg-1, and the rate of enhancement was steeper at higher concentrations.
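The abstract does not state how the IC50 values were derived; as an illustration only, a common approach is to fit a log-logistic dose-response curve to enzyme activity versus Pb dose, as in the following Python sketch (the activity numbers are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, top, ic50, slope):
    """Three-parameter log-logistic dose-response curve."""
    dose = np.maximum(dose, 1e-12)           # avoid 0**negative during fitting
    return top / (1.0 + (dose / ic50) ** slope)

def estimate_ic50(doses, activities):
    """Fit dehydrogenase activity vs. Pb dose and return the estimated IC50.

    doses: Pb additions (mg kg-1); activities: measured enzyme activity.
    """
    doses = np.asarray(doses, dtype=float)
    activities = np.asarray(activities, dtype=float)
    p0 = [activities.max(), np.median(doses[doses > 0]), 1.0]   # rough initial guess
    (top, ic50, slope), _ = curve_fit(log_logistic, doses, activities, p0=p0, maxfev=10000)
    return ic50

# Hypothetical example with the Pb levels used in the study:
doses = [0, 100, 200, 300, 400, 500]
activities = [100, 85, 62, 45, 33, 25]       # illustrative activity values only
print(f"IC50 ~ {estimate_ic50(doses, activities):.0f} mg Pb kg-1 soil")
```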

Biologically Inspired Artificial Neural Cortex Architecture and its Formalism

The paper attempts to elucidate the columnar structure of the cortex by answering the following questions. (1) Why do cortical neurons with similar interests tend to be vertically arrayed, forming what are known as cortical columns? (2) How can the cortex as a whole be described in concise mathematical terms? (3) How can efficient digital models of the cortex be designed?

mCRM's New Opportunities for Customer Satisfaction

This paper addresses a new challenge: customer satisfaction in mobile customer relationship management (mCRM). It presents a conceptualization of mCRM in terms of its unique customer-satisfaction characteristics and develops an empirical framework for customer satisfaction in mCRM. A single-case study is applied as the methodology. In order to gain an overall view of the empirical case, this investigation draws on otherwise inaccessible but important company information. Interviews with the main informants of the company are the key data source, through which the issues are identified and the proposed framework is built. The study supports the development of customer satisfaction in mCRM, links the theoretical framework to practice, and provides directions for future research. The paper is therefore useful to industry, as it helps firms understand how customer satisfaction changes the mCRM structure and increases competitive advantage. Finally, it contributes to practice by linking the theoretical framework of customer satisfaction in mCRM to a practical real case.

Educational Robotics: Constructivism and Modeling of Robots Using Reverse Engineering

This project describes the modeling of various mechatronic architectures, specifically robot morphologies, in an educational environment. Each structure, developed by pre-school, primary, and secondary students, was created using the concept of reverse engineering in a constructivist environment, to be later integrated into educational software that promotes the teaching of educational robotics in a virtual and economical environment.

Unsupervised Texture Classification and Segmentation

An unsupervised classification algorithm is derived by modeling the observed data as a mixture of several mutually exclusive classes, each described by linear combinations of independent non-Gaussian densities. The algorithm estimates the data density in each class using parametric nonlinear functions that fit the non-Gaussian structure of the data, which improves classification accuracy compared with standard Gaussian mixture models. When applied to textures, the algorithm can learn basis functions for images that capture the statistically significant structure intrinsic to the images. We apply this technique to the problem of unsupervised texture classification and segmentation.
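One common formulation of such a mixture of independent non-Gaussian sources (an ICA-mixture-style model, offered here only as a hedged illustration of how class posteriors could be evaluated, not necessarily the paper's exact model) is sketched below:

```python
import numpy as np

def class_log_likelihood(x, W, bias, shape=1.0):
    """Log-likelihood of sample x under one class of an ICA-style mixture.

    The class is parameterized by an unmixing matrix W and a bias vector;
    the sources s = W (x - bias) are modeled as independent generalized
    Gaussians with shape parameter `shape` (shape=1 gives a Laplacian,
    i.e., a sparse, super-Gaussian source prior).
    """
    s = W @ (x - bias)
    log_prior = -np.sum(np.abs(s) ** shape)   # normalization constant omitted:
                                              # identical across classes when shape is shared
    _, logdet = np.linalg.slogdet(W)          # change-of-variables term
    return log_prior + logdet

def classify(x, params, priors):
    """Assign x to the class with the highest posterior (hard assignment)."""
    scores = [np.log(p) + class_log_likelihood(x, W, b)
              for (W, b), p in zip(params, priors)]
    return int(np.argmax(scores))
```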

Numerical Simulation of a Conventional Heat Pipe

The steady incompressible flow has been solved in cylindrical coordinates in both the vapour region and the wick structure. The governing equations in the vapour region are the continuity, Navier-Stokes, and energy equations, which have been solved using the SIMPLE algorithm. To study the effect of parameter variation on heat pipe operation, a benchmark case was chosen and the effect of changing one parameter was analyzed while the others were kept fixed.
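For reference, the steady incompressible governing equations mentioned above can be written compactly in vector form (the paper solves their cylindrical-coordinate counterparts):

\[
\nabla \cdot \mathbf{u} = 0, \qquad
\rho\,(\mathbf{u} \cdot \nabla)\,\mathbf{u} = -\nabla p + \mu \nabla^{2}\mathbf{u}, \qquad
\rho c_{p}\,(\mathbf{u} \cdot \nabla) T = k \nabla^{2} T,
\]

where \(\mathbf{u}\) is the velocity, \(p\) the pressure, \(T\) the temperature, \(\rho\) the density, \(\mu\) the dynamic viscosity, \(c_p\) the specific heat, and \(k\) the thermal conductivity.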

Agent-Based Simulation and Analysis of Network-Centric Air Defense Missile Systems

Network-Centric Air Defense Missile Systems (NCADMS) represent an advanced development of air defense missile systems and have become a major research issue in the military domain. Because knowledge of and experience with NCADMS are lacking, modeling and simulation is an effective approach for operational analysis, compared with equation-based approaches. However, the complex dynamic interactions among entities and the flexible architectures of NCADMS impose new requirements and challenges on the simulation framework and models. Agent-Based Simulation (ABS) explicitly addresses the modeling of the behaviors of heterogeneous individuals: agents can sense and understand their surroundings, make decisions, act on the environment, and cooperate dynamically with others to perform the tasks assigned to them. ABS therefore proves to be an effective approach for exploring the new operational characteristics emerging in NCADMS. In this paper, based on an analysis of the network-centric architecture and new cooperative engagement strategies of NCADMS, an agent-based simulation framework was designed by extending the framework of the System Effectiveness Analysis Simulation (SEAS). The framework specifies the components, the relationships and interactions between them, and the structure and behavior rules of an agent in NCADMS. Based on scenario simulations, the information and decision superiority and the operational advantages of NCADMS were analyzed, and some suggestions were provided for its future development.
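A generic sense-decide-act agent loop, of the kind that underlies such agent-based simulations, is sketched below in Python; the agent names and the decision rule are hypothetical and do not reflect the SEAS-based framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Generic sense-decide-act agent used in agent-based simulation."""
    name: str
    state: dict = field(default_factory=dict)

    def sense(self, environment):
        """Observe the part of the environment visible to this agent."""
        return environment.get(self.name, {})

    def decide(self, observation):
        """Map an observation to an action (placeholder rule)."""
        return {"engage": observation.get("threat_detected", False)}

    def act(self, action, environment):
        """Write the chosen action back into the shared environment."""
        environment.setdefault("actions", {})[self.name] = action

def step(agents, environment):
    """One simulation tick: every agent senses, decides, and acts."""
    for agent in agents:
        obs = agent.sense(environment)
        agent.act(agent.decide(obs), environment)

# Usage: run a few ticks with two hypothetical sensor/launcher agents
env = {"radar-1": {"threat_detected": True}, "launcher-1": {}}
units = [Agent("radar-1"), Agent("launcher-1")]
for _ in range(3):
    step(units, env)
print(env["actions"])
```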