Hydrogen Generation by Accelerating Aluminum Corrosion in Water with Alumina

For relatively small particles of aluminum, a significant fraction (more than 5%) is observed to corrode before passivation occurs at moderate temperatures (above 50 °C) in de-ionized water within one hour. Physical contact with alumina powder results in a significant increase in both the rate of corrosion and the extent of corrosion before passivation. While the resulting release of hydrogen gas could be of commercial interest for portable hydrogen supply systems, the fundamental aspects of Al corrosion acceleration in the presence of dispersed alumina particles are equally important. This paper investigates the effects of various amounts of alumina on the corrosion rate of aluminum powders in water, and the effect of multiple additions of aluminum into a single reactor.
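
For orientation, the theoretical hydrogen yield of the aluminum-water reaction follows directly from its stoichiometry. The short sketch below (illustrative background, not taken from the paper) computes the yield per gram of aluminum:

```python
# Theoretical H2 yield of the Al-water reaction (illustrative, not from the paper):
# 2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2, i.e. 1.5 mol H2 per mol Al.
M_AL = 26.98         # g/mol, molar mass of aluminum
V_MOLAR = 22.414     # L/mol, molar volume of an ideal gas at STP (0 degC, 1 atm)

mol_h2_per_g_al = 1.5 / M_AL
litres_h2_per_g_al = mol_h2_per_g_al * V_MOLAR
print(f"{litres_h2_per_g_al:.2f} L of H2 (STP) per gram of Al")  # ~1.25 L/g
```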

A Robust Extrapolation Method for Curtailed Aperture Reconstruction in Acoustic Imaging

Sound source localization with microphone arrays is a challenging task in digital signal processing. Discrete Fourier transform (DFT) based near-field acoustical holography (NAH) is an important acoustical technique for sound source localization and provides an efficient solution to this ill-posed problem. In practice, however, the use of a small curtailed aperture, and the significant spectral leakage it causes, prevents the DFT from reconstructing the active region of sound (AROS) effectively, especially near the edges of the aperture. In this paper, we highlight the fundamental problems of DFT-based NAH and address the spectral leakage effect by extrapolation based on linear predictive coding and 2D Tukey windowing. The approach has been tested on the localization of single- and multi-point sound sources. We observe that incorporating the extrapolation technique increases spatial resolution and localization accuracy and reduces spectral leakage when a small curtailed aperture with a low number of sensors is used.
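
As background, the windowing step can be sketched as follows; this is a minimal illustration of 2D Tukey tapering before the spatial DFT (the LPC extrapolation step is omitted, and the grid size and taper fraction are assumptions, not taken from the paper):

```python
# Build a 2D Tukey window as the outer product of two 1D Tukey windows and taper
# the measured hologram before the spatial DFT to reduce edge-induced leakage.
import numpy as np
from scipy.signal.windows import tukey

ny, nx = 32, 32                         # aperture grid (hypothetical size)
pressure = np.random.randn(ny, nx)      # stand-in for the measured hologram
alpha = 0.5                             # taper fraction of the Tukey window
w2d = np.outer(tukey(ny, alpha), tukey(nx, alpha))
spectrum = np.fft.fft2(pressure * w2d)  # wavenumber spectrum with reduced leakage
```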

Trispectral Analysis of Voiced Sounds in Defective Audition and Tracheotomy Cases

This paper presents the cepstral and trispectral analysis of speech signals produced by normal men, men with defective audition (deaf, profoundly deaf) and men affected by tracheotomy; the trispectral analysis is based on parametric (autoregressive, AR) methods using the fourth-order cumulant. These analyses are used to detect and compare the pitches and the formants of the corresponding voiced sounds (vowels /a/, /i/ and /u/). The first results appear promising: after several experiments, it seems that there is no deformation of the spectrum, as one could have supposed at the outset. However, the pathologies do influence the two characteristics: defective audition affects the formants, whereas tracheotomy affects the fundamental frequency (pitch).
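
For readers unfamiliar with the cepstral side of the analysis, the sketch below shows the standard real-cepstrum pitch estimator (a textbook technique; the frame length and search band are assumptions, not the authors' settings):

```python
# Cepstral pitch estimation: the real cepstrum of a voiced frame shows a peak at
# the quefrency corresponding to the pitch period.
import numpy as np

def cepstral_pitch(frame, fs, f_min=60.0, f_max=400.0):
    """Estimate the pitch (Hz) of a voiced frame via the real cepstrum."""
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
    q_lo, q_hi = int(fs / f_max), int(fs / f_min)   # quefrency search band
    q_peak = q_lo + np.argmax(cepstrum[q_lo:q_hi])
    return fs / q_peak

fs = 16000
t = np.arange(1024) / fs
frame = np.sign(np.sin(2 * np.pi * 120 * t))  # crude 120 Hz voiced-like test signal
print(f"estimated pitch: {cepstral_pitch(frame, fs):.1f} Hz")
```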

Assessing the Relation between Theory of Multiple Algebras and Universal Algebras

In this study, we examine multiple algebras and the algebraic structures derived from them and, by stating a characterization theorem for multiple algebras, we show that the theory of multiple algebras is a natural extension of the theory of universal algebras. We also treat equivalence relations on multiple algebras for which the quotient constructed modulo them is a universal algebra, and study the fundamental relation and the fundamental algebra in question.
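
For orientation (this reflects the standard construction in the multialgebra literature, not necessarily this paper's exact notation), the fundamental relation on a multialgebra $A$ can be described as

\[
\alpha^{*} = \text{the transitive closure of } \alpha, \qquad a \,\alpha\, b \iff \{a, b\} \subseteq p \text{ for some term function value } p \text{ on } A,
\]

and the quotient $A/\alpha^{*}$, equipped with the induced single-valued operations, is a universal algebra: the fundamental algebra of $A$.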

Turbulent Mixing and its Effects on Thermal Fatigue in Nuclear Reactors

The turbulent mixing of coolant streams of different temperature and density can cause severe temperature fluctuations in the piping systems of nuclear reactors. Under certain periodic contraction cycles, these conditions lead to thermal fatigue. The resulting aging effect prompts investigation of how the mixing of flows over a sharp temperature/density interface evolves. To study the fundamental turbulent mixing phenomena in the presence of density gradients, isokinetic (shear-free) mixing experiments are performed in a square channel at Reynolds numbers ranging from 2,500 to 60,000. Sucrose is used to create the density difference. A Wire Mesh Sensor (WMS) is used to determine the concentration map of the flow in the cross section. The mean interface width as a function of velocity, density difference and distance from the mixing point is analyzed using traditional methods developed for atmospheric/oceanic stratification analyses. A definition of the mixing layer thickness more appropriate to thermal fatigue, based on mixedness, is devised. This definition shows that the thermal fatigue risk assessed using simple mixing layer growth can be misleading, and why an approach that separates the effects of large-scale (turbulent) and small-scale (molecular) mixing is necessary.
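
As a concrete illustration of an interface width measure of the traditional kind (a common convention; the thresholds and profile here are assumptions, not the experiment's actual post-processing):

```python
# Estimate a mixing layer thickness from a mean concentration profile c(y) as the
# distance between the 5% and 95% crossings of the normalized profile.
import numpy as np

def interface_width(y, c, lo=0.05, hi=0.95):
    """Width of the region where the normalized mean concentration lies in [lo, hi]."""
    c_norm = (c - c.min()) / (c.max() - c.min())
    y_lo = np.interp(lo, c_norm, y)  # assumes c_norm increases monotonically with y
    y_hi = np.interp(hi, c_norm, y)
    return abs(y_hi - y_lo)

y = np.linspace(-1.0, 1.0, 200)       # wall-normal coordinate (arbitrary units)
c = 0.5 * (1 + np.tanh(y / 0.2))      # synthetic tanh-shaped mean profile
print(f"interface width: {interface_width(y, c):.3f}")
```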

Performance of Histogram-Based Skin Colour Segmentation for Arms Detection in Human Motion Analysis Application

Arms detection is one of the fundamental problems in human motion analysis applications. The arms are considered the most challenging body parts to detect, since their pose and speed vary across image sequences. Moreover, the arms are often occluded by other body parts such as the head and torso. In this paper, histogram-based skin colour segmentation is proposed to detect the arms in image sequences. Six colour spaces, namely RGB, rgb, HSI, TSL, SCT and CIELAB, are evaluated to determine the best colour space for this segmentation procedure. The evaluation is divided into three categories: single colour component, colour without luminance, and colour with luminance. Performance is measured using True Positive (TP) and True Negative (TN) rates on 250 images with manual ground truth. The best colour space is selected based on the highest TN value, followed by the highest TP value.
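
A minimal sketch of the histogram-based segmentation idea follows (the binning, chrominance choice and threshold are assumptions for illustration, not the paper's exact procedure):

```python
# Build a 2D chrominance histogram from labelled skin pixels, then classify image
# pixels by the back-projected (normalized) bin count.
import numpy as np

BINS = 32  # histogram resolution per channel (hypothetical choice)

def fit_skin_histogram(skin_pixels):
    """skin_pixels: (N, 2) array of chrominance pairs scaled to [0, 1]."""
    hist, _, _ = np.histogram2d(skin_pixels[:, 0], skin_pixels[:, 1],
                                bins=BINS, range=[[0, 1], [0, 1]])
    return hist / hist.max()  # normalize bin counts to [0, 1]

def segment(image_chroma, hist, threshold=0.1):
    """image_chroma: (H, W, 2) chrominance image; returns a boolean skin mask."""
    idx = np.clip((image_chroma * BINS).astype(int), 0, BINS - 1)
    return hist[idx[..., 0], idx[..., 1]] > threshold
```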

Mobile Phone as a Tool for Data Collection in Field Research

The need for accurate and timely field data is shared by organizations engaged in fundamentally different activities, from public services to commercial operations. There are basically three major components in the qualitative research process: data collection, interpretation and organization of data, and the analytic process. Significant technological advances have been made in mobile devices (mobile phones, PDAs, tablets, laptops, etc.), resources that can potentially be applied to the data collection activity of field research in order to improve the process. This paper presents and discusses the main features of a mobile phone based solution for field data collection, composed of three modules: a survey editor, a server web application and a client mobile application. The data gathering process begins with the survey creation module, which enables the production of tailored questionnaires. The field workforce receives the questionnaire(s) on their mobile phones, collects the interview responses and sends them back to a server for immediate analysis.
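
To make the module interplay concrete, here is a hypothetical example of the kind of data the three modules might exchange (all field names are invented for illustration; the paper does not specify a format):

```python
import json

# Produced by the survey editor: a tailored questionnaire definition.
questionnaire = {
    "survey_id": "household-2024",
    "questions": [
        {"id": "q1", "type": "choice", "text": "Main water source?",
         "options": ["piped", "well", "other"]},
        {"id": "q2", "type": "number", "text": "Household size?"},
    ],
}

# Filled in on the client mobile application during an interview.
response = {"survey_id": "household-2024", "answers": {"q1": "well", "q2": 4}}

# Serialized and sent back to the server web application for immediate analysis.
print(json.dumps(response))
```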

A New Measure of Herding Behavior: Derivation and Implications

If price and quantity are the fundamental building blocks of any theory of market interactions, the importance of trading volume in understanding the behavior of financial markets is clear. However, while many economic models of financial markets have been developed to explain the behavior of prices (predictability, variability, and information content), far less attention has been devoted to explaining the behavior of trading volume. In this article, we hope to expand our understanding of trading volume by developing a new measure of herding behavior based on the cross-sectional dispersion of volume betas. We apply our measure to the Toronto Stock Exchange using monthly data from January 2000 to December 2002. Our findings show that the herd phenomenon consists of three essential components: stationary herding, intentional herding and feedback herding.
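
For intuition, a cross-sectional dispersion measure of this kind typically takes the form (the paper's exact specification may differ)

\[
H_t = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\beta^{V}_{i,t} - \bar{\beta}^{V}_{t}\right)^{2}},
\]

where $\beta^{V}_{i,t}$ is the volume beta of stock $i$ in period $t$ and $\bar{\beta}^{V}_{t}$ is its cross-sectional mean; herding compresses the dispersion $H_t$ toward zero.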

Assessment of EU Competitiveness Factors by Multivariate Methods

The measurement of competitiveness between countries or regions is an important topic of many economic analyses and scientific papers. In the European Union (EU), there is no mainstream approach to evaluating and measuring competitiveness. There are many opinions on, and methods for, measuring and evaluating competitiveness between states or regions at the national and European levels. The methods differ in the structure of the competitiveness indicators used and in the ways they are processed. The aim of the paper is to analyze the main sources of competitive potential of the EU Member States with the help of Factor Analysis (FA), and to classify the EU Member States into homogeneous units (clusters) according to the similarity of selected indicators of competitiveness factors by Cluster Analysis (CA), in the reference years 2000 and 2011. The theoretical part of the paper is devoted to the fundamental bases of competitiveness and the methodology of the FA and CA methods. The empirical part of the paper deals with the evaluation of competitiveness factors in the EU Member States and a cluster comparison of the evaluated countries.
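
A minimal sketch of an FA-plus-CA pipeline of the kind described (library choices, factor count and cluster count are assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(27, 12))  # stand-in: 27 member states x 12 indicators

# Extract latent competitiveness factors, then cluster countries on factor scores.
scores = FactorAnalysis(n_components=3).fit_transform(StandardScaler().fit_transform(X))
labels = fcluster(linkage(scores, method="ward"), t=4, criterion="maxclust")
print(labels)  # cluster membership of each country
```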

Adaptive Path Planning for Mobile Robot Obstacle Avoidance

Generally speaking, a mobile robot is capable of sensing its surrounding environment, interpreting the sensed information to obtain knowledge of its location and the environment, and planning a real-time trajectory to reach its goal. In this process, obstacle avoidance is a fundamental issue to be addressed. Thus, an adaptive path-planning control scheme that requires neither detailed environmental information, nor a large memory, nor a heavy computational burden is designed in this study for the obstacle avoidance of a mobile robot. In this scheme, the robot gradually approaches its goal by switching among a motion tracking mode, an obstacle avoidance mode, a self-rotation mode, and a robot state selection. The effectiveness of the proposed adaptive path-planning control scheme is verified by numerical simulations of a differential-drive mobile robot in the presence of obstacles of various shapes.
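
A toy sketch of the mode-switching idea follows (mode names mirror the abstract, but the thresholds and logic are invented for illustration; the paper's control laws are not reproduced):

```python
import math

def select_mode(dist_to_goal, dist_to_obstacle, safe_dist=0.5, goal_tol=0.05):
    """Pick a behavior mode from simple range readings."""
    if dist_to_goal < goal_tol:
        return "stop"
    if dist_to_obstacle < safe_dist:
        return "obstacle_avoidance"  # turn away from the nearest obstacle
    return "motion_tracking"         # head toward the goal

def heading_error(pose, goal):
    """Angle the robot must rotate to face the goal; a large error would trigger
    a self-rotation mode before resuming motion tracking."""
    x, y, theta = pose
    return math.atan2(goal[1] - y, goal[0] - x) - theta

print(select_mode(dist_to_goal=2.0, dist_to_obstacle=0.3))  # -> obstacle_avoidance
```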

Transformation of Vocal Characteristics: A Review of Literature

The transformation of vocal characteristics aims at modifying a voice so that the intelligibility of an aphonic voice is increased, or the voice of one speaker (the source speaker) is perceived as if another speaker (the target speaker) had uttered it. In this paper, the current state of the art in voice characteristics transformation is reviewed. Special emphasis is placed on voice transformation methodology, and issues in improving the intelligibility and naturalness of the transformed speech are discussed. In particular, it is suggested to use the modulation theory of speech as a basis for research on high-quality voice transformation. This approach allows one to separate the linguistic, expressive, organic and perspectival information of speech, based on an analysis of how they are fused when speech is produced. This theory therefore provides the fundamentals not only for manipulating non-linguistic, extra-/paralinguistic and intra-linguistic variables for voice transformation, but also for easily transposing existing voice transformation methods to emotion-related voice quality transformation and speaking style transformation. From the perspectives of human speech production and perception, the popular voice transformation techniques are described and classified based on their underlying principles, drawn from the speech production mechanism, the perception mechanism, or both. In addition, the advantages and limitations of voice transformation techniques and the experimental manipulation of vocal cues are discussed through examples from past and present research. Finally, conclusions and a road map toward more natural voice transformation algorithms are presented.

Value of Sharing: Viral Advertisement

This study examines consumers' motivations for sharing viral advertisements and the impact of these advertisements on brand perception. Three fundamental questions are answered: individuals' motivations for watching and sharing advertisements, the criteria for liking a viral advertisement, and the impact of individual attitudes toward a viral advertisement on brand perception. The study is carried out on a viral advertisement that ran in Turkey. The data are collected by an online survey; the sample consists of individuals who experienced the sample advertisement, and the data are analyzed using the SPSS statistical package. Recently, the traditional advertising mindset has been changing, and new advertising approaches with significant impacts on consumers have been discussed. Viral advertising is a modern advertising approach that offers brands significant advantages over traditional channels such as television, radio and magazines. Viral advertising, also known as electronic word-of-mouth (eWOM), consists of the free spread of convincing messages sent by brands through interpersonal communication. Compared to traditional advertising, it takes a more provocative, thematic approach, whose foundation is to create advertisements that consumers find worth sharing with others. In this light, viral advertising can, in a manner of speaking, be described as media engineering. Content worth sharing makes people voluntary spokespeople for a brand and strengthens the emotional bonds between brand and consumer. Especially for some sectors in countries with traditional advertising channel limitations, viral advertising creates vital advantages.

Factors of Effective Business Software Systems Development and Enhancement Projects Work Effort Estimation

The majority of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) fail to meet the criteria of their effectiveness, which leads to considerable financial losses. One of the fundamental reasons for such projects' exceptionally low success rate is improperly derived estimates of their costs and time. In the case of BSS D&EP, these attributes are determined by the work effort, yet reliable and objective effort estimation still appears to be a great challenge to software engineering. This paper is therefore aimed at presenting the most important synthetic conclusions from the author's own studies on the main factors of effective BSS D&EP work effort estimation. Thanks to rational investment decisions made on the basis of reliable and objective criteria, it is possible to reduce the losses caused not only by abandoned projects but also by large overruns of the time and costs of BSS D&EP execution.

An Introduction to the Concept of University-Community Business Continuity Management for a Disaster-Resilient City

The fundamental objective of a university is to genuinely provide higher education to mankind and society. Higher education institutions earn billions of dollars in research funds, granted by national governments or related institutions, money that ultimately comes from taxpayers. Universities consume those grants every day and, in return, provide society with human resources and research developments. However, not all taxpayers are primarily concerned with the research itself; they are more curious to see projects built tangibly and evidently, certifying what they pay for. This paper introduces the concept of University-Community Business Continuity Management for a Disaster-Resilient City, which adapts the concept of Business Continuity Management (BCM) to the university community in order to create advancing collaboration leading to a disaster-resilient community and city. The paper focuses on describing in detail the background and principles of the concept, and on discussing its advantages and limitations.

A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the Circular Hough Transform (CHT) and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are explored for the purpose of circular object detection. A sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of the circumference pixels are identified using a raster scan algorithm, which exploits geometrical symmetry. After finding the circular objects, the proposed method uses the texture on the surface of the coins, characterized by textons, which refer to the fundamental micro-structures in generic natural images and are unique properties of coins. The method has been tested on several real-world images, including coin and non-coin images. The performance is also evaluated based on the noise-withstanding capability.
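
The covariance-eigenvalue idea can be illustrated as follows (an assumed form of the test, not the paper's exact criterion):

```python
# For edge pixels lying on a circle, the 2x2 covariance of their coordinates has
# nearly equal eigenvalues, so the small/large eigenvalue ratio approaches 1.
import numpy as np

def eigen_circularity(edge_pixels):
    """edge_pixels: (N, 2) array of (x, y) edge coordinates; returns a ratio in (0, 1]."""
    cov = np.cov(edge_pixels.T)             # 2x2 covariance of the point cloud
    small, large = np.linalg.eigvalsh(cov)  # eigvalsh returns ascending order
    return small / large                    # ~1 for circles, << 1 for elongated shapes

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)]
print(f"circle score: {eigen_circularity(circle):.3f}")  # close to 1.0
```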

An Examination and Validation of the Theoretical Resistivity-Temperature Relationship for Conductors

Electrical resistivity is a fundamental parameter of metals and electrical conductors. Since resistivity is a function of temperature, a temperature-dependent theoretical model is needed in order to fully understand the behavior of metals. A model based on physics principles has recently been developed to obtain an equation that relates electrical resistivity to temperature. This equation depends on a parameter associated with the electron travel time before scattering, and on a parameter that relates the energy of the atoms to their separation distance. Analysis of the energy parameter reveals that the equation is optimized if the proportionality term in the equation is not constant but varies over the temperature range. Additional analysis reveals that the theoretical equation can be used to determine the mean free path of conduction electrons, the number of defects in the atomic lattice, and the 'equivalent' charge associated with the metallic bonding of the atoms. All of this analysis validates the theoretical model and provides insight into the behavior of metals in applications where performance is affected by temperature (e.g., integrated circuits and temperature sensors).
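
For context, the scattering-time parameter is the one that appears in the textbook Drude relation (background only; the paper's specific model is not reproduced here):

\[
\rho = \frac{m_e}{n e^{2} \tau},
\]

where $m_e$ and $e$ are the electron mass and charge, $n$ is the conduction electron density, and $\tau$ is the mean time between scattering events; the mean free path follows as $\ell = v\tau$ for a characteristic electron speed $v$.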

Kinetics of Polyethylene Terephthalate (PET) and Polystyrene (PS) Dynamic Pyrolysis

Thermo-chemical treatment (TCT) such as pyrolysis is gaining recognition as a valid route for (i) the recovery of materials, valuable products and petrochemicals; (ii) waste recycling; and (iii) elemental characterization. Pyrolysis is also receiving renewed attention for its operational, economic and environmental advantages. In this study, samples of polyethylene terephthalate (PET) and polystyrene (PS) were pyrolysed in a micro-thermobalance reactor (a thermogravimetric, TGA, setup). Both polymers were prepared and conditioned prior to experimentation. The main objective was to determine the kinetic parameters of the depolymerization reactions that occur within the thermal degradation process. Overall kinetic rate constants (ko) and activation energies (Eo) were determined using the general kinetics theory (GKT) method previously used by a number of authors. Fitted correlations were found and validated using the GKT, with errors within ±5%. This study represents a fundamental step toward the development of scaling relationships for the investigation of larger-scale reactors relevant to industry.
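
For reference, TGA pyrolysis kinetics of this type are commonly written in the nth-order Arrhenius form (the GKT formulation used in the paper may differ in detail):

\[
\frac{d\alpha}{dt} = k_{o}\, e^{-E_{o}/RT}\,(1-\alpha)^{n},
\]

where $\alpha$ is the converted mass fraction, $k_o$ the overall rate constant, $E_o$ the activation energy, $R$ the gas constant, $T$ the absolute temperature, and $n$ the reaction order.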

Universities Strategic Evaluation Using Balanced Scorecard

Defining the strategic position of an organization within its industry environment is one of the basic and most important phases of strategic planning, to the extent that one of the fundamental schools of strategic planning is the strategic positioning school. In today's knowledge-based economy and dynamic environment, this is essential for universities as centers of education, knowledge creation and knowledge worker development. To date, various models with different approaches to strategic positioning have been deployed for defining strategic position within various industries. The Balanced Scorecard (BSC), one of the most powerful models for strategic positioning, analyzes all aspects of the organization evenly. In this paper, given the strength of the BSC in strategic evaluation, it is used to analyze the environmental position of the best Iranian business schools. The results could be used in developing strategic plans for these schools as well as for other Iranian management and business schools.

The Problem of Power and Management in the Information Society

Modern civilization has entered, in recent decades, a new phase of its development, called the information society. The concept of the "information society" has become one of the most common, and the attempt to understand what kind of society we live in, what its essential features are, and what its possible future scenarios may be, is therefore important for social and philosophical analysis. At the heart of all these deep transformations is the ever-increasing, almost defining role that knowledge and information play as the substrata of the information society. Mankind has discovered, and actively exploits, a new resource: information. The information society brings to the arena a new type of power, whose activity rests on the mastery of this new resource, information and knowledge. The watchword of the new power is intelligence: a synthesis of knowledge, information and communications, strength of mind, and fundamental sociocultural values. In a post-industrial society, the power of knowledge and information is crucial in management, pushing into the background the influence of money and state coercion.

Comparison of Phylogenetic Trees of Multiple Protein Sequence Alignment Methods

Multiple sequence alignment is a fundamental part of many bioinformatics applications, such as phylogenetic analysis. Many alignment methods have been proposed. Each method gives a different result for the same data set and consequently generates a different phylogenetic tree; hence, the chosen alignment method affects the resulting tree. However, there is no evaluation in the literature of multiple alignment methods based on a comparison of their phylogenetic trees. This work evaluates the following eight aligners: ClustalX, T-Coffee, SAGA, MUSCLE, MAFFT, DIALIGN, ProbCons and Align-m, based on the phylogenetic trees (test trees) they produce on a given data set. The Neighbor-Joining method is used to estimate the trees. Three criteria, namely the dNNI, the dRF and the Id_Tree, are established to test the ability of the different alignment methods to produce a test tree closer to the reference one (true tree). Results show that the method that produces the most accurate alignment gives the test tree nearest to the reference tree. MUSCLE outperforms all aligners with respect to the three criteria and for all datasets, performing particularly well when sequence identities are within 10-20%; it is followed by T-Coffee at lower sequence identities. At higher sequence identities (above 30%), the tree scores of all methods become similar.
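
As an illustration of how the dRF criterion can be computed in practice (a standard approach using the ete3 toolkit; these are not the authors' scripts):

```python
from ete3 import Tree

test_tree = Tree("((a,b),(c,(d,e)));")  # tree produced from one alignment method
reference = Tree("((a,c),(b,(d,e)));")  # reference ("true") tree

# robinson_foulds returns the RF distance, its maximum possible value, and
# bookkeeping about shared leaves/partitions; we only need the first two.
result = test_tree.robinson_foulds(reference, unrooted_trees=True)
rf, max_rf = result[0], result[1]
print(f"dRF = {rf}/{max_rf}")  # 0 would mean identical topologies
```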