House Indoor Thermal and Health Conditions with Different Passive Designs

Given the Auckland climate, passive building design focuses mainly on improving winter indoor thermal and health conditions. Based on field-study data of indoor air temperature and relative humidity measured close to the ceiling and the floor of an insulated Auckland townhouse, with and without a whole-home mechanical ventilation system, this study analyses the variation in the indoor microclimate of the townhouse when the mechanical ventilation system is or is not in use, in order to evaluate winter indoor thermal and health conditions and to inform future house designs incorporating mechanical ventilation.

A Probability-Based Pair Extension Method in Protein 2-DE Gel Image Analysis

The two-dimensional gel electrophoresis (2-DE) method is widely used in proteomics to separate thousands of proteins in a sample. By comparing the expression levels of proteins in a normal sample with those in a diseased one, it is possible to identify a meaningful set of marker proteins for the targeted disease. The major shortcomings of this approach are the inherent noise and irregular geometric distortions of spots observed in 2-DE images, largely caused by varying experimental conditions. In protein analysis, these problems eventually lead to incorrect conclusions. To minimize their influence, this paper proposes a partition-based pair extension method that performs spot matching on a set of gel images multiple times and segregates the more reliable mapping results, which improves the accuracy of gel image analysis. The improved accuracy of the proposed method is demonstrated through various experiments on real 2-DE images of human liver tissues.
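
As an illustration of the repeated-matching idea, the following minimal Python sketch runs a hypothetical, noise-perturbed spot matcher several times and keeps only the spot pairs that recur in most runs; `match_spots` is a toy stand-in, not the paper's matcher, and spots are assumed to be given as 2-D coordinates.

```python
import random
from collections import Counter

def match_spots(ref_spots, target_spots, jitter=1.5):
    """Toy spot matcher: pairs each reference spot with its nearest
    target spot after a random perturbation, standing in for one noisy
    spot-matching run on distorted gel images."""
    pairs = []
    for i, (x, y) in enumerate(ref_spots):
        xj = x + random.uniform(-jitter, jitter)
        yj = y + random.uniform(-jitter, jitter)
        j = min(range(len(target_spots)),
                key=lambda k: (target_spots[k][0] - xj) ** 2
                            + (target_spots[k][1] - yj) ** 2)
        pairs.append((i, j))
    return pairs

def reliable_pairs(ref_spots, target_spots, runs=10, threshold=0.8):
    """Run spot matching several times and keep only pairs recurring in
    at least `threshold` of the runs, the consensus idea behind
    segregating reliable mappings."""
    counts = Counter()
    for _ in range(runs):
        counts.update(match_spots(ref_spots, target_spots))
    return [pair for pair, c in counts.items() if c / runs >= threshold]

ref = [(10.0, 20.0), (30.0, 5.0), (55.0, 40.0)]
tgt = [(11.0, 19.0), (29.0, 6.0), (56.0, 41.0)]
print(reliable_pairs(ref, tgt))  # well-separated spots survive every run
```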

The Investigation of 5th Grade Turkish Students' Comprehension Scores According to Different Variables

The aim of this study is to examine the reading comprehension scores of Turkish 5th grade students according to the variables recorded in a student questionnaire. In this descriptive survey study, 279 5th grade students participated, drawn from 10 different primary schools in four districts of Ankara in the 2008-2009 academic year. Two data collection tools were used: a “Reading Comprehension Test” and a “Student Information Questionnaire”. Independent-samples t-tests, one-way ANOVA, and two-way ANOVA were used to analyse the gathered data. The results indicate that the students' reading comprehension scores differ significantly according to their sex, the number of books in their homes, the frequency of summarizing activities on the reading texts, and the frequency of free reading hours provided by their teachers, but do not differ significantly according to the educational level of their mothers and fathers.
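
For readers who want to reproduce this kind of analysis, a minimal Python sketch follows; the toy data frame and its column names are hypothetical stand-ins for the questionnaire variables, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.formula.api import ols

# One row per student: comprehension score plus questionnaire variables.
df = pd.DataFrame({
    "score": [72, 65, 80, 58, 90, 77, 63, 85],
    "sex": ["F", "M", "F", "M", "F", "M", "M", "F"],
    "books_at_home": ["0-25", "26-100", "26-100", "0-25",
                      "100+", "100+", "0-25", "26-100"],
})

# Independent-samples t-test: do scores differ by sex?
girls = df.loc[df.sex == "F", "score"]
boys = df.loc[df.sex == "M", "score"]
print(stats.ttest_ind(girls, boys, equal_var=False))

# One-way ANOVA: do scores differ by number of books at home?
groups = [g["score"] for _, g in df.groupby("books_at_home")]
print(stats.f_oneway(*groups))

# Two-way ANOVA with interaction: sex x books_at_home.
model = ols("score ~ C(sex) * C(books_at_home)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```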

Development of Maximum Entropy Method for Prediction of Droplet-size Distribution in Primary Breakup Region of Spray

Droplet size distributions in the cold spray of a fuel are important to observed combustion behavior. Specification of droplet size and velocity distributions immediately downstream of injectors is also essential as a boundary condition for advanced computational fluid dynamics (CFD) and two-phase spray transport calculations. This paper describes the development of a new model, incorporated into the maximum entropy principle (MEP) formalism, for predicting the droplet size distribution in the droplet formation region. The MEP approach predicts the most likely droplet size and velocity distributions under a set of constraints expressing the available information about the distribution. By considering the mechanisms of turbulence generation inside the nozzle and wave growth on the jet surface, this article attempts to provide a logical framework coupling the flow inside the nozzle to the resulting atomization process. The purpose of this paper is to describe the formulation of this new model and to incorporate it into the MEP by coupling the sub-models through source terms of momentum and energy. Comparisons between the model predictions and experimental data for a gas turbine swirling nozzle and an annular spray indicate good agreement between model and experiment.
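
For orientation, the generic MEP formulation underlying such models can be sketched as follows; the constraint functions g_i are schematic here, while the paper's contribution lies in the specific momentum and energy source-term constraints that couple the nozzle-flow sub-model to the distribution.

```latex
\max_{f}\; S[f] = -\int f(D,u)\,\ln f(D,u)\,\mathrm{d}D\,\mathrm{d}u
\quad \text{subject to} \quad
\int f\,\mathrm{d}D\,\mathrm{d}u = 1, \qquad
\int f\, g_i(D,u)\,\mathrm{d}D\,\mathrm{d}u = \bar{g}_i, \quad i = 1,\dots,m,

% whose solution is the exponential family
f(D,u) = \exp\!\Big( -\lambda_0 - \sum_{i=1}^{m} \lambda_i\, g_i(D,u) \Big),
```

with the Lagrange multipliers λ_i fixed by the constraints. In typical MEP spray models the g_i express conservation of mass, momentum, and energy (e.g., g ∝ D³, D³u, D³u²), and the source terms enter through the constraint right-hand sides.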

SySRA: A System for Continuous Speech Recognition in the Arabic Language

We report in this paper the model adopted by SySRA, our system for continuous speech recognition in the Arabic language, and the results obtained so far. The system uses the Arabdic-10 database, a manually segmented word corpus for Arabic. Phonetic decoding is handled by an expert system whose knowledge base is expressed as production rules; this expert system transforms a vocal signal into a phonetic lattice. The higher level of the system recognizes the resulting lattice and renders it as written sentences (orthographic form). This level first comprises the lexical analyzer, which is none other than the recognition module. We subjected this analyzer to a set of spectrograms obtained by dictating a score of sentences in Arabic. The recognition rate for these sentences is about 70%, which is, to our knowledge, the best result reported for Arabic recognition. The test set consists of twenty sentences from four speakers who did not take part in the training.

A Note on Penalized Power-Divergence Test Statistics

In this paper, penalized power-divergence test statistics are defined, and their exact size properties for testing a nested sequence of log-linear models are compared with those of the ordinary power-divergence test statistics for various penalization parameters, λ values, and main effect values. Since the ordinary and penalized power-divergence test statistics have the same asymptotic distribution, comparisons are made only for small and moderate samples. Three-way contingency tables distributed according to a multinomial distribution are considered. Simulation results reveal that the penalized power-divergence test statistics perform much better than their ordinary counterparts.
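
For reference, the underlying (ordinary) Cressie-Read power-divergence family has the standard form below; the penalized variants, which are the paper's subject, modify this statistic and are not reproduced here.

```latex
T_{\lambda}(\mathbf{N};\mathbf{m}) = \frac{2}{\lambda(\lambda+1)}
\sum_{i} N_i \left[ \left( \frac{N_i}{m_i} \right)^{\!\lambda} - 1 \right],
\qquad \lambda \neq 0, -1,
```

where N_i are the observed and m_i the expected cell counts; the limits λ → 0 and λ = 1 recover the likelihood-ratio statistic G² and Pearson's X², respectively. All members share the same asymptotic chi-squared null distribution, which is why the paper's comparison focuses on small and moderate samples.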

A Semi-Fragile Signature-Based Scheme for Ownership Identification and Color Image Authentication

In this paper, a novel scheme is proposed for ownership identification and authentication of color images, deploying cryptography and digital watermarking as the underlying technologies: the former is used to compute a content-based hash and the latter to embed the watermark. The host image, whose rightful ownership is to be claimed, is first transformed from RGB to the YST color space, designed exclusively for watermarking applications. Geometrically, YS ⊥ T, and the T channel corresponds to the chrominance component of a color image, making it suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks; the block size matters for enhanced localization, security, and low computation. Each block, together with the ownership information, is then hashed with SHA-160, a one-way hash function, to compute the content-based hash, which is unique and resistant to birthday attacks, instead of MD5, which may yield a collision, i.e., H(m) = H(m′). The watermark payload varies from block to block and is computed from the variance factor α. The quality of the watermarked images is quite high, both subjectively and objectively. Our scheme is blind, computationally fast, and exactly locates the tampered region.
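
A minimal sketch of the per-block hashing step is given below, assuming SHA-160 denotes the 160-bit SHA-1 digest; the YST transform, the variance-based payload, and the actual embedding are omitted, and the owner identifier is a hypothetical placeholder.

```python
import hashlib
import numpy as np

def block_hashes(t_channel: np.ndarray, owner_id: bytes, block=4):
    """Compute a SHA-1 (160-bit) digest per 4x4 block of the T channel,
    binding each block's pixels and position to the ownership info.
    Embedding the resulting watermark bits is omitted here."""
    h, w = t_channel.shape
    digests = {}
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            blk = t_channel[r:r + block, c:c + block]
            msg = (blk.tobytes() + owner_id
                   + r.to_bytes(4, "big") + c.to_bytes(4, "big"))
            digests[(r, c)] = hashlib.sha1(msg).digest()
    return digests

# Verification re-computes the digests; any block whose digest changed
# is flagged as tampered, which is what localizes the modified region.
t = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
marks = block_hashes(t, b"owner-42")
assert block_hashes(t, b"owner-42") == marks  # unmodified image verifies
```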

The Impact of Social Stratification on the Phenomenon of “Terrorism”

In this work, social stratification is considered as one of the significant factors that generate the phenomenon of “terrorism”, and the emphasis is placed on the correlation between the two, with the objective of creating an info-logical model of the generation of the “terrorism” phenomenon based on the stratification process.

Optimization of the Process of Osmo-Convective Drying of Edible Button Mushrooms Using Response Surface Methodology (RSM)

The simultaneous effects of temperature, immersion time, salt concentration, sucrose concentration, pressure, and convective dryer temperature on the combined osmotic dehydration and convective drying of edible button mushrooms were investigated. Experiments were designed according to a central composite design with six factors, each at five different levels. Response surface methodology (RSM) was used to determine the optimum processing conditions yielding maximum water loss and rehydration ratio and minimum solid gain and shrinkage in the osmotic-convective drying of edible button mushrooms. Applying response surface and contour plots, the optimum operating conditions were found to be a temperature of 39 °C, an immersion time of 164 min, a salt concentration of 14%, a sucrose concentration of 53%, a pressure of 600 mbar, and a drying temperature of 40 °C. At these optimum conditions, water loss, solid gain, rehydration ratio, and shrinkage were found to be 63.38 g/100 g initial sample, 3.17 g/100 g initial sample, 2.26, and 7.15%, respectively.
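
The RSM step can be sketched as follows: fit a full second-order polynomial to central-composite runs and optimize the fitted surface numerically. The two-factor toy data below are made up for illustration; the study itself used six factors.

```python
import numpy as np
from scipy.optimize import minimize

# Toy central-composite-style runs over two coded factors (axial points
# at +/-1.414); think temperature and immersion time, with water loss
# as the response. All numbers are invented for illustration.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
              [0, 0], [0, 0], [0, 0]])
y = np.array([50.1, 55.3, 52.8, 57.0, 49.5, 56.2,
              51.0, 54.4, 60.2, 59.8, 60.5])

def design_matrix(x1, x2):
    # Full second-order RSM model: intercept, linear, interaction, quadratic.
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design_matrix(X[:, 0], X[:, 1]), y, rcond=None)

def fitted(x):
    # Predicted response at one point on the fitted surface.
    return float(design_matrix(np.array([x[0]]), np.array([x[1]])) @ beta)

# Locate the optimum inside the coded region by maximizing the surface
# (equivalently, minimizing its negation).
res = minimize(lambda x: -fitted(x), x0=[0.0, 0.0],
               bounds=[(-1.414, 1.414)] * 2)
print("optimum (coded units):", res.x.round(3), "predicted:", round(-res.fun, 2))
```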

Dynamic Bayesian Network Modeling for Inferring Genetic Regulatory Networks by Search Strategy: Comparison between Greedy Hill Climbing and MCMC Methods

Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring interactions among genes. Averaging over a collection of models when predicting the network is preferable to relying on a single high-scoring model. In this paper, two kinds of model search approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which approach is better for DBN models. Different types of experiments have been carried out to benchmark these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform GSR in the accuracy of the predicted network while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparison between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches to infer gene networks from a few snapshots of high-dimensional gene profiles. Through experiments on synthetic data as well as systematic data, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size, and system complexity.
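
A schematic of the MCMC structure search with a simulated annealing strategy follows; the score function here is a toy stand-in for a data-driven network score (e.g., BIC or BDe over expression data), and the proposal simply toggles one transition edge.

```python
import math
import random
import numpy as np

def flip_one_edge(adj):
    """Propose a neighbor DBN structure by toggling one transition edge
    (parent j at slice t-1 -> child i at slice t)."""
    nxt = adj.copy()
    i = random.randrange(adj.shape[0])
    j = random.randrange(adj.shape[1])
    nxt[i, j] ^= 1
    return nxt

def mcmc_search(score, n_genes, steps=5000, t0=2.0, t_end=0.1):
    """Metropolis-Hastings over structures with a simulated-annealing
    temperature schedule; returns edge frequencies over the sampled
    models (model averaging) rather than a single best structure."""
    adj = np.zeros((n_genes, n_genes), dtype=int)
    current = score(adj)
    edge_freq = np.zeros_like(adj, dtype=float)
    for step in range(steps):
        temp = t0 * (t_end / t0) ** (step / steps)   # geometric cooling
        proposal = flip_one_edge(adj)
        cand = score(proposal)
        # Always accept improvements; accept worse moves with a
        # temperature-dependent Metropolis probability.
        if cand > current or random.random() < math.exp((cand - current) / temp):
            adj, current = proposal, cand
        edge_freq += adj
    return edge_freq / steps

# Toy score favoring the sparse chain 0 -> 1 -> 2; real use would score
# the structure's fit to expression data.
target = {(1, 0), (2, 1)}
toy_score = lambda a: sum(a[i, j] for i, j in target) - 0.5 * a.sum()
print(mcmc_search(toy_score, n_genes=3).round(2))
```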

Evaluation of Antioxidant Properties of Barberry Fruit Extracts Using Maceration and Subcritical Water Extraction (SWE)

The quality and shelf life of foods containing lipids (fats and oils) are significantly reduced by rancidity. The application of natural antioxidants is one of the most effective ways to prevent the oxidation of oils and lipids. The antioxidant properties of juice extracted from barberry fruit (Berberis vulgaris L.) using maceration and SWE (10 bar and 120-180 °C) were investigated and compared with the conventional method. The total phenolic content and reducing power of all samples were determined, and the data were statistically analyzed using a multifactor design. The results showed that the total phenolic content increased with increasing pressure and temperature, from 1861.9 to 2439.1 mg gallic acid/100 g dry matter. The reducing power of the antioxidant extract obtained by SWE was compared with BHA (a synthetic antioxidant) and ascorbic acid (a natural antioxidant). There were significant differences among the reducing powers of the extracts, and there were remarkable differences with BHA and ascorbic acid (P

Harris Extraction and SIFT Matching for Correlation of Two Tablets

This article presents the development of efficient algorithms for comparing tablet copies. Image recognition has specialized uses in digital systems such as medical imaging, computer vision, defense, and communication. Comparing two images that look indistinguishable is a formidable task: two images taken from different sources might look identical, yet due to different digitizing properties they are not, and small variations in image information such as cropping, rotation, and slight photometric alteration defeat naive template-based matching techniques. In this paper we introduce different matching algorithms designed to help art centers identify real painting images from fake ones. Different vision algorithms for local image features are implemented using MATLAB. In this framework, a Tablet Comparison Computer Tool (TCCT) was designed to facilitate our research. The TCCT is a Graphical User Interface (GUI) tool used to identify images by their shapes and objects, and the parameters of the vision system are fully accessible to the user through this interface. For matching, it applies different description techniques that can identify the exact figures of objects.
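
By way of illustration, the SIFT matching stage might look as follows in Python with OpenCV (the paper's implementation is in MATLAB); SIFT's built-in detector stands in for the Harris extraction stage, the file paths are placeholders, and the final score is a simple made-up similarity measure.

```python
import cv2

# Load the two tablet images to compare (paths are placeholders).
img1 = cv2.imread("tablet_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("tablet_b.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute SIFT descriptors for each image.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors with k-NN and keep matches passing Lowe's ratio
# test, which discards ambiguous correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# A simple similarity score: fraction of keypoints with a good match.
print(len(good) / max(1, min(len(kp1), len(kp2))))
```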

The Content of Acrylamide in Deep-fat Fried, Shallow Fried and Roasted Potatoes

Potato is one of the main components of warm meals in Latvia, and consumption of fried potatoes in Latvia is the highest compared with the Nordic and other Baltic countries. Therefore, the population's acrylamide (AA) intake from fried potatoes might be high as well. The aim of the research was to determine the AA content in traditionally cooked potatoes bred and cultivated in Latvia. Five common Latvian potato varieties were selected: Lenora, Brasla, Imanta, Zile, and Madara. A two-year study was conducted over two periods: just after harvesting and after six months of storage. The following cooking methods were used: shallow frying (150 ± 5 °C), deep-fat frying (180 ± 5 °C), and roasting (210 ± 5 °C). Time and temperature were recorded during frying. AA was extracted from the potatoes by solid phase extraction, and the AA content was determined by LC-MS/MS. The AA content differs significantly (p

Optimal and Generalized Multiple Description Image Coding Transform in the Wavelet Domain

In this paper we propose a Multiple Description Image Coding (MDIC) scheme that generates two compressed, rate-balanced descriptions in the wavelet domain (Daubechies biorthogonal (9, 7) wavelet), using an optimal pairwise correlating transform, together with a method for applying Generalized Multiple Description Coding (GMDC) to image coding in the wavelet domain. The GMDC produces statistically correlated streams such that lost streams can be estimated from the received data. Our performance tests show that the proposed method yields greater improvement and better quality of the reconstructed image when the wavelet coefficients are normalized by a Gaussian Scale Mixture (GSM) model rather than a single Gaussian.
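
The pairwise correlating transform at the heart of such two-description schemes can be sketched as follows; the 45° pairing angle gives balanced rates, and the single-description estimator below is a crude linear stand-in for the statistical estimation the paper performs.

```python
import numpy as np

theta = np.pi / 4                      # 45 degrees gives balanced rates
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

def make_descriptions(pairs):
    """Rotate each (A, B) wavelet-coefficient pair; description 1 carries
    the first transformed component, description 2 the second."""
    y = pairs @ R.T
    return y[:, 0], y[:, 1]

def reconstruct(d1, d2, rho=0.9):
    """Invert the transform; if one description is lost (None), estimate
    it from the other via the correlation the transform introduced
    (a crude linear guess standing in for the MMSE estimator)."""
    if d1 is None:
        d1 = rho * d2
    elif d2 is None:
        d2 = rho * d1
    return np.stack([d1, d2], axis=1) @ np.linalg.inv(R).T

pairs = np.array([[10.0, 2.0], [-4.0, 1.0]])   # toy coefficient pairs
s1, s2 = make_descriptions(pairs)
print(reconstruct(s1, s2))     # exact recovery when both streams arrive
print(reconstruct(s1, None))   # degraded estimate from one stream
```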

Modeling and Analysis of Twelve-Phase (Multi-Phase) DSTATCOM for Multi-Phase Load Circuits

This paper presents the modeling and analysis of a 12-phase distribution static compensator (DSTATCOM), which is capable of balancing the source currents in spite of unbalanced loading and phase outages. In addition to balancing the supply currents, the power factor can be set to a desired value. The theory of instantaneous symmetrical components is used to generate the twelve-phase reference currents, which are then tracked using a current-controlled voltage source inverter operated in a hysteresis band control scheme. An ideal compensator is used in place of a physical realization of the compensator. The performance of the proposed DSTATCOM is validated through MATLAB simulation, and detailed simulation results are given.
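
The hysteresis band tracking used for each phase can be illustrated with a minimal sketch (the twelve-phase reference generation via instantaneous symmetrical components is not reproduced); the band width and current values below are made up.

```python
def hysteresis_switch(i_ref, i_actual, state, band=0.5):
    """Hysteresis band current control for one inverter leg: switch the
    leg high when the current falls `band` below its reference, low when
    it rises `band` above, and otherwise keep the previous state."""
    error = i_ref - i_actual
    if error > band:
        return 1    # upper switch on -> phase current rises
    if error < -band:
        return 0    # lower switch on -> phase current falls
    return state    # inside the band: hold the last switching state

# Tracking one phase over a few samples (toy values).
state = 0
for i_ref, i_meas in [(5.0, 3.8), (5.0, 5.2), (5.0, 5.8), (5.0, 4.1)]:
    state = hysteresis_switch(i_ref, i_meas, state)
    print(state)
```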

Feasibility of Integrating Heating Valve Drivers with KNX-standard for Performing Dynamic Hydraulic Balance in Domestic Buildings

The increasing demand for sufficient and clean energy forces industrial and service companies to align their strategies towards efficient consumption, a trend that extends to the residential building sector, where large amounts of energy are consumed by house and facility heating. Many operating hot-water heating systems lack hydraulically balanced working conditions for heat distribution and transmission, which leads to inefficient heating; through hydraulic balancing of heating systems, significant savings of primary and secondary energy can be achieved. This paper addresses the use of KNX technology (smart buildings) in residential buildings to ensure dynamic adaptation of the hydraulic system's performance, in order to increase the heating system's efficiency. The procedure of segmenting a heating system into hydraulically independent units (meshes) is presented. Within these meshes, the heating valve drivers are addressed and controlled by a central facility server, and feasibility criteria for such drivers are identified. Dynamic hydraulic balance is achieved by positioning these valves according to heating loads generated from the temperature settings in the corresponding rooms. The energetic advantages of single-room heating control procedures, based on the application FacilityManager, are presented.
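
As a simplified illustration of the valve-positioning idea, the sketch below maps each room's temperature deficit to a valve opening within one mesh; the proportional rule, gain, and room names are assumptions made for illustration, and the KNX telegrams to the valve drivers are not modeled.

```python
def valve_position(setpoint_c, room_temp_c, k_p=25.0):
    """Proportional rule mapping a room's temperature deficit to a
    valve opening in percent (0 = closed, 100 = fully open)."""
    return max(0.0, min(100.0, k_p * (setpoint_c - room_temp_c)))

def balance_mesh(rooms):
    """Position every valve in one hydraulically independent mesh; the
    facility server would push these values to the KNX valve drivers."""
    return {name: valve_position(sp, t) for name, (sp, t) in rooms.items()}

# Toy mesh: (setpoint, measured temperature) per room.
mesh = {"living": (21.0, 19.2), "bath": (23.0, 22.8), "bedroom": (18.0, 18.5)}
print(balance_mesh(mesh))  # living opens wide, bedroom's valve closes
```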

Intelligent Vision System for Human-Robot Interface

This paper addresses the development of an intelligent vision system for human-robot interaction. The two novel contributions of this paper are 1) detection of human faces and 2) localization of the eyes. The method is based on the visual attributes of human skin colors and geometrical analysis of the face skeleton. The paper introduces a spatial-domain filtering method named the 'Fuzzily Skewed Filter', which incorporates fuzzy rules for deciding the gray level of a pixel from its neighborhood and takes advantage of both the median and averaging filters. The effectiveness of the method has been demonstrated by implementing eye-tracking commands on an entertainment robot named AIBO.
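
Since the abstract does not spell out the fuzzy rules, the sketch below is only an illustrative guess at the filter's spirit: a fuzzy weight derived from the center pixel's deviation from the neighborhood median skews the output between the median and the mean. The weighting scheme is an assumption; the paper's actual rules may differ.

```python
import numpy as np

def fuzzily_skewed_filter(img, size=3):
    """Illustrative blend of median and averaging filters: the farther
    the center pixel lies from its neighborhood median (likely impulse
    noise), the more the output leans on the median; otherwise it leans
    on the mean. This weighting is assumed, not taken from the paper."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = padded[r:r + size, c:c + size]
            med, avg = np.median(win), win.mean()
            spread = win.max() - win.min() + 1e-9
            # Fuzzy membership in "noisy": 0 near the median, 1 far away.
            mu = min(1.0, abs(img[r, c] - med) / spread)
            out[r, c] = mu * med + (1.0 - mu) * avg
    return out

noisy = np.array([[10, 10, 10], [10, 255, 10], [10, 10, 12]], dtype=float)
print(fuzzily_skewed_filter(noisy).round(1))  # center pulled to the median
```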

User Experience Evolution Lifecycle Framework

Perceptions of quality from both the designers' and the users' perspectives have now stretched beyond traditional usability, incorporating abstract and subjective concepts. This has led to a shift in the focus of human-computer interaction research communities: a shift towards achieving user experience (UX) by fulfilling not only conventional usability needs but also those that go beyond them. The term UX, although widespread and given significant importance, lacks a consensus definition. In this paper, we survey various UX definitions and modeling frameworks and examine them as the foundation for proposing a UX evolution lifecycle framework for understanding UX in detail. In the proposed framework we identify the building blocks of UX and discuss how UX evolves in various phases. The framework can be used as a tool to understand experience requirements and evaluate them, resulting in better UX design and hence improved user satisfaction.

Covering-Based Rough Sets Based on the Refinement of Covering Elements

Covering-based rough sets are an extension of rough sets based on a covering instead of a partition of the universe, and are therefore more powerful than rough sets in describing some practical problems. However, in extending rough sets, covering-based rough sets increase the roughness of each model in recognizing objects, so obtaining better approximations from covering-based rough set models is an important issue. In this paper, two concepts, determinate elements and indeterminate elements in a universe, are proposed and given precise definitions. This research makes a reasonable refinement of the covering elements from a new viewpoint, and the refinement can generate better approximations in covering-based rough set models. To test the theory, it is applied to eight major covering-based rough set models adapted from the literature. The result is that in all these models the lower approximation increases effectively, and correspondingly the upper approximation decreases in all models, with the exception of two models in some special situations. The roughness of recognizing objects is therefore reduced. This research provides a new approach to the study and application of covering-based rough sets.
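
To make the objects concrete, the sketch below implements the lower and upper approximations of one classic covering-based model (lower: union of covering elements contained in X; upper: union of those intersecting X); the paper's refinement operates on the covering elements themselves.

```python
def lower_approximation(cover, X):
    """Union of covering elements wholly contained in the target set X
    (the lower approximation of a classic covering-based model)."""
    out = set()
    for K in cover:
        if K <= X:
            out |= K
    return out

def upper_approximation(cover, X):
    """Union of covering elements that intersect X (upper approximation)."""
    out = set()
    for K in cover:
        if K & X:
            out |= K
    return out

# A covering of U = {1,...,5}: elements may overlap, unlike a partition.
cover = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4, 5}), frozenset({5})]
X = {2, 3, 5}
print(sorted(lower_approximation(cover, X)))  # [2, 3, 5]
print(sorted(upper_approximation(cover, X)))  # [1, 2, 3, 4, 5]
```

Refining the covering element {3, 4, 5} into {3, 5} and {4}, for instance, leaves the lower approximation unchanged in this example but shrinks the upper approximation to {1, 2, 3, 5}, illustrating how refining covering elements can tighten the approximations.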

Using a Semantic Self-Organising Web Page-Ranking Mechanism for Public Administration and Education

In the proposed method for Web page ranking, a novel theoretic model is introduced and tested on examples of order relationships among IP addresses. Ranking is induced using a convexity feature, which is learned from these examples using a self-organizing procedure. We consider the problem of self-organizing learning from IP data to be represented by a semi-random convex polygon procedure, in which the vertices correspond to IP addresses. Based on recent developments in our regularization theory for convex polygons and the corresponding Euclidean-distance-based methods for classification, we develop an algorithmic framework for learning ranking functions based on computational geometric theory. We show that our algorithm is generic, and we present experimental results demonstrating the potential of our approach. In addition, we illustrate the generality of the approach by showing its possible use as a visualization tool for data obtained from diverse domains, such as Public Administration and Education.