Comparative Analysis of Total Phenolic Content in Sea Buckthorn Wine and Other Selected Fruit Wines

This is the first report from India on a beverage produced by alcoholic fermentation of the juice of sea buckthorn (Hippophae rhamnoides L.) using a laboratory-isolated yeast strain. The health-promoting potential of the product was evaluated on the basis of its total phenolic content. The most important finding was that, under the present fermentation conditions, the total phenolic content of the wine was 689 mg GAE/L. Investigation of the influence of bottle ageing on the sea buckthorn wine showed a slight decrease in phenolic content (534 mg GAE/L). The study also includes a comparative analysis of the phenolic content of wines from other selected fruit juices such as grape, apple and black currant.

Keywords: Alcoholic fermentation, Hippophae, Total phenolic content, Wine

Principal Type of Water Responsible for Damage of Concrete Following Repeated Freeze-Thaw Cycles

The primary cause of the failure of concrete is repeated freezing and thawing of moisture contained in the pores, microcracks, and cavities of the concrete. On transition to ice, water existing in the free state in cracks increases in volume, expanding the recess in which freezing occurs. A reduction in strength below the initial value is to be expected, and further cycles of freezing and thawing have a further marked effect. By using experimental techniques such as nuclear magnetic resonance (NMR) and enthalpy-temperature (heat capacity) measurements, the various water states and their effect on concrete properties can be resolved during cooling through the freezing transition temperature range. The main objective of this paper is to describe the principal type of water responsible for the reduction in strength and structural (frost) damage of concrete following repeated freeze-thaw cycles. Experimental work was carried out at the Institute of Cryogenics to determine what happens to water in concrete during the freezing transition.
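As a rough order-of-magnitude check (added here for context, using the textbook densities of water and ice rather than any figure from the paper), the volume increase of free water on freezing is about 9%:

```latex
\frac{\Delta V}{V} \;=\; \frac{\rho_{\text{water}}}{\rho_{\text{ice}}} - 1
\;\approx\; \frac{1000\ \mathrm{kg/m^3}}{917\ \mathrm{kg/m^3}} - 1
\;\approx\; 0.09
```

That is roughly a 9% expansion if the water is free to expand; in a confined, nearly saturated pore the same transition instead generates pressure on the pore walls.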

Target Detection using Adaptive Progressive Thresholding Based Shifted Phase-Encoded Fringe-Adjusted Joint Transform Correlator

A new target detection technique is presented in this paper for the identification of small boats in coastal surveillance. The proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene and separate any objects present in it from the background. This preprocessing step results in an image containing only the foreground objects, such as boats, trees and other cluttered regions, and hence reduces the search region for the correlation step significantly. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform correlator (SPFJTC), which produces a single, delta-like correlation peak for a potential target present in the input scene. A post-processing step uses the peak-to-clutter ratio (PCR) to determine whether the boat in the input scene is authorized or unauthorized. Simulation results show that the proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat present in the given input scene.
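For readers unfamiliar with the underlying mechanism, the sketch below shows a classical joint transform correlator in Python, with a single global threshold standing in for the adaptive progressive thresholding step; it is an illustrative toy (the function names, array sizes and thresholding rule are assumptions), not the SPFJTC or APT scheme of the paper:

```python
import numpy as np

def classical_jtc(reference, scene):
    """Classical joint transform correlation (a generic sketch, not the
    paper's SPFJTC): reference and scene are placed side by side, the joint
    power spectrum |FFT|^2 is formed, and its inverse FFT yields
    cross-correlation terms away from the centre of the output plane."""
    h, w = reference.shape
    joint = np.zeros((2 * h, w))
    joint[:h, :] = reference               # reference in the upper half
    joint[h:, :] = scene                   # input scene in the lower half
    jps = np.abs(np.fft.fft2(joint)) ** 2  # joint power spectrum
    return np.fft.fftshift(np.abs(np.fft.ifft2(jps)))

def simple_threshold(image, frac=0.5):
    """Stand-in for the adaptive progressive thresholding (APT) step: a
    single global threshold that keeps only the brightest pixels."""
    return np.where(image > frac * image.max(), image, 0.0)

# toy usage with random arrays standing in for a boat template and a scene
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
scene = simple_threshold(rng.random((64, 64)))
correlation_plane = classical_jtc(reference, scene)
print("highest correlation peak:", round(float(correlation_plane.max()), 1))
```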

Quantitative Indicator of Abdominal Aortic Aneurysm Rupture Risk Based on its Geometric Parameters

Rupture of abdominal aortic aneurysms (AAAs) is one of the main causes of death in the world. It is a very complex phenomenon that usually occurs "without previous warning". Currently, the criteria used to assess aneurysm rupture risk (peak diameter and growth rate) cannot be considered reliable indicators. In a first approach, the main geometric parameters of aneurysms have been grouped into five biomechanical factors. These are combined to obtain a dimensionless rupture risk index, RI(t), which has been preliminarily validated with a clinical case and with others from the literature. This quantitative indicator is easy to understand, allows the aneurysm rupture risk to be estimated, and is expected to identify at-risk aneurysms whose peak diameter is below the threshold value. Based on these initial results, a broader study has begun with twelve patients from the Clinic Hospital of Valladolid, Spain, who undergo periodic follow-up examinations.

Bioleaching of Heavy Metals from Sewage Sludge Using Indigenous Iron-Oxidizing Microorganisms: Effect of Substrate Concentration and Total Solids

In the present study, the effects of ferrous sulfate concentration and total solids on the bioleaching of heavy metals from sewage sludge were examined using indigenous iron-oxidizing microorganisms. Experiments on the effect of ferrous sulfate concentration on bioleaching were carried out at different concentrations (5-20 g L-1) to optimize the concentration for maximum bioleaching. A rapid change in pH and ORP took place in the first 2 days, followed by a slow change until the 16th day in all the sludge samples. A ferrous sulfate concentration of 10 g L-1 was found to be sufficient for metal bioleaching, in the following order: Zn (69%) > Cu (52%) > Cr (46%) > Ni (45%). Further, bioleaching using 10 g L-1 ferrous sulfate was found to be efficient up to a sludge solids concentration of 20 g L-1. The results of the present study strongly indicate that, with 10 g L-1 ferrous sulfate, indigenous iron-oxidizing microorganisms can bring the pH down to a value needed for significant metal solubilization.

Route Training in Mobile Robotics through System Identification

Fundamental sensor-motor couplings form the backbone of most mobile robot control tasks, and often need to be implemented fast, efficiently and nevertheless reliably. Machine learning techniques are therefore often used to obtain the desired sensor-motor competences. In this paper we present an alternative to established machine learning methods such as artificial neural networks: system identification through nonlinear polynomial mapping, which is very fast, easy to implement, and has the distinct advantage that it generates transparent, analysable sensor-motor couplings. This work, which is part of the RobotMODIC project at the universities of Essex and Sheffield, aims to develop a theoretical understanding of the interaction between the robot and its environment. One of the purposes of this research is to enable the principled design of robot control programs. As a first step towards this aim we model the behaviour of the robot, as it emerges from its interaction with the environment, with the NARMAX modelling method (Nonlinear Auto-Regressive Moving Average models with eXogenous inputs). This method produces explicit polynomial functions that can subsequently be analysed using established mathematical methods. In this paper we demonstrate the fidelity of the obtained NARMAX models in the challenging task of robot route learning; we present a set of experiments in which a Magellan Pro mobile robot was taught to follow four different routes, always using the same mechanism to obtain the required control law.
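To make the idea of a transparent polynomial model concrete, the following sketch fits a small polynomial ARX-style model (a simplified relative of NARMAX with the moving-average noise terms omitted) by ordinary least squares on synthetic data; the lag structure, the single sonar channel and the toy data are assumptions for illustration, not the models identified in the paper:

```python
import numpy as np

def build_regressors(u, y, lag=2, degree=2):
    """Build a polynomial regressor matrix from lagged inputs u (sensor
    readings) and lagged outputs y (steering commands), up to the given
    polynomial degree -- the flavour of model structure used in NARMAX-style
    system identification (noise terms omitted for simplicity)."""
    rows = []
    for t in range(lag, len(y)):
        base = np.concatenate(([1.0], u[t - lag:t], y[t - lag:t]))
        quad = np.outer(base, base)[np.triu_indices(len(base))] if degree == 2 else base
        rows.append(quad)
    return np.array(rows), y[lag:]

# toy data: one sonar channel u driving a steering velocity y
rng = np.random.default_rng(1)
u = rng.random(200)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 0.5 * y[t - 1] - 0.2 * u[t - 1] + 0.1 * u[t - 1] * u[t - 2]

X, target = build_regressors(u, y)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)   # least-squares fit
pred = X @ theta
print("fit error:", np.abs(pred - target).max())
```

Because the fitted model is an explicit polynomial in lagged sensor and motor values, every term and coefficient can be inspected directly, which is the transparency argument made above.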

A Support System Applicable to Multiple APIs for Haptic VR Application Designers

This paper describes a proposed support system that enables application designers to create VR applications effectively using multiple haptic APIs. When VR designers create applications, it is often difficult to handle and understand the many parameters and functions that have to be set in the application program using documentation manuals alone. This complication may disrupt creative imagination and result in inefficient coding. We therefore proposed a support application that improves the efficiency of VR application development and provides interactive components for confirming operations with the haptic sense in advance. In this paper, we describe improvements to our previously proposed support application that make it applicable to multiple APIs and haptic devices, and evaluate the new application by having participants complete a VR program. Results from a preliminary experiment suggest that our application facilitates the creation of VR applications.

Concentrated Solar Power Utilization in Space Vehicles Propulsion and Power Generation

The objective of this paper is to design a solar thermal engine for space vehicle orbital control and electricity generation. A computational model is developed to predict the performance of the solar thermal engine for different design parameters and conditions in order to enhance engine efficiency. The engine is divided into two main subsystems. The first is the concentrator dish, which receives solar energy from the sun and reflects it to the cavity receiver. The second is the cavity receiver, which receives the heat flux reflected from the concentrator and transfers heat to the fluid passing over it. Other subsystems depend on the application required of the engine. For thrust applications, a nozzle is introduced so that the fluid can expand and produce thrust. Hydrogen is preferred as the working fluid in the thruster application. The developed model is used to determine the thrust for a concentrator dish 4 meters in diameter (providing 10 kW of energy), focusing solar energy onto a cavity receiver with a 10 cm aperture diameter. The cavity receiver outer length is 50 cm and the internal cavity is 47 cm in length. The suggested material of the internal cavity is tungsten, to withstand high temperature. The thermal model and analysis show that the hydrogen temperature at the plenum reaches 2000 K after about 250 seconds for a hot-start operation with a flow rate of 0.1 g/s. Using the solar thermal engine as an electricity generation device on earth is also discussed. In this case a compressor and turbine are used to convert the heat gained by the working fluid (air) into mechanical power. This mechanical power can be converted into electrical power by using a generator.
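As an illustration of how a plenum temperature of 2000 K translates into thrust at the quoted flow rate, the sketch below applies the ideal-nozzle relations with assumed values (a ratio of specific heats of about 1.4 for hydrogen and expansion to 1% of chamber pressure); it is a back-of-the-envelope estimate, not the computational model developed in the paper:

```python
import math

# Ideal-nozzle estimate of exhaust velocity and thrust for a solar thermal
# thruster -- an illustrative sketch, not the paper's model.
R_universal = 8.314          # J/(mol K)
M_h2 = 2.016e-3              # kg/mol, molar mass of hydrogen
R_spec = R_universal / M_h2  # specific gas constant, ~4124 J/(kg K)
gamma = 1.4                  # assumed ratio of specific heats for hydrogen
T_c = 2000.0                 # K, plenum temperature quoted in the abstract
pressure_ratio = 0.01        # assumed exit-to-chamber pressure ratio
mdot = 0.1e-3                # kg/s, the 0.1 g/s flow rate from the abstract

v_e = math.sqrt(2 * gamma / (gamma - 1) * R_spec * T_c
                * (1 - pressure_ratio ** ((gamma - 1) / gamma)))
thrust = mdot * v_e
print(f"exhaust velocity ~ {v_e:.0f} m/s, thrust ~ {thrust:.2f} N")
```

Under these assumptions the estimate lands in the sub-newton range, which is the scale at which solar thermal thrusters with gram-per-second flow rates typically operate.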

Modeling Stress-Induced Regulatory Cascades with Artificial Neural Networks

Yeast cells live in a constantly changing environment that requires the continuous adaptation of their genomic program in order to sustain their homeostasis, survive and proliferate. Due to the advancement of high-throughput technologies, there is currently a large amount of data, such as gene expression, gene deletion and protein-protein interactions, for S. cerevisiae under various environmental conditions. Mining these datasets requires efficient computational methods capable of integrating different types of data, identifying inter-relations between different components and inferring functional groups or 'modules' that shape intracellular processes. This study uses computational methods to delineate some of the mechanisms used by yeast cells to respond to environmental changes. The GRAM algorithm is first used to integrate gene expression data and ChIP-chip data in order to find modules of co-expressed and co-regulated genes as well as the transcription factors (TFs) that regulate these modules. Since transcription factors are themselves transcriptionally regulated, a three-layer regulatory cascade consisting of the TF-regulators, the TFs and the regulated modules is subsequently considered. This three-layer cascade is then modeled quantitatively using artificial neural networks (ANNs), where the input layer corresponds to the expression of the upstream transcription factors (TF-regulators) and the output layer corresponds to the expression of genes within each module. This work shows (a) that the expression of at least 33 genes over time and for different stress conditions is well predicted by the expression of the top-layer transcription factors, including cases in which the effect of upstream regulators is shifted in time, and (b) that at least 6 novel regulatory interactions are identified that were not previously associated with stress-induced changes in gene expression. These findings suggest that the combination of gene expression and protein-DNA interaction data with artificial neural networks can successfully model biological pathways and capture quantitative dependencies between distant regulators and downstream genes.
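A minimal sketch of the modelling step is given below: a small feedforward network maps the expression of the upstream TF-regulators (input layer) to the expression of the genes in one module (output layer). The data are random stand-ins generated on the spot and the layer sizes are arbitrary; this illustrates the network structure, not the trained models of the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy sketch of the three-layer idea: predict the expression of genes in a
# module (outputs) from the expression of upstream TF-regulators (inputs).
# All data below are random stand-ins, not the yeast measurements.
rng = np.random.default_rng(0)
n_timepoints, n_regulators, n_module_genes = 40, 3, 5

tf_expr = rng.normal(size=(n_timepoints, n_regulators))   # TF-regulator layer
weights = rng.normal(size=(n_regulators, n_module_genes))
module_expr = np.tanh(tf_expr @ weights)                  # simulated module genes

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(tf_expr[:30], module_expr[:30])                   # train on early time points
score = ann.score(tf_expr[30:], module_expr[30:])         # R^2 on held-out points
print(f"held-out R^2: {score:.2f}")
```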

A Novel Logarithmic Current-Controlled Current Amplifier (LCCA)

A new OTA-based logarithmic-control variable gain current amplifier (LCCA) is presented. It consists of two Operational Transconductance Amplifiers (OTAs) and two PMOS transistors biased in the weak inversion region. The circuit operates from a 0.6 V DC power supply and consumes 0.6 μW. The linear-in-dB controllable output range is 43 dB with a maximum error of less than 0.5 dB. The functionality of the proposed design was confirmed using HSPICE in a 0.35 μm CMOS process technology.
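For context, the device-level relation that makes dB-linear control possible is the textbook exponential law of a MOS transistor in weak inversion (general background, not a result of the paper): a gain set through such a device varies exponentially with the control voltage, so the gain expressed in decibels is linear in that voltage,

```latex
I_D \;\approx\; I_0 \exp\!\left(\frac{V_{GS}}{n V_T}\right)
\quad\Longrightarrow\quad
G_{\mathrm{dB}} \;=\; 20\log_{10} G \;\propto\; V_{\mathrm{ctrl}},
```

where V_T = kT/q is the thermal voltage and n the subthreshold slope factor.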

Tagging by Combining a Rule-Based Method and Memory-Based Learning

Many natural language expressions are ambiguous and need to draw on other sources of information to be interpreted. Whether the word تعاون is to be considered a noun or a verb, for example, depends on the presence of contextual cues. To interpret words we need to be able to discriminate between different usages. This paper proposes a hybrid of rule-based and machine learning methods for tagging Arabic words. Because of the particularity of the Arabic word, which may be composed of a stem plus affixes and clitics, a small number of rules dominates the performance (affixes include inflectional markers for tense, gender and number; clitics include some prepositions, conjunctions and others). Tagging is closely related to the notion of word class used in syntax. The method is based firstly on rules (which consider the post-position, the ending of a word, and patterns), and the anomalies are then corrected by adopting a memory-based learning (MBL) method. Memory-based learning is an efficient method for integrating various sources of information and for handling exceptional data in natural language processing tasks. Secondly, the exceptional cases of the rules are checked, and more information is made available to the learner for treating those exceptional cases. To evaluate the proposed method, a number of experiments were run in order to assess the importance of the various sources of information in learning.
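The hybrid idea of "rules first, memory-based correction second" can be sketched as follows. The example uses English suffix rules and a tiny synthetic lexicon purely as stand-ins for the Arabic affix/clitic rules and training data described above, with a k-nearest-neighbour classifier playing the role of the memory-based learner:

```python
from sklearn.neighbors import KNeighborsClassifier

# Illustrative hybrid tagger sketch (not the authors' system): a couple of
# suffix rules assign a first-pass tag, and a memory-based (k-NN) learner
# trained on character-level features handles words the rules cannot tag.
# The rules, features and tiny training set below are hypothetical.

def rule_tag(word):
    if word.endswith("ed") or word.endswith("ing"):
        return "VERB"
    if word.endswith("tion") or word.endswith("ness"):
        return "NOUN"
    return "UNKNOWN"                      # left to the memory-based learner

def features(word):
    # last two character codes plus word length, padded for short words
    padded = ("__" + word)[-2:]
    return [ord(padded[0]), ord(padded[1]), len(word)]

train_words = ["walked", "nation", "runs", "darkness", "playing", "cats"]
train_tags = ["VERB", "NOUN", "VERB", "NOUN", "VERB", "NOUN"]

mbl = KNeighborsClassifier(n_neighbors=1)           # memory-based learner
mbl.fit([features(w) for w in train_words], train_tags)

for word in ["jumping", "kindness", "runs"]:
    tag = rule_tag(word)
    if tag == "UNKNOWN":                            # rules abstain -> MBL decides
        tag = mbl.predict([features(word)])[0]
    print(word, tag)
```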

Immobilization of Aspergillus awamori 1-8 for Subsequent Pectinase Production

The overall objective of this research is a strain improvement technology for efficient pectinase production. A novel cell cultivation technology based on immobilization of fungal cells was studied in long-term continuous fermentations. Immobilization was achieved using a new carrier material for adsorption of the cultures, applied here for the first time for the immobilization of microorganisms. The effects of various conditions of nitrogen and carbon nutrition on the biosynthesis of pectolytic enzymes in the Aspergillus awamori 1-8 strain were studied. The proposed cultivation technology, along with optimization of media components for pectinase overproduction, increased pectinase productivity in Aspergillus awamori 1-8 by 7- to 8-fold. The proposed technology can also be applied successfully to the production of major industrial enzymes such as α-amylase, protease, and collagenase.

A Novel In-Place Sorting Algorithm with O(n log z) Comparisons and O(n log z) Moves

In-place sorting algorithms play an important role in many fields, such as very large database systems, data warehouses, and data mining. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the unsorted input array in place, producing segments that are ordered relative to each other but whose elements are yet to be sorted. This first phase requires linear time, while in the second phase the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. In the worst case, for an array of size n, the algorithm performs O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and of the required number of moves is presented, along with the auxiliary storage requirements of the proposed algorithm.
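The two-phase structure described above can be sketched as follows. The single-pivot partition used here is only a stand-in for the paper's linear-time rearrangement into mutually ordered segments, and heapsort is used for the second phase because it sorts a segment of size z in place in O(z log z) time with O(1) auxiliary storage:

```python
# Structural sketch of the two-phase idea (not the paper's algorithm):
# phase 1 partitions the array in place, giving segments ordered relative
# to each other; phase 2 sorts each segment in place with heapsort.

def partition(arr, lo, hi):
    """Lomuto partition of arr[lo:hi]; returns the pivot's final index."""
    pivot = arr[hi - 1]
    i = lo
    for j in range(lo, hi - 1):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi - 1] = arr[hi - 1], arr[i]
    return i

def heapsort(arr, lo, hi):
    """In-place heapsort of the segment arr[lo:hi] with O(1) extra space."""
    n = hi - lo
    def sift_down(start, end):
        root = start
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and arr[lo + child] < arr[lo + child + 1]:
                child += 1
            if arr[lo + root] < arr[lo + child]:
                arr[lo + root], arr[lo + child] = arr[lo + child], arr[lo + root]
                root = child
            else:
                return
    for start in range(n // 2 - 1, -1, -1):   # build max-heap
        sift_down(start, n)
    for end in range(n - 1, 0, -1):           # repeatedly extract the max
        arr[lo], arr[lo + end] = arr[lo + end], arr[lo]
        sift_down(0, end)

def two_phase_sort(arr):
    mid = partition(arr, 0, len(arr))   # phase 1: segments ordered w.r.t. each other
    heapsort(arr, 0, mid)               # phase 2: sort each segment in place
    heapsort(arr, mid + 1, len(arr))

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
two_phase_sort(data)
print(data)
```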

Interoperability in Component Based Software Development

Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communications, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns – specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on an assembly of components available on a local area network or on the net. These components must be located and identified in terms of available services and communication protocols before any request is made. The first part of the article introduces the basic concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction, and the last section shows how the different models can communicate among themselves.

Effects of Multimedia-based Instructional Designs for Arabic Language Learning among Pupils of Different Achievement Levels

The purpose of this study is to investigate the effects of the modality principle in instructional software on first-grade pupils' achievement in learning the Arabic language. Two modes of instructional software were systematically designed and developed: audio with images (AI) and text with images (TI). A quasi-experimental design was used in the study. The sample consisted of 123 male and female pupils from the IRBED Education Directorate, Jordan. The pupils were randomly assigned to one of the two modes. The independent variables comprised the two modes of the instructional software, the pupils' achievement levels in the Arabic language class, and gender. The dependent variable was the pupils' achievement in the Arabic language test. The theoretical framework of this study was based on Mayer's Cognitive Theory of Multimedia Learning. Four hypotheses were postulated and tested. Analysis of Variance (ANOVA) showed that pupils using the AI mode performed significantly better than those using the TI mode. This study concluded that the audio-with-images mode was an important aid to learning compared to the text-with-images mode.

Research of the Main Indexes of Freshness of Anchovy (Engraulis encrasicolus Linnaeus, 1758) and Sardine (Sardina pilchardus Walbaum, 1792) of the Mediterranean

Anchovy (Engraulis encrasicolus) and sardine (Sardina pilchardus) are blue fish linked to the alimentary tradition of the Mediterranean. In this work, in particular, we tested for the first time physical and enzymatic methods to verify the freshness of these Mediterranean blue fish species, anchovy and sardine. In connection with the lowering of the pH after the post-mortem stage, we observed an increase in the proteolytic activity of calpain and cathepsin. A significant increase was already evident 2 h post-mortem.

Post Elevated Temperature Effect on the Strength and Microstructure of Thin High Performance Cementitious Composites (THPCC)

Reinforced concrete (RC) structures strengthened with fiber reinforced polymer (FRP) lack thermal resistance at elevated temperatures in the event of fire. This has led to the lining of strengthened concrete with thin high performance cementitious composites (THPCC) to protect the substrate against elevated temperature. Elevated temperature effects on THPCC based on different cementitious materials have been studied in the past, but high-alumina cement (HAC)-based THPCC have not been well characterized. This research study focuses on THPCC based on HAC with 60%, 70%, 80% and 85% replacement by ground granulated blast furnace slag (GGBS). Samples were evaluated by measuring their mechanical strength (after 28 and 56 days of curing) following exposure to 400°C and 600°C, with room temperature (28°C) for comparison, and the results were corroborated by microstructure study. Results showed that, among all mixtures, the mix containing only HAC showed the highest compressive strength after exposure to 600°C. However, the tensile strength of the THPCC made of HAC with 60% GGBS content was comparable to that of the THPCC with HAC only after exposure to 600°C. Field emission scanning electron microscopy (FESEM) images of the THPCC, together with Energy Dispersive X-ray (EDX) microanalysis, revealed that the microstructure deteriorated considerably after exposure to elevated temperatures, which led to the decrease in mechanical strength.

Forward Simulation of a Parallel Hybrid Vehicle and Fuzzy Controller Design for Driving/Regenerative Purposes

Trustworthy simulation results based on realistic driving conditions are among the best means of converting a conventional vehicle into a hybrid one. To this end, this paper first presents a seven-degree-of-freedom dynamic model of the vehicle. Then, using static models of the engine, gearbox, clutch, differential, electrical machine and battery, the hybrid automobile is modeled and a forward simulation of the vehicle, from the pedals to the wheels, is obtained. A fuzzy controller is then designed and, using an appropriate rule base, fuel economy and regenerative braking are achieved. Finally, a series of MATLAB/SIMULINK simulation results demonstrates the effectiveness of the proposed structure.
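A minimal Mamdani-style sketch of the regenerative-braking decision is shown below; the membership functions, rule base and variable names are illustrative assumptions, not the fuzzy controller designed in the paper (which is built and simulated in MATLAB/SIMULINK):

```python
import numpy as np

# Illustrative fuzzy rule sketch: decide how much braking demand goes to the
# electric machine (regeneration) versus the friction brakes. All membership
# functions and rules are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def regen_fraction(brake_demand, battery_soc):
    """brake_demand and battery_soc are both normalised to [0, 1]."""
    soft = tri(brake_demand, -0.5, 0.0, 0.5)       # light braking
    hard = tri(brake_demand, 0.5, 1.0, 1.5)        # heavy braking
    low_soc = tri(battery_soc, -0.5, 0.0, 0.6)     # battery can accept charge
    high_soc = tri(battery_soc, 0.4, 1.0, 1.5)     # battery nearly full

    out = np.linspace(0.0, 1.0, 101)               # candidate regen fractions
    much = tri(out, 0.5, 1.0, 1.5)                 # "regenerate a lot"
    little = tri(out, -0.5, 0.0, 0.5)              # "regenerate a little"

    # Rule 1: light braking AND low SOC  -> regenerate a lot
    # Rule 2: heavy braking OR  high SOC -> regenerate a little
    r1 = np.minimum(min(soft, low_soc), much)
    r2 = np.minimum(max(hard, high_soc), little)
    agg = np.maximum(r1, r2)                       # aggregate rule outputs
    return float((agg * out).sum() / (agg.sum() + 1e-9))  # centroid defuzzification

print(regen_fraction(brake_demand=0.2, battery_soc=0.3))
```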

Information Retrieval in Domain Specific Search Engine with Machine Learning Approaches

As the web continues to grow exponentially, the idea of crawling the entire web on a regular basis becomes less and less feasible, so domain-specific search engines, which cover information on a specific domain, were proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: browsers (clicking and following hyperlinks) and query engines (queries in the form of a set of keywords expressing the topic of interest) [2]. Better support is needed from web search tools for expressing one's information need and returning high-quality search results. There appears to be a need for systems that reason under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views) that are each sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-supervised machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is an information access system that allows access to all the information on the web that is relevant to a particular domain. The proposed work shows that, with the help of this approach, relevant data can be extracted with a minimum number of queries fired by the user. It requires a small number of labeled data and a pool of unlabelled data to which the learning algorithm is applied to extract the required data.
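A minimal pool-based active learning sketch is given below: a classifier is trained on a few labeled pages, the most uncertain unlabeled page is queried, and the process repeats. The synthetic features, the hidden relevance rule and the use of logistic regression are assumptions for illustration, and the multi-view/co-training component is omitted for brevity; this is not the paper's exact algorithm:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pool-based active learning sketch: start from a couple of labeled pages,
# query the label of the most uncertain unlabeled page, retrain, repeat.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # stand-in page features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # hidden "relevant to domain" rule

# seed the labeled set with one example of each class
labeled = [int(np.flatnonzero(y == 0)[0]), int(np.flatnonzero(y == 1)[0])]
pool = [i for i in range(200) if i not in labeled]

clf = LogisticRegression()
for _ in range(20):                                # 20 label queries
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    most_uncertain = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(most_uncertain)                 # "ask the user" for this label
    pool.remove(most_uncertain)

clf.fit(X[labeled], y[labeled])
print("accuracy on the remaining pool:", round(clf.score(X[pool], y[pool]), 2))
```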

DCBOR: A Density Clustering Based on Outlier Removal

Data clustering is an important data exploration technique with many applications in data mining. We present an enhanced version of the well-known single-link clustering algorithm, which we refer to as DCBOR. The proposed algorithm alleviates the chaining effect by removing outliers from the given dataset, so it provides outlier detection and data clustering simultaneously. The algorithm does not need to update the distance matrix, since it merges the k nearest objects in a single step and a cluster continues to grow as long as possible under a specified condition. The algorithm consists of two phases: in the first phase, it removes the outliers from the input dataset; in the second phase, it performs the clustering process. The algorithm discovers clusters of different shapes, sizes and densities and requires only one input parameter, which represents a threshold for outlier points and ranges from 0 to 1. The algorithm supports the user in determining an appropriate value for it. We have tested the algorithm on different datasets containing outliers and clusters connected by chains of density points, and the algorithm discovers the correct clusters. The results of our experiments demonstrate the effectiveness and efficiency of DCBOR.
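A simplified sketch of the two-phase idea follows: points whose k-nearest-neighbour distance is large are flagged as outliers, and the remaining points are clustered by single-link style growth. The neighbourhood size, the merge radius and the toy data are assumptions, not the exact DCBOR procedure:

```python
import numpy as np

# Simplified sketch of a density-based "remove outliers, then cluster"
# scheme in the spirit of DCBOR (not the exact algorithm).

def knn_distance(X, k=4):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]            # distance to the k-th neighbour

def dcbor_like(X, k=4, outlier_frac=0.1):
    kdist = knn_distance(X, k)
    cutoff = np.quantile(kdist, 1 - outlier_frac)
    keep = np.flatnonzero(kdist <= cutoff)     # phase 1: drop sparse points

    eps = np.median(kdist[keep]) * 2           # merge radius for phase 2
    labels = -np.ones(len(X), dtype=int)
    current = 0
    for seed in keep:                          # phase 2: grow clusters
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            p = stack.pop()
            near = keep[np.linalg.norm(X[keep] - X[p], axis=1) <= eps]
            for q in near:
                if labels[q] == -1:
                    labels[q] = current
                    stack.append(q)
        current += 1
    return labels                              # -1 marks removed outliers

rng = np.random.default_rng(0)
blob1 = rng.normal(loc=0.0, scale=0.3, size=(30, 2))
blob2 = rng.normal(loc=5.0, scale=0.3, size=(30, 2))
noise = rng.uniform(-2, 7, size=(5, 2))
labels = dcbor_like(np.vstack([blob1, blob2, noise]))
print("clusters found:", len(set(labels) - {-1}))
```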