Prediction of Location of High Energy Shower Cores using Artificial Neural Networks

Artificial Neural Networks (ANNs) can be modeled for high energy particle analysis, with special emphasis on shower core location. This work describes an ANN based system configured to predict the core locations of showers in the range 10^10.5 to 10^20.5 eV. The system receives density values as inputs and generates the coordinates of shower events recorded for 20 core positions and 80 detectors in a 100 meter area. Twenty ANNs are trained for the purpose, and the positions of shower events are optimized using cooperative ANN learning. With input variations of up to 50%, the results show success rates in the 90% range.
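
As a concrete illustration of the density-to-coordinates mapping described above, the sketch below trains a small multilayer perceptron on synthetic data; the layer size, the toy lateral-density model and all training data are assumptions for illustration, not the authors' twenty-network configuration.

```python
# A minimal sketch (not the authors' exact architecture): an MLP that maps
# detector density readings to shower-core (x, y) coordinates. The detector
# count (80) matches the abstract; everything else is assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_events, n_detectors = 2000, 80

# Synthetic stand-in data: true cores and density readings that decay with
# distance from the core (a toy lateral-distribution model).
cores = rng.uniform(0, 100, size=(n_events, 2))          # (x, y) in metres
detectors = rng.uniform(0, 100, size=(n_detectors, 2))   # fixed detector grid
d = np.linalg.norm(cores[:, None, :] - detectors[None, :, :], axis=2)
densities = 1.0 / (1.0 + d**2)                           # toy density signal

net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000, random_state=0)
net.fit(densities, cores)

pred = net.predict(densities[:5])
print(np.round(pred, 1), "\n", np.round(cores[:5], 1))
```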

The Use of Local Knowledge and its Transfer for Community Self-Protection Development in Flood Prone Residential Area

This paper studies the use of local knowledge to develop community self-protection in a flood-prone residential area; Ayutthaya Island was chosen as the case study. The study examines the strength of local knowledge as a means to develop community self-protection and cope with flood disaster. In particular, the paper focuses on the influence of social networks on knowledge transfer. After conducting the research, the authors reviewed the strengths of local knowledge and also noted the obstacles communities face in using and transferring it. Moreover, the results revealed that local knowledge is not always transferred through the strongest-tie social network (family or kinship), as commonly believed. Surprisingly, local knowledge can also be transferred through weaker-tie social networks (teachers or monks), in some cases more effectively.

Distributed Generator Placement for Loss Reduction and Improvement in Reliability

Distributed power generation has gained a lot of attention in recent times due to the constraints associated with conventional power generation and new advancements in DG technologies. The need to operate the power system economically and with optimum levels of reliability has further increased interest in Distributed Generation. However, it is important to place a Distributed Generator at an optimum location so that the purpose of loss minimization and voltage regulation is duly served on the feeder. This paper investigates the impact of DG unit installation on electric losses, reliability and the voltage profile of distribution networks. Our aim is to find the optimal distributed generation allocation for loss reduction, subject to the constraint of voltage regulation in the distribution network. The system is further analyzed for increased levels of reliability. A Distributed Generator offers the additional advantage of increased reliability, as suggested by the improvements in various reliability indices such as SAIDI, CAIDI and AENS. Comparative studies are performed and related results are addressed. An analytical technique is used to find the optimal location of the Distributed Generator, and the suggested technique is programmed in MATLAB. The results clearly indicate that DG can reduce the electrical line loss while simultaneously improving the reliability of the system.
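
The sketch below illustrates the placement idea on a toy radial feeder: losses are estimated with an approximate power flow for each candidate DG bus, and the lowest-loss bus satisfying a voltage limit is chosen. The feeder data, DG size and 0.95 pu limit are assumptions, and the simple loss formula stands in for the paper's analytical technique.

```python
# A minimal sketch of exhaustive DG placement on a radial feeder (assumed
# data, not the paper's test system).
import numpy as np

# Branch i connects bus i to bus i+1; per-unit data are illustrative.
r = np.array([0.02, 0.02, 0.03, 0.03, 0.04])       # branch resistances (pu)
p_load = np.array([0.0, 0.2, 0.2, 0.3, 0.2, 0.1])  # bus loads (pu), bus 0 = source
dg_size = 0.5                                      # DG active power (pu)

def losses_and_vmin(dg_bus):
    p_inj = p_load.copy()
    p_inj[dg_bus] -= dg_size                       # DG offsets local demand
    # Flow on branch i = net load downstream of bus i.
    flow = np.array([p_inj[i+1:].sum() for i in range(len(r))])
    loss = np.sum(r * flow**2)                     # V ~ 1 pu approximation
    v = 1.0 - np.cumsum(r * flow)                  # approximate voltage profile
    return loss, v.min()

best = min((b for b in range(1, len(p_load))),
           key=lambda b: losses_and_vmin(b)[0]
           if losses_and_vmin(b)[1] >= 0.95 else np.inf)
print("best DG bus:", best, "loss:", round(losses_and_vmin(best)[0], 4))
```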

Closely Parametrical Model for an Electrical Arc Furnace

To maximise furnace production it is necessary to optimise furnace control, with the objectives of achieving maximum power input into the melting process and minimum network distortion and power-off time, without compromising quality and safety. This can be achieved on the one hand by appropriate electrode control and on the other hand by a minimum of AC transformer switching. The electric arc is a stochastic process, which is the principal cause of power quality problems, including voltage dips, harmonic distortion, unbalanced loads and flicker, so it is difficult to build an appropriate model for an Electrical Arc Furnace (EAF). The factors that affect EAF operation are the melting or refining materials, the melting stage, the electrode position (arc length), the electrode arm control and the short circuit power of the feeder. Arc voltage, current and power are therefore defined as nonlinear functions of the arc length. In this article we propose our own empirical function and model of the EAF for the main stages of the melting process, based on measurements taken in the steel factory.
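
As an illustration of such a nonlinear arc characteristic, the sketch below evaluates one hyperbolic V-I form that appears in the EAF literature, with the threshold voltage growing with arc length; the functional form and all coefficients are assumptions, not the authors' empirical function.

```python
# A sketch of one empirical arc V-I characteristic often used in EAF studies
# (not necessarily the authors' own function): arc voltage rises with arc
# length and saturates with current. All coefficients here are illustrative.
import numpy as np

def arc_voltage(i, arc_len_cm, a0=40.0, a1=10.0, b=15000.0, c=5000.0):
    """V(i) = sign(i) * (A + B / (C + |i|)), with A = a0 + a1 * arc length."""
    a = a0 + a1 * arc_len_cm            # threshold voltage grows with length
    return np.sign(i) * (a + b / (c + np.abs(i)))

i = 60e3 * np.sin(2 * np.pi * 50 * np.linspace(0, 0.04, 1000))  # 50 Hz current
v = arc_voltage(i, arc_len_cm=25)
p = v * i                               # instantaneous arc power
print("mean arc power ~", round(p.mean() / 1e6, 2), "MW")
```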

Computational Design of Inhibitory Agents of BMP-Noggin Interaction to Promote Osteogenesis

Bone growth factors such as Bone Morphogenetic Protein-2 (BMP-2) have been approved by the FDA to replace grafting in some surgical interventions, but the high dose requirement limits their use in patients. Noggin, an extracellular protein, blocks the effect of BMP-2 by binding to it. Preventing the BMP-2/noggin interaction will increase the free concentration of BMP-2 and therefore should enhance its efficacy in inducing bone formation. The work presented here involves the computational design of novel small molecule inhibitory agents of the BMP-2/noggin interaction, based on our current understanding of BMP-2 and its known putative ligands (receptors and antagonists). A successful acquisition of such an inhibitory agent would allow clinicians to reduce the dose of BMP-2 protein required in clinical applications to promote osteogenesis. The available crystal structures of the BMPs, their receptors, and the binding partner noggin were analyzed to identify the critical residues involved in their interaction. The LUDI de novo design method was used to perform virtual screening of a large number of compounds from a commercially available library against the binding sites of noggin, to identify lead chemical compounds that could block the BMP-noggin interaction with high specificity.

New EEM/BEM Hybrid Method for Electric Field Calculation in Cable Joints

Power cables are widely used for power supply in power distribution networks and power transmission lines. Due to limitations in producing, delivering and installing power cables, they are produced and delivered in several separate lengths. A cable run therefore consists of two cable terminations and an arbitrary number of cable joints, depending on the cable route length. Electrical stress control is needed to prevent a dielectric breakdown at the end of the insulation shield in both the air and the cable insulation. The reliability of a cable joint depends on its materials, design, installation and operating environment. This paper describes the design and performance results for newly modeled cable joints, whose design concepts must be backed by correct numerical calculations. An Equivalent Electrodes Method/Boundary Elements Method hybrid approach that allows electromagnetic field calculation in multilayer dielectric media, including inhomogeneous regions, is presented.
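
The sketch below illustrates the Equivalent Electrodes Method idea on a toy two-conductor geometry: conductor surfaces are replaced by small equivalent line charges, a potential-coefficient system is solved for the charges, and the field follows by superposition. The geometry, discretization and voltages are assumptions, not the paper's cable-joint model, and no BEM coupling or multilayer dielectric is included.

```python
# A minimal Equivalent Electrodes Method sketch for two parallel cylindrical
# conductors held at +/- 1 kV (illustrative geometry only).
import numpy as np

EPS0 = 8.854e-12
n = 60                                   # equivalent electrodes per conductor
a = 0.01                                 # conductor radius (m)
r_ee = a * np.sin(np.pi / n)             # equivalent-electrode radius

th = np.linspace(0, 2*np.pi, n, endpoint=False)
pts = np.vstack([np.c_[-0.05 + a*np.cos(th), a*np.sin(th)],
                 np.c_[ 0.05 + a*np.cos(th), a*np.sin(th)]])
phi = np.r_[np.full(n, 1e3), np.full(n, -1e3)]   # boundary potentials (V)

# Potential coefficients for 2D line charges: p_ij = -ln(d_ij) / (2*pi*eps0);
# on the diagonal the electrode's own radius is used.
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
np.fill_diagonal(d, r_ee)
P = -np.log(d) / (2*np.pi*EPS0)
q = np.linalg.solve(P, phi)              # line-charge densities (C/m)

# Electric field at a probe point by superposition of the line charges.
probe = np.array([0.0, 0.0])
rv = probe - pts
E = np.sum(q[:, None] * rv / (2*np.pi*EPS0 * (rv**2).sum(1))[:, None], axis=0)
print("E at origin ~", np.round(E / 1e3, 1), "kV/m")
```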

Evaluation of Internet Anxiety in SRBIAU Higher Education Students in Research Process

The increasing use of the internet creates problems, one of which is "internet anxiety". Internet anxiety is a type of anxiety that people may feel while surfing the internet or using it for educational purposes, blogging or streaming from digital libraries. The goal of this study is to evaluate internet anxiety among management students. In this research, Ealy's internet anxiety questionnaire, consisting of positive and negative items, was completed by 310 participants. According to the findings, about 64.7% of them scored at or below the mean anxiety score (50). The distribution of internet anxiety scores was normal, and there was no meaningful difference between men's and women's anxiety levels in this sample. Results also showed no meaningful difference in internet anxiety level between different fields of study in management. This evaluation will help managers to perform a gap analysis between the existing level and the desired one. Future work would involve providing techniques for abating anxiety while using the internet via human-computer interaction techniques.
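
For illustration, the sketch below runs the kind of two-sample comparison reported above on synthetic scores centered on the reported mean of 50; the score distributions and sample sizes are assumptions, not the study's data.

```python
# A minimal sketch of the gender comparison described above (illustrative
# data): an independent two-sample t-test on anxiety scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
men = rng.normal(50, 10, 150)     # assumed score distributions around the
women = rng.normal(50, 10, 160)   # reported mean of 50

t, p = stats.ttest_ind(men, women, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}  ->  no significant difference if p > 0.05")
```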

Expanding Affordable Housing through Inclusionary Zoning in the City of Toronto

Reasonably priced and well-constructed housing is an integral element of a healthy society. The absence of housing everyone in society can afford negatively affects people's health, education, ability to get jobs and capacity to develop their community; without access to decent housing, economic development, the integration of immigrants and inclusiveness all suffer. Canada has a sterling record in creating housing compared to many other nations around the globe. Canadian housing is supported by a mature and responsive mortgage network and a top-quality construction industry, as well as safe, excellent-quality building materials that are readily available. Yet 1.7 million Canadian households occupy substandard homes. During the past hundred years, Canada's government has made a wide variety of attempts to provide decent residential facilities every Canadian can afford. Despite these laudable efforts, Canada is today left with housing that is inadequate for many Canadians. People who own their housing are given all kinds of privileges and perks, while people with relatively low incomes who rent their apartments or houses are discriminated against. To help solve these problems, zoning based on an "inclusionary" philosophy has been developed as a tool to provide people with the affordable residences they need. Now, thirty years after its introduction, this type of zoning has been shown to be effective in helping build and provide Canadians with houses or apartments they can afford, though using it can produce different results depending on where and how it is applied. After examining Canadian affordable housing and four American cases where this type of zoning was enforced, this paper makes various recommendations for expanding Canadians' access to housing they can afford.

Zero Dimensional Simulation of Combustion Process of a DI Diesel Engine Fuelled With Biofuels

A zero dimensional model has been used to investigate the combustion performance of a single cylinder direct injection diesel engine fuelled with biofuels, with options such as supercharging and exhaust gas recirculation. The numerical simulation was performed at constant speed. The indicated pressure and temperature diagrams are plotted and compared for the different fuels, and the emissions of soot and oxides of nitrogen are computed with phenomenological models. Experimental work was also carried out with biodiesel (palm stearin methyl ester)-diesel blends and ethanol-diesel blends to validate the simulation results, and it was observed that the present model successfully predicts engine performance with biofuels.
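
A minimal single-zone sketch of this kind of zero-dimensional model is given below: a Wiebe heat-release law drives a first-law pressure update over the crank angle. The engine geometry, Wiebe constants and heat input are all assumed values, not the test engine's data, and the phenomenological soot and NOx submodels are omitted.

```python
# A minimal single-zone (zero-dimensional) combustion sketch: Wiebe heat
# release + first-law pressure update. All parameters are illustrative.
import numpy as np

gamma, q_total = 1.35, 1800.0            # ratio of specific heats; J released
theta = np.radians(np.linspace(-180, 180, 721))   # crank angle
soc, dur = np.radians(-10), np.radians(50)        # start/duration of burn
a, m = 5.0, 2.0                                   # Wiebe shape constants

# Slider-crank cylinder volume for an assumed geometry.
rc, Vd, R = 17.0, 0.5e-3, 3.5            # comp. ratio, displacement (m^3), l/a
Vc = Vd / (rc - 1)
V = Vc + 0.5*Vd*(R + 1 - np.cos(theta) - np.sqrt(R**2 - np.sin(theta)**2))

# Wiebe mass-fraction burned and heat-release rate dQ/dtheta.
xb = np.where((theta >= soc) & (theta <= soc + dur),
              1 - np.exp(-a * ((theta - soc) / dur)**(m + 1)), 0.0)
xb = np.where(theta > soc + dur, 1.0, xb)
dq = q_total * np.gradient(xb, theta)

# First law: dp/dth = ((gamma-1)/V) dQ/dth - gamma (p/V) dV/dth.
p = np.empty_like(theta); p[0] = 1.0e5   # start of compression at 1 bar
for i in range(len(theta) - 1):
    dth = theta[i+1] - theta[i]
    dV = V[i+1] - V[i]
    p[i+1] = p[i] + ((gamma-1)/V[i]*dq[i] - gamma*p[i]/V[i]*dV/dth) * dth
print("peak pressure ~", round(p.max()/1e5, 1), "bar")
```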

Artificial Neural Network Model for a Low Cost Failure Sensor: Performance Assessment in Pipeline Distribution

This paper describes an automated event detection and location system for water distribution pipelines based upon low-cost sensor technology and signature analysis by an Artificial Neural Network (ANN). A low-cost failure sensor which measures the opacity or cloudiness of the local water flow has been designed, developed and validated, and an ANN based system is then described which uses the time series data produced by the sensors to construct an empirical model for time series prediction and classification of events. These two components have been installed, tested and verified at an experimental site in a UK water distribution system. Verification of the system has been achieved through a series of simulated burst trials which provided real data sets. It is concluded that the system has potential in water distribution network management.
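
The sketch below illustrates the signature-classification component on synthetic opacity traces: windows of sensor readings are fed to a small ANN that flags burst-like events. The trace model, window length and network size are assumptions, not the UK trial data or the paper's architecture.

```python
# A minimal sketch of time-series event classification on synthetic opacity
# traces (illustrative stand-in for the paper's sensor data).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

def make_trace(burst):
    t = rng.normal(0.0, 0.05, 64)          # baseline opacity noise
    if burst:
        t[30:] += np.linspace(0, 1.0, 34)  # step-like turbidity rise
    return t

X = np.array([make_trace(i % 2 == 1) for i in range(400)])
y = np.arange(400) % 2                     # 0 = normal, 1 = burst event

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1500, random_state=0)
clf.fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```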

Effect of Using Stone Cutting Waste on the Compression Strength and Slump Characteristics of Concrete

The aim of this work is to study the possible use of stone cutting sludge waste in concrete production, which would reduce both the environmental impact and the production cost. Slurry sludge, obtained from the Samara factory in Jordan, was used as a source of water in concrete production. Physico-chemical and mineralogical characterization of the sludge was carried out to identify its major components and to compare it with the typical sand used to produce concrete. Sample analysis showed that 96% of the slurry sludge volume is water, so it should be considered an important source of water. Results indicated that using slurry sludge as a water source in concrete production has an insignificant effect on compression strength, while it has a sharp effect on slump values. Using slurry sludge at 25% of the total water content produced successful concrete samples with respect to the slump and compression tests. To clarify the slurry sludge, a settling process can be used to remove the suspended solids; a settling period of 30 min achieved 99% removal efficiency. The clarified water is suitable for use in concrete mixes, which reduces water consumption, conserves water resources, increases profit, reduces operating cost and protects the environment. Additionally, the dry sludge could be used in the mix design in place of fine materials with sizes < 160 µm. This application could conserve natural materials and solve the environmental and economic problems caused by sludge accumulation.
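
A short worked example of the 25% replacement figure is given below; the total mix water is an assumed value, and only the 96% water content and the 25% ratio come from the text.

```python
# A worked example of supplying a quarter of the mix water from slurry
# sludge that is ~96% water by volume (mix quantities are illustrative,
# not the study's mix design).
total_water = 180.0                    # L of water per m^3 of concrete (assumed)
sludge_water_fraction = 0.96           # reported water content of the slurry

water_from_sludge = 0.25 * total_water           # 25% of total mix water
sludge_volume = water_from_sludge / sludge_water_fraction
tap_water = total_water - water_from_sludge

print(f"slurry sludge: {sludge_volume:.1f} L, tap water: {tap_water:.1f} L")
```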

Video-Based Tracking of Laparoscopic Instruments Using an Orthogonal Webcams System

This paper presents a system for tracking the movement of laparoscopic instruments based on an orthogonal system of webcams and video image processing. The movements are captured with two webcams placed orthogonally inside the physical trainer. On the image, the instruments are detected using color markers placed on the distal tip of each instrument, and the 3D position of the tip within the workspace is obtained by the linear triangulation method. Preliminary results showed linearity and repeatability in the motion tracking, with a resolution of 0.616 mm in each axis; the accuracy of the system showed a 3D instrument positioning error of 1.009 ± 0.101 mm. This tool is a portable, low-cost alternative to traditional tracking devices and a reliable method for the objective evaluation of a surgeon's surgical skills.
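
The sketch below shows the linear triangulation step for two idealized orthogonal views: the tip position is recovered as the least-squares solution of the two projection equations. The camera matrices, focal length and marker position are assumptions, not the calibrated trainer setup.

```python
# A minimal sketch of linear (DLT) triangulation from two orthogonal views.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear triangulation: solve A X = 0 built from x = P X constraints."""
    A = np.vstack([uv1[0]*P1[2] - P1[0], uv1[1]*P1[2] - P1[1],
                   uv2[0]*P2[2] - P2[0], uv2[1]*P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

f = 800.0                                    # assumed focal length (pixels)
K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1]])
# Camera 1 looks along +z; camera 2 is rotated 90 degrees to look along +x.
P1 = K @ np.hstack([np.eye(3), [[0], [0], [0.4]]])
R2 = np.array([[0., 0., -1.], [0., 1., 0.], [1., 0., 0.]])
P2 = K @ np.hstack([R2, [[0.], [0.], [0.4]]])

Xtrue = np.array([0.02, -0.01, 0.05, 1.0])   # instrument tip (m, homogeneous)
uv1 = P1 @ Xtrue; uv1 = uv1[:2] / uv1[2]     # marker pixel in each view
uv2 = P2 @ Xtrue; uv2 = uv2[:2] / uv2[2]
print("recovered tip:", np.round(triangulate(P1, P2, uv1, uv2), 4))
```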

Elastic-Plastic Contact Analysis of Single Layer Solid Rough Surface Model using FEM

Evaluation of contact pressure and of surface and subsurface contact stresses is essential to understanding the functional response of surface coatings; the contact behavior depends mainly on surface roughness, material properties, layer thickness and the manner of loading. Contact parameter evaluation of real rough surface contacts mostly relies on statistical single-asperity contact approaches. In this work, a three dimensional layered solid rough surface in contact with a rigid flat is modeled and analyzed using the finite element method. The rough surface of the layered solid is generated by an FFT approach. The generated rough surface is exported to the finite element based ANSYS package, in which bottom-up solid modeling is employed to create a deformable solid model with a layered rough surface on top; the discretization and contact analysis are carried out with the same package. Unlike many other contact models, the elastic, elastoplastic and plastic deformations are continuous in the present finite element method. The Young's modulus to yield strength ratio of the layer is varied in the present work to observe its effect on the contact parameters, while the surface roughness and substrate material properties are kept constant. The contacting asperities attain elastic, elastoplastic and plastic states with continuity, and asperity interaction phenomena are inherently included. The resulting contact parameters show that neighboring asperity interaction and the Young's modulus to yield strength ratio of the layer influence the bulk deformation and consequently affect the interface strength.
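
The sketch below illustrates the FFT approach to rough surface generation: white noise is shaped in the frequency domain by a self-affine power spectrum and transformed back to height data. The grid size, Hurst exponent and target RMS roughness are assumed values, not the paper's surface parameters.

```python
# A minimal sketch of FFT-based rough surface generation (illustrative
# spectrum parameters): white noise filtered with a self-affine PSD.
import numpy as np

rng = np.random.default_rng(3)
n, hurst, rms_target = 256, 0.8, 1.0e-6     # grid size, Hurst exponent, Rq (m)

qx = np.fft.fftfreq(n)[:, None]
qy = np.fft.fftfreq(n)[None, :]
q = np.hypot(qx, qy)
q[0, 0] = np.inf                            # suppress the mean (q = 0) term

# Self-affine spectrum: C(q) ~ q^-(2+2H), so amplitude ~ q^-(1+H).
amp = q ** (-(1.0 + hurst))
spec = amp * np.fft.fft2(rng.standard_normal((n, n)))
z = np.real(np.fft.ifft2(spec))
z *= rms_target / z.std()                   # scale to the desired RMS roughness

print("Rq =", z.std(), "m;  peak-to-valley =", np.ptp(z), "m")
```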

Natural Gas Dehydration Process Simulation and Optimization: A Case Study of Khurmala Field in Iraqi Kurdistan Region

Natural gas is the most popular fossil fuel now and for the foreseeable future. Because natural gas exists in underground reservoirs, it may contain non-hydrocarbon components such as hydrogen sulfide, nitrogen and water vapor. These impurities are undesirable and cause several technical problems, for example corrosion and environmental pollution, so they should be reduced or removed from the natural gas stream. The Khurmala dome is located southwest of Erbil in the Iraqi Kurdistan region, and the Kurdistan regional government has paid great attention to this dome as a fuel source for the region. However, the Khurmala associated natural gas is currently flared at the field, and there is now a plan to recover and trade this gas, using it either as feedstock for a power station or selling it on the global market. Laboratory analysis has shown that the Khurmala sour gas contains large quantities of H2S (about 5.3%) and CO2 (about 4.4%). Sweetening of the Khurmala gas was addressed in a previous study using Aspen HYSYS; however, the sweet gas still contains water, about 23 ppm in the sweet gas stream, which should be removed or reduced because water content in natural gas causes technical problems such as hydrate formation and corrosion. Therefore, this study simulates the prospective Khurmala gas dehydration process using the Aspen HYSYS V7.3 program. The simulated process succeeds in reducing the water content to less than 0.1 ppm. In addition, the simulation work covers process optimization using several desiccant types, for example TEG and DEG, and studies the relationship between the absorbent type and its circulation rate and the HC losses from the glycol regenerator tower.
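
A back-of-the-envelope sketch of the dehydration duty is given below; only the 23 ppm and 0.1 ppm figures come from the text (treated here as mass fractions for illustration), while the gas flow, density and glycol-to-water rule of thumb are assumptions and no substitute for the HYSYS simulation.

```python
# A rough water balance for a glycol dehydration unit (assumed flows; only
# the inlet/outlet water specs come from the abstract).
gas_flow = 100.0e6 * 0.0283 / 24        # assumed 100 MMSCFD, in Sm^3/h
rho_gas = 0.8                           # assumed gas density at std, kg/m^3

y_in, y_out = 23e-6, 0.1e-6             # water content specs (from the text)
water_pickup = gas_flow * rho_gas * (y_in - y_out)   # kg water / h absorbed

glycol_ratio = 25.0                     # L TEG per kg water, a common rule of thumb
teg_rate = glycol_ratio * water_pickup  # L/h of lean TEG circulation

print(f"water removed: {water_pickup:.2f} kg/h, TEG circulation: {teg_rate:.0f} L/h")
```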

High Level Synthesis of Kahn Process Networks (KPN) for Streaming Applications

Streaming applications are usually composed of tasks, running in parallel or in series, that incrementally transform a stream of input data. It is a design challenge to break such an application into distinguishable blocks and then to map them onto independent hardware processing elements. This requires a generic controller that automatically maps such a stream of data onto independent processing elements without any dependencies or manual intervention. In this paper, a Kahn Process Network (KPN) for such streaming applications is designed and developed to be mapped onto an MPSoC. It is designed such that a generic C-based compiler takes the mapping specifications as input from the user, automates these design constraints and automatically generates synthesized, optimized RTL code for the specified application.
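
The sketch below is a minimal executable illustration of a Kahn Process Network (in Python rather than the paper's C-based flow): processes communicate only through FIFO channels with blocking reads, the property that makes KPN behavior deterministic and thus attractive for automated mapping onto processing elements.

```python
# A minimal Kahn Process Network sketch: three processes connected by FIFO
# channels, each incrementally transforming the stream.
import threading, queue

def producer(out_ch):
    for x in range(10):
        out_ch.put(x)                 # non-blocking write to the FIFO
    out_ch.put(None)                  # end-of-stream token

def transform(in_ch, out_ch):
    while (x := in_ch.get()) is not None:   # blocking read
        out_ch.put(x * x)             # incremental stream transformation
    out_ch.put(None)

def consumer(in_ch):
    while (x := in_ch.get()) is not None:
        print("got", x)

c1, c2 = queue.Queue(), queue.Queue()       # FIFO channels
procs = [threading.Thread(target=producer, args=(c1,)),
         threading.Thread(target=transform, args=(c1, c2)),
         threading.Thread(target=consumer, args=(c2,))]
for p in procs: p.start()
for p in procs: p.join()
```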

Radiation Dose Distribution for Workers in South Korean Nuclear Power Plants

A total of 33,680 nuclear power plant (NPP) workers were monitored and their doses recorded from 1990 to 2007. According to the records, the average individual radiation dose decreased continually from 3.20 mSv/man in 1990 to 1.12 mSv/man at the end of 2007. After the International Commission on Radiological Protection (ICRP) 60 recommendation was adopted in South Korea, no nuclear power plant worker received above 20 mSv, and the number of relatively highly exposed workers has been decreasing continuously. The age distribution of radiation workers in nuclear power plants was composed mainly of 20-30-year-olds (83%) for 1990-1994 and 30-40-year-olds (75%) for 2003-2007; the difference in individual average dose by age was not significant. Most (77%) of the NPP radiation exposures from 1990 to 2007 occurred during the refueling period. With regard to exposure type, the majority were external exposures, representing 95% of the total, while internal exposures represented only 5%. The external effective dose was driven mainly by gamma radiation, with an insignificant amount of neutron exposure; as for the internal effective dose, tritium (3H) in the pressurized heavy water reactor (PHWR) was the biggest contributor.

Design and Bandwidth Allocation of Embedded ATM Networks using Genetic Algorithm

In this paper, a genetic algorithm (GA) is proposed as the optimization algorithm for bandwidth allocation in ATM networks. In Broadband ISDN, ATM is a high-bandwidth, fast packet switching and multiplexing technique. Using ATM, the network can be flexibly reconfigured and bandwidth reassigned to meet the requirements of all types of services. By dynamically routing the traffic and adjusting the bandwidth assignment, the average packet delay of the whole network can be reduced to a minimum, and an M/M/1 model can be used to analyze the performance.
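
The sketch below illustrates the approach: each link is modeled as an M/M/1 queue, the network average delay is the standard Kleinrock-style sum, and a small GA searches over capacity allocations. The link traffic, total capacity and GA settings are illustrative assumptions.

```python
# A minimal GA sketch for bandwidth allocation: minimize the M/M/1 network
# average delay T = (1/gamma) * sum(lambda_i / (C_i - lambda_i)).
import numpy as np

rng = np.random.default_rng(4)
lam = np.array([3.0, 5.0, 2.0, 4.0])        # link traffic (packets/s), assumed
C_total = 20.0                              # total capacity to allocate

def avg_delay(c):
    if np.any(c <= lam):                    # unstable queue -> infeasible
        return np.inf
    return np.sum(lam / (c - lam)) / lam.sum()

def random_alloc(n):
    w = rng.random((n, len(lam)))
    return C_total * w / w.sum(axis=1, keepdims=True)

pop = random_alloc(60)
for _ in range(200):                        # evolve: select, recombine, mutate
    fit = np.array([avg_delay(c) for c in pop])
    parents = pop[np.argsort(fit)[:20]]     # truncation selection
    kids = parents[rng.integers(0, 20, 60)] + rng.normal(0, 0.2, (60, len(lam)))
    kids = np.clip(kids, 1e-6, None)
    kids *= C_total / kids.sum(axis=1, keepdims=True)   # renormalize capacity
    pop = kids

best = min(pop, key=avg_delay)
print("allocation:", np.round(best, 2), "delay:", round(avg_delay(best), 4))
```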

Using Mean-Shift Tracking Algorithms for Real-Time Tracking of Moving Images on an Autonomous Vehicle Testbed Platform

This paper describes new computer vision algorithms that have been developed to track moving objects as part of a long-term study into the design of (semi-)autonomous vehicles. We present the results of a study to exploit variable kernels for tracking in video sequences. The basis of our work is the mean shift object-tracking algorithm: for a moving target, it is usual to define a rectangular target window in an initial frame, and then process the data within that window to separate the tracked object from the background by the mean shift segmentation algorithm. Rather than use the standard Epanechnikov kernel, we have used a kernel weighted by the Chamfer distance transform to improve the accuracy of target representation and localization, minimising the distance between the two distributions in RGB color space using the Bhattacharyya coefficient. Experimental results show the improved tracking capability and versatility of the algorithm in comparison with results using the standard kernel. These algorithms are incorporated as part of a robot test-bed architecture which has been used to demonstrate their effectiveness.
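
The sketch below shows the Bhattacharyya similarity at the core of this tracking scheme, computed between two color histograms; the histograms are random stand-ins for the target and candidate RGB distributions, and for simplicity the pixels are uniformly weighted rather than weighted by the Chamfer-distance kernel used in the paper.

```python
# A minimal sketch of the Bhattacharyya coefficient used in mean-shift
# tracking; the tracker moves the candidate window to maximize this value.
import numpy as np

rng = np.random.default_rng(5)

def histogram(pixels, bins=8):
    """Color histogram with uniform pixel weights for simplicity here
    (the paper weights pixels with a Chamfer-distance-based kernel)."""
    h, _ = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    return h.ravel() / h.sum()

def bhattacharyya(p, q):
    return np.sum(np.sqrt(p * q))            # 1.0 means identical distributions

target = histogram(rng.integers(0, 256, (500, 3)))       # stand-in RGB pixels
candidate = histogram(rng.integers(0, 256, (500, 3)))
print("rho(target, candidate) =", round(bhattacharyya(target, candidate), 3))
```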

Statistical Models of Network Traffic

Model-based approaches have been applied successfully to a wide range of tasks such as specification, simulation, testing, and diagnosis. But one bottleneck often prevents the introduction of these ideas: manual modeling is a non-trivial, time-consuming task. Automatically deriving models by observing and analyzing running systems is one possible way to remove this bottleneck, but to derive a model automatically, some a priori knowledge about the model structure, i.e. about the system, must exist. Such a model formalism would be used as follows: (i) by observing the network traffic, a model of the long-term system behavior can be generated automatically; (ii) test vectors can be generated from the model; (iii) while the system is running, the model can be used to diagnose abnormal system behavior. The main contribution of this paper is the introduction of a model formalism called 'probabilistic regression automaton' suitable for the tasks mentioned above.
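
As a generic illustration of deriving a behavioral model from observed traffic, the sketch below estimates transition probabilities from event-pair frequencies and scores new traces against them; this is a deliberately simplified stand-in, not the paper's probabilistic regression automaton formalism.

```python
# A simplified probabilistic automaton learned from observed event traces:
# transition probabilities come from pair frequencies, and a zero likelihood
# flags behavior never seen during observation (a diagnosis signal).
from collections import Counter, defaultdict

traces = [["SYN", "SYNACK", "ACK", "DATA", "FIN"],
          ["SYN", "SYNACK", "ACK", "DATA", "DATA", "FIN"]]

counts = defaultdict(Counter)
for tr in traces:
    for a, b in zip(tr, tr[1:]):
        counts[a][b] += 1                    # observed transition frequencies

prob = {a: {b: n / sum(c.values()) for b, n in c.items()}
        for a, c in counts.items()}

def trace_likelihood(tr):
    p = 1.0
    for a, b in zip(tr, tr[1:]):
        p *= prob.get(a, {}).get(b, 0.0)
    return p

print(trace_likelihood(["SYN", "SYNACK", "ACK", "FIN"]))   # unseen -> 0.0
```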

Sorting Primitives and Genome Rearrangement in Bioinformatics: A Unified Perspective

Bioinformatics and computational biology involve the use of techniques from applied mathematics, informatics, statistics, computer science, artificial intelligence, chemistry, and biochemistry to solve biological problems, usually at the molecular level. Research in computational biology often overlaps with systems biology. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, and the modeling of evolution. Various global rearrangements of permutations, such as reversals and transpositions, have recently become of interest because of their applications in computational molecular biology. A reversal is an operation that reverses the order of a substring of a permutation; a transposition is an operation that swaps two adjacent substrings of a permutation. The problem of determining the smallest number of reversals required to transform a given permutation into the identity permutation is called sorting by reversals, and similar problems can be defined for transpositions and other global rearrangements. In this work we study several genome rearrangement primitives. We show how a genome is modelled by a permutation, introduce some of the existing primitives together with lower and upper bounds on them, and then provide a comparison of the introduced primitives.
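
The sketch below makes the sorting-by-reversals primitive concrete: a breadth-first search finds the minimum number of reversals for a small permutation, and the classic breakpoint count gives a lower bound (each reversal removes at most two breakpoints). This is illustrative only; it does not scale, and the unsigned problem is NP-hard in general.

```python
# A small sketch of sorting by reversals: exact minimum via BFS over
# permutations (feasible only for tiny n) plus the breakpoint lower bound.
from collections import deque

def reversal_distance(p):
    target = tuple(sorted(p))
    start, dist = tuple(p), {tuple(p): 0}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == target:
            return dist[cur]
        for i in range(len(cur)):
            for j in range(i + 1, len(cur)):
                nxt = cur[:i] + cur[i:j+1][::-1] + cur[j+1:]   # apply reversal
                if nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    q.append(nxt)

def breakpoints(p):
    """b(p)/2 is a classic lower bound on the reversal distance."""
    ext = [0] + list(p) + [len(p) + 1]
    return sum(abs(a - b) != 1 for a, b in zip(ext, ext[1:]))

p = [3, 1, 4, 2]
print("distance:", reversal_distance(p), " lower bound:", breakpoints(p) / 2)
```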