Assessing the Effects of Explosion Waves on Office and Residential Buildings

Explosions can inflict severe damage on buildings and sometimes lead to total or progressive collapse. Blast-induced pressures are among the most destructive loads a structure may experience. Since designing structures to withstand large explosions is often expensive and impractical, engineers seek methods of limiting the damage that explosions cause. A favorable structural system is one that does not collapse entirely under a local explosion; such structures sustain far less loss than those that carry the full load and then fail suddenly. Designing vital installations to resist a direct bomb or rocket hit is in many cases neither practical, economical, nor expedient, because the cost of construction to such specifications is several times the total cost of the equipment being protected.

Improvement of the Reliability of the Industrial Electric Networks

Continuity of supply to electrical installations is becoming one of the main requirements of the electric network (generation, transmission, and distribution of electric energy). Meeting this requirement depends on the one hand on the structure of the network and on the other hand on the availability of a reserve source provided to maintain supply if the principal source fails. The availability of supply depends not only on the reliability parameters of the two sources (principal and reserve) but also on the reliability of the circuit breaker that interlocks the reserve source when the principal one fails. Moreover, while the principal source is in operation its condition can be monitored reliably, whereas the reserve source, being on standby, must undergo preventive maintenance at defined intervals (periodicity) and for well-defined durations so that it is always available in case the principal source fails. The chosen periodicity of preventive maintenance of the reserve source directly influences the reliability of the electric feeder system. In this work, on the basis of semi-Markov processes, the influence of the interlocking time of the reserve source on the reliability of an industrial electric network is studied, and the optimal interlocking time in case of failure of the principal source is given; the influence of the periodicity of preventive maintenance of the reserve source is also studied, and the optimal periodicity is given.
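To make the reliability framework concrete, the following minimal Python sketch computes the steady-state availability of a principal/reserve supply pair modeled as a three-state Markov chain. This is a simplification of the semi-Markov model used in the study (it assumes exponential rates and instantaneous switchover), and all numerical rates are invented for illustration.

```python
def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for a small generator matrix Q."""
    n = len(Q)
    # Transpose Q so we solve A x = b, and replace the last balance
    # equation with the normalization condition sum(pi) = 1.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    A[-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / A[r][r]
    return x

# States: 0 = principal supplies the load, reserve on standby;
#         1 = principal failed, reserve supplies the load;
#         2 = both sources failed (load interrupted).
# Failure/repair rates per hour are assumed values, not the paper's data.
lam_p, lam_r, mu = 1e-3, 2e-3, 0.1
Q = [
    [-lam_p, lam_p,         0.0],
    [mu,     -(mu + lam_r), lam_r],
    [0.0,    mu,            -mu],
]
pi = steady_state(Q)
availability = pi[0] + pi[1]   # the load is supplied in states 0 and 1
```

Extending the chain with an imperfect interlocking breaker or a maintenance-interval state would reproduce, in miniature, the trade-offs the study optimizes.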

Optimal Placement of FACTS Devices by Genetic Algorithm for Increased Loadability of a Power System

This paper presents a Genetic Algorithm (GA) based approach to the allocation of FACTS (Flexible AC Transmission System) devices for improving power transfer capacity in an interconnected power system. The approach is applied to the IEEE 30-bus system, which is reactively loaded from the base case up to 200% of base load. FACTS devices are installed at different locations in the power system, and system performance is observed with and without them. First, the locations where the FACTS devices are to be placed are determined by calculating the active and reactive power flows in the lines. A genetic algorithm is then applied to find the ratings of the FACTS devices. The results clearly show that GA-based placement of FACTS devices is highly beneficial in terms of both performance and economy.
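As a rough illustration of the optimization loop (not the paper's implementation, which evaluates load flows on the IEEE 30-bus system), the sketch below runs a small genetic algorithm with tournament selection, blend crossover, and Gaussian mutation over two hypothetical device ratings, minimizing a stand-in quadratic objective in place of a real power-flow-based fitness.

```python
import random

random.seed(0)

def loss_proxy(x):
    # Assumed stand-in objective with optimum at ratings (40, 25) MVAr;
    # a real study would run a load flow here and penalize losses/limits.
    return (x[0] - 40.0) ** 2 + (x[1] - 25.0) ** 2

def ga(pop_size=30, gens=60, bounds=(0.0, 100.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            # Binary tournament selection.
            a, b = random.sample(pop, 2)
            return a if loss_proxy(a) < loss_proxy(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            alpha = random.random()                    # blend crossover
            child = [alpha * u + (1 - alpha) * v for u, v in zip(p1, p2)]
            if random.random() < 0.2:                  # Gaussian mutation
                i = random.randrange(2)
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 5)))
            nxt.append(child)
        pop = nxt
    return min(pop, key=loss_proxy)

best = ga()   # best pair of device ratings found
```

Swapping `loss_proxy` for an actual network evaluation is the only structural change needed to turn this toy into the kind of placement/sizing search the abstract describes.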

Dynamic Analyses for Passenger Volume of Domestic Airline and High Speed Rail

The discrete choice model is the most widely used methodology for studying travelers' mode choice and demand. However, calibrating a discrete choice model requires extensive questionnaire surveys. In this study, an aggregate model is proposed instead. Historical passenger-volume data for high speed rail and domestic civil aviation are employed to calibrate and validate the model, and different model forms are compared in order to propose the best one. The results show that systems of equations forecast better than a single equation does, and that models including an external variable, oil price, outperform models based on a closed-system assumption.
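The single-equation variant with an external variable can be illustrated by a closed-form simple least-squares fit. The oil-price index and passenger-volume figures below are invented for demonstration, not the study's data.

```python
def ols(x, y):
    """Closed-form simple linear regression: y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Assumed toy series: oil price index vs. monthly HSR volume (millions).
oil  = [80, 90, 100, 110, 120, 130]
rail = [2.0, 2.2, 2.5, 2.6, 2.9, 3.1]
a, b = ols(oil, rail)
forecast = a + b * 125   # aggregate forecast at an oil price index of 125
```

A systems-of-equations version would fit rail and air volumes jointly (e.g. with a shared market-size constraint), which is what the abstract reports as forecasting better.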

Low Energy Method for Data Delivery in Ubiquitous Network

Recent advances in wireless sensor networks have produced many routing methods designed for energy efficiency. Nevertheless, no single routing method remains energy-efficient when the environment of the ubiquitous sensor network (USN) varies. Rather than securing hosts one by one, our network security model focuses on controlling network access to the various hosts and the services they offer. When ubiquitous sensor networks are deployed in hostile environments, an adversary may compromise some sensor nodes and use them to inject false sensing reports. False reports lead not only to false alarms but also to the depletion of the limited energy of battery-powered networks; the interleaved hop-by-hop authentication scheme detects such false reports through interleaved authentication. This paper presents LMDD (low energy method for data delivery), an algorithm that achieves energy efficiency by dynamically changing the protocols installed at the sensor nodes. The algorithm switches protocols based on the output of a fuzzy logic system, namely the fitness of each protocol for the current environment.
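A hypothetical sketch of the kind of fuzzy fitness evaluation the abstract describes is shown below: observed network conditions are mapped through membership functions to a fitness score per protocol, and the node switches to the best-scoring one. The membership functions, protocol names, and rules are illustrative assumptions, not LMDD's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fitness(protocol, density, residual_energy):
    # Toy rule base: flat routing suits sparse networks, clustering suits
    # dense ones; both degrade as residual energy drops.
    sparse = tri(density, -1, 0, 50)      # degree of "sparse network"
    dense = tri(density, 20, 100, 181)    # degree of "dense network"
    energy = min(1.0, residual_energy)    # residual energy as fraction
    if protocol == "flooding":
        return sparse * energy
    if protocol == "clustered":
        return dense * energy
    raise ValueError(protocol)

def select_protocol(density, residual_energy):
    """Pick the protocol whose fuzzy fitness is highest for the environment."""
    return max(["flooding", "clustered"],
               key=lambda p: fitness(p, density, residual_energy))
```

For example, a node seeing 10 neighbors would keep a flat flooding-style protocol, while one seeing 90 would switch to a clustered one, mirroring the dynamic protocol change LMDD performs.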

Low Cost Chip Set Selection Algorithm for Multi-way Partitioning of Digital System

This paper considers the problem of finding a low-cost chip set for a minimum-cost partitioning of a large logic circuit. Chip sets are selected from a given library, in which each chip has a different price, area, and I/O pin count. We propose a low-cost chip set selection algorithm whose inputs are a netlist and the chip information in the library, and whose output is a list of chip sets that satisfy the area and maximum-partition-number constraints, sorted by cost from minimum to maximum. We used the MCNC benchmark circuits for our experiments, and the results show that all of the chip sets found satisfy the multi-way partitioning constraints.
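The enumeration-and-sort step can be illustrated as follows: from a small assumed chip library, list every chip set whose total area covers the design and whose partition count stays within the limit, sorted by total cost. The library entries and constraints are invented; the real algorithm would additionally check I/O pin feasibility against the netlist cut sizes.

```python
from itertools import combinations_with_replacement

# Assumed library entries: (name, price, area, io_pins).
library = [("A", 10, 40, 64), ("B", 25, 120, 128), ("C", 60, 300, 256)]
design_area, max_parts = 200, 4   # assumed design size and partition limit

candidates = []
for k in range(1, max_parts + 1):
    for combo in combinations_with_replacement(library, k):
        # Keep chip sets whose combined area can hold the design.
        if sum(chip[2] for chip in combo) >= design_area:
            cost = sum(chip[1] for chip in combo)
            candidates.append((cost, [chip[0] for chip in combo]))
candidates.sort()   # list ordered from minimum to maximum cost
```

With these toy numbers the cheapest feasible set is two A chips plus one B chip, ahead of the single large chip C.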

Gas Flow Rate Identification in Biomass Power Plants by Response Surface Method

The use of renewable energy sources is becoming more crucial, and the widening application of renewable energy devices at the domestic, commercial, and industrial levels reflects not only stronger awareness but also significant installed capacity. Biomass, principally in the form of wood, has long been converted into energy for human use. Gasification is the conversion of a solid carbonaceous fuel into combustible gas by partial combustion. Gasifier models operate under various conditions because the parameters held constant differ from model to model. This study used experimental data with three input variables (biomass consumption, temperature at the combustion zone, and ash discharge rate) and a single output variable, gas flow rate. Response surface methods were applied to identify the gasifier system equation that best fits the experimental data, and the results showed that the linear model gave the best fit.
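The first-order response surface the study selects amounts to fitting gas_flow = b0 + b1*biomass + b2*temperature + b3*ash_rate by ordinary least squares via the normal equations. The data points below are invented for illustration, not the plant's measurements.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_linear(X, y):
    """Least-squares fit of y = b0 + b.x via the normal equations."""
    Xa = [[1.0] + row for row in X]    # prepend intercept column
    p = len(Xa[0])
    XtX = [[sum(r[i] * r[j] for r in Xa) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(p)]
    return solve(XtX, Xty)

# Assumed runs: biomass (kg/h), combustion-zone temp (deg C), ash rate (kg/h).
X = [[10, 700, 1.0], [12, 750, 1.1], [14, 800, 1.2],
     [16, 820, 1.4], [18, 860, 1.5], [20, 900, 1.6]]
y = [30, 34, 38, 42, 46, 50]           # gas flow rate (assumed units)
beta = fit_linear(X, y)
pred = beta[0] + sum(b * v for b, v in zip(beta[1:], X[0]))
```

A quadratic response surface would simply append squared and cross-product columns to `X` before the same fit.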

Investigations of Free-to-Roll Motions and Their Active Control under Pitch-up Maneuvers

Experiments have been carried out at sub-critical Reynolds number to investigate free-to-roll motions induced by forebody and/or wing complex flow on a 30° swept-back nonslender wing and slender body model for static and dynamic (pitch-up) cases. For the dynamic (pitch-up) case, roll amplitude was observed to decrease, and lag to increase, with increasing pitching speed. The decrease in roll amplitude with increasing pitch rate is attributed to a smaller disturbing rolling moment arising from weaker interaction between the forebody and wing flow components. Asymmetric forebody vortices dominate and control the roll motion of the model in the dynamic case when the non-dimensional pitch rate is ≥ 1×10⁻². The effectiveness of the active control scheme, which uses a rotating nose with an artificial tip perturbation, is observed to be low in the angle-of-attack region where the complex flow over the wings has contributions from both the forebody and the wings.

The Effects of Processing and Preservation on the Sensory Qualities of Prickly Pear Juice

Prickly pear juice has received renewed attention with regard to the effects of processing and preservation on its sensory qualities (colour, taste, flavour, aroma, astringency, visual browning, and overall acceptability). Juice was prepared by homogenizing fruit and treating the pulp with pectinase (Aspergillus niger). The juice treatments applied were sugar addition, acidification, heat treatment, refrigeration, and freezing and thawing. Prickly pear pulp and juice had unique properties (low pH of 3.88, soluble solids of 3.68 °Brix, and high titratable acidity of 0.47). Sensory profiling and descriptive analyses revealed that non-treated juice had a bitter taste with high astringency, whereas treated prickly pear juice was significantly sweeter. All treated juices had good sensory acceptance, with values approximating or exceeding 7. Regression analysis of the consumer sensory attributes indicated an overwhelming rejection of non-treated prickly pear juice, while treated prickly pear juice achieved overall acceptability. Thus, the treatments educed favourable sensory responses and may have positive implications for consumer acceptability.

Modeling the Fischer-Tropsch Reaction in a Slurry Bubble Column Reactor

Fischer-Tropsch synthesis is one of the most important catalytic reactions for converting synthesis gas into light and heavy hydrocarbons. One of the main issues is the selection of the reactor type. The slurry bubble column reactor is a suitable choice for Fischer-Tropsch synthesis because of its good heat and mass transfer, high catalyst durability, and low maintenance and repair costs. The most common catalysts for Fischer-Tropsch synthesis are iron-based and cobalt-based; the advantage of one over the other depends on which type of hydrocarbons is to be produced. In this study, Fischer-Tropsch synthesis with iron and cobalt catalysts is modeled in a slurry bubble column reactor, considering mass and momentum balances and the effect of the hydrodynamic relations on reactor behavior. Profiles of reactant conversion and of reactant concentration in the gas and liquid phases were determined as functions of residence time in the reactor. The effects of temperature, pressure, liquid velocity, reactor diameter, catalyst diameter, gas-liquid and liquid-solid mass transfer coefficients, and kinetic coefficients on reactant conversion were studied. With a 5% increase in liquid velocity (iron catalyst), H2 conversion increases by about 6% and CO conversion by about 4%; with an 8% increase in liquid velocity (cobalt catalyst), H2 conversion increases by about 26% and CO conversion by about 4%. With a 20% increase in the gas-liquid mass transfer coefficient, H2 conversion increases by about 12% and CO conversion by about 10% with the iron catalyst, and H2 conversion increases by about 10% and CO conversion by about 6% with the cobalt catalyst. The results show that the process is sensitive to the gas-liquid mass transfer coefficient and that the optimum operating condition occurs at the maximum possible liquid velocity. This velocity must be greater than the minimum fluidization velocity and less than the terminal velocity, so that catalyst particles are kept from leaving the fluidized bed.
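The reported sensitivity to the gas-liquid mass transfer coefficient can be illustrated with a deliberately simplified model: an idealized first-order plug-flow conversion in which gas-liquid transfer (kLa) and intrinsic kinetics act as resistances in series. All parameter values are assumed for illustration and are far cruder than the study's coupled mass/momentum balances.

```python
import math

def conversion(k_la, k_rxn, tau):
    """Toy syngas conversion: two first-order resistances in series,
    then 1 - exp(-k*tau) for plug flow over residence time tau."""
    k_ov = 1.0 / (1.0 / k_la + 1.0 / k_rxn)
    return 1.0 - math.exp(-k_ov * tau)

# Assumed values: rate constants in 1/s, residence time in s.
base = conversion(k_la=0.05, k_rxn=0.2, tau=60.0)
more = conversion(k_la=0.06, k_rxn=0.2, tau=60.0)   # kLa raised by 20%
gain = more - base   # conversion gain from better gas-liquid transfer
```

Even this crude series-resistance picture reproduces the qualitative result: when kLa is of the same order as the kinetic constant, improving it yields a visible conversion gain.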

C@sa: Intelligent Home Control and Simulation

In this paper we present C@sa, a multiagent system for modeling, controlling, and simulating the behavior of an intelligent house. The system aims to provide architects, designers, and psychologists with a simulation and control tool for understanding the impact of embedded and pervasive technology on people's daily life. In this vision, the house is seen as an environment made up of independent and distributed devices, controlled by agents, that interact to support the user's goals and tasks.

Intellectual Capital and Competitive Advantage: An Analysis of the Biotechnology Industry

Intellectual capital measurement is a central aspect of knowledge management. The measurement and evaluation of intangible assets play a key role in managing these assets effectively as sources of competitiveness. For these reasons, managers and practitioners need conceptual and analytical tools that take into account the unique characteristics and economic significance of intellectual capital. Following this lead, we propose an efficiency and productivity analysis of intellectual capital as a determinant of company competitive advantage. The analysis is carried out by means of Data Envelopment Analysis (DEA) and the Malmquist Productivity Index (MPI). These techniques identify best-practice companies that have achieved competitive advantage through successful intellectual capital management strategies, and offer inefficient companies development paths through benchmarking. The proposed methodology is applied to the biotechnology industry over the period 2007-2010.
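A deliberately simplified, single-input/single-output illustration of the logic follows: efficiency as output per input relative to the period's best practice, and a Malmquist-style index as the ratio of a firm's relative efficiency across two periods. Full DEA solves a linear program over multiple inputs and outputs; the firm names and figures below are invented.

```python
def relative_efficiency(firms):
    """firms: {name: (intellectual_capital_input, value_added_output)}.
    Returns each firm's output/input ratio scaled by the period's best."""
    ratios = {n: out / inp for n, (inp, out) in firms.items()}
    best = max(ratios.values())
    return {n: r / best for n, r in ratios.items()}

# Assumed two-period panel of three hypothetical biotech firms.
t0 = {"BioA": (10, 30), "BioB": (8, 32), "BioC": (12, 24)}
t1 = {"BioA": (10, 36), "BioB": (8, 30), "BioC": (12, 30)}
e0, e1 = relative_efficiency(t0), relative_efficiency(t1)
# Malmquist-style catch-up term: > 1 means the firm moved toward the frontier.
malmquist = {n: e1[n] / e0[n] for n in t0}
```

Here BioB defines the frontier in both periods, while BioA and BioC show catch-up indices above one, the benchmarking signal the abstract describes.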

Enhancements in Blended e-Learning Management System

A learning management system (commonly abbreviated as LMS) is a software application for the administration, documentation, tracking, and reporting of training programs, classroom and online events, e-learning programs, and training content (Ellis 2009). Hall (2003) defines an LMS as "software that automates the administration of training events. All learning management systems manage the log-in of registered users, manage course catalogs, record data from learners, and provide reports to management". Evidence of the worldwide spread of e-learning in recent years is easy to obtain. In April 2003, no fewer than 66,000 fully online courses and 1,200 complete online programs were listed on the TeleCampus portal from TeleEducation (Paulsen 2003). According to the report "The US Market in Self-paced eLearning Products and Services: 2010-2015 Forecast and Analysis", the number of students taking classes exclusively online will be nearly equal (1% less) to the number taking classes exclusively on physical campuses, and the number of students taking online courses in the USA will increase from 1.37 million in 2010 to 3.86 million in 2015. In another report, by The Sloan Consortium, three-quarters of institutions report that the economic downturn has increased demand for online courses and programs.

Performance and Economic Evaluation of a Hybrid Photovoltaic/Thermal Solar System in Northern China

A hybrid Photovoltaic/Thermal (PV/T) solar system integrates photovoltaic and solar thermal technologies into one single solar energy device, with dual generation of electricity and heat energy. The aim of the present study is to evaluate the potential for introduction of the PV/T technology into Northern China. For this purpose, outdoor experiments were conducted on a prototype of a PV/T water-heating system. The annual thermal and electrical performances were investigated under the climatic conditions of Beijing. An economic analysis of the system was then carried out, followed by a sensitivity study. The analysis revealed that the hybrid system is not economically attractive with the current market and energy prices. However, considering the continuous commitment of the Chinese government towards policy development in the renewable energy sector, and technological improvements like the increasing cost-effectiveness of PV cells, PV/Thermal technology may become economically viable in the near future.

Regular Data Broadcasting Plan with Grouping in Wireless Mobile Environment

The broadcast problem, including the design of the broadcast plan, is considered. The data are inserted and numbered in a predefined order into relations of customized size. The server's ability to create a full, regular broadcast plan (RBP) with single and multiple channels after some data transformations is examined. The Regular Geometric Algorithm (RGA) prepares an RBP and enables users to catch their items while avoiding energy waste on their devices. Moreover, the Grouping Dimensioning Algorithm (GDA), based on integrated relations, can guarantee service discrimination with a minimum number of channels. This last property, along with self-monitoring and self-organization, can be offered by today's servers, providing channel availability and lower energy consumption through the use of fewer channels. Simulation results are provided.

Roundabout Optimal Entry and Circulating Flow Induced by Road Hump

Roundabouts work on the principle of circulating and entry flows, where the maximum entry flow rate depends largely on the circulating flow, bearing in mind that entering vehicles must give way to circulating ones. Where an existing roundabout has a road hump installed on an entry arm, it can be hypothesized that the kinematics of vehicles may prevent the entry arm from achieving optimum performance. Road humps are traffic-calming devices placed across the road width solely as a speed reduction mechanism. They are the preferred traffic-calming option in Malaysia and are often used on single and dual carriageway local routes, where the speed limit is 30 mph (50 km/h). According to the UK Department for Transport, road humps in their various forms achieve the largest mean speed reduction, up to 10 mph (16 km/h) from a mean speed of 30 mph before traffic calming. The underlying aim of speed reduction should be to achieve a 'safe' distribution of speeds that reflects the function of the road and its impacts on the local community. Constraining that distribution may lead to poor driver timing and delayed reflex reactions that can cause accidents. Previous studies of road hump impact have focused mainly on speed reduction, traffic volume, noise and vibration, discomfort, and delay. This paper examines the optimal entry and circulating flows induced by road humps. Results show that roundabout entry and circulating flows perform better where there is no road hump at the entrance.
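The entry/circulating relationship described above is commonly approximated by a linear form, Qe = F - fc * Qc (flows in pcu/h), where F is the entry capacity at zero circulating flow and fc its sensitivity to circulating traffic. The sketch below uses assumed geometry constants, with a hypothetically reduced F standing in for the degraded entry performance behind a road hump; it is an illustration of the relationship, not the paper's calibrated model.

```python
def entry_capacity(Qc, F=1200.0, fc=0.6):
    """Linear roundabout entry capacity (pcu/h) vs. circulating flow Qc.
    F and fc are assumed geometry-dependent constants."""
    return max(0.0, F - fc * Qc)

# Same circulating flow, with and without a hump on the entry arm.
no_hump = entry_capacity(Qc=500)               # baseline entry capacity
with_hump = entry_capacity(Qc=500, F=1000.0)   # assumed hump-reduced intercept
```

The `max(0.0, ...)` clamp reflects that entry capacity cannot go negative at very high circulating flows.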

Evolution of Quality Function Deployment (QFD) via Fuzzy Concepts and Neural Networks

Quality Function Deployment (QFD) is a detailed, multi-step planning method for delivering commodities, services, and processes to customers, both external and internal to an organization. It is a way to translate between the diverse customer languages expressing demands (the Voice of the Customer) and the organization's languages expressing results that satisfy those demands. The approach is to establish one or more matrices that inter-relate producer and consumer expectations; because of its visual form, this structure is called the "House of Quality" (HOQ). In this paper we cast the HOQ as a multi-attribute decision making (MADM) problem and, through a proposed MADM method, rank the technical specifications. We then compute the satisfaction degree of the customer requirements, applying fuzzy set theory to handle vagueness and uncertainty in the decision making. The approach further proposes a supervised neural network (perceptron) for solving the MADM problem.
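A minimal MADM-style sketch of ranking HOQ technical specifications follows: weight each customer requirement, score each specification's relationship strength on the usual 1/3/9 scale, and rank by weighted sum. The requirement names, weights, and scores are invented for illustration; the paper's method additionally fuzzifies these inputs.

```python
# Assumed customer-requirement weights (sum to 1).
weights = {"durability": 0.5, "price": 0.3, "ease_of_use": 0.2}

# Assumed HOQ relationship matrix: spec -> 1/3/9 strength per requirement.
relationship = {
    "material_grade": {"durability": 9, "price": 3, "ease_of_use": 1},
    "assembly_steps": {"durability": 1, "price": 3, "ease_of_use": 9},
}

# Weighted-sum score per technical specification.
score = {spec: sum(weights[r] * s for r, s in rel.items())
         for spec, rel in relationship.items()}
ranking = sorted(score, key=score.get, reverse=True)
```

Replacing the crisp weights and strengths with fuzzy numbers, and the weighted sum with a fuzzy aggregation, yields the uncertainty-aware variant the abstract proposes.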

Neural Network Based Determination of Splice Junctions by ROC Analysis

A gene, the principal unit of inheritance, is an ordered sequence of nucleotides. The genes of eukaryotic organisms include alternating segments of exons and introns. The region of deoxyribonucleic acid (DNA) within a gene containing instructions for coding a protein is called an exon, while the non-coding regions, called introns, regulate gene expression and are removed from the messenger ribonucleic acid (mRNA) in a splicing process. This paper proposes to determine splice junctions, the exon-intron boundaries, by analyzing DNA sequences. A splice junction can be either exon-intron (EI) or intron-exon (IE). Because of the popularity and suitability of artificial neural networks (ANNs) in genetic applications, various ANN models are applied in this research: multi-layer perceptron (MLP), radial basis function (RBF), and generalized regression neural networks (GRNN) are used to analyze and detect the splice junctions of gene sequences. Ten-fold cross validation is used to demonstrate the accuracy of the networks, and their real performance is assessed by Receiver Operating Characteristic (ROC) analysis.
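The evaluation step can be made concrete: the area under the ROC curve (AUC) can be computed directly from classifier scores via the rank (Mann-Whitney) formulation, counting how often a positive example outscores a negative one. The scores and labels below are toy values, not the networks' real outputs.

```python
def roc_auc(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative, with ties counted as half (Mann-Whitney formulation)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = splice junction, 0 = non-junction, with network scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(labels, scores)
```

An AUC of 1.0 would mean every junction outscores every non-junction; here one inversion (0.4 vs. 0.5) leaves the AUC at 8/9.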

A Study of Panel Logit Model and Adaptive Neuro-Fuzzy Inference System in the Prediction of Financial Distress Periods

The purpose of this paper is to present two different approaches to financial distress pre-warning models appropriate for risk supervisors, investors, and policy makers. We examine a sample of the financial institutions and electronic companies of the Taiwan Security Exchange (TSE) market from 2002 through 2008. We present a binary logistic regression with panel data analysis. With the pooled binary logistic regression we can include more variables in the regression than with random effects, while the in-sample and out-of-sample forecasting performance is higher with random effects estimation than with pooled regression. We also estimate an Adaptive Neuro-Fuzzy Inference System (ANFIS) with Gaussian and generalized bell (Gbell) membership functions, and find that ANFIS significantly outperforms the logit regressions in both in-sample and out-of-sample periods, indicating that ANFIS is a more appropriate tool for financial risk managers and for economic policy makers in central banks and national statistical services.
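The pooled binary logit component can be sketched in a few lines: distress probability p = sigmoid(b0 + b1*x) fitted by gradient ascent on the log-likelihood. The single-covariate toy dataset is invented (x could stand for, say, a leverage ratio); the paper's models use many covariates and exploit the panel structure.

```python
import math

def fit_logit(x, y, lr=0.1, steps=5000):
    """Fit p(distress) = sigmoid(b0 + b1*x) by gradient ascent
    on the Bernoulli log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p            # gradient w.r.t. intercept
            g1 += (yi - p) * xi     # gradient w.r.t. slope
        b0 += lr * g0 / len(x)
        b1 += lr * g1 / len(x)
    return b0, b1

# Assumed toy panel slice: leverage ratio vs. distress indicator.
x = [0.2, 0.4, 0.5, 0.6, 0.8, 0.9]
y = [0, 0, 0, 1, 1, 1]
b0, b1 = fit_logit(x, y)
p_high = 1.0 / (1.0 + math.exp(-(b0 + b1 * 0.85)))   # firm with high leverage
```

Random-effects estimation would add a firm-specific intercept term; ANFIS replaces the linear index entirely with fuzzy membership functions tuned by the same kind of gradient training.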

Production of Carbon Nanotubes by Iron Catalyst

Carbon nanotubes (CNTs), with their outstanding mechanical, electrical, thermal, and chemical properties, are regarded as promising materials for many potential applications. Owing to these unique properties, they can be used in a wide range of fields such as electronic devices, electrodes, drug delivery systems, hydrogen storage, and textiles. Catalytic chemical vapor deposition (CCVD) is a common method for CNT production, especially mass production, and catalysts impregnated on a suitable substrate are important for production by the chemical vapor deposition (CVD) method. An iron catalyst on an MgO substrate is one of the most common catalyst-substrate combinations used for CNT growth. In this study, CNTs were produced by CCVD of acetylene (C2H2) on a magnesium oxide (MgO) powder substrate impregnated with an iron nitrate (Fe(NO3)3·9H2O) solution. At synthesis temperatures of 500 and 800°C, multiwall and single-wall CNTs were produced, respectively. Iron (Fe) catalysts were prepared with Fe:MgO ratios of 1:100, 5:100, and 10:100, and the synthesis durations were 30 and 60 minutes for all temperatures and catalyst loadings. The synthesized materials were characterized by thermogravimetric analysis (TGA), transmission electron microscopy (TEM), and Raman spectroscopy.