Abstract: Network-Centric Air Defense Missile Systems
(NCADMS) represent an advanced development of air defense
missile systems and have become one of the major research
issues in the military domain. Because knowledge of and
experience with NCADMS are still limited, modeling and simulation
is an effective approach to operational analysis, compared with
equation-based methods. However, the complex dynamic interactions
among entities and flexible architectures of NCADMS put forward
new requirements and challenges to the simulation framework
and models. ABS (Agent-Based Simulations) explicitly addresses
modeling behaviors of heterogeneous individuals. Agents have capability
to sense and understand things, make decisions, and act on the
environment. They can also cooperate with others dynamically to
perform the tasks assigned to them. ABS has proved an effective
approach to exploring the new operational characteristics emerging in
NCADMS. In this paper, based on an analysis of the network-centric
architecture and the new cooperative engagement strategies of
NCADMS, an agent-based simulation framework was designed by
extending the framework of the so-called System Effectiveness Analysis
Simulation (SEAS). The simulation framework specifies the
components, the relationships and interactions between them, and the
structure and behavior rules of an agent in NCADMS. Based on scenario
simulations, the information and decision superiority and the operational
advantages of NCADMS were analyzed, and suggestions
were provided for its future development.
Abstract: A fuzzy classifier that uses multiple ellipsoids to approximate decision regions for classification is designed in this paper. An algorithm called the Gustafson-Kessel algorithm (GKA), with an adaptive distance norm based on the covariance matrices of prototype data points, is adopted to learn the ellipsoids. GKA is able to adapt the distance norm to the underlying distribution of the prototype data points, except that the sizes of the ellipsoids need to be determined a priori. To overcome GKA's inability to determine an appropriate ellipsoid size, a genetic algorithm (GA) is applied to learn the size of each ellipsoid. With GA combined with GKA, it is shown in this paper that the proposed method outperforms the benchmark algorithms as well as other algorithms in the field.
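The abstract does not spell out the Gustafson-Kessel machinery; as background, a minimal sketch of the adaptive distance norm GK clustering uses is given below. It assumes the standard GK formulation in two dimensions: the norm-inducing matrix is A = (rho * det(F))^(1/n) * F^-1, where F is a cluster's fuzzy covariance matrix; function and variable names are illustrative.

```python
def gk_distance_sq(x, v, F, rho=1.0):
    """Squared Gustafson-Kessel distance of point x from prototype v.

    F is the 2x2 fuzzy covariance matrix of the cluster; the induced
    norm matrix is A = (rho * det(F))**(1/n) * inv(F), with n = 2 here,
    so the distance adapts to the cluster's ellipsoidal shape while
    rho fixes the ellipsoid's volume (the quantity the GA would tune).
    """
    a, b = F[0]
    c, d = F[1]
    det = a * d - b * c
    # 2x2 matrix inverse written out explicitly
    inv = [[d / det, -b / det], [-c / det, a / det]]
    scale = (rho * det) ** (1.0 / 2.0)
    dx = [x[0] - v[0], x[1] - v[1]]
    # d^2 = scale * dx^T * inv(F) * dx
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return scale * q

# For an isotropic cluster (F = identity) the norm reduces to the
# squared Euclidean distance.
print(gk_distance_sq((3.0, 4.0), (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]]))  # 25.0
```

For a cluster elongated along one axis, the same point-to-prototype offset yields a smaller distance along the elongated direction, which is exactly how the norm adapts to the data distribution.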
Abstract: The elimination of ranitidine (a pharmaceutical
compound) has been carried out in the presence of UV-C radiation.
After some preliminary experiments, no influence of the nature
of the gas (air or oxygen) bubbled in the photolytic
experiments was observed. From simple photolysis experiments the quantum yield
of this compound has been determined. Two photolytic
approximations have been used: linear source emission in parallel
planes and point source emission in spherical planes. The
quantum yield obtained was close to 0.05 mol Einstein^-1
regardless of the method used. Addition of free radical promoters
(hydrogen peroxide) increases the ranitidine removal rate while the
use of photocatalysts (TiO2) negatively affects the process.
Abstract: An actual power plant, the power plant of the Iron and Steel Factory at Misurata city in Libya, has been modeled using Matlab in order to compare the model's results with those of the actual cycle. This paper concentrates on two factors:
a- The comparison between exergy losses in the actual cycle and the modeled cycle.
b- The effect of extraction pressure on the water temperature at the boiler inlet.
The closed heat exchangers used in this plant have been substituted by open heat exchangers in the current study of the modeled power plant, and the required changes in pressure have been considered. In the following investigation the two points mentioned above are taken into consideration.
Abstract: Since large power transformers are the most
expensive and strategically important components of any power
generation and transmission system, their reliability is crucial
for energy system operation. Circuit breakers are also
very important elements in power transmission lines, so monitoring
their events provides a knowledge base for determining the time to the next
maintenance. This paper introduces a comparative method for
the state estimation of transformers and circuit breakers using
continuous monitoring of voltage and current. For insulation
monitoring of the transformer, a new method based on wavelet
transformation and neutral point analysis is proposed. Using
EMTP tools, faults in the transformer winding were simulated
with a detailed transformer winding model.
The current at the neutral point of the winding was analyzed
by wavelet transformation. It is shown that the neutral current of the
transformer winding has useful information about fault in insulation
of the transformer.
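The specific wavelet used above is not stated in the abstract; as an illustration of the general idea, a minimal sketch of a one-level Haar discrete wavelet transform is shown below. Under the assumption that an insulation fault produces an abrupt transient in the neutral current, the transient concentrates in the detail coefficients; the function and signal are hypothetical.

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; sharp
    transients in the signal concentrate in the detail band.
    """
    s = 2 ** 0.5  # orthonormal scaling factor
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# A steady current with one abrupt step: the step shows up as the
# single large-magnitude detail coefficient.
current = [1.0, 1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
approx, detail = haar_dwt(current)
print(detail)
```

Smooth portions of the signal give near-zero detail coefficients, so thresholding the detail band is a simple way to flag the time of a suspected fault.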
Abstract: A spanning tree of a connected graph is a tree which
consists of the set of vertices and some or perhaps all of the edges of
the connected graph. In this paper, a model for the spanning tree
transformation of connected graphs into single-row networks, namely
Spanning Tree of Connected Graph Modeling (STCGM), will be
introduced. A Path-Growing Tree-Forming algorithm applied with
Vertex-Prioritized ordering is contained in the model to produce the spanning
tree from the connected graph. Paths are produced by Path-Growing
and they are combined into a spanning tree by Tree-Forming. The
spanning tree that is produced from the connected graph is then
transformed into a single-row network using Tree Sequence Modeling
(TSM). Finally, the single-row routing problem is solved using a
method called Enhanced Simulated Annealing for Single-Row
Routing (ESSR).
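The Path-Growing and Tree-Forming steps of STCGM are not detailed in the abstract; as a generic illustration of extracting a spanning tree from a connected graph, a breadth-first construction is sketched below. This is not the STCGM algorithm itself, only the standard baseline it builds on.

```python
from collections import deque

def spanning_tree(vertices, edges, root):
    """Breadth-first spanning tree of a connected graph.

    Returns the subset of edges forming a tree that reaches every
    vertex: |V| - 1 edges and no cycles.
    """
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {root}
    tree = []
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                queue.append(v)
    return tree

# A 4-cycle with a chord: the spanning tree keeps 3 of the 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(spanning_tree([0, 1, 2, 3], edges, 0))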
Abstract: Among all geo-hydrological relationships, the rainfall-runoff
relationship is of utmost importance in any hydrological
investigation and water resource planning. Spatial variation and the lag time
involved in obtaining areal estimates for the basin as a whole can
affect the parameterization in design stage as well as in planning
stage. In conventional hydrological processing of data, spatial aspect
is either ignored or interpolated at sub-basin level. Temporal
variation when analysed for different stages can provide clues for its
spatial effectiveness. The interplay of space-time variation at pixel
level can provide better understanding of basin parameters.
Sustenance of design structures for different return periods and their
spatial auto-correlations should be studied at different geographical
scales for better management and planning of water resources.
In order to understand the relative effect of spatio-temporal
variation in hydrological data network, a detailed geo-hydrological
analysis of Betwa river catchment falling in Lower Yamuna Basin is
presented in this paper. Moreover, the exact estimates about the
availability of water in the Betwa river catchment, especially in the
wake of recent Betwa-Ken linkage project, need thorough scientific
investigation for better planning. Therefore, an attempt in this
direction is made here to analyse the existing hydrological and
meteorological data with the help of SPSS, GIS and MS-EXCEL
software. A comparison of spatial and temporal correlations at sub-catchment
level in case of upper Betwa reaches has been made to
demonstrate the representativeness of rain gauges. First, flows at
different locations are used to derive correlation and regression
coefficients. Then, long-term normal water yield estimates based on
pixel-wise regression coefficients of rainfall-runoff relationship have
been mapped. The areal values obtained from these maps can
definitely improve upon estimates based on point-based
extrapolations or areal interpolations.
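The pixel-wise regression coefficients mapped above come from ordinary least-squares fits of runoff against rainfall; a minimal sketch of such a fit is given below. The rainfall and runoff numbers are synthetic, purely for illustration, not data from the Betwa catchment.

```python
def linreg(x, y):
    """Ordinary least-squares fit y = a + b*x.

    Returns the intercept a and regression coefficient b; applied
    pixel-wise, b relates rainfall to runoff at that location.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical rainfall (mm) vs runoff (mm) pairs for one pixel.
rain = [100.0, 200.0, 300.0, 400.0]
runoff = [30.0, 70.0, 110.0, 150.0]
a, b = linreg(rain, runoff)
print(a, b)  # runoff = -10 + 0.4 * rain for this synthetic pixel
```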
Abstract: The higher compounded growth rates coupled with
favourable demographics in emerging markets portend abundant
opportunities for multinational organizations. With many
organizations competing for talent in these growing markets, their
ability to succeed will depend on their understanding of local
workforce needs and aspirations. Using data from the Towers Watson
2010 Global Workforce Study, this paper highlights differences in
employee engagement, turnover risks, and attraction and retention
drivers between emerging and developed markets. Apart from looking at the
traditional drivers of employee engagement, the study also explores
the value placed by employees on elements like a strong senior
leadership, managerial capabilities and career advancement
opportunities. Results reveal that emerging markets employees seem
to be more engaged and value the non-traditional elements more
highly than the developed markets employees.
Abstract: Heavy rainfall greatly affects the aerodynamic performance of aircraft, and many aircraft accidents have been caused by the degradation of aerodynamic efficiency in heavy rain.
In this paper we study the effects of heavy rain on the aerodynamic efficiency of the cambered NACA 64-210 and symmetric
NACA 0012 airfoils. Our results show a significant increase in drag and decrease in lift. We used the preprocessing software Gridgen for the creation of the geometry and mesh, FLUENT as the solver, and Tecplot as the postprocessor. Discrete phase modeling (DPM) is used to model the rain particles in a two-phase flow approach. The rain particles are assumed to be inert.
Both airfoils showed a significant decrease in lift and increase in drag in the simulated rain environment. The most significant difference between the two airfoils was that the NACA 64-210 was more sensitive than the NACA 0012 to liquid water content (LWC). We believe that the results presented in this paper will be useful to designers of commercial aircraft and UAVs, and will be helpful for training pilots to control airplanes in heavy rain.
Abstract: Methanol-to-olefins (MTO) coupled with
transformation of coal or natural gas to methanol gives an interesting
and promising way to produce ethylene and propylene. To investigate
solid concentration in gas-solid fluidized bed for methanol-to-olefins
process catalyzed by SAPO-34, a cold model experiment system is
established in this paper. The system comprises a gas distributor in
an acrylic column of 300 mm internal diameter and 5000 mm height,
a fiber optic probe system, and a series of cyclones. The experiments are
carried out at ambient conditions under different superficial gas
velocities ranging from 0.393 m/s to 0.786 m/s and different initial bed
heights ranging from 600 mm to 1200 mm. The effects of radial
distance, axial distance, superficial gas velocity, and initial bed height on
solid concentration in the bed are discussed. The effects of distributor
shape and porosity on solid concentration are also discussed. The
time-averaged solid concentration profiles under different conditions
are obtained.
Abstract: Ants are fascinating creatures that demonstrate the
ability to find food and bring it back to their nest. Their ability as a
colony, to find paths to food sources has inspired the development of
algorithms known as Ant Colony Systems (ACS). The principle of
cooperation forms the backbone of such algorithms, commonly used
to find solutions to problems such as the Traveling Salesman
Problem (TSP). Ants communicate to each other through chemical
substances called pheromones. Modeling individual ants' ability to
manipulate this substance can help an ACS find the best solution.
This paper introduces a Dynamic Ant Colony System with three-level
updates (DACS3) that enhances an existing ACS. Experiments
were conducted to observe single ant behavior in a colony of
Malaysian House Red Ants. Such behavior was incorporated into the
DACS3 algorithm. We benchmark the performance of DACS3 versus
DACS on TSP instances ranging from 14 to 100 cities. The results
show that the DACS3 algorithm achieves shorter distances in
most cases and also runs considerably faster than DACS.
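The three-level update of DACS3 is not specified in the abstract; as background, a minimal sketch of the standard ACS global pheromone update it extends is given below. The rule, parameter names, and three-city instance are illustrative only.

```python
def acs_global_update(pheromone, best_tour, best_length, rho=0.1):
    """Standard ACS global pheromone update (background to DACS3).

    Edges on the best-so-far tour are reinforced in proportion to
    tour quality (1 / length), while existing pheromone on those
    edges evaporates at rate rho.
    """
    deposit = 1.0 / best_length
    for i in range(len(best_tour)):
        a, b = best_tour[i], best_tour[(i + 1) % len(best_tour)]
        pheromone[(a, b)] = (1 - rho) * pheromone[(a, b)] + rho * deposit
        pheromone[(b, a)] = pheromone[(a, b)]  # symmetric TSP

# Three cities, unit initial pheromone on every directed edge.
tau = {(i, j): 1.0 for i in range(3) for j in range(3) if i != j}
acs_global_update(tau, best_tour=[0, 1, 2], best_length=10.0)
print(tau[(0, 1)])  # reinforced edge: 0.9 * 1.0 + 0.1 * 0.1 = 0.91
```

In a full ACS there is also a local update applied as each ant traverses an edge; DACS3, per the abstract, adds a third level of update informed by the observed single-ant behavior.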
Abstract: The objective of this study is to investigate fire
behaviors, experimentally and numerically, in a scaled version of an
underground station. The effect of ventilation velocity on the fire is
examined. Fire experiments are simulated by burning 10 ml
isopropyl alcohol fuel in a fire pool with dimensions 5cm x 10cm x 4
mm at the center of 1/100 scaled underground station model. A
commercial CFD program FLUENT was used in numerical
simulations. For air flow simulations, k-ω SST turbulence model and
for combustion simulation, non-premixed combustion model are
used. This study showed that when the ventilation velocity is increased
from 1 m/s to 3 m/s, the maximum temperature in the station rises,
so that it is lowest at a ventilation velocity of 1 m/s. The reason for this
experimental result lies in the dominance of the oxygen supply
effect over the cooling effect. Without the piston effect, the maximum temperature
occurs above the fuel pool. However, when the ventilation velocity
is increased, the flame is tilted in the direction of ventilation and the
location of the maximum temperature moves along the flow direction.
The velocities measured experimentally in the station at different
locations are well matched by the CFD simulation results. The
prediction of the general flow pattern agrees satisfactorily with the smoke
visualization tests. The back-layering velocity is well predicted by
CFD simulation. However, all over the station, the CFD simulations
predicted higher temperatures compared to experimental
measurements.
Abstract: In this paper we present modeling and simulation for
physical vapor deposition on metallic bipolar plates. We discuss
the application of different models to simulate the
transport and chemical reactions of the gas species in the gas chamber.
The so-called sputter process is an extremely sensitive process for
depositing thin layers on metallic plates. We have taken into account
lower-order models to obtain first results with respect to the gas
fluxes and the kinetics in the chamber.
The model equations can be treated analytically in some
circumstances, and complicated multi-dimensional models are solved
numerically with a software package (UG, unstructured grids, see [1]).
Because of the multi-scale and multi-physics behavior of the models,
we discuss adapted schemes to solve more accurately in the different
domains and scales. The results are compared with physical
experiments to give a valid model for the assumed growth of thin
layers.
Abstract: Low temperature (LT) is one of the most important abiotic
stresses causing loss of yield in wheat (T. aestivum). Four major
genes in wheat (Triticum aestivum L.), with the dominant alleles
designated Vrn-A1, Vrn-B1, Vrn-D1 and Vrn4, are known to have
large effects on the vernalization response, but their effects on cold
hardiness are ambiguous. Poor cold tolerance has restricted winter
wheat production in regions of high winter stress [9]. It was known
that nearly all wheat chromosomes [5], or at least 10 of the
21 chromosome pairs, are important in winter hardiness [15]. The
objective of the present study was to clarify the role of each chromosome
in cold tolerance. For this purpose we used 20 isogenic lines of
wheat. In each of these isogenic lines, only one chromosome of the
'Cappelle Desprez' variety was substituted by the corresponding
chromosome from the 'Bezostaya' variety (a winter-habit cultivar).
The plant materials were grown under controlled conditions at 20 °C
and a 16 h day length in the moderately cold areas of Iran at the
Karaj Agricultural Research Station in 2006-07, and the acclimation
period was completed over about 4 weeks in a cold room at 4 °C.
The cold hardiness of these isogenic lines was measured by LT50
(the temperature at which 50% of the plants are killed by freezing
stress). The experimental design was a randomized complete block
design (RCBD) with three replicates. The results showed that
chromosome 5A had a major effect on freezing tolerance, while
chromosomes 1A and 4A had smaller effects on this trait. Further
studies are essential to understand the importance of each
chromosome in controlling cold hardiness in wheat.
Abstract: The aims of this study were to compare differences
in good membership behavior among faculty members and
staff of Suan Sunandha Rajabhat University of different sex, age,
income, education, marital status, and working period, and to
investigate the relationship between organizational commitment and
good membership behavior. The research methodology
employed a questionnaire as a quantitative method. The respondents
were 305 faculty members and staff of Suan Sunandha Rajabhat University.
The data analysis used percentage, mean, standard deviation, t-test,
one-way analysis of variance (ANOVA), and Pearson's product
moment correlation coefficient. The results showed
that organizational commitment among faculty members and staff of Suan
Sunandha Rajabhat University was at a high level. In addition,
differences in sex, age, income, education, marital status, and
working period revealed differences in good membership
behavior. The results also indicated that organizational commitment
was significantly related to good membership behavior.
Abstract: Defining the strategic position of an organization within
its industry environment is one of the basic and most important
phases of strategic planning, to the extent that one of the
fundamental schools of strategic planning is the strategic positioning
school. In today's knowledge-based economy and dynamic
environment, strategic positioning is essential for universities as centers of
education, knowledge creation and knowledge worker development.
To date, various models with different approaches to strategic
positioning have been deployed to define strategic positions within
various industries. The Balanced Scorecard, as one of the powerful models
for strategic positioning, analyzes all aspects of the organization
evenly. In this paper, in consideration of the BSC's strength in
strategic evaluation, it is used to analyze the environmental
position of the best Iranian business schools. The results could be
used in developing strategic plans for these schools as well as other
Iranian management and business schools.
Abstract: In recent decades, modern civilization has entered a new phase in its development, called the information society. The concept of the "information society" has become one of the most common. Therefore, the attempt to understand what kind of society we live in, what its essential features are, and what its possible future scenarios may be, is important for social and philosophical analysis. At the heart of all these deep transformations lies the ever-increasing, almost defining role that knowledge and information play as the substrata of the "information society". Mankind has discovered and actively exploits a new resource: information. The information society brings onto the arena a new type of power, whose activity centers on mastering this new resource: information and knowledge. The watchword of the new power is intelligence, as a synthesis of knowledge, information and communications, strength of mind, and fundamental sociocultural values. In the postindustrial society, the power of knowledge and information becomes crucial in management, pushing the influence of money and state coercion into the background.
Abstract: Today’s technology is heavily dependent on web applications, which are being accepted by users at a very rapid pace and have made our work efficient. They include webmail, online retail, online gaming, wikis, train and flight departure and arrival systems, and many more. They are developed in different languages such as PHP, Python, C# and ASP.NET, using scripts such as HTML and JavaScript. Attackers develop tools and techniques to exploit web applications and legitimate websites. This has led to the rise of web application security, which can be broadly classified into declarative security and program security. The most common attacks on applications are SQL injection and XSS, which give unauthorized users access and can severely damage or destroy the system. This paper presents a detailed literature review and analysis of web application security, examples of attacks, and steps to mitigate the vulnerabilities.
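As a concrete illustration of the SQL injection attack and its standard mitigation, the sketch below uses Python's built-in sqlite3 module with a hypothetical one-table database. String concatenation lets a crafted input rewrite the query, while a parameterized query treats the same input as a plain value.

```python
import sqlite3

# In-memory database with one user table for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# Vulnerable: string concatenation lets the payload rewrite the query,
# returning every row regardless of the name supplied.
leaked = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()

# Mitigated: a parameterized query treats the payload as a literal
# value, so nothing matches.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(leaked), len(safe))  # 1 0
```

The same placeholder-based approach is available in essentially every database driver and is the core of the mitigation steps such papers recommend.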
Abstract: Self-organizing map (SOM) is a well known data
reduction technique used in data mining. It can reveal structure in
data sets through data visualization that is otherwise hard to detect
from raw data alone. However, interpretation through visual
inspection is prone to errors and can be very tedious. There are
several techniques for the automatic detection of clusters of code
vectors found by SOM, but they generally do not take into account
the distribution of code vectors; this may lead to unsatisfactory
clustering and poor definition of cluster boundaries, particularly
where the density of data points is low. In this paper, we propose the
use of an adaptive heuristic particle swarm optimization (PSO)
algorithm for finding cluster boundaries directly from the code
vectors obtained from SOM. The application of our method to
several standard data sets demonstrates its feasibility. PSO algorithm
utilizes a so-called U-matrix of SOM to determine cluster boundaries;
the results of this novel automatic method compare very favorably to
boundary detection through traditional algorithms, namely k-means
and a hierarchical-based approach, which are normally used to interpret
the output of SOM.
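The adaptive heuristic PSO variant used above is not detailed in the abstract; a minimal global-best PSO minimizing a one-dimensional function is sketched below to illustrate the velocity and position updates that drive the search. Parameter values and the test function are illustrative, not those of the paper.

```python
import random

def pso(f, lo, hi, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimization in 1-D.

    Each particle's velocity blends inertia (w) with attraction toward
    its personal best (c1) and the swarm's global best (c2); the same
    update drives a search over SOM code vectors in higher dimensions.
    """
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]
    gbest = min(xs, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Minimize (x - 3)^2; the swarm should converge near x = 3.
print(pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0))
```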
Abstract: Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, the testing set, and the learning rate are key elements: the collections of input/output patterns used to train the network and to assess its performance, and the rate at which weight adjustments are made. This paper describes a proposed back-propagation neural net classifier that performs cross validation on the original neural network, in order to optimize classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated on five data sets: contact-lenses, cpu, weather symbolic, weather, and labor-nega-data. It is shown that, compared to the existing neural network, training time is reduced by more than a factor of 10 when the data set is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all data sets except contact-lenses, the only one with missing attributes. For contact-lenses the accuracy with the proposed neural network was on average around 0.3% lower than with the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
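The cross-validation scheme used by the proposed classifier is not specified beyond its name; as background, a minimal sketch of the standard k-fold index split is given below. Function and variable names are illustrative.

```python
def k_fold_splits(n_samples, k):
    """Index splits for k-fold cross-validation.

    Each of the k folds serves once as the test set while the
    remaining folds form the training set, so every sample is
    tested exactly once across the k rounds.
    """
    indices = list(range(n_samples))
    folds = [indices[i::k] for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, test))
    return splits

# 6 samples, 3 folds: each split trains on 4 samples and tests on 2.
for train, test in k_fold_splits(6, 3):
    print(train, test)
```

Averaging the classifier's accuracy over the k test folds gives the cross-validated estimate of 'percent correct' reported for each data set.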