Abstract: The Kowsar dam supplies water for uses such as
drinking, industrial, agricultural and aquaculture farming, and is
located next to the city of Dehdasht in Kohgiluyeh and
Boyer-Ahmad province in southern Iran. There are several towns and
villages in the Kowsar dam watershed, of which Dehdasht and
Choram are the most important and populated cities. This study was
undertaken to assess the status of water quality in the urban areas
of the Kowsar dam watershed. A total of 28 water samples were
collected from six surface-water stations and one groundwater
station in the watershed. All samples were analyzed for Cd
concentration using standard procedures, and the results were
compared with national and international standards. Although the
maximum cadmium value (1.131 μg/L) was observed at station 2 in
winter 2009, all samples analyzed were within the maximum
admissible limits set by the United States Environmental Protection
Agency, the EU, the WHO, and the New Zealand, Australian, Iranian
and Indian standards. Overall, the results of the present study
show that the mean Cd values of stations 4, 1 and 2 (0.5135, 0.4733
and 0.4573 μg/L, respectively) are higher than those of the other
stations. Although the Cd levels of all samples and stations are
within normal values, they indicate a pollution potential and
hazard arising from human activity and municipal wastewater in the
area, which could affect human health in the future. This research
therefore recommends that the government and other responsible
authorities take suitable remedial measures in the Kowsar dam
watershed.
Abstract: Data stream analysis is the process of computing
various summaries and derived values from large amounts of data
which are continuously generated at a rapid rate. The nature of a
stream does not allow a revisit on each data element. Furthermore,
data processing must be fast to produce timely analysis results. These
requirements impose constraints on the design of the algorithms to
balance correctness against timely responses. Several techniques
have been proposed over the past few years to address these
challenges, which can be categorized as either data-oriented or
task-oriented. The data-oriented approach analyzes a subset of the
data or a smaller transformed representation, whereas the
task-oriented scheme solves the problem directly via approximation
techniques. We propose a hybrid approach to tackle the data stream
analysis problem: the data stream is both statistically transformed
to a smaller size and its characteristics are computationally
approximated. We adopt a Monte Carlo method in the approximation
step, and the data reduction is performed horizontally and
vertically through our EMR sampling method. The proposed method
is analyzed by a series of experiments. We apply our algorithm on
clustering and classification tasks to evaluate the utility of our
approach.
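The single-pass, bounded-memory constraint described above is the setting in which uniform reservoir sampling is a standard data-reduction tool. The EMR sampling method is specific to this paper, so the sketch below illustrates only generic horizontal reduction of a stream; the function and parameter names are illustrative.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Keep a uniform random sample of k items from a stream of
    unknown length, visiting each element exactly once."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)       # fill the reservoir first
        else:
            j = rng.randrange(i + 1)  # keep item i with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

print(len(reservoir_sample(range(10_000), 100, seed=42)))  # 100
```

Each retained element is an unbiased representative of the stream seen so far, which is what allows downstream tasks such as clustering or classification to run on the reduced set.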
Abstract: Sorting has received the most attention among all computational tasks over the past years because sorted data is at the heart of many computations. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. Many parallel sorting algorithms have been investigated for a variety of parallel computer architectures. In this paper, three parallel sorting algorithms have been implemented and compared in terms of their overall execution time: odd-even transposition sort, parallel merge sort and parallel rank sort. A cluster of workstations, namely a Windows Compute Cluster, has been used to compare the implemented algorithms. The C# programming language is used to develop the sorting algorithms, and the MPI (Message Passing Interface) library has been selected to establish communication and synchronization between processors. The time complexity of each parallel sorting algorithm is also stated and analyzed.
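The paper's implementations use C# with MPI; as a language-neutral illustration, the following Python sketch simulates the alternating compare-exchange phases of odd-even transposition sort serially. In the parallel version, the pairs within each phase are exchanged concurrently by neighboring processes, giving O(n) time on n processors.

```python
def odd_even_transposition_sort(a):
    """Serial simulation of odd-even transposition sort:
    n phases alternating compare-exchange over even and odd pairs."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2  # even phases: (0,1),(2,3)...; odd: (1,2),(3,4)...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]  # compare-exchange
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 2]))  # [0, 1, 2, 2, 4, 5, 8]
```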
Abstract: The main mission of Ezilla is to provide a friendly
interface for accessing virtual machines and quickly deploying a
high-performance computing environment. Ezilla has been developed
by the Pervasive Computing Team at the National Center for
High-performance Computing (NCHC). Ezilla integrates Cloud
middleware, virtualization technology, and a Web-based Operating
System (WebOS) to form a virtual computer in a distributed
computing environment. To scale to larger datasets and improve
speed, we propose a sensor observation system that handles a huge
amount of data in the Cassandra database. The sensor observation
system builds on Ezilla to store raw sensor data in a distributed
database. We adopt the Ezilla Cloud service to create virtual
machines and log into them to deploy the sensor observation system.
Integrating the sensor observation system with Ezilla allows rapid
deployment of the experimental environment and access to a huge
amount of data in a distributed database whose replication
mechanism protects data security.
Abstract: This paper presents Faults Forecasting System (FFS)
that utilizes statistical forecasting techniques in analyzing process
variables data in order to forecast fault occurrences. FFS
proposes a new approach to fault detection. Current fault-detection
techniques analyze the current status of the system variables in
order to check whether that status is faulty or not. FFS instead
uses forecasting techniques to predict the future timing of faults
before they happen. The proposed model applies a subset modeling
strategy and a Bayesian approach in order to reduce the
dimensionality of the process variables and improve fault
forecasting accuracy. A practical experiment was designed and
implemented at Okayama University, Japan, and the comparison shows
that our proposed model achieves high forecasting accuracy ahead of
time.
Abstract: The paper presents a part of the results obtained in a
complex research project on Romanian Grey Steppe breed, owner of
some remarkable qualities such as hardiness, longevity,
adaptability, special resistance to bad weather and diseases, and
inclusion in the genetic fund (G.D. no. 822/2008) of Romania.
Following the research carried out, we identified alleles of six
loci, codifying the six types of major milk proteins: alpha-casein S1
(α S1-cz); beta-casein (β-cz); kappa-casein (K-cz); beta-lactoglobulin
(β-lg); alpha-lactalbumin (α-la) and alpha-casein S2 (α S2-cz). In
the αS1-cz system, allele αs1-Cn B has the highest frequency
(0.700); in the β-cz system, allele β-Cn A2 (0.550); in the K-cz
system, allele k-Cn A2 (0.583) and the heterozygote genotypes AB
(0.416) and BB (0.375); in the β-lg system, allele β-lg A1 has the
highest frequency (0.542) and the heterozygote genotype AB (0.500);
in the α-la system there is monomorphism for allele α-la B, and
similarly in the αS2-cz system for allele αs2-Cn A.
The milk analysis by the isoelectric focusing technique (IEF)
allowed the identification of a new allele of the αS1-casein locus
in two of the individuals under analysis, namely an allele called
αS1-casein IRV. When the experiments were repeated, we confirmed
that this is not a proteolysis band but really a new allele that
has not been registered in the specialized literature so far. We
identified two heterozygote individuals carrying this allele,
namely BIRV and CIRV. This discovery is extremely important with a
view to the national genetic patrimony.
Abstract: The draw solute separation process in Forward
Osmosis desalination was simulated in Aspen Plus chemical process
modeling software, to estimate the energy consumption and compare
it with other desalination processes, mainly the Reverse Osmosis
process, which is currently the most prevalent. The electrolyte
chemistry for the system was retrieved using the ELECNRTL property
method in the Aspen Plus database. The electrical equivalent of the
energy required
in the Forward Osmosis desalination technique was estimated and
compared with the prevalent desalination techniques.
Abstract: In this paper, we present a vertical nanowire thin-film transistor with gate-all-around architecture, fabricated using CMOS-compatible processes. A novel method of fabricating polysilicon vertical nanowires of diameter as small as 30 nm using wet etching is presented. Both n-type and p-type vertical poly-silicon nanowire transistors exhibit superior electrical characteristics compared to planar devices. On a poly-crystalline nanowire of 30 nm diameter, a high Ion/Ioff ratio of 10^6, low drain-induced barrier lowering (DIBL) of 50 mV/V, and a low sub-threshold slope of SS ~ 100 mV/dec are demonstrated for a device with a channel length of 100 nm.
Abstract: This paper deals with the design, development and implementation of a temperature sensor using ZigBee. The main aim of the work is to sense the temperature and to display the result on an LCD using ZigBee technology. ZigBee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in Europe, 915 MHz in the USA and 2.4 GHz in most jurisdictions worldwide. The technology is intended to be simpler and cheaper than other WPANs such as Bluetooth. The most capable ZigBee node type is said to require only about 10% of the software of a typical Bluetooth or wireless Internet node, while the simplest nodes require about 2%. However, actual code sizes are much higher, more like 50% of the Bluetooth code size; ZigBee chip vendors have announced 128-kilobyte devices. In the system developed in this work, the sensed temperature is amplified and fed to the microcontroller, which is connected to a ZigBee module that transmits the data; at the other end, a ZigBee module receives the data and displays it on the LCD. The software developed is highly accurate and works at a very high speed, and the method shows the effectiveness of the scheme employed.
Abstract: This paper proposes new algorithms for the computer-aided
design and manufacture (CAD/CAM) of 3D woven multi-layer
textile structures. Existing commercial CAD/CAM systems are often
restricted to the design and manufacture of 2D weaves. Those
CAD/CAM systems that do support the design and manufacture of
3D multi-layer weaves are often limited to manual editing of design
paper grids on the computer display and weave retrieval from stored
archives. This complex design activity is time-consuming, tedious
and error-prone and requires considerable experience and skill of a
technical weaver. Recent research reported in the literature has
addressed some of the shortcomings of commercial 3D multi-layer
weave CAD/CAM systems. However, earlier research results have
shown the need for further work on weave specification, weave
generation, yarn path editing and layer binding. Analysis of 3D
multi-layer weaves in this research has led to the design and
development of efficient and robust algorithms for the CAD/CAM of
3D woven multi-layer textile structures. The resulting algorithmically
generated weave designs can be used as a basis for lifting plans that
can be loaded onto looms equipped with electronic shedding
mechanisms for the CAM of 3D woven multi-layer textile structures.
Abstract: A neural network's performance can be measured by efficiency and accuracy. The major disadvantages of the neural network approach are that the generalization capability of neural networks is often significantly low, and that it may take a very long time to tune the weights in the net to generate an accurate model for highly complex and nonlinear systems. This paper presents a novel neuro-fuzzy architecture based on the Extended Kalman filter. To test the performance and applicability of the proposed neuro-fuzzy model, a simulation study of a nonlinear complex dynamic system is carried out. The proposed method can be applied to on-line incremental adaptive learning for the prediction of financial time series. A benchmark case study is used to demonstrate that the proposed model is a superior neuro-fuzzy modeling technique.
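As a minimal sketch of the Kalman-style weight update that underlies such training, the following reduces the idea to a single linear parameter, for which the EKF update coincides with the ordinary Kalman filter; the constants and data are illustrative assumptions, not values from the paper.

```python
def kalman_fit(samples, w0=0.0, p0=100.0, r=0.1):
    """Estimate the weight w of y = w*x by recursive Kalman updates."""
    w, p = w0, p0            # weight estimate and its error variance
    for x, y in samples:
        h = x                # Jacobian dy/dw; for this linear model, just x
        s = h * p * h + r    # innovation variance
        k = p * h / s        # Kalman gain
        w += k * (y - w * x) # correct using the prediction error
        p *= (1 - k * h)     # shrink the error variance
    return w

data = [(x, 3.0 * x) for x in (1.0, 2.0, 0.5, 1.5, 2.5)]
print(round(kalman_fit(data), 2))  # approaches 3.0
```

The same recursion, applied per consequent parameter with the network's output Jacobian as h, is what gives EKF-based training its fast, incremental convergence compared with plain gradient descent.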
Abstract: Real-time 3D applications have to guarantee interactive
rendering speed, and the number of polygons that can be rendered is
restricted by the performance of the graphics hardware and graphics
algorithms. In general, rendering performance increases drastically
when only the dynamic 3D models, which are usually much fewer than
the static ones, are handled each frame. Since the shapes and
colors of static objects do not change while the viewing direction
is fixed, their rendered information can be reused. We render huge
numbers of polygons that cannot be handled by conventional
real-time rendering techniques by using an image of the static
objects and merging it with the rendering result of the dynamic
objects. Performance necessarily drops whenever the static object
image must be updated, which includes removing a static object that
starts to move and re-rendering the other static objects overlapped
by the moving one. Based on the visibility of the object beginning
to move, we can skip this updating process. As a result, we enhance
rendering performance and reduce the variation in rendering speed
between frames. The proposed method renders a total of 200,000,000
polygons, of which 500,000 are dynamic and the rest static, at
about 100 frames per second.
Abstract: In the current research, we present an operation framework and protection mechanism that facilitate a secure environment to protect mobile agents against tampering. The system depends on the presence of an authentication authority. The advantage of the proposed system is that security measures are an integral part of the design, so common security retrofitting problems do not arise. This is due to the use of the ElGamal encryption mechanism to protect the agent's confidential content and any data it collects from the visited hosts, so that eavesdropping on information from the agent can no longer reveal any confidential information. The inherent security constraints within the framework also allow the system to operate as an intrusion detection system for any mobile agent environment. The mechanism is tested against most of the well-known severe attacks on agents and networked systems. The scheme shows promising performance that makes it highly recommended for the types of transactions that need highly secure environments, e.g., business to business.
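The ElGamal mechanism referred to above can be sketched in a few lines. The following is a textbook toy example over a small prime; the values of p and g and the message are illustrative only, and a real deployment would use a cryptographically large prime.

```python
import random

p, g = 467, 2                        # toy public prime and base
x = random.randrange(2, p - 1)       # agent owner's private key
h = pow(g, x, p)                     # corresponding public key

def encrypt(m):
    """ElGamal encryption of 0 <= m < p under public key h."""
    k = random.randrange(2, p - 1)   # fresh ephemeral key per message
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2):
    s = pow(c1, x, p)                   # shared secret g^(k*x)
    return (c2 * pow(s, p - 2, p)) % p  # multiply by s^-1 (Fermat inverse)

c1, c2 = encrypt(123)
print(decrypt(c1, c2))  # 123
```

Because each encryption uses a fresh ephemeral key, an eavesdropper observing the agent's traffic sees a different ciphertext each time, even for identical collected data.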
Abstract: The authors present a mixed method for reducing the order of large-scale dynamic systems. In this method, the denominator polynomial of the reduced-order model is obtained using the modified pole clustering technique, while the coefficients of the numerator are obtained by Pade approximation. The method is conceptually simple and always generates stable reduced models if the original high-order system is stable. The proposed method is illustrated with numerical examples taken from the literature.
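As a small illustration of the clustering step, one common pole-clustering rule (the inverse distance measure) replaces a group of real poles by a single cluster center; this generic sketch is not necessarily the "modified" variant used in the paper.

```python
def idm_cluster_center(poles):
    """Inverse-distance-measure center of k real poles:
    c = k / sum(1/p_i)."""
    return len(poles) / sum(1.0 / p for p in poles)

# Two poles of a stable high-order system collapse to one center pole.
print(round(idm_cluster_center([-1.0, -2.0]), 4))  # -1.3333
```

The reduced denominator is then built from such centers, while the numerator coefficients come from matching the Pade (power-series) expansion of the original transfer function.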
Abstract: In this paper, the modelling and design of an artificial neural network architecture for load forecasting is investigated. The primary prerequisite for power system planning is to arrive at realistic estimates of future power demand, which is known as load forecasting. Short Term Load Forecasting (STLF) helps in determining economic, reliable and secure operating strategies for the power system. The dependence of load on several factors makes load forecasting a very challenging job. An overestimation of the load may cause premature investment and unnecessary blocking of capital, whereas an underestimation may result in a shortage of equipment and circuits. It is always better to plan the system for a load slightly higher than expected, so that no exigency arises. In this paper, a load-forecasting model is proposed using a multilayer neural network with an appropriately modified back-propagation learning algorithm. Once the neural network model is designed and trained, it can forecast the load of the power system 24 hours ahead on a daily basis, and can also forecast the cumulative daily load. The real load data used for training the artificial neural network was taken from the LDC, Gujarat Electricity Board, Jambuva, Gujarat, India. The results show that the load forecast of the ANN model follows the actual load pattern accurately throughout the forecast period.
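The back-propagation update at the core of such a forecaster can be sketched with a single linear neuron trained by gradient descent on squared error; the data, learning rate and epoch count below are synthetic illustrations, not the paper's network or the Gujarat load data.

```python
def train(samples, epochs=200, lr=0.05):
    """Fit y ~ w*x + b by stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y  # prediction error
            w -= lr * err * x      # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err          # gradient w.r.t. b
    return w, b

# synthetic "load" linear in a temperature-like input
data = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(10)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

In the full multilayer network the same error signal is propagated backward through each layer's weights, but the per-weight update rule is this same gradient step.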
Abstract: A geothermal power plant multiple simulator for
operators training is presented. The simulator is designed to be
installed in a wireless local area network and has a capacity to train
one to six operators simultaneously, each one with an independent
simulation session. The sessions must be supervised only by one
instructor. The main parts of this multiple simulator are the
instructor's and operators' stations. At the instructor station,
the instructor controls the simulation sessions, establishes
training exercises and supervises each power plant operator
individually. This station is hosted on a main personal computer
(NS) and its main functions are to set initial conditions,
snapshots, and malfunctions or faults, and to monitor trends and
process and soft-panel diagrams. The operators, on the other hand,
carry out their actions on the simulated power plant at the
operators' stations, each also hosted on a PC. The main software of
the instructor and operator stations is executed on the same NS and
displayed on the PCs through graphical Interactive Process Diagrams
(IDP). The geothermal multiple simulator has been installed in the
Geothermal Simulation Training Center (GSTC) of the Comisión
Federal de Electricidad (Federal Commission of Electricity, CFE),
Mexico, and is being utilized as part of the
training courses for geothermal power plant operators.
Abstract: Environmental awareness and depletion of the
petroleum resources are among vital factors that motivate a number
of researchers to explore the potential of reusing natural fiber as an
alternative composite material in industries such as packaging,
automotive and building construction. Natural fibers are abundant,
low-cost and lightweight and, most importantly, biodegradable,
which is why they are often called "eco-friendly" materials.
However, their applications are still limited by several factors
such as moisture absorption, poor wettability and large scattering
in mechanical properties. Among the main challenges of
natural-fiber-reinforced composites is the fibers' inclination to
entangle and form agglomerates during processing due to fiber-fiber
interaction. This tends to prevent good dispersion of the fibers in
the matrix, resulting in poor interfacial adhesion between the
hydrophobic matrix and the hydrophilic reinforcing natural fiber.
To overcome this challenge, fiber treatment is a common alternative
that can be used to modify the fiber surface topology by chemical,
physical or mechanical techniques. This paper focuses on the effect
of mercerization treatment on the enhancement of the mechanical
properties of natural fiber reinforced composites, or so-called
biocomposites. It specifically discusses mercerization parameters
and the resulting enhancement of the mechanical properties of
natural fiber reinforced composites.
Abstract: The article deals with development, design and
implementation of a mathematical model of the human respiratory
system. The model is designed in order to simulate distribution of
important intrapulmonary parameters along the bronchial tree such as
pressure amplitude, tidal volume and effect of regional mechanical
lung properties upon the efficiency of various ventilatory techniques.
Therefore, exact agreement of the model structure with the
anatomical structure of the lung is required. The model is based on
lung morphology, and an electro-acoustic analogy is used in its
design.
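The electro-acoustic analogy mentioned above can be illustrated with the simplest possible compartment: airway resistance R plays the role of an electrical resistor and compliance C that of a capacitor, so constant-pressure inflation follows the charging curve of an RC circuit. The parameter values below are illustrative, not those of the paper's morphology-based model.

```python
R = 3.0      # airway resistance, cmH2O/(L/s) (resistor analogue)
C = 0.05     # compliance, L/cmH2O (capacitor analogue)
P = 10.0     # constant driving pressure, cmH2O (voltage-source analogue)
dt = 0.001   # integration step, s

V = 0.0      # volume above resting level, L (charge analogue)
for _ in range(int(2.0 / dt)):  # 2 s inspiration, much longer than R*C
    flow = (P - V / C) / R      # "Ohm's law" for airflow
    V += flow * dt
print(round(V, 3))  # 0.5, the steady state C*P
```

Chaining many such R-C segments along the bronchial tree, with regionally varying R and C, is what lets the full model compare pressure amplitudes and tidal volumes under different ventilatory techniques.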
Abstract: The control of commutation of switched reluctance
(SR) motor has nominally depended on a physical position detector.
The physical rotor position sensor limits robustness and increases
size and inertia of the SR drive system. The paper describes a method
to overcome these limitations by using magnetization characteristics
of the motor to indicate rotor and stator teeth overlap status. The
method uses active current-probing pulses of the same magnitude to
simulate the flux linkage in the winding being probed. A
microprocessor processes the magnetization data to deduce the
rotor-stator teeth overlap status and hence the rotor position.
However, back-of-core saturation and mutual coupling introduce
overlap detection errors, and hence errors in commutation control.
This paper presents the concept of the detection scheme and the
effects of back-of-core saturation.
Abstract: In the project FleGSens, a wireless sensor network (WSN)
for the surveillance of critical areas and properties is currently
being developed that incorporates mechanisms to ensure information
security. The intended prototype consists of 200 sensor nodes for
monitoring a 500 m long land strip. The system focuses on ensuring
the integrity and authenticity of generated alarms and on
availability in the presence of an attacker who may even compromise
a limited number of sensor nodes. In this paper, two of the main
protocols developed in the project are presented: a tracking
protocol to provide secure detection of trespasses within the
monitored area, and a protocol for secure detection of node
failures. Simulation results for networks containing 200 and 2000
nodes, as well as results from the first prototype comprising a
network of 16 nodes, are presented. The focus of the simulations
and the prototype is functional testing of the protocols and, in
particular, demonstrating the impact and cost of several attacks.