Abstract: In this paper we discuss the problems of long-term management policy for Lake Peipsi and the roles of natural and anthropogenic factors in the ecological state of the lake. The reduction of pollution during the last 15 years has not produced significant changes in the chemical composition of the water, which implies that natural factors play an essential role in the ecological state of the lake. One of the most important factors affecting the hydrochemical cycles and ecological state is the hydrological regime, which is clearly expressed in L. Peipsi. The absence of clear interrelations between climate cycles and nutrients suggests that the complex abiotic and biotic interactions taking place in the lake ecosystem play a significant role in the matter circulation mechanism within the lake.
Abstract: During the last couple of years, the degree of dependence on IT systems has reached a dimension nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g., smartphones), wireless sensor networks and embedded devices (Internet of Things) are only some examples of the dependency of modern societies on cyber space. At the same time, the complexity of IT applications, e.g., because of the increasing use of cloud computing, is rising continuously. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like STUXNET or the supposed cyber attack on an Illinois water system demonstrate impressively. Once-isolated control systems are nowadays often publicly accessible - a fact that was never intended by their developers. Threats to IT systems do not respect areas of responsibility. Especially with regard to Cyber Warfare, IT threats are no longer limited to company or industry boundaries, administrative jurisdictions or state borders. One of the most important countermeasures is increased cooperation among the participants, especially in the field of Cyber Defence. Besides political and legal challenges, there are technical ones as well. A better, at least partially automated exchange of information is essential to (i) enable sophisticated situational awareness and (ii) counter attackers in a coordinated way. Therefore, this publication evaluates state-of-the-art intrusion detection message exchange protocols with respect to their ability to guarantee a secure information exchange between different entities.
Abstract: Since the majority of faults are found in a few of a system's modules, there is a need to identify the modules that are affected more severely than others, so that proper maintenance can be carried out in time, especially for critical applications. Neural networks have already been applied in software engineering, for example to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions; they are used when the exact relationship between inputs and outputs is not known, and a key feature is that they learn this relationship through training. In the present work, various neural network based techniques are explored and a comparative analysis is performed for predicting the level of maintenance needed, by predicting the severity level of the faults present in NASA's public domain defect dataset. The different algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop models for identifying the modules that are most heavily affected by faults.
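The generalized regression network singled out above, together with the MAE and RMSE comparison metrics, can be sketched in a few lines. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the one-dimensional "metric to severity level" training pairs and the bandwidth `sigma` below are invented for demonstration.

```python
import math

def grnn_predict(train_x, train_y, x, sigma=0.5):
    """Generalized Regression Neural Network: the prediction is a
    Gaussian-kernel-weighted average of the training targets."""
    weights = [math.exp(-((xi - x) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

def mae(actual, predicted):
    """Mean Absolute Error, one of the comparison metrics used above."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Square Error, the other comparison metric."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical data: a module metric value -> fault severity level (1..3)
train_x = [0.1, 0.3, 0.5, 0.7, 0.9]
train_y = [1, 1, 2, 3, 3]
preds = [grnn_predict(train_x, train_y, x) for x in train_x]
print(round(mae(train_y, preds), 3), round(rmse(train_y, preds), 3))
```

With a small bandwidth the GRNN interpolates the training points closely; widening `sigma` smooths the prediction, which is the single tuning knob of this model family.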
Abstract: Inter-organizational Workflow (IOW) is commonly used to support the collaboration between heterogeneous and distributed business processes of different autonomous organizations in order to achieve a common goal. E-government is considered an application field of IOW. The coordination of the different organizations is the fundamental problem in IOW and remains a major cause of failure in e-government projects. In this paper, we introduce a new coordination model for IOW that improves the collaboration between government administrations and that respects the IOW requirements of e-government. For this purpose, we adopt a multi-agent approach, which deals more easily with the characteristics of inter-organizational digital government: distribution, heterogeneity and autonomy. Our model also integrates different technologies to deal with semantic and technological interoperability. Moreover, it preserves the existing systems of government administrations by offering distributed coordination based on interface communication. This is especially relevant in developing countries, where administrations are not necessarily equipped with workflow systems. The use of our coordination techniques allows an easier and cheaper migration to an e-government solution. To illustrate the applicability of the proposed model, we present a case study of identity card creation in Tunisia.
Abstract: Increasing energy absorption is a significant parameter in vehicle design, since absorbing more energy results in less occupant injury. Limiting the deflection in a side impact, however, decreases the specific energy absorption (SEA) and increases the peak load (PL), and a high crash force jeopardizes passenger safety and vehicle integrity. The aims of this paper are to determine suitable dimensions and material for a square beam subjected to side impact, in order to maximize SEA and minimize PL. To achieve this goal, the geometric parameters of the square beam are optimized using the response surface method (RSM). Multi-objective optimization is performed, and the optimum design for different response features is obtained.
Abstract: Large scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data across several sites, with the objective of ensuring the availability and reliability of the data in order to provide fault tolerance and scalability, which is only possible through the use of replication techniques. Unfortunately, these techniques have a high cost, because consistency must be maintained between the distributed replicas. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by increasing concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large scale systems. Our approach is based on a hierarchical representation model with three layers and serves a twofold purpose: it reduces response times compared to a completely pessimistic approach, and it improves the quality of service compared to an optimistic approach.
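The pessimistic/optimistic combination can be illustrated in miniature. The sketch below is our own simplification, not the authors' three-layer protocol: a "pessimistic" write coordinates up front by taking a lock, while an "optimistic" write applies immediately and reconciles on merge using last-writer-wins timestamps; the replica class and the reconciliation rule are assumptions for illustration.

```python
import threading

class Replica:
    """Toy replica supporting two consistency modes: pessimistic
    writes lock before applying; optimistic writes apply at once
    and are reconciled later by last-writer-wins timestamps."""
    def __init__(self):
        self.data = {}                 # key -> (timestamp, value)
        self.lock = threading.Lock()
        self.clock = 0.0               # logical clock for demo purposes

    def write_pessimistic(self, key, value):
        with self.lock:                # coordinate before writing
            self.clock += 1.0
            self.data[key] = (self.clock, value)

    def write_optimistic(self, key, value, ts):
        # Apply without coordination; conflicts are resolved on merge.
        current = self.data.get(key)
        if current is None or ts > current[0]:
            self.data[key] = (ts, value)

    def merge(self, other):
        """Reconcile with another replica (last-writer-wins)."""
        for key, (ts, value) in other.data.items():
            self.write_optimistic(key, value, ts)

a, b = Replica(), Replica()
a.write_optimistic("x", "from-a", ts=1.0)
b.write_optimistic("x", "from-b", ts=2.0)   # later write wins on merge
a.merge(b)
print(a.data["x"][1])
```

The trade-off the abstract describes is visible here: the optimistic path answers immediately but may temporarily diverge, while the pessimistic path pays a locking cost for immediate consistency.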
Abstract: This paper presents the idea of a rough controller, with an application to the control of an overhead traveling crane system. The structure of such a controller is based on the concept of a fuzzy logic controller. A measure of fuzziness in rough sets is introduced, and a comparison between the fuzzy logic controller and the rough controller is demonstrated. The results of a simulation comparing the performance of both controllers are shown; from these results we infer that the performance of the proposed rough controller is satisfactory.
Abstract: In this research, CaO-ZnO catalysts (with Ca:Zn atomic ratios of 1:5, 1:3, 1:1, and 3:1) prepared by the incipient-wetness impregnation (IWI) and co-precipitation (CP) methods were used as catalysts in the transesterification of palm oil with methanol for biodiesel production. The catalysts were characterized by several techniques, including the BET method, CO2-TPD, and Hammett indicators. The effects of precursor concentration and calcination temperature on the catalytic performance were studied under reaction conditions of a 15:1 methanol-to-oil molar ratio, 6 wt% catalyst, a reaction temperature of 60°C, and a reaction time of 8 h. The catalyst with a Ca:Zn atomic ratio of 1:3 gave the highest FAME value owing to the basic properties and surface area of the prepared catalyst.
Abstract: Future space vehicles will require the use of non-toxic, cryogenic propellants because of their performance advantages over toxic hypergolic propellants and because of environmental and handling concerns. A prototypical capillary-flow liquid acquisition device (LAD) for cryogenic propellants was fabricated with a mesh screen covering a rectangular flow channel with a cylindrical outlet tube, and was tested with liquid oxygen (LOX). In order to better understand its performance in various gravity environments and orientations with different submersion depths, a series of computational fluid dynamics (CFD) simulations of LOX flow through the LAD screen channel, including horizontal and vertical submersion of the LAD channel assembly in a normal gravity environment, was conducted. Gravity effects on the flow field in the LAD channel are inspected and analyzed by comparing the simulations.
Abstract: NFκB is a transcription factor regulating many functions of the vessel wall. Under normal conditions, NFκB shows diffuse cytoplasmic expression, suggesting that the system is inactive. Activated NFκB provides a potential pathway for the rapid transcription of a variety of genes encoding cytokines, growth factors, adhesion molecules and procoagulatory factors, and it is likely to play an important role in chronic inflammatory diseases including atherosclerosis. There are many stimuli with the potential to activate NFκB, including hyperlipidemia. We used 24 mice divided into 6 groups. A high-fat diet (HFD) was given ad libitum for 2, 4, and 6 months. The parameters in this study were the amount of NFκB activation, H2O2 as a measure of ROS, and VCAM-1 as a product of NFκB activation. The H2O2 colorimetric assay was performed directly using an Anti-Rat H2O2 ELISA Kit. NFκB and VCAM-1 were detected in mouse aorta, measured by ELISA kit and immunohistochemistry. There were significant differences in H2O2, NFκB and VCAM-1 levels after HFD induction for 2, 4 and 6 months. This suggests that the HFD induces ROS formation and increases the activation of NFκB, a marker of atherosclerosis caused by hyperlipidemia, the classical atherosclerosis risk factor.
Abstract: Fecal coliform bacteria are widely used as indicators of sewage contamination in surface water. However, these microbial techniques have some disadvantages: they are time-consuming (18-48 h) and unable to discriminate between human and animal sources of fecal material. Therefore, it is necessary to seek a more specific indicator of human sanitary waste. In this study, the feasibility of applying caffeine and human pharmaceutical compounds to identify human-source contamination was investigated, and the correlation between caffeine and fecal coliform was also explored. Surface water samples were collected from upstream, middle-stream and downstream points along Rochor Canal, as well as 8 locations in Marina Bay. Results indicate that caffeine is a suitable chemical tracer in Singapore because of its easy detection (in the range of 0.30-2.0 ng/mL) compared with the other chemicals monitored. The relatively low concentrations of human pharmaceutical compounds (< 0.07 ng/mL) in the Rochor Canal and Marina Bay water samples make them hard to detect and unsuitable as chemical tracers, although their presence can help to validate sewage contamination. In addition, a high correlation was discovered between caffeine concentration and fecal coliform density in the Rochor Canal water samples, demonstrating that caffeine is highly related to human-source contamination.
Abstract: Ensemble learning algorithms such as AdaBoost and Bagging have been the subject of active research and have shown improved classification results for several benchmark data sets, mainly with decision trees as their base classifiers. In this paper we experiment with applying these meta-learning techniques to classifiers such as random forests, neural networks and support vector machines. The data sets are from MAGIC, a Cherenkov telescope experiment. The task is to classify gamma signals against an overwhelming background of hadron and muon signals, a rare-class classification problem. We compare the individual classifiers with their ensemble counterparts and discuss the results. The experiments were carried out with the WEKA machine learning toolkit.
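As a minimal illustration of bagging (not the WEKA setup and not the MAGIC data), the sketch below bootstrap-resamples a toy training set, fits a trivial 1-nearest-neighbour base classifier on each resample, and combines the predictions by majority vote; the data, base classifier, and parameters are all invented for demonstration.

```python
import random
from collections import Counter

def nn1_classify(train, x):
    """1-nearest-neighbour base classifier on 1-D features."""
    nearest = min(train, key=lambda pt: abs(pt[0] - x))
    return nearest[1]

def bagging_classify(train, x, n_estimators=25, seed=0):
    """Bagging: train each base classifier on a bootstrap resample
    of the data and combine their predictions by majority vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        sample = [rng.choice(train) for _ in train]   # bootstrap resample
        votes.append(nn1_classify(sample, x))
    return Counter(votes).most_common(1)[0][0]

# Toy stand-in for the gamma/hadron task: feature value -> class label
train = [(0.1, "gamma"), (0.2, "gamma"),
         (0.8, "hadron"), (0.9, "hadron"), (0.85, "hadron")]
print(bagging_classify(train, 0.15))   # majority vote favours "gamma" here
```

AdaBoost differs from this scheme in that resampling is replaced by reweighting: each successive base classifier concentrates on the examples its predecessors misclassified, and votes are weighted by classifier accuracy.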
Abstract: In developing a text-to-speech system, it is well known that the accuracy of the information extracted from the text is crucial for producing high-quality synthesized speech. In this paper, a new scheme for converting text into its equivalent phonetic spelling is introduced and developed. The method is applicable to many text-to-speech conversion systems and has several advantages over other methods; it can also complement them in order to improve their performance. The proposed method is a probabilistic model based on a Smooth Ergodic Hidden Markov Model, which can be considered an extension of the HMM. The method is applied to the Persian language, and its accuracy in converting text to phonetics is evaluated through simulations.
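Decoding in any HMM-based text-to-phonetics scheme ultimately means recovering the most likely hidden phoneme sequence for an observed letter sequence. The Viterbi sketch below illustrates that standard step only; it is not the authors' Smooth Ergodic model, and the two-state model with its probabilities is invented for illustration.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state (phoneme) sequence for an observed
    letter sequence under a standard HMM."""
    # V[t][s] = (best probability of any path ending in s at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, path = max(
                (V[-1][ps][0] * trans_p[ps][s] * emit_p[s][o], V[-1][ps][1])
                for ps in states
            )
            row[s] = (prob, path + [s])
        V.append(row)
    return max(V[-1].values())[1]

# Invented toy model: two phoneme states "A"/"B" emitting letters "x"/"y"
states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
print(viterbi(["x", "y", "y"], states, start_p, trans_p, emit_p))
```

In a real grapheme-to-phoneme system the states would be phonemes, the observations graphemes, and the probabilities estimated from an aligned training corpus rather than written by hand.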
Abstract: The aim of this research is to understand whether the accuracy of customer detection of an employee's emotional labor strategy influences overall service satisfaction. Path analysis showed that employees' positive emotions positively influenced service quality. Service quality in turn influenced customer detection of the employee's emotional deep-acting strategy and customer detection of the employee's emotional surface-acting strategy. Lastly, both forms of detection positively influenced service satisfaction. Based on the analysis results, suggestions are proposed as a reference for human resource management and for use in related fields.
Abstract: Dual phase steels (DPSs) have a microstructure consisting of a hard second phase, martensite, in a soft ferrite matrix. In recent years there has been growing interest in dual-phase steels because of their significant use, particularly in the automotive sector. The composite microstructure of DPSs exhibits interesting mechanical properties, such as continuous yielding, a low yield stress to tensile strength ratio (YS/UTS), and relatively high formability, which offer advantages compared with conventional high strength low alloy steels (HSLAS). This research deals with the characterization of damage in DPSs. By reviewing the mechanisms of failure as a function of the volume fraction of the martensite second phase, a new method is introduced for identifying the failure mechanisms in the various phases of these steels. In this method the acoustic emission (AE) technique is used to detect damage progression. The failure mechanisms consist of ferrite-martensite interface decohesion and/or fracture of the martensite phase. To this end, dual phase steels with different volume fractions of martensite (Vm) were produced by various heat treatments of a low carbon steel (0.1% C), and AE monitoring was then used during tensile tests of these DPSs. From the AE measurements, an energy ratio curve (obtained as the ratio of the strain energy to the acoustic energy) was elaborated, which allows important events, corresponding to sudden drops, to be detected. The AE signal events associated with the various failure mechanisms are classified for ferrite and for DPSs with various amounts of Vm and different martensite morphologies. It is found that the AE energy increases with increasing Vm; this is because martensite fracture contributes more to the failure of samples with higher Vm. The final results show a good relationship between the AE signals and the mechanisms of failure.
Abstract: The hot deformation behavior of high strength low alloy (HSLA) steels with different chemical compositions has been studied by performing a series of hot compression tests under hot working conditions in the temperature range of 900 to 1100°C and the strain rate range of 0.1 to 10 s-1. The dynamic materials model has been employed to develop the processing maps, which show the variation of the efficiency of power dissipation with temperature and strain rate. Kumar's model has also been used to develop the instability map, which shows the variation of plastic deformation instability with temperature and strain rate. The efficiency of power dissipation increased with decreasing strain rate and increasing temperature in the steel with higher Cr and Ti content. A high efficiency of power dissipation, over 20%, was obtained at a finite strain level of 0.1 under conditions of strain rates lower than 1 s-1 and temperatures higher than 1050°C. Plastic instability was expected in the regime of temperatures lower than 1000°C and strain rates lower than 0.3 s-1. The steel with lower Cr and Ti contents showed high efficiency of power dissipation at higher strain rates and lower temperatures.
Abstract: Fair share is one of the scheduling objectives supported on many production systems. However, fair share has been shown to cause performance problems for some users, especially users with difficult jobs. This work focuses on extending goal-oriented parallel computer job scheduling policies to cover the fair share objective. Goal-oriented policies have been shown to achieve good scheduling performance when conflicting objectives are required; they do so by using anytime combinatorial search techniques to find a good compromise schedule within a time limit. The experimental results show that the proposed goal-oriented parallel computer job scheduling policy, namely Tradeoffs(Tw:avgX), achieves good scheduling performance and also provides good fair share performance.
Abstract: The main objective of this project is to build an
autonomous microcontroller-based mobile robot for a local robot
soccer competition. The black competition field is equipped with
white lines to serve as the guidance path for competing robots. Two
prototypes of soccer robot embedded with the Basic Stamp II
microcontroller have been developed. Two servo motors are used as
the drive train for the first prototype whereas the second prototype
uses two DC motors as its drive train. To sense the lines, light-dependent resistors (LDRs) supply the analog inputs to the microcontroller. The performances of both prototypes were evaluated. The DC motor-driven robot produced better trajectory control than the one using servo motors and brought the team into the final round.
Abstract: As seen in the literature, about 70% of improvement initiatives fail, and a significant number do not even get started. This paper analyses the problem of failing Software Process Improvement (SPI) initiatives and proposes good practices, supported by motivational tools, that can help minimize failures. It elaborates on the hypothesis that human factors are poorly addressed by deployers, especially because implementation guides usually emphasize only technical factors. This research was conducted with SPI deployers and analyses 32 SPI initiatives. The results indicate that, although human factors are not commonly highlighted in guidelines, successful initiatives usually address them implicitly. This research shows that practices based on human factors indeed play a crucial role in successful implementations of SPI, proposes change management as a theoretical framework for introducing those practices in the SPI context, and suggests some motivational tools, based on the experience of SPI deployers, to support it.
Abstract: This paper presents the applicability of artificial neural networks for 24-hour-ahead solar power generation forecasting of a 20 kW photovoltaic system; the developed forecasting is suitable for reliable microgrid energy management. In total, four neural networks were proposed, namely a multi-layer perceptron, a radial basis function network, a recurrent network, and a neural network ensemble consisting of bagged networks. The forecasting reliability of the proposed neural networks was evaluated in terms of forecasting error performance using statistical and graphical methods. The experimental results showed that all the proposed networks achieved acceptable forecasting accuracy. In comparison, the neural network ensemble gives the most precise forecasts among the networks: each network in the ensemble over-fits to some extent, leading to a diversity that enhances noise tolerance and forecasting generalization performance compared to the conventional networks.
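The mechanism by which an ensemble of diverse, individually over-fitted forecasters can beat its members is simple to demonstrate: pointwise averaging cancels uncorrelated errors. The sketch below uses invented hourly PV outputs and three invented member forecasts, not the paper's networks or data.

```python
import math

def rmse(actual, forecast):
    """Root Mean Square Error between measured and forecast series."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

# Invented hourly PV output (kW) and three imperfect member forecasts
actual = [0.0, 4.0, 12.0, 16.0, 10.0, 2.0]
forecasts = [
    [0.5, 5.0, 11.0, 17.5, 9.0, 2.5],   # member network 1
    [0.0, 3.0, 13.5, 15.0, 11.5, 1.0],  # member network 2
    [1.0, 4.5, 10.5, 16.5, 8.5, 3.0],   # member network 3
]
# Ensemble forecast: pointwise average of the member networks
ensemble = [sum(col) / len(col) for col in zip(*forecasts)]

for i, f in enumerate(forecasts, 1):
    print(f"network {i}: RMSE = {rmse(actual, f):.3f}")
print(f"ensemble : RMSE = {rmse(actual, ensemble):.3f}")
```

Because the three members err in different directions at most hours, the averaged forecast has a lower RMSE than any single member, which is the diversity effect the abstract attributes to the bagged ensemble.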