Abstract: True integration of multimedia services over wired or wireless networks increases productivity and effectiveness in today's networks. The IP Multimedia Subsystem (IMS) is a Next Generation Network architecture for providing multimedia services over fixed or mobile networks. This paper proposes an extended SIP-based QoS management architecture for IMS services over underlying IP access networks. To guarantee end-to-end QoS for IMS services in the interconnection backbone, SIP-based proxy modules are introduced to support QoS provisioning and to reduce handoff disruption time over IP access networks. In our approach these SIP modules implement a combination of DiffServ and MPLS QoS mechanisms to assure guaranteed QoS for real-time multimedia services. To guarantee QoS over access networks, the SIP modules make resource reservations in advance so as to provide the best QoS to IMS users over heterogeneous networks. To obtain more reliable multimedia services, our approach allows the use of SCTP as the transport for SIP instead of UDP, owing to its multi-streaming feature. This architecture enables QoS provisioning for roaming IMS users, differentiating the IMS network from other common IP networks for the transmission of real-time multimedia services. To validate our approach, small-scale simulation models were developed. The results show that our approach yields comparable performance for efficient delivery of IMS services over heterogeneous IP access networks.
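As an illustrative aside (not part of the paper's architecture): the DiffServ marking that such a QoS module applies to real-time media packets can be sketched with the standard socket API. The Expedited Forwarding code point and the helper name below are assumptions for illustration.

```python
import socket

# DiffServ Expedited Forwarding (EF) per RFC 3246: DSCP 46.
# The DSCP occupies the upper six bits of the IP TOS byte,
# so the raw TOS value is 46 << 2 = 0xB8.
EF_TOS = 46 << 2

def mark_rtp_socket(sock: socket.socket, tos: int = EF_TOS) -> int:
    """Mark outgoing packets with a DiffServ code point and
    return the value the kernel reports back."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    print(mark_rtp_socket(s))  # typically 184 (0xB8) on Linux
    s.close()
```

In a full DiffServ/MPLS deployment such edge marking is only the first step; routers must also honor the code point with a matching per-hop behavior.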
Abstract: Since the majority of faults in a software system are found in a few of its modules, there is a need to identify the modules that are most severely affected and to carry out timely maintenance on them, especially for critical applications. Neural networks, which have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics, are non-linear, sophisticated modeling techniques able to model complex functions. Neural network techniques are used when the exact nature of the relationship between inputs and outputs is not known; a key feature is that they learn this relationship through training. In the present work, various neural network based techniques are explored and a comparative analysis is performed for predicting the level of maintenance needed by predicting the severity level of the faults present in NASA's public domain defect dataset. The algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of fault impact severity. The algorithm can be used to develop a model for identifying the modules that are most heavily affected by faults.
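The three comparison criteria named in the abstract are standard and can be sketched directly; the severity levels and predictions below are hypothetical values, not from the NASA dataset.

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def accuracy(actual, predicted):
    """Fraction of severity levels predicted exactly (after rounding)."""
    return sum(1 for a, p in zip(actual, predicted) if round(p) == a) / len(actual)

# Hypothetical severity levels (1 = low impact, 4 = critical) for six modules.
y_true = [1, 2, 4, 3, 1, 2]
y_pred = [1.1, 2.3, 3.6, 3.0, 1.0, 2.8]

print(round(mae(y_true, y_pred), 3),
      round(rmse(y_true, y_pred), 3),
      round(accuracy(y_true, y_pred), 3))
```

Lower MAE/RMSE and higher accuracy are better; comparing candidate networks on all three, as the paper does, guards against a model that is good on average but poor at exact classification.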
Abstract: The UML modeling of complex distributed systems is often a great challenge due to the large number of parallel real-time operating components. In this paper the problems of verifying such systems are discussed. ECPN, an Extended Colored Petri Net, is defined to formally describe the state transitions of components and the interactions among components. The relationship between sequence diagrams and Free Choice Petri Nets is investigated; Free Choice Petri Net theory helps verify the liveness of sequence diagrams. By converting sequence diagrams to ECPNs and then comparing the behaviors of the sequence diagram ECPNs and the statecharts, the consistency among models is analyzed. Finally, a verification process for an example model is demonstrated.
Abstract: In the present study, Schwertmannite (an iron oxide hydroxide) is selected as an adsorbent for the defluoridation of water. The adsorbent was prepared by a wet chemical process and characterized by SEM, XRD and BET. The fluoride adsorption efficiency of the prepared adsorbent was determined with respect to contact time, initial fluoride concentration, adsorbent dose and pH of the solution. The batch adsorption data revealed that the fluoride adsorption efficiency was highly influenced by the studied factors. Equilibrium was attained within one hour of contact time, indicating fast kinetics, and the adsorption data followed a pseudo-second-order kinetic model. The equilibrium isotherm data were fitted to both the Langmuir and Freundlich isotherm models for a concentration range of 5-30 mg/L. The adsorption system followed the Langmuir isotherm model with a maximum adsorption capacity of 11.3 mg/g. The high adsorption capacity of Schwertmannite points towards the potential of this adsorbent for fluoride removal from aqueous media.
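The Langmuir fit reported above can be illustrated with the standard linearized form Ce/qe = Ce/q_max + 1/(q_max*K_L); the data below are synthetic, generated from the abstract's q_max = 11.3 mg/g with an assumed affinity constant K_L = 0.5 L/mg, so the regression should recover q_max.

```python
# Synthetic Langmuir equilibrium data (illustrative; K_L is an assumption).
q_max_true, K_L = 11.3, 0.5
Ce = [1.0, 3.0, 5.0, 10.0, 20.0, 30.0]                   # equilibrium conc., mg/L
qe = [q_max_true * K_L * c / (1 + K_L * c) for c in Ce]  # adsorbed amount, mg/g

# Linearized Langmuir: Ce/qe = Ce/q_max + 1/(q_max * K_L); slope = 1/q_max.
x, y = Ce, [c / q for c, q in zip(Ce, qe)]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # least-squares slope
q_max_fit = 1.0 / slope
print(round(q_max_fit, 2))  # recovers 11.3
```

With real batch data the points scatter around the line, and the fit quality (R²) is what justifies preferring Langmuir over Freundlich, as the study concludes.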
Abstract: In developing a text-to-speech system, it is well known that the accuracy of the information extracted from a text is crucial to producing high quality synthesized speech. In this paper, a new scheme for converting text into its equivalent phonetic spelling is introduced and developed. This method is applicable to many text-to-speech applications and has many advantages over other methods; it can also complement other methods to improve their performance. The proposed method is a probabilistic model based on the Smooth Ergodic Hidden Markov Model, which can be considered an extension of the HMM. The method is applied to the Persian language and its accuracy in converting text to phonetics is evaluated using simulations.
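For orientation only: the HMM machinery that the paper's Smooth Ergodic HMM extends is standard, with hidden phonemes emitting observed letters and Viterbi decoding recovering the most likely phoneme sequence. All states and probabilities below are toy values, not from the paper.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for an observation list."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor for state s given observation o
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy grapheme-to-phoneme setup: hidden phonemes, observed letters.
states = ("ph1", "ph2")
start = {"ph1": 0.6, "ph2": 0.4}
trans = {"ph1": {"ph1": 0.7, "ph2": 0.3}, "ph2": {"ph1": 0.4, "ph2": 0.6}}
emit = {"ph1": {"a": 0.9, "b": 0.1}, "ph2": {"a": 0.2, "b": 0.8}}
print(viterbi(["a", "b", "b"], states, start, trans, emit))
```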
Abstract: One of the most important requirements for the operation and planning activities of an electrical utility is the prediction of load from the next hour out to several days, known as short-term load forecasting. This paper presents the development of an artificial neural network based short-term load forecasting model. The model can forecast daily load profiles with a lead time of one day for the next 24 hours. In this method, the days of the year are divided into groups using the average temperature, with the groups formed according to the linearity of the load curve, and the final forecast for each group is obtained by considering weekdays and weekends separately. The paper investigates the effects of temperature and humidity on the consumption curve. To forecast the load curve of holidays, the peak and valley are forecast first and the neural network forecast is then re-shaped with the new data. The ANN-based load models are trained using hourly historical load data and daily historical max/min temperature and humidity data. The results of testing the system on data from the Yazd utility are reported.
Abstract: Within the Collaborative Research Center 666, a new product development approach and the innovative manufacturing method of linear flow splitting are being developed. So far, the design process is supported by 3D-CAD models utilizing User Defined Features in standard CAD systems. This paper presents new functions for generating 3D models of integral sheet metal products with bifurcations using Siemens PLM NX 6. The emphasis is placed on the design and semi-automated insertion of User Defined Features. To this end, User Defined Features were developed for both linear flow splitting and its derivative, linear bend splitting. To facilitate the modeling process, an application was developed that guides the user through the insertion process; its usability and dialog layout follow known standard features. The work presented here has significant implications for the quality, accuracy and efficiency of the product generation process for sheet metal products with higher order bifurcations.
Abstract: A new technology of fuzzy neural networks for identifying the parameters of mathematical models of geofields is proposed and tested. The effectiveness of this soft computing technology is demonstrated, especially in the early stage of modeling, when the information is uncertain and limited.
Abstract: This research investigates the suitability of fuel oil for improving gypseous soil. Detailed laboratory tests were carried out on two soils obtained from the Al-Therthar site (Al-Anbar Province, Iraq): soil I, a sandy soil with a gypsum content of 51.6%, and soil II, a clayey soil with a gypsum content of 26.55%. The study examines the improvement of the properties of these gypseous soils using fuel oil, which is locally available at low cost, to minimize the effect of moisture on the soils. The test program included measurements of the permeability, compressibility and collapse properties of the soils; the shear strength of the soils and the weight loss of fuel oil due to drying were also determined. These tests were conducted on treated and untreated soils to observe the effect on the engineering properties of mixing the soil with varying amounts of fuel oil equivalent to the water content. The results showed that fuel oil is a good material for modifying the basic properties of gypseous soil, namely collapsibility and permeability, which are the main problems of this soil, while retaining an amount of cohesion suitable for carrying the loads from the structure.
Abstract: In this paper we present discretization and decomposition methods for a multi-component transport model of a chemical vapor deposition (CVD) process. CVD processes are used to manufacture deposition layers or bulk materials; in our transport model we simulate the deposition of thin layers. The microscopic model is based on the heavy particles, which are derived by approximately solving a linearized multi-component Boltzmann equation. For the drift process of the particles we propose diffusion-reaction equations, as well as for the effects of heat conduction. We concentrate on solving the diffusion-reaction equation with analytical and numerical methods. For the chemical processes, modelled with reaction equations, we propose decomposition methods and decouple the multi-component models into simpler systems of differential equations. In the numerical experiments we present the computational results of our proposed models.
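The decomposition idea, splitting a coupled equation into simpler sub-steps solved sequentially, can be sketched on a minimal decay-plus-source equation dy/dt = -k*y + s (Lie splitting). The equation and parameters are illustrative, not the CVD model of the paper.

```python
import math

def lie_splitting(y0, k, s, t_end, n_steps):
    """Integrate dy/dt = -k*y + s by Lie splitting:
    sub-step 1 solves the decay part exactly, sub-step 2 adds the source."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y *= math.exp(-k * dt)   # decay sub-step, solved exactly
        y += s * dt              # source sub-step, explicit Euler
    return y

k, s, y0, t = 2.0, 1.0, 0.0, 1.0
exact = s / k + (y0 - s / k) * math.exp(-k * t)   # closed-form solution
approx = lie_splitting(y0, k, s, t, 1000)
print(abs(approx - exact) < 1e-2)  # True: first-order splitting error, small dt
```

Lie splitting is first-order accurate in the step size; Strang splitting (half-step, full-step, half-step) is a common second-order refinement of the same decoupling idea.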
Abstract: Long-number multiplications (n ≥ 128 bits) are a primitive in most cryptosystems. They can be performed better using the Karatsuba-Ofman technique. This algorithm is easy to parallelize on workstation networks and on distributed memory, and it is known as the practical method of choice. Multiplying long numbers using the Karatsuba-Ofman algorithm is fast but highly recursive. In this paper, we propose different designs for implementing a Karatsuba-Ofman multiplier. A mixture of sequential and combinational system design techniques involving pipelining is applied to our proposed designs, so that multiplying large numbers can be adapted flexibly to time, area and power criteria. In computationally and area-constrained embedded systems such as smart cards and mobile phones, multiplication of finite field elements can thus be achieved more efficiently. The proposed designs are compared to other existing techniques, and mathematical models (Area(n), Delay(n)) of our proposed designs are elaborated and evaluated on different FPGA devices.
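The recursive structure that the paper maps to hardware can be shown in a few lines of software; this is the textbook integer algorithm, not the paper's FPGA design.

```python
def karatsuba(x, y):
    """Karatsuba-Ofman multiplication of non-negative integers:
    three recursive half-size products instead of four,
    giving O(n^1.585) digit operations instead of O(n^2)."""
    if x < 10 or y < 10:                  # base case: single-digit operand
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a, b = divmod(x, p)                   # x = a*p + b
    c, d = divmod(y, p)                   # y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_bc = karatsuba(a + b, c + d) - ac - bd   # = a*d + b*c, one multiply
    return ac * p * p + ad_bc * p + bd

print(karatsuba(12345678, 87654321) == 12345678 * 87654321)  # True
```

The recursion depth and the three-way branching at each level are exactly what makes the hardware trade-off interesting: each level can be unrolled into combinational logic or shared sequentially, which is the design space the paper explores.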
Abstract: In the field of decision support systems, the mental impression formed by a manager's decision when aided by a decision support system has received greater attention than the decision itself. Much of this study is an attempt to understand the relationship between decision support system usage and the decision outcomes it governs. For example, several researchers have proposed many different models to analyze the linkage between decision support system processes and the results of decision making. This study draws out the important relationship between a manager's mental approach and the use of a decision support system. The findings of this paper are theoretical attempts to present the Decision Support System (DSS) in a way that exhibits and promotes learning in semi-structured areas. The proposed model shows the points of one's learning improvements and maintains a theoretical approach for exploring the contribution of the DSS to enhancing decision forming and governing the system.
Abstract: In this paper the usefulness of the quasi-Newton iteration procedure for estimating the parameters of the conditional variance equation within the BHHH algorithm is presented. The analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm in comparison to other optimization algorithms is that it requires no third derivatives while offering assured convergence. To simplify the optimization procedure, the BHHH algorithm approximates the matrix of second derivatives according to the information identity. However, parameter estimation in the a/symmetric GARCH(1,1) model assuming a normal distribution of returns is not simple, i.e. it is difficult to solve analytically; the maximum of the likelihood function can instead be found by iterating until no further increase is obtained. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined; the number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modeling daily volatility of the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry) and Ericsson Nikola Tesla (information and communications industry).
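The objective that BHHH iterations climb can be sketched directly: the GARCH(1,1) variance recursion and the Gaussian log-likelihood evaluated along it. The returns and parameter values below are illustrative starting values, not estimates for the Croatian stocks analysed in the paper.

```python
import math

def garch_loglik(returns, omega, alpha, beta):
    """Return (log-likelihood, variance path) for GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    with r_t ~ N(0, sigma2_t)."""
    var = sum(r * r for r in returns) / len(returns)   # unconditional start value
    ll, path = 0.0, []
    for r in returns:
        path.append(var)
        # Gaussian log-density of r given current conditional variance
        ll += -0.5 * (math.log(2 * math.pi) + math.log(var) + r * r / var)
        var = omega + alpha * r * r + beta * var        # variance recursion
    return ll, path

rets = [0.01, -0.02, 0.015, -0.005, 0.03]
ll, path = garch_loglik(rets, omega=1e-5, alpha=0.1, beta=0.85)
print(round(ll, 2), all(v > 0 for v in path))
```

An optimizer (BHHH, or any quasi-Newton scheme) repeatedly evaluates this function and its score vector, updating (omega, alpha, beta) until the likelihood stops increasing, which is exactly the stopping rule described above.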
Abstract: Risk management is an essential part of project management and plays a significant role in project success. Many failures associated with Web projects are the consequences of poor awareness of the risks involved and of the lack of process models that can serve as guidelines for the development of Web-based applications; the contemporary process models that do exist have been devised for the development of conventional software. To circumvent this problem, this paper introduces WPRiMA (Web Project Risk Management Assessment), a tool that implements RIAP, the risk identification architecture pattern model, which focuses on data from the proprietor's and vendor's perspectives. The paper also illustrates how the WPRiMA tool works and how it can be used to calculate the risk level for a given Web project, to generate recommendations that facilitate risk avoidance in a project, and to improve the prospects of early risk management.
Abstract: Solutions are proposed for the central problem of estimating the reaction rate coefficients in homogeneous kinetics. The first is based upon the fact that the right-hand side of a kinetic differential equation is linear in the rate constants, whereas the second uses the technique of neural networks. The second solution is discussed in depth, and its advantages, disadvantages and conditions of applicability are analyzed in the mirror of the first one. Numerical analysis is carried out on practical models using simulated data, with our programs written in Mathematica.
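The linearity exploited by the first approach can be illustrated on the simplest case: for dA/dt = -k*A the right-hand side is linear in k, so k follows from linear least squares on concentrations and finite-difference derivatives. The sketch below (in Python, whereas the paper uses Mathematica) uses data simulated from an assumed k_true = 0.3.

```python
import math

# Simulated first-order decay data, A(t) = A0 * exp(-k_true * t).
k_true, A0, dt = 0.3, 1.0, 0.01
A = [A0 * math.exp(-k_true * i * dt) for i in range(200)]

# Central-difference estimates of dA/dt at interior points.
dAdt = [(A[i + 1] - A[i - 1]) / (2 * dt) for i in range(1, len(A) - 1)]
Amid = A[1:-1]

# Least squares: minimize sum (dA/dt + k*A)^2  =>  k = -sum(A*dA/dt)/sum(A^2).
k_hat = -sum(a * d for a, d in zip(Amid, dAdt)) / sum(a * a for a in Amid)
print(abs(k_hat - k_true) < 1e-3)  # True
```

For multi-step mechanisms the same idea gives an overdetermined linear system in all rate constants at once, which is what makes this baseline a natural benchmark for the neural network approach.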
Abstract: There exists a strong correlation between efficient project management and competitive advantage for organizations. Therefore, organizations are striving to standardize and assess the rigor of their project management processes and capabilities, i.e. their project management maturity. Researchers and standardization bodies have developed several project management maturity models (PMMMs) to assess the project management maturity of organizations. This study presents a critical evaluation of some of the leading PMMMs against OPM3® in a multitude of ways, to determine which PMMM is the most comprehensive model, i.e. which could assess most aspects of an organization and also help organizations gain competitive advantage over their competitors. After a detailed morphological analysis of the models, it is concluded that OPM3® is the most promising maturity model and can really provide a competitive advantage to organizations due to its unique approach to assessment and improvement strategies.
Abstract: A sequential decision problem, based on the task of identifying the species of trees given acoustic echo data collected from them, is considered with well-known stochastic classifiers, including single and mixture Gaussian models. Echoes are processed with a preprocessing stage based on a model of mammalian cochlear filtering, using a new discrete low-pass filter characteristic. Stopping-time performance of the sequential decision process is evaluated and compared. It is observed that the new low-pass filter processing results in faster sequential decisions.
Abstract: Energy dissipation in drops has been investigated using physical models. After determining the effective parameters of the phenomenon, three drops with different heights were constructed from Plexiglas and installed in two existing flumes in the hydraulic laboratory. Several runs of the physical models were undertaken to measure the parameters required for determining the energy dissipation. The results showed that the energy dissipation in drops depends on the drop height and discharge; the predicted relative energy dissipation varied from 10.0% to 94.3%. This work has also indicated that the energy loss at a drop is mainly due to the mixing of the jet with the pool behind the jet, which causes air bubble entrainment in the flow. A statistical model developed to predict the energy dissipation in vertical drops indicates a nonlinear correlation between the effective parameters. Further, an artificial neural network (ANN) approach was used to develop an explicit procedure for calculating the energy loss at drops using NeuroSolutions. The trained network was able to predict the response with R² = 0.977 and RMSE = 0.0085. The performance of the ANN was found to be effective when compared to regression equations in predicting the energy loss.
Abstract: Electronics products that achieve high levels of integrated communications, computing, entertainment and multimedia features in small, stylish and robust new form factors are winning in the marketplace. Given the high costs an industry may incur, and since a high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; yet today's customers demand miniaturization, low costs, high performance and excellent reliability, making yield maximization a never-ending pursuit of an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed to predict the assembly process. To evaluate the quality of upcoming circuits, yield models are used which not only predict manufacturing costs but also provide vital information that eases the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors, such as the boards, the placement, the components, the materials from which the components are made, and the processes, must be taken into consideration. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, which consists of a class of computational algorithms that depend on repeated random sampling to compute their results; this method is utilized to recreate the simulation of placement and assembly processes within a production line.
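The Monte Carlo idea can be sketched in a few lines: draw random placement offsets for each component on a board and count boards whose every offset stays within tolerance. The machine sigma and tolerance values are assumptions for illustration, not parameters from the paper's production line.

```python
import random

def placement_yield(n_boards, parts_per_board, sigma_um, tol_um, seed=1):
    """Estimate assembly yield by repeated random sampling: a board
    passes only if every simulated placement offset is within tolerance."""
    rng = random.Random(seed)                     # fixed seed: repeatable estimate
    good = 0
    for _ in range(n_boards):
        good += all(abs(rng.gauss(0.0, sigma_um)) <= tol_um
                    for _ in range(parts_per_board))
    return good / n_boards

y = placement_yield(n_boards=20000, parts_per_board=10,
                    sigma_um=15.0, tol_um=50.0)
print(round(y, 3))
```

Increasing the number of sampled boards tightens the estimate (the error shrinks roughly as 1/sqrt(n)), which is why Monte Carlo yield models trade runtime for precision.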
Abstract: Colored Petri Nets (CPNs) are a well-known kind of high-level Petri net. With its sound and complete semantics, rewriting logic is one of the most powerful logics for the description and verification of non-deterministic concurrent systems. Recently, CPN semantics have been defined in terms of rewriting logic, allowing us to build models by formal reasoning. In this paper, we propose an automatic translation of CPNs into the rewriting logic language Maude. The tool allows the graphical editing and simulation of CPNs: the user draws a CPN graphically, the graphical representation of the drawn CPN is automatically translated into a Maude specification, and the Maude language is then used to simulate the resulting specification. It is the first rewriting logic based environment for this category of Petri nets.