Abstract: The Session Initiation Protocol (SIP) is a signaling protocol for creating, modifying, and terminating sessions among participants, including Internet conferences, telephone calls, and multimedia distribution. SIP supports user mobility by proxying and redirecting requests to the user's current location. In this paper, we provide formal Specification and Description Language (SDL) and Message Sequence Chart (MSC) models that define the Internet Engineering Task Force (IETF) SIP protocol and sample services derived from its informal specification. We create an "Abstract User Interface" using case analysis so that it can be applied to identify SIP services more explicitly. The selected sample SIP features are then used as case scenarios; they are rendered as MSCs and validated against their corresponding SDL models.
Abstract: The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance under inlet distortion. In the PCM calculation, the sub-compressors' outlet static pressure is assumed to be uniform, which simplifies the calculation procedure. However, when the compressor's outlet duct is not long and straight, this assumption frequently induces errors of 10% to 15%. This paper provides a revised PCM calculation method that corrects this error. The revised method employs the energy, momentum, and continuity equations to obtain the needed parameters, replacing the equal static pressure assumption. Based on the revised method, the PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted with the revised method in order to evaluate its effectiveness. When validated against experimental data, the calculated results agree well with the measurements, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method has clear advantages in predicting the performance of a distorted compressor with a short exhaust duct.
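The heart of the revised closure can be illustrated with a mixed-out average of the sub-compressor exit streams. The following minimal sketch applies continuity, momentum, and energy to two parallel streams in place of the equal static-pressure assumption; it uses an incompressible form for brevity, and all numerical values are hypothetical rather than taken from the paper.

    def mixed_out(p, v, A, rho):
        """Mix parallel streams (static pressure p, velocity v, area A) to a
        uniform state at the combined area via continuity, momentum, energy."""
        m = [rho * A_i * v_i for A_i, v_i in zip(A, v)]          # continuity per stream
        A_tot, m_tot = sum(A), sum(m)
        v_mix = m_tot / (rho * A_tot)                            # continuity, mixed state
        p_mix = (sum(p_i * A_i + m_i * v_i
                     for p_i, A_i, m_i, v_i in zip(p, A, m, v))
                 - m_tot * v_mix) / A_tot                        # momentum balance
        h0_mix = sum(m_i * (p_i / rho + 0.5 * v_i ** 2)
                     for m_i, p_i, v_i in zip(m, p, v)) / m_tot  # energy (total head)
        return p_mix, v_mix, h0_mix

    # Two hypothetical sub-compressor exit streams of equal area:
    print(mixed_out(p=[101e3, 98e3], v=[110.0, 95.0], A=[0.02, 0.02], rho=1.2))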
Abstract: Our ability to solve complex engineering problems is directly related to the processing capacity of computers, which allow numerical algorithms to be run quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to give incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, nine statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modelled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not provide the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modelled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. From this analysis, the failure probability can be derived explicitly with the friction angle treated as a random variable. Furthermore, the stability analysis can be compared when the friction angle is modelled as a Dagum distribution (the distribution that best fit the histogram) and as a normal distribution. This comparison reveals relevant differences when analyzed in light of risk management.
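A minimal sketch of the distribution-comparison step, using synthetic friction-angle samples; scipy's burr distribution (Burr Type III) is used as a stand-in for the Dagum family named in the abstract, and the critical friction angle is hypothetical.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    phi = rng.gamma(shape=40.0, scale=0.75, size=200)  # hypothetical friction angles (deg)

    candidates = {"normal": stats.norm, "dagum (burr III)": stats.burr,
                  "lognormal": stats.lognorm, "weibull": stats.weibull_min}

    for name, dist in candidates.items():
        params = dist.fit(phi)                             # maximum-likelihood fit
        ks = stats.kstest(phi, dist.cdf, args=params)      # goodness-of-fit check
        print(f"{name:18s} KS statistic = {ks.statistic:.4f}")

    # Failure probability for a hypothetical critical friction angle:
    # P(failure) = P(phi < phi_crit) = F(phi_crit) under the fitted model.
    phi_crit = 25.0
    params = stats.burr.fit(phi)
    print("P(failure) =", stats.burr.cdf(phi_crit, *params))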
Abstract: Human skin detection is recognized as the primary step in many applications, such as face detection, illicit image filtering, hand recognition, and video surveillance. The performance of any skin detection application relies heavily on two components: the feature extraction and the classification method. Skin color is the most important information used for skin detection. However, color features alone sometimes cannot handle images whose background shares the same color distribution as skin. Pixel-based color features do not eliminate skin-like colors, because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color features such as the mean and standard deviation are exploited as additional features to increase the reliability of the skin detector. In this paper, we study the effectiveness of statistical color features for human skin detection. Furthermore, we analyze the integrated color and texture features using eight classifiers in three color spaces: RGB, YCbCr, and HSV. The experimental results show that integrating the statistical features with a Random Forest classifier achieves a significant performance, with an F1-score of 0.969.
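The integrated-feature idea can be sketched as follows, with hypothetical training arrays: each pixel's color values are augmented with the local mean and standard deviation over a small neighborhood before classification with a Random Forest. The window size and data are assumptions, not the paper's settings.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.ensemble import RandomForestClassifier

    def features(img):
        """Stack raw channels with local mean and std (5x5 window) per channel."""
        mean = uniform_filter(img.astype(float), size=(5, 5, 1))
        sq_mean = uniform_filter(img.astype(float) ** 2, size=(5, 5, 1))
        std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
        return np.dstack([img, mean, std]).reshape(-1, img.shape[2] * 3)

    # Hypothetical data: img is an HxWx3 array (RGB, YCbCr, or HSV), mask is
    # an HxW binary ground-truth skin map.
    img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    mask = np.random.randint(0, 2, (64, 64))

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features(img), mask.ravel())
    pred = clf.predict(features(img)).reshape(mask.shape)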
Abstract: The reinforced concrete (RC) shear wall system is popular in residential buildings in South Korea. RC walls are commonly subjected to axial forces, yet the effect of axial force on the strength loss of fire-damaged walls has not been investigated. This paper investigates the temperature distribution in fire-damaged concrete walls under different axial loads, with the axial force ratio as the test variable. The RC walls are fabricated with a wall thickness of 150 mm, a length of 750 mm, and a height of 1,300 mm, with a concrete strength of 24 MPa. After curing, the specimens are heated on one surface following the ISO-834 standard time-temperature curve for 2 hours, and the temperature distributions during the test are measured using thermocouples embedded in the walls. The experimental results show that the temperature of the RC walls exposed to fire increases as the axial force ratio increases. To verify the experiments, finite element (FE) models are generated for coupled temperature-structure analyses. The analytical thermal behaviors are in good agreement with the experimental results, and the predicted displacement of the walls decreases as the axial force increases.
Abstract: Landfill leachates contain a number of persistent pollutants, including heavy metals. These pollutants can spread through ecosystems and accumulate in fish, most of which are top consumers in trophic chains. Fish are free-swimming organisms, but, owing to their species-specific ecological and behavioral properties, they often prefer the most suitable biotopes and therefore do not avoid harmful substances or environments. It is therefore necessary to evaluate persistent pollutant dispersion in a hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g., river-pond-river), the distance from the pollution source can be a good indicator of such metal distribution. The studies were carried out in the hybrid-type ecosystem neighboring the Kairiai landfill, located 5 km east of the city of Šiauliai. Metal concentrations in fish tissues (gills, liver, and muscle) were measured in two ecologically different groups of fish, classified by feeding characteristics: benthophagous (gibel carp, roach) and predatory (northern pike, perch). A number of mathematical models (linear and non-linear, using log and other transformations) were applied in order to identify the most satisfactory description of the relationship between fish tissue metal concentration and the distance from the pollution source. Only the log-multiple regression model revealed a pattern: the distance from the pollution source is closely and positively correlated with the metal concentration in all studied tissues (gills, liver, and muscle) of the predatory fish.
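A minimal sketch of the kind of log-multiple regression referred to above, with hypothetical distance and concentration values; the study's actual model terms and data are not reproduced here.

    import numpy as np
    import statsmodels.api as sm

    distance = np.array([0.5, 1.0, 2.0, 3.5, 5.0])   # km from landfill (hypothetical)
    conc = np.array([0.8, 1.1, 1.6, 2.2, 3.0])       # tissue metal conc. (hypothetical)

    X = sm.add_constant(np.log(distance))            # log-transformed regressor
    model = sm.OLS(np.log(conc), X).fit()
    print(model.summary())   # slope sign/significance quantifies the distance effect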
Abstract: This study aims to establish a function point process based on a stochastic distribution. To demonstrate the effectiveness of the approach, we present a case study that applies the suggested method to the software development of an automotive electrical and electronics system, based on Monte Carlo simulation. The results of this paper are expected to serve as guidance for establishing a function point process in organizations and as a tool to help project managers make correct decisions.
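A minimal sketch of the Monte Carlo step, assuming hypothetical triangular distributions for the counted function point element types and a hypothetical productivity factor; the paper's actual distributions are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000

    # (low, mode, high) unadjusted FP contributions per element type (hypothetical)
    elements = {"inputs": (30, 40, 60), "outputs": (20, 30, 45),
                "inquiries": (10, 15, 25), "files": (35, 50, 70)}

    ufp = sum(rng.triangular(*p, size=N) for p in elements.values())
    effort = ufp * rng.normal(8.0, 1.5, size=N)   # person-hours per FP (hypothetical)

    print("median effort:", np.percentile(effort, 50))
    print("80th percentile:", np.percentile(effort, 80))  # basis for commitments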
Abstract: This study investigates how site-specific traffic data differ from the default values in the AASHTOWare Mechanistic-Empirical Pavement Design software. Two Weigh-in-Motion (WIM) stations were installed on Interstate-40 (I-40) and Interstate-25 (I-25) to develop site-specific data. A computer program named WIM Data Analysis Software (WIMDAS) was developed in Microsoft C# (.NET) for quality checking and processing of the raw WIM data. A complete year of data, from November 2013 to October 2014, was analyzed with the program. The vehicle class distribution, directional distribution, lane distribution, monthly adjustment factors, hourly distribution, axle load spectra, average number of axles per vehicle, axle spacing, lateral wander distribution, and wheelbase distribution were then calculated, and a comparative study was carried out between the measured data and the AASHTOWare default values. The measured general traffic inputs for I-40 and I-25 were found to differ significantly from the default values.
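As an illustration of one of these inputs, the sketch below computes monthly adjustment factors (monthly average daily truck volume normalized so the twelve factors average to 1.0) from a processed WIM record file; the file and column names are hypothetical, and this is not the WIMDAS code.

    import pandas as pd

    wim = pd.read_csv("wim_records.csv", parse_dates=["timestamp"])
    trucks = wim[wim["vehicle_class"].between(4, 13)]        # FHWA classes 4-13

    daily = trucks.groupby(trucks["timestamp"].dt.date).size()
    daily.index = pd.to_datetime(daily.index)
    monthly_adt = daily.groupby(daily.index.month).mean()    # avg daily volume per month

    maf = monthly_adt / monthly_adt.mean()                   # normalized to mean 1.0
    print(maf)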
Abstract: Over the last decade, researchers have focused increasing interest on multicast group key management frameworks. The central research challenge is secure and efficient group key distribution. The present paper builds on a bit-model-based secure multicast group key distribution scheme using the most popular absolute-encoder output code, the Gray code. The focus is twofold. The first part deals with reducing the computational complexity, which our scheme achieves by performing fewer multiplication operations during the key updating process. To optimize the number of multiplications, this work uses an O(1)-time algorithm for multiplying two N-bit binary numbers on an N x N bit-model reconfigurable mesh. The second part aims at reducing the amount of information stored in the group center and the group members while updating the key content. A comparative analysis of the performance of various key distribution schemes is presented, and it is observed that the proposed algorithm significantly reduces the computation and storage complexity. The proposed algorithm is suitable for high-performance computing environments.
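For reference, the Gray code used in the scheme is the standard binary-reflected code; the sketch below shows the encoding and its inverse, while the bit-model mesh multiplication itself is not reproduced.

    def to_gray(n: int) -> int:
        """Binary-reflected Gray code of n."""
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        """Inverse transform: XOR-prefix over the bits."""
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    assert all(from_gray(to_gray(i)) == i for i in range(1024))
    # Successive codewords differ in exactly one bit, which is what makes the
    # code attractive for incremental key-content updates.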
Abstract: Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a method for segmenting the liver from a series of computed tomography images. Overall, we present a novel method that couples density matching with shape priors. Density matching is a tracking approach that maximizes the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from crossing the liver's boundaries where they are weak. The model is implemented using a modified distance regularized level set (DRLS) formulation. The experimental results show that the method achieves satisfactory results, and a comparison with the original DRLS model shows that the proposed model is more effective at addressing the over-segmentation problem. Finally, we gauge the performance of our model using the metrics of accuracy, sensitivity, and specificity.
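The similarity term that drives the evolution can be sketched directly, assuming two intensity histograms as inputs; the sample data below are hypothetical stand-ins for the region and model distributions.

    import numpy as np

    def bhattacharyya(p: np.ndarray, q: np.ndarray) -> float:
        """Bhattacharyya coefficient between two discrete densities (1.0 = identical)."""
        p = p / p.sum()
        q = q / q.sum()
        return float(np.sum(np.sqrt(p * q)))

    # Hypothetical usage: histogram of the current region vs. the liver model.
    region_hist = np.histogram(np.random.normal(100, 15, 5000), bins=64, range=(0, 255))[0]
    model_hist = np.histogram(np.random.normal(105, 15, 5000), bins=64, range=(0, 255))[0]
    print("similarity:", bhattacharyya(region_hist.astype(float), model_hist.astype(float)))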
Abstract: A multi-dimensional computational fluid dynamics (CFD) two-phase model was developed to simulate the in-core coolant circuit of a pressurized heavy water reactor (PHWR) of a commercial nuclear power plant (NPP). Because this PHWR is of the reactor pressure vessel (RPV) type, detailed three-dimensional (3D) models of the large reservoirs of the RPV (the upper and lower plenums and the downcomer) were coupled with an in-house finite volume one-dimensional (1D) code that models the 451 coolant channels housing the nuclear fuel. In the 1D code, suitable empirical correlations account for the in-channel distributed (friction) and concentrated (spacer grids, inlet and outlet throttles) pressure losses. A local power distribution in each coolant channel is also taken into account. The heat transfer between the coolant and the surrounding moderator is calculated with a two-dimensional theoretical model. The implementation of subcooled boiling and condensation models in the 1D code, together with property functions for the thermal and dynamic properties of the coolant and moderator (heavy water), allows the in-core steam generation to be estimated under nominal flow conditions for a generic fission power distribution. The in-core mass flow distribution results for steady-state nominal conditions agree with design expectations, providing a first assessment of the coupled 1D/3D model. Results for nominal conditions were compared with those of a previous 1D/3D single-phase model, yielding more realistic temperature patterns and also allowing visualization of low void fraction values inside the upper plenum. It must be mentioned that the current results were obtained by imposing prescribed fission power functions from the literature; the results are therefore shown with the aim of pointing out the potential of the developed model.
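As an illustration of the 1D in-channel pressure-loss bookkeeping described above, the following sketch combines a distributed (friction) term with concentrated loss coefficients; the Blasius correlation, fluid properties, and geometry are hypothetical placeholders rather than the correlations actually used in the code.

    def channel_dp(m_dot, rho, D, L, A, K_local):
        """Total pressure drop: Darcy friction plus a sum of local loss coefficients."""
        v = m_dot / (rho * A)                  # bulk velocity
        Re = rho * v * D / 8.0e-5              # hypothetical D2O viscosity (Pa s)
        f = 0.316 * Re ** -0.25                # Blasius smooth-tube correlation
        dyn = 0.5 * rho * v ** 2               # dynamic pressure
        return f * (L / D) * dyn + sum(K_local) * dyn

    # One channel: inlet/outlet throttles plus several spacer grids (all values
    # hypothetical, not the plant's design data).
    print(channel_dp(m_dot=24.0, rho=850.0, D=0.0076, L=6.0, A=0.0035,
                     K_local=[2.5, 1.8] + [0.9] * 5))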
Abstract: Radiative heat transfer in a participating medium was analyzed using the finite volume method. The radiative transfer equations are formulated for an absorbing, emitting, and anisotropically scattering medium. The solution strategy is discussed and the conditions for computational stability are presented. The equations have been solved for a transient radiative medium and for transient radiation coupled with transient conduction, and results have been obtained for the irradiation and the corresponding heat fluxes in both cases. The solutions can be used to determine the incident energy and the surface heat flux. Transient solutions were obtained for a slab that transfers heat both by conduction and by thermal radiation; the effect of heat conduction during the transient phase is to partially equalize the internal temperature distribution. The solution procedure provides accurate temperature distributions in these regions. A finite volume procedure with variable space and time increments is used to solve the transient radiation equation. The medium in the enclosure absorbs, emits, and anisotropically scatters radiative energy. The incident radiation and the radiative heat fluxes are presented in graphical form. The phase function anisotropy plays a significant role in the radiative heat transfer when the boundary condition is non-symmetric.
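As a rough illustration of finite-volume time marching for the coupled problem, the sketch below advances a 1D slab with an explicit conduction update and a radiative source term; the divergence of the radiative flux, which the paper obtains from the radiative transfer equation, is left as a placeholder input, and all material constants are hypothetical.

    import numpy as np

    def step(T, div_qr, k, rho, cp, dx, dt):
        """One explicit step of rho*cp*dT/dt = d/dx(k dT/dx) - div(q_r)."""
        q = -k * np.diff(T) / dx                       # conductive face fluxes
        dTdt = np.zeros_like(T)
        dTdt[1:-1] = (-(q[1:] - q[:-1]) / dx - div_qr[1:-1]) / (rho * cp)
        return T + dt * dTdt                           # boundary cells held fixed

    T = np.full(50, 300.0); T[0], T[-1] = 1000.0, 300.0   # hot and cold walls (K)
    div_qr = np.zeros(50)                                  # placeholder radiative source
    for _ in range(2000):
        T = step(T, div_qr, k=1.0, rho=1000.0, cp=500.0, dx=0.01, dt=0.05)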
Abstract: In recent decades, probabilistically constrained optimal control problems have attracted much attention in many research fields. Although probabilistic constraints are generally intractable in an optimization problem, several tractable methods have been proposed to handle them. In most methods, the probabilistic constraints are reduced to deterministic constraints that are tractable in an optimization problem. However, there is a gap between the transformed deterministic constraints obtained when the probability distribution is known and when it is unknown. This paper examines the conservativeness of probabilistically constrained optimization methods for unknown probability distributions. The objective of this paper is to provide a quantitative assessment of the conservatism of tractable constraints in probabilistically constrained optimization with an unknown probability distribution.
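The gap in question can be made concrete for a scalar chance constraint P(a'x <= b) >= 1 - eps with random a of known mean and covariance: the known-Gaussian reduction uses the safety margin Phi^{-1}(1 - eps), whereas the standard distribution-free (Chebyshev-type) reduction uses sqrt((1 - eps)/eps), which is always larger. The sketch below compares the two margins; it illustrates the general phenomenon, not the paper's specific method.

    import numpy as np
    from scipy.stats import norm

    for eps in (0.10, 0.05, 0.01):
        gaussian_margin = norm.ppf(1.0 - eps)               # known Gaussian case
        distribution_free_margin = np.sqrt((1.0 - eps) / eps)  # unknown distribution
        print(f"eps={eps:.2f}: Gaussian {gaussian_margin:.3f}  "
              f"distribution-free {distribution_free_margin:.3f}  "
              f"ratio {distribution_free_margin / gaussian_margin:.2f}")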
Abstract: Ceramic waste aggregates (CWAs) were made from electric porcelain insulator wastes supplied by an electric power company, which were crushed and ground to fine aggregate sizes. In this study, to develop the CWA mortar as an eco-efficient material, ground granulated blast-furnace slag (GGBS) was incorporated as a supplementary cementitious material (SCM). The water-to-binder ratio (W/B) of the CWA mortars was varied at 0.4, 0.5, and 0.6. The cement of the CWA mortar was replaced by GGBS at 20% and 40% by volume (about 18% and 37% by weight). The mechanical properties of compressive strength, splitting tensile strength, and elastic modulus were evaluated at the ages of 7, 28, and 91 days. Moreover, a chloride ingress test was carried out on the CWA mortars in a 5.0% NaCl solution for 48 weeks, and the chloride diffusion was assessed using electron probe microanalysis (EPMA). To relate the apparent chloride diffusion coefficient to the pore size, a pore size distribution test was also performed with mercury intrusion porosimetry at the same time as the EPMA. The compressive strength of the CWA mortars with GGBS was higher than that without GGBS at the ages of 28 and 91 days, and the resistance to chloride ingress of the CWA mortar improved in proportion to the GGBS replacement level.
Abstract: Predicting the fatigue crack propagation life is necessary for estimating structural integrity. Because of the uncertainty and randomness of structural behavior, the stochastic characteristics of the fatigue crack propagation life at a specified crack size must also be analyzed. The purpose of this study is to find the effect of the load ratio on the probability distribution of the fatigue crack propagation life at a specified grown crack size and to identify a suitable probability distribution for magnesium alloys under various fatigue load ratio conditions. To investigate the stochastic crack growth behavior, fatigue crack propagation experiments were performed in laboratory air under several fatigue load ratio conditions using AZ31. The Anderson-Darling test was applied as a goodness-of-fit test for the probability distribution of the fatigue crack propagation life, and the effect of the load ratio on the variability of the fatigue crack propagation life was also investigated.
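A minimal sketch of the goodness-of-fit step, with hypothetical crack propagation lives; scipy's Anderson-Darling routine covers the normal family directly, so a lognormal hypothesis, for example, is tested on the log-transformed lives.

    import numpy as np
    from scipy import stats

    lives = np.array([1.82e5, 2.10e5, 2.35e5, 2.61e5, 2.88e5,
                      3.05e5, 3.40e5, 3.77e5, 4.02e5, 4.55e5])  # cycles, hypothetical

    result = stats.anderson(np.log(lives), dist='norm')  # lognormal via log data
    print("A^2 =", result.statistic)
    for sl, cv in zip(result.significance_level, result.critical_values):
        verdict = "reject" if result.statistic > cv else "accept"
        print(f"{sl}% level: {verdict} lognormal")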
Abstract: This study proposes a method for estimating the stress distribution in beam structures based on terrestrial laser scanning (TLS). The main components of the method are the creation of lattices from the raw TLS data that satisfy a suitability condition and the application of cubic smoothing spline interpolation (CSSI) to estimate the stress distribution. Estimating the stress distribution of a structural member, or of the whole structure, is an important part of safety evaluation. Existing sensors such as the electric strain gauge (ESG) and the linear variable differential transformer (LVDT) are contact-type sensors that must be installed on the structural members; they also suffer from limitations such as the need for separate space for network cables and difficult access for installation in real buildings. To overcome these problems inherent in contact-type sensors, a TLS system based on LiDAR (light detection and ranging), which can measure the displacement of a target at long range without being influenced by the surrounding environment and can capture the whole shape of the structure, has been applied to structural health monitoring. An important characteristic of TLS measurement is that it produces point clouds containing many points with local coordinates. Point clouds are not linearly distributed but dispersed; interpolation is therefore essential for their analysis. By forming averaged lattices from the raw data and applying CSSI, a method was developed that can estimate the displacement of a simple beam. The method can be extended to calculate the strain and, finally, to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. A comparison of the estimated stress with the reference stress confirms the validity of the method.
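A minimal sketch of the CSSI step: a cubic smoothing spline is fitted to displacement values on the averaged lattice, and the bending stress follows from the second derivative via sigma = -E * y * w''(x). The beam data, smoothing parameter, and material constants below are hypothetical.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    x = np.linspace(0.0, 3.0, 31)   # positions along the beam (m)
    w = 0.002 * np.sin(np.pi * x / 3.0) + np.random.normal(0, 2e-5, x.size)  # TLS displ.

    spline = UnivariateSpline(x, w, k=3, s=len(x) * (2e-5) ** 2)  # cubic, smoothed
    curvature = spline.derivative(2)(x)                           # w''(x)

    E, y = 200e9, 0.15              # elastic modulus (Pa), fiber distance (m)
    stress = -E * y * curvature     # bending stress (Pa)
    print("max stress [MPa]:", stress.max() / 1e6)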
Abstract: Neural activity in the human brain starts in the early stages of prenatal development. This activity, the signals generated by the brain, is electrical in nature and represents not only brain function but also the status of the whole body. At present, three methods can record functional and physiological changes within the brain with high temporal resolution of neuronal interactions at the network level: the electroencephalogram (EEG), the magnetoencephalogram (MEG), and functional magnetic resonance imaging (fMRI); each of these has advantages and shortcomings. EEG recording with a large number of electrodes is now feasible in clinical practice. Multichannel EEG recorded from the scalp surface provides very valuable but indirect information about the source distribution, whereas deep electrode measurements yield more reliable information about source locations. Intracranial recordings and scalp EEG are used with source imaging techniques to determine the locations and strengths of epileptic activity. As a source localization method, Low Resolution Electro-Magnetic Tomography (LORETA) is solved for a realistic geometry based on two forward methods: the Boundary Element Method (BEM) and the Finite Difference Method (FDM). In this paper, we review EEG-LORETA findings on epilepsy.
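For orientation, LORETA-type methods build on a regularized linear inverse of the forward lead-field matrix (obtained here from BEM or FDM). The sketch below shows the plain minimum-norm estimate; LORETA itself additionally applies a spatial Laplacian weighting, which is omitted, and all matrices are random placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    K = rng.standard_normal((64, 500))   # lead field: 64 electrodes, 500 source components
    phi = rng.standard_normal(64)        # measured scalp potentials
    alpha = 1e-2                         # Tikhonov regularization parameter

    # Minimum-norm source estimate: J = K' (K K' + alpha I)^{-1} phi
    J = K.T @ np.linalg.solve(K @ K.T + alpha * np.eye(64), phi)
    print("strongest source index:", int(np.abs(J).argmax()))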
Abstract: Elastomeric polymer foam has been used widely in the automotive industry, especially for isolating unwanted vibrations. Such a material is able to absorb unwanted vibration thanks to its combination of elastic and viscous properties. However, the creep effect, poor stress distribution, and susceptibility to high temperatures are the main disadvantages of such a system. In this study, improvements in the performance of elastomeric foam as a vibration isolator were investigated using the concept of foam filled fluid (FFFluid). In FFFluid devices, the foam takes the form of capsules and is mixed with a viscous fluid, with the mixture contained in a closed vessel. When the FFFluid isolator is subjected to vibration, energy is absorbed through the elastic strain of the foam; as the foam is compressed, the fluid also moves, contributing further energy absorption as it shears. Depending on the design adopted, the packaging can also attenuate vibration through energy absorption via friction and/or elastic strain. The present study focuses on the advantages of the FFFluid concept over dry polymeric foam for vibration isolation. This comparative study between the performance of dry foam and the FFFluid was carried out experimentally, and the paper concludes by evaluating the performance of the FFFluid isolator in the suspension system of a light vehicle. One outcome of this research is that the FFFluid may be preferable to elastomeric isolators in certain applications, as it reduces the effects of high temperatures and of creep, thereby increasing reliability and improving load distribution. The stiffness coefficient of the system increased by about 60% when an FFFluid sample was used. The technology represented by the FFFluid is therefore considered suitable for application in the suspension system of a light vehicle.
Abstract: We present a gas-liquid microfluidic system as a reactor for obtaining magnetite nanoparticles with an excellent degree of control over their crystalline phase, shape, and size. Several types of microflow approaches were tested to prevent nanomaterial aggregation and to promote a homogeneous size distribution. The selected reactor consists of a mixing stage aided by ultrasound waves and a reaction stage using a N2-liquid segmented flow to prevent oxidation of the magnetite to non-magnetic phases. A milli-fluidic reactor was then developed to increase the production rate, achieving a continuous magnetite throughput close to 450 mg/h.
Abstract: One of the crucial parameters of digital cryptographic systems is the selection and distribution of keys. The randomness of the keys has a strong impact on the system's security strength, making them difficult to predict, guess, reproduce, or discover by a cryptanalyst. Therefore, adequate key randomness generation is still sought for the benefit of stronger cryptosystems. This paper presents an algorithm designed to generate and test pseudo-random number sequences intended for cryptographic applications. The algorithm is based on mathematically manipulating information publicly agreed upon between sender and receiver over a public channel. This information is used as a seed for mathematical functions that generate a sequence of pseudo-random numbers to be used for encryption and decryption. The manipulation involves permutations and substitutions that fulfill Shannon's principle of "confusion and diffusion". ASCII code characters are used in the generation process instead of bit strings, which adds flexibility in testing different seed values. The obtained results indicate that guessing the keys would be difficult for attackers.
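In the spirit of the described generator, the sketch below derives a keystream from a shared ASCII passphrase through rotation (permutation) and state-dependent mixing (substitution); the schedule is illustrative only, not the authors' algorithm, and is not a vetted cryptographically secure generator.

    def keystream(seed: str, length: int):
        state = [ord(c) for c in seed]           # ASCII codes as the initial state
        out = []
        for i in range(length):
            state = state[1:] + state[:1]        # permutation: rotate the state
            mixed = sum(b << (j % 8) for j, b in enumerate(state))  # diffusion
            byte = (mixed ^ state[0] * 31 + i) % 256                # substitution
            state[i % len(state)] ^= byte        # feed the output back into the state
            out.append(byte)
        return out

    ks = keystream("shared public info", 16)
    cipher = [p ^ k for p, k in zip(b"hello", ks)]    # simple XOR encryption
    plain = bytes(p ^ k for p, k in zip(cipher, ks))  # same keystream decrypts
    print(plain)                                      # b'hello'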