Abstract: Signalized intersections on high-volume arterials are
often congested during peak hours, reducing the efficiency of
through movements along the arterial. Much of the vehicle delay
incurred at conventional intersections is caused by high left-turn
demand. Unconventional intersection designs attempt to reduce
intersection delay and travel time by rerouting left turns away from
the main intersection, replacing them with a right turn followed by a
U-turn. The proposed new type of U-turn intersection is geometrically
designed with a raised island that provides a protected U-turn
movement. In this study, several scenarios based on different
distances between the U-turn and the main intersection, traffic
volumes on the major and minor approaches, and percentages of
left-turn volume were simulated with AIMSUN, a traffic
microsimulation package. Models are then proposed to compute the
travel time of each movement. By correlating these equations with
field data collected at several implemented U-turn facilities, the
reliability of the proposed models is confirmed. With these models it
is possible to calculate the travel time of each movement under any
geometric and traffic condition. By comparing the travel time of a
conventional signalized intersection with that of a U-turn
intersection, one could decide whether to convert a signalized
intersection into this new kind of U-turn facility. Such a comparison,
however, is beyond the scope of this research; in this paper only the
travel time of the innovative U-turn facility is predicted.
Before-and-after studies of the traffic performance of several
executed U-turn facilities show that this new type of U-turn facility
commonly yields lower travel times, so evaluating the use of this
type of unconventional intersection deserves serious consideration.
Abstract: Cryptosystem identification is one of the challenging tasks in cryptanalysis. This paper discusses the possibility of employing neural networks for the identification of cipher systems from ciphertexts. A Cascade Correlation Neural Network and a Back Propagation Network were employed for the identification of cipher systems. A very large collection of ciphertexts was generated using a block cipher (Enhanced RC6) and a stream cipher (SEAL). Promising results were obtained in terms of accuracy with both neural network models, but the Cascade Correlation Neural Network performed better than the Back Propagation Network.
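The classification pipeline described above can be illustrated with a minimal sketch. This is hypothetical, not the authors' code: it uses a tiny one-hidden-layer backpropagation network (in numpy) on byte-frequency histograms, and the two "cipher" sources are skewed synthetic byte streams standing in for ciphertext classes (real RC6 or SEAL output is nearly uniform, which makes the actual task much harder).

```python
import numpy as np

rng = np.random.default_rng(0)

def byte_histogram(data: bytes) -> np.ndarray:
    # Normalised 256-bin byte-frequency feature vector.
    h = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return h / h.sum()

def sample(bias: float) -> bytes:
    # Synthetic stand-in for one "cipher" output: a byte stream whose
    # lower half of the byte alphabet is over- or under-represented.
    p = np.full(256, 1.0)
    p[:128] *= bias
    p /= p.sum()
    return bytes(rng.choice(256, size=512, p=p).astype(np.uint8))

X = np.array([byte_histogram(sample(b)) for b in [1.5] * 50 + [0.5] * 50])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)   # standardise features
y = np.array([0] * 50 + [1] * 50)

# One-hidden-layer network trained with plain backpropagation (log-loss).
W1 = rng.normal(0, 0.1, (256, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);        b2 = 0.0
lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    g = (p - y) / len(y)                        # dLoss/dlogit
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    gh = np.outer(g, W2) * (1.0 - h ** 2)       # backprop into hidden layer
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

train_acc = float(((p > 0.5) == y).mean())
```

A cascade-correlation network would instead grow hidden units one at a time; only the plain backpropagation variant is sketched here.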
Abstract: Dichotomization of the outcome by a single cut-off point is an important part of various medical studies. Usually the relationship between the resulting dichotomized dependent variable and explanatory variables is analyzed with linear regression, probit regression or logistic regression. However, in many real-life situations, the cut-off point dividing the outcome into two groups is unknown and can be specified only approximately, i.e. it is surrounded by some (small) uncertainty. To have any practical meaning, the regression model must therefore be robust to this uncertainty. In this paper, we show that neither the beta in the linear regression model nor its significance level is robust to small variations in the dichotomization cut-off point. As an alternative robust approach to the problem of uncertain medical categories, we propose to use a linear regression model with a fuzzy membership function as the dependent variable. This fuzzy membership function denotes to what degree the value of the underlying (continuous) outcome falls below or above the dichotomization cut-off point. We demonstrate that the linear regression model with a fuzzy dependent variable can be insensitive to uncertainty in the cut-off point location. We present modeling results from a real study of low hemoglobin levels in infants: we systematically test the robustness of the binomial regression model and of the linear regression model with a fuzzy dependent variable by changing the boundary of the category Anemia, and show that the behavior of the latter model persists over a quite wide interval.
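A minimal numpy sketch of the idea, with hypothetical data and an illustrative ramp width (not the study's values): the crisp indicator "outcome below cutoff" is replaced by a piecewise-linear membership function around the cutoff, and an OLS slope is estimated for several nearby cutoffs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: an outcome (e.g. a hemoglobin-like level) driven
# linearly by one covariate; all values are illustrative.
x = rng.normal(0.0, 1.0, 400)
outcome = 10.5 + 0.8 * x + rng.normal(0.0, 0.5, 400)

def fuzzy_membership(v, cutoff, width=0.5):
    # Degree to which v falls below the cutoff: 1 well below, 0 well
    # above, with a linear ramp of the given width around the cutoff.
    return np.clip((cutoff + width / 2.0 - v) / width, 0.0, 1.0)

def ols_slope(y, x):
    # Slope of y on x from ordinary least squares with an intercept.
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

cutoffs = (10.3, 10.5, 10.7)   # small shifts of the dichotomization point
betas_crisp = [ols_slope((outcome < c).astype(float), x) for c in cutoffs]
betas_fuzzy = [ols_slope(fuzzy_membership(outcome, c), x) for c in cutoffs]
```

Comparing the spread of `betas_crisp` against `betas_fuzzy` across the cutoffs mimics the paper's robustness check; because the fuzzy membership smooths the indicator over the ramp, its slope typically varies less as the cutoff moves.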
Abstract: In this paper, after reviewing previous studies, a new
controlling parameter is introduced, besides the inertial properties,
in order to optimize the above-knee prosthesis. This controlling
parameter enables the prosthesis to act as a multi-behavior system
when the amputee encounters different environments. The active
prosthesis with the new controlling parameter can simplify the
control of the prosthesis and reduce energy consumption compared
with a recently presented similar device, the “Agonist-antagonist
active knee prosthesis”.
Three models are generated: a passive, an active, and an optimized
active prosthesis. A second-order Taylor series is used as the
numerical method for solving the model equations, and the
optimization is carried out with a genetic algorithm.
Modeling the prosthesis that comprises this new controlling
parameter (SEP) during the swing phase yields results that agree
well with the natural behavior of the shank: the maximum deviation
of the model's shank angle from the natural pattern is 3.3 degrees.
The natural gait pattern corresponds to walking at a speed of
81 m/min.
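The second-order Taylor scheme mentioned above can be sketched on a simple damped-pendulum stand-in for the swinging shank (the length and damping coefficients are illustrative placeholders, not the paper's model): each step advances the state using its first derivative and an analytically differentiated second derivative.

```python
import math

# Illustrative damped-pendulum shank model: theta'' = -(g/L) sin(theta) - c*theta'
g_over_L = 9.81 / 0.45   # assumed shank length of 0.45 m
c = 0.8                  # assumed viscous damping

def acc(theta, omega):
    # Angular acceleration theta''.
    return -g_over_L * math.sin(theta) - c * omega

def jerk(theta, omega):
    # Time derivative of acc along the trajectory (needed for the
    # second-order Taylor update of omega).
    return -g_over_L * math.cos(theta) * omega - c * acc(theta, omega)

def taylor2_step(theta, omega, h):
    # x(t+h) ~= x + h*x' + (h^2/2)*x'' applied to both theta and omega.
    a = acc(theta, omega)
    return (theta + h * omega + 0.5 * h * h * a,
            omega + h * a + 0.5 * h * h * jerk(theta, omega))

theta, omega, h = 0.5, 0.0, 0.01
for _ in range(200):          # simulate 2 s of swing
    theta, omega = taylor2_step(theta, omega, h)
```

The damping makes the swing amplitude decay, so after 2 s the shank angle has dropped well below its initial 0.5 rad.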
Abstract: There is a wide range of scientific workflow systems
today, each designed to solve problems at a specific level. In
large collaborative projects, it is often necessary to accommodate the
heterogeneous workflow systems already in use by the various
partners, and any collaboration between these systems requires
workflow interoperability. The Publish/Subscribe Scientific Workflow
Interoperability Framework (PS-SWIF) approach was proposed to
achieve workflow interoperability among workflow systems. This
paper evaluates the PS-SWIF approach and its system for achieving
workflow interoperability using Web Services with asynchronous
notification messages represented by the WS-Eventing standard. The
experiments cover the different communication models defined
by the Workflow Management Coalition (WfMC): Chained processes,
Nested synchronous sub-processes, Event synchronous
sub-processes, and Nested sub-processes (Polling/Deferred
Synchronous). The experiments also show the flexibility and
simplicity of the PS-SWIF approach when applied to a variety of
workflow systems (Triana, Taverna, Kepler) in local and remote
environments.
Abstract: At a time of growing market turbulence and a strong
shift towards increasingly complex risk models and more stringent audit requirements, it is more critical than ever to maintain the highest quality of financial and credit information. IFC implemented
an approach, called “Screening”, that helps increase data integrity and quality significantly. Screening is based on linking information from different sources to identify potential
inconsistencies in key financial and credit data. That, in turn, can help
to ease the trials of portfolio supervision and improve overall company global reporting and assessment systems. IFC's experience
showed that, when used regularly, Screening led to improved information.
Abstract: Real-world speaker identification (SI) applications
differ from ideal or laboratory conditions: perturbations lead to a
mismatch between the training and testing environments and
degrade performance drastically. Many strategies have been
adopted to cope with acoustical degradation; the wavelet-based
Bayesian marginal model is one of them. However, Bayesian
marginal models cannot capture the statistical dependencies between
different wavelet scales: simple nonlinear estimators for
wavelet-based denoising assume that the wavelet coefficients in
different scales are independent, whereas in fact they exhibit
significant inter-scale dependency. This paper exploits this
inter-scale dependency through a Circularly Symmetric Probability
Density Function (CS-PDF) related to the family of Spherically
Invariant Random Processes (SIRPs) in the Log Gabor Wavelet
(LGW) domain, and the corresponding joint shrinkage estimator is
derived via maximum a posteriori (MAP) estimation. On this basis, a
framework is proposed to denoise speech signals for automatic
speaker identification. The robustness of the proposed framework is
tested on a text-independent speaker identification task with 100
speakers of the POLYCOST and 100 speakers of the YOHO speech
databases in three different noise environments. Experimental results
show that the proposed estimator yields a higher improvement in
identification accuracy than other estimators when used with the
popular Gaussian Mixture Model (GMM) speaker model and
Mel-Frequency Cepstral Coefficient (MFCC) features.
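The GMM-based identification back-end referred to above can be sketched as follows. This is a hypothetical illustration using scikit-learn, not the paper's system: the 12-dimensional "MFCC-like" feature streams are synthetic Gaussian clouds, one per speaker, rather than real cepstral features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic stand-ins for per-speaker feature streams (assumption:
# each "speaker" is a well-separated Gaussian cloud in feature space).
means = rng.normal(0, 3, (3, 12))
train = {s: means[s] + rng.normal(0, 1, (300, 12)) for s in range(3)}
test  = {s: means[s] + rng.normal(0, 1, (100, 12)) for s in range(3)}

# One GMM per enrolled speaker, as in standard GMM-based SI.
models = {s: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(X)
          for s, X in train.items()}

def identify(features):
    # Pick the enrolled speaker whose GMM gives the highest average
    # log-likelihood over the test utterance's feature frames.
    return max(models, key=lambda s: models[s].score(features))

accuracy = float(np.mean([identify(test[s]) == s for s in range(3)]))
```

The paper's contribution sits in front of this back-end: the LGW-domain joint shrinkage estimator denoises the speech before features are extracted, which is what lifts the accuracy in noisy conditions.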
Abstract: A multiphase harmonic load flow algorithm based on the backward/forward sweep is developed to examine the effects of various factors on the neutral-to-earth voltage (NEV), including unsymmetrical system configuration, load unbalance and harmonic injection. The proposed algorithm combines fundamental-frequency and harmonic-frequency power flows. The algorithm and the associated models are tested on the IEEE 13-bus system. The magnitude of the NEV is investigated under various conditions of the number of grounding rods per feeder length, the grounding-rod resistance and the grounding resistance of the in-feeding source. Additionally, the harmonic injection of nonlinear loads is considered and its influence on the NEV under different conditions is shown.
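The backward/forward sweep at the core of the algorithm can be sketched for a single-phase, fundamental-frequency radial feeder (a heavy simplification of the paper's multiphase harmonic formulation; the impedances and loads below are illustrative, not the IEEE 13-bus data).

```python
import numpy as np

# 3-bus radial feeder: source -> bus0 -> bus1 -> bus2 (per-unit values).
Z = np.array([0.02 + 0.04j, 0.03 + 0.06j])        # branch impedances
S_load = np.array([0.0, 0.5 + 0.2j, 0.8 + 0.3j])  # constant-power loads
V = np.ones(3, dtype=complex)                      # flat start
V_source = 1.0 + 0.0j

for _ in range(30):
    # Load currents from the latest voltage estimates: I = conj(S / V).
    I_load = np.conj(S_load / V)
    # Backward sweep: accumulate branch currents from feeder end to source.
    I_branch = np.array([I_load[1] + I_load[2], I_load[2]])
    # Forward sweep: update voltages from the source toward the feeder end.
    V[0] = V_source
    V[1] = V[0] - Z[0] * I_branch[0]
    V[2] = V[1] - Z[1] * I_branch[1]

# Power mismatch at bus 1 measures convergence of the sweep.
mismatch = abs(V[1] * np.conj(I_load[1]) - S_load[1])
```

The multiphase version repeats the same two sweeps per phase and per harmonic order, with the neutral conductor and grounding resistances added to the network model so the NEV can be read off directly.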
Abstract: Data Envelopment Analysis (DEA) is one of the most
widely used techniques for evaluating the relative efficiency of a set
of homogeneous decision making units. Traditionally, it assumes that
input and output variables are known in advance, ignoring the critical
issue of data uncertainty. In this paper, we deal with the problem
of efficiency evaluation under uncertain conditions by adopting the
general framework of stochastic programming. We assume that
output parameters are represented by discretely distributed random
variables and we propose two different models, defined according to
a risk-neutral and a risk-averse perspective. The models have been
validated on a real case study concerning the evaluation of the
technical efficiency of a sample of individual firms operating in
the Italian leather manufacturing industry. Our findings show the
validity of the proposed approach as an ex-ante evaluation technique,
providing the decision maker with useful insights depending on
his degree of risk aversion.
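The deterministic building block underneath both stochastic models is the classical DEA efficiency score. A minimal sketch of the input-oriented CCR multiplier model with scipy's LP solver (toy data, not the Italian firm sample; the stochastic versions would replace the outputs with scenario-dependent values):

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 3 DMUs, one input and one output each.
X = np.array([[2.0], [4.0], [3.0]])   # inputs per DMU
Y = np.array([[2.0], [3.0], [4.5]])   # outputs per DMU

def ccr_efficiency(o):
    # CCR multiplier form for DMU o:
    #   max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u,v >= 0
    n, m = Y.shape
    _, k = X.shape
    c = np.concatenate([-Y[o], np.zeros(k)])           # linprog minimises
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(m), X[o]])[None]   # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

effs = [ccr_efficiency(o) for o in range(3)]
```

With a single input and output the score reduces to each DMU's output/input ratio divided by the best ratio, so DMU 2 (ratio 1.5) is efficient and the others score 2/3 and 1/2.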
Abstract: The paper focuses on the area of context modeling with respect to the specification of context-aware systems supporting ubiquitous applications. The proposed approach, followed within the SIMPLICITY IST project, uses a high-level system ontology to derive context models for system components which consequently are mapped to the system's physical entities. For the definition of user and device-related context models in particular, the paper suggests a standard-based process consisting of an analysis phase using the Common Information Model (CIM) methodology followed by an implementation phase that defines 3GPP based components. The benefits of this approach are further depicted by preliminary examples of XML grammars defining profiles and components, component instances, coupled with descriptions of respective ubiquitous applications.
Abstract: In many applications, it is a priori known that the
target function should satisfy certain constraints imposed by, for
example, economic theory or a human-decision maker. Here we
consider partially monotone problems, where the target variable
depends monotonically on some of the predictor variables but not all.
We propose an approach to build partially monotone models based
on the convolution of monotone neural networks and kernel
functions. The results from simulations and a real case study on
house pricing show that our approach has significantly better
performance than partially monotone linear models. Furthermore, the
incorporation of partial monotonicity constraints not only leads to
models that are in accordance with the decision maker's expertise,
but also reduces considerably the model variance in comparison to
standard neural networks with weight decay.
Abstract: Various models have been derived by studying a large number of completed software projects from various organizations and applications to explore how project size maps into project effort, but there is still a need to improve the prediction accuracy of these models. Since a neuro-fuzzy system is able to approximate nonlinear functions with high precision, it is used here as a soft computing approach to generate a model by learning the relationship from its training data. In this paper, the neuro-fuzzy technique is used for software effort estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models from the literature.
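The four baseline models are fixed-form equations mapping size (KLOC) to effort (person-months). The coefficients below are as commonly quoted in the effort-estimation literature and should be treated as indicative, not as taken from this paper:

```python
# Classical KLOC -> person-months effort models (coefficients as
# commonly quoted in the literature; indicative values only).
def halstead(kloc):      return 0.7 * kloc ** 1.50
def walston_felix(kloc): return 5.2 * kloc ** 0.91
def bailey_basili(kloc): return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):          return 5.288 * kloc ** 1.047  # quoted for kloc > 9

# Example: effort predictions for a hypothetical 46.2 KLOC project.
estimates = {f.__name__: f(46.2)
             for f in (halstead, walston_felix, bailey_basili, doty)}
```

The neuro-fuzzy model plays the same role as these closed forms, but its input-output mapping is learned from the NASA project data rather than fixed in advance.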
Abstract: Bond graph models of an electrical transformer including
nonlinear saturation are presented. The transformer is modelled
using both electrical and magnetic circuits. These models
determine the relation between the self and mutual inductances, and
between the leakage and magnetizing inductances, of power
transformers with two windings using the properties of a bond graph.
The equivalence between electrical and magnetic variables is given.
The modelling and analysis can be extended to three-phase power
transformers using this methodology.
Abstract: This paper presents a threshold voltage model of pocket-implanted sub-100 nm n-MOSFETs incorporating the drain and substrate bias effects using two linear pocket profiles. Two linear equations are used to simulate the pocket profiles along the channel at the surface from the source and drain edges towards the center of the n-MOSFET. The effective doping concentration is then derived and used in the threshold voltage equation, which is obtained by solving Poisson's equation in the depletion region at the surface. Simulated threshold voltages for various gate lengths fit well with experimental data already published in the literature. The simulated result is compared with two other pocket profiles used to derive threshold voltage models of n-MOSFETs. The comparison shows that the linear model has a simple compact form that can be utilized to study and characterize pocket-implanted advanced ULSI devices.
Abstract: Rainfall data at fine resolution, and knowledge of its
characteristics, play a major role in the efficient design and operation
of agricultural, telecommunication, runoff and erosion control, and
water quality control systems. This paper studies the statistical
distribution of hourly rainfall depth at 12 representative stations
spread across Peninsular Malaysia. Hourly rainfall data covering
periods of 10 to 22 years were collected and their statistical
characteristics estimated. Three probability distributions, namely the
Generalized Pareto, Exponential and Gamma distributions, were
proposed to model the hourly rainfall depth, and three
goodness-of-fit tests, namely the Kolmogorov-Smirnov,
Anderson-Darling and Chi-Squared tests, were used to evaluate their
fitness. Results indicate that the east coast of the Peninsula receives
a higher depth of rainfall than the west coast, although the rainfall
frequency is irregular. The goodness-of-fit tests show that all three
models fit the rainfall data at the 1% level of significance; however,
the Generalized Pareto fits better than the Exponential and Gamma
distributions and is therefore recommended as the best fit.
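The fit-and-test procedure can be sketched with scipy. This is a hypothetical illustration on synthetic "rainfall depths" drawn from a Generalized Pareto distribution (real station records would be loaded instead), using the Kolmogorov-Smirnov statistic as the ranking criterion:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for hourly rainfall depths (mm): heavy-tailed
# positive data drawn from a Generalized Pareto distribution.
depths = stats.genpareto.rvs(0.3, scale=2.0, size=500, random_state=3)

candidates = {
    "genpareto": stats.genpareto,
    "expon":     stats.expon,
    "gamma":     stats.gamma,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)          # fix location at zero depth
    D, p_value = stats.kstest(depths, name, args=params)
    results[name] = (D, p_value)               # smaller D = better fit

best = min(results, key=lambda n: results[n][0])
```

Since the synthetic depths are heavy-tailed, the Generalized Pareto fit should beat the Exponential here, mirroring the ranking the paper reports for the Malaysian stations (the paper additionally applies the Anderson-Darling and Chi-Squared tests).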
Abstract: Large- and small-scale shaking table tests, conducted to
investigate the damage evolution of piles in liquefied soil, are
numerically simulated and experimentally verified by 3D nonlinear finite element analysis. The damage evolution of
elasto-plastic circular steel piles and of reinforced concrete (RC) piles, with cracking and yielding of the reinforcement, is the focus, and the failure patterns and residual damage are captured by the proposed constitutive models. The superstructure excitation behind the quay wall is
reproduced as well.
Abstract: Applying external loads to beams inevitably creates
bending. In I-beams, bending puts one flange in tension and the
other in compression. As bending increases, the compression flange buckles and the beam
twists out of its plane; this twisting is known as lateral torsional buckling. When the bending moment varies along the
beam, the critical moment is greater than under pure bending; in other words, the bending gradient coefficient is
always greater than unity. In this article, nearly 80 3-D finite element models were developed in ANSYS 10.0 for the
purpose of analyzing the lateral torsional buckling of beams and surveying the influence of slenderness on the bending gradient coefficient.
Results show that the Cb coefficient presented by AISC is not correct for some beams: the actual coefficient is smaller than the value proposed by AISC. Therefore, instead of using a constant Cb for each
loading case, a function with two criteria for calculating the Cb coefficient is proposed for some cases.
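The moment-gradient coefficient the study re-examines is defined in the AISC specification as Cb = 12.5|Mmax| / (2.5|Mmax| + 3|MA| + 4|MB| + 3|MC|), with MA, MB, MC the moments at the quarter, mid and three-quarter points of the unbraced segment. A short sketch of that formula and two textbook cases:

```python
# AISC moment-gradient coefficient Cb from the moments at the
# quarter (MA), mid (MB) and three-quarter (MC) points.
def cb(m_max, m_a, m_b, m_c):
    m = [abs(v) for v in (m_max, m_a, m_b, m_c)]
    return 12.5 * m[0] / (2.5 * m[0] + 3 * m[1] + 4 * m[2] + 3 * m[3])

# Uniform moment: MA = MB = MC = Mmax, the pure-bending baseline.
uniform = cb(1.0, 1.0, 1.0, 1.0)          # -> 1.0

# Simply supported beam under uniform load: M(x) ~ x(L - x), so the
# quarter-point moments are 0.75*Mmax and the midspan moment is Mmax.
udl = cb(1.0, 0.75, 1.0, 0.75)            # -> about 1.14
```

The study's finding is that for some slender beams the true gradient coefficient falls below what this formula gives, which motivates the proposed two-criterion replacement function.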
Abstract: We present a new numerical method for the computation of the steady-state solution of Markov chains. Theoretical analyses show that the proposed method, with a contraction factor α, converges to the one-dimensional null space of singular linear systems of the form Ax = 0. Numerical experiments are used to illustrate the effectiveness of the proposed method, with applications to a class of interesting models in the domain of tandem queueing networks.
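The baseline the paper improves on can be sketched with plain power iteration: the stationary vector pi satisfies pi P = pi, i.e. it spans the one-dimensional null space of P^T - I. The transition matrix below is an illustrative 3-state chain, not the paper's tandem-queue model:

```python
import numpy as np

# Illustrative row-stochastic transition matrix of a small Markov chain.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.6, 0.4]])

# Power iteration: repeatedly apply P to a probability vector; for an
# irreducible aperiodic chain this converges to the stationary pi.
pi = np.full(3, 1.0 / 3.0)
for _ in range(1000):
    pi = pi @ P
pi /= pi.sum()           # guard against numerical drift

residual = np.linalg.norm(pi @ P - pi)   # should be ~0 at the fixed point
```

Solving the balance equations by hand for this chain gives pi = (0.6, 1, 0.5)/2.1, which the iteration reproduces; the paper's contribution is a method with an explicit contraction factor alpha for the general singular system Ax = 0.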
Abstract: It has become crucial over the years for nations to
improve their credit scoring methods and techniques in light of the
increasing volatility of the global economy. Statistical methods or
tools have been the favoured means for this; however artificial
intelligence or soft computing based techniques are becoming
increasingly preferred due to their proficient and precise nature and
relative simplicity. This work presents a comparison between Support
Vector Machines and Artificial Neural Networks, two popular soft
computing models, when applied to credit scoring. Among the
different criteria that can be used for comparison, accuracy,
computational complexity and processing time are selected to
evaluate both models. Furthermore, the German credit scoring
dataset, a real-world dataset, is used to train and test
both developed models. Experimental results obtained from our study
suggest that although both soft computing models could be used with
a high degree of accuracy, Artificial Neural Networks deliver better
results than Support Vector Machines.
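The comparison workflow can be sketched with scikit-learn. This is a hypothetical illustration on synthetic data standing in for the German credit dataset (which would normally be loaded from the UCI repository), comparing only accuracy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a binary credit-scoring dataset.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# The two soft computing models under comparison (illustrative
# hyperparameters, not those tuned in the study).
models = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
```

A full replication would also time `fit` and `predict` for the processing-time criterion and use the real German credit data; which model wins on accuracy depends on the data and tuning.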
Abstract: Various formal and informal brand alliances are being formed in professional service firms. A professional service corporate brand is heavily dependent on the brands of the professional employees who comprise the firm, and professional employee brands are in turn dependent on the corporate brand. Prior work provides limited scientific evidence of brand alliance effects in the professional service area, i.e., how professional service corporate and employee brand allies are affected by an alliance, what the brand attitude effects are after alliance formation, and how these effects vary with different strengths of an ally. Scientific literature analysis and theoretical modeling are the main methods of the current study. As a result, a theoretical model is constructed for estimating spillover effects of professional service corporate-employee brand alliances and for comparison among different professional service firm expertise practice models, from the “brains” to the “procedure” model. The resulting theoretical model lays the basis for future experimental studies.