Abstract: In the study of honeycomb crushing under quasistatic loading, two parameters are important: the mean crushing stress and the wavelength of the folding mode. Previous theoretical models did not consider the true cylindrical curvature effects or the flow stress of the honeycomb material in the folding mode. The present paper introduces a modification of Wierzbicki's model that accounts for these two effects in estimating the mean crushing stress and the wavelength through the energy method. Comparison of the results obtained by the new model and by Wierzbicki's model with existing experimental data shows better prediction by the model presented in this paper.
Abstract: Simulation of the flow and sedimentation process in reservoir dams can be performed by two methods: physical and mathematical modeling. The study area ranged from the Jelogir hydrometric station to the Karkheh reservoir dam, with the aim of investigating the effects of stream tubes on the behavior of the GSTARS-3 model. The methodology was to run the model based on 5 stream tubes in order to observe the influence of each scenario on longitudinal profiles, cross-sections, flow velocity, and bed-load sediment size. The results suggest that using two or more stream tubes, which yields a semi-two-dimensional model, produces results closer to the observational data than single-stream-tube modeling. Moreover, modeling with three stream tubes was shown to yield results relatively close to the observational data. The overall conclusion of the paper is that applying various stream tubes can significantly influence the modeling behavior with respect to the bed-load sediment size.
Abstract: Based on the interaction of inflation and unemployment, the expected rate of inflation in Croatia is estimated. The interaction between inflation and unemployment is described by a model based on three first-order differential (i.e., difference) equations: the Phillips relation, the adaptive-expectations equation, and the monetary-policy equation. The resulting equation is a second-order differential (difference) equation which describes the time path of inflation. Data on the rate of inflation and the rate of unemployment are used for parameter estimation. On the basis of the estimated time paths, stability and convergence analysis is carried out for the rate of inflation.
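The three relations can be sketched as a discrete-time simulation; the structure below follows the standard textbook formulation of this system, and the parameter values in the usage note are illustrative placeholders, not the Croatian estimates:

```python
def simulate_inflation(alpha, beta, h, j, k, m, pi0, u0, steps):
    """Discrete-time Phillips / adaptive-expectations / monetary-policy
    system. Returns the time path of the actual inflation rate p."""
    pi, u = pi0, u0          # expected inflation, unemployment
    path = []
    for _ in range(steps):
        p = alpha - beta * u + h * pi   # Phillips relation
        pi = pi + j * (p - pi)          # adaptive expectations
        u = u - k * (m - p)             # monetary-policy equation
        path.append(p)
    return path
```

At the steady state, expected and actual inflation both equal the monetary growth rate m, and (with h = 1) unemployment settles at alpha/beta; starting the simulation there leaves the path constant, which is a quick sanity check on the equations.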
Abstract: Modeling product configurations requires large amounts of knowledge about technical and marketing restrictions on the product. Previous attempts to automate product configuration concentrated on representing and managing this knowledge for specific domains in fixed and isolated computing environments. Since the knowledge about product configurations is subject to continuous change and hard to express, these attempts often failed to efficiently manage and exchange the knowledge in collaborative product development. In this paper, XML Topic Maps (XTM) are introduced to represent and exchange the knowledge about product configurations in collaborative product development. A product configuration model based on XTM, along with its merging and inference facilities, enables configuration engineers in collaborative product development to manage and exchange their knowledge efficiently. A prototype implementation is also presented to demonstrate that the proposed model can be applied to engineering information systems to exchange product configuration knowledge.
Abstract: This paper attempts to explain response components of Electrovestibulography (EVestG) using a computer simulation of a three-canal model of the vestibular system. EVestG is a potentially new diagnostic method for Meniere's disease. EVestG is a variant of Electrocochleography (ECOG), which has been used as a standard method for diagnosing Meniere's disease - it can be used to measure the SP/AP ratio, where an SP/AP ratio greater than 0.4-0.5 is indicative of Meniere's disease. In EVestG, an applied head tilt replaces the acoustic stimulus of ECOG. The EVestG output is also an SP/AP type plot, where SP is the summing potential, and AP is the action potential amplitude. AP is thought of as being proportional to the size of a population of afferents in an excitatory neural firing state. A simulation of the fluid volume displacement in the vestibular labyrinth in response to various types of head tilts (ipsilateral, backwards, and horizontal rotation) was performed, and a simple neural model based on these simulations was developed. The simple neural model shows that the change in firing rate of the utricle is much larger in magnitude than the change in firing rates of all three semi-circular canals following a head tilt (except in a horizontal rotation). The data suggest that the change in utricular firing rate is at least 2-3 orders of magnitude larger than the changes in firing rates of the canals during ipsilateral/backward tilts. Based on these results, the neural response recorded by the electrode in our EVestG recordings is expected to be dominated by the utricle in ipsilateral/backward tilts (it is important to note that the effects of the saccule and efferent signals were not taken into account in this model). If the utricle response dominates the EVestG recordings as the modeling results suggest, then EVestG has the potential to diagnose utricular hair cell damage due to a viral infection (which has been cited as one possible cause of Meniere's disease).
Abstract: Location-aware computing is a type of pervasive computing that utilizes the user's location as a dominant factor for
providing urban services and application-related usages. One of the
important urban services is navigation instruction for wayfinders in a
city, especially when the user is a tourist. The services which are presented to tourists should provide adapted location-aware
instructions. In order to achieve this goal, the main challenge is to
find spatial relevant objects and location-dependent information. The
aim of this paper is the development of a reusable location-aware
model to handle spatial relevancy parameters in urban location-aware
systems. To this end, we utilize an ontology as an approach that can manage spatial relevancy by defining a generic model. Our
contribution is the introduction of an ontological model based on the
directed interval algebra principles. Indeed, it is assumed that the
basic elements of our ontology are the spatial intervals for the user
and his/her related contexts. The relationships between them would
model the spatial relevancy parameters. The implementation language
for the model is OWL, the Web Ontology Language. The achieved
results show that our proposed location-aware model and the
application adaptation strategies provide appropriate services for the
user.
Abstract: ANN-ARIMA, which combines the autoregressive integrated moving average (ARIMA) model and an artificial neural network (ANN) model, is a valuable tool for modeling and forecasting nonlinear time series, yet the over-fitting problem is more likely to occur in neural network models. This paper provides a hybrid methodology, called BS-RBFAR, that combines a radial basis function (RBF) neural network and an autoregression (AR) model based on the binomial smoothing (BS) technique, which is efficient in data processing. This method is examined using the Canadian lynx data. Empirical results indicate that the over-fitting problem can be eased using an RBF neural network based on binomial smoothing (called BS-RBF), and that the hybrid model BS-RBFAR can be an effective way to improve the forecasting accuracy achieved by BS-RBF used separately.
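The binomial smoothing preprocessing step can be sketched as a 3-point smoother with the binomial kernel (1, 2, 1)/4; the endpoint handling below (leaving the first and last values unchanged) is an assumption, since the abstract does not specify it:

```python
def binomial_smooth(data, passes=1):
    """Apply a 3-point binomial smoother (weights 1/4, 1/2, 1/4).
    Endpoints are left unchanged (an assumed boundary convention)."""
    out = list(data)
    for _ in range(passes):
        prev = out[:]                      # snapshot of the previous pass
        for i in range(1, len(prev) - 1):
            out[i] = 0.25 * prev[i - 1] + 0.5 * prev[i] + 0.25 * prev[i + 1]
    return out
```

Applying it to a single spike, e.g. `binomial_smooth([0, 0, 4, 0, 0])`, spreads the spike into `[0, 1.0, 2.0, 1.0, 0]`, illustrating the noise-attenuating effect before the data reach the RBF network.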
Abstract: The objective of this paper is to design a pattern classification model based on the back-propagation (BP) algorithm for a decision support system. The standard BP model fully connects every node from the input layer to the output layer. Therefore, it takes a lot of computing time and many iterations to reach good performance and an acceptable error rate when generating patterns or training the network.
In contrast, the proposed model uses exclusive connections between hidden-layer nodes and output nodes. The advantages of this model are fewer iterations and better performance compared with the standard back-propagation model. We simulated several classification data sets and different settings of network factors (e.g., number of hidden layers and nodes, number of classes, and iterations). During our simulations, we found that most cases were handled satisfactorily by the exclusive-connection BP network model compared to standard BP. We expect that this algorithm can be applied to user face identification, data analysis, and mapping between environmental data and information.
Abstract: Microarray data profile gene expression on a whole-genome scale; therefore, they provide a good way to study associations between gene expression and the occurrence or progression of cancer. More and more researchers have realized that microarray data are helpful for predicting cancer samples. However, the dimensionality of the gene expression data is much larger than the sample size, which makes this task very difficult. Therefore, identifying the significant genes causing cancer has become an urgent, active, and hard research topic. Many feature selection algorithms proposed in the past focus on improving cancer predictive accuracy at the expense of ignoring the correlations between the features. In this work, a novel framework (named SGS) is presented for stable gene selection and efficient cancer prediction. The proposed framework first performs a clustering algorithm to find gene groups in which the genes have high correlation coefficients, then selects the significant genes in each group with the Bayesian Lasso and important gene groups with the group Lasso, and finally builds a prediction model on the shrunken gene space with an efficient classification algorithm (such as SVM, 1NN, or regression). Experimental results on real-world data show that the proposed framework often outperforms existing feature selection and prediction methods such as SAM, IG, and Lasso-type prediction models.
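The grouping stage of such a pipeline can be sketched as a greedy correlation-based clustering; this is a simplified illustration, not the SGS implementation, and the variance-based representative selection below is only a stand-in for the Bayesian Lasso step:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def group_genes(expr, threshold=0.8):
    """Greedily group genes whose |correlation| with a group's
    first member reaches the threshold."""
    groups = []
    for i in range(len(expr)):
        for grp in groups:
            if abs(pearson(expr[i], expr[grp[0]])) >= threshold:
                grp.append(i)
                break
        else:
            groups.append([i])
    return groups

def select_representatives(expr, groups):
    """Pick the highest-variance gene in each group (a placeholder
    criterion standing in for the Bayesian Lasso selection)."""
    def var(g):
        m = sum(g) / len(g)
        return sum((v - m) ** 2 for v in g) / len(g)
    return [max(grp, key=lambda i: var(expr[i])) for grp in groups]
```

With two perfectly correlated genes and one uncorrelated gene, the grouping puts the correlated pair together and leaves the third alone, after which one representative per group shrinks the feature space fed to the classifier.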
Abstract: UML modeling of complex distributed systems is often a great challenge due to the large number of parallel real-time operating components. In this paper the problems of verifying such systems are discussed. ECPN, an Extended Colored Petri Net, is defined to formally describe state transitions of components and interactions among components. The relationship between sequence diagrams and Free Choice Petri Nets is investigated; Free Choice Petri Net theory helps verify the liveness of sequence diagrams. By converting sequence diagrams to ECPNs and then comparing the behaviors of sequence-diagram ECPNs and statecharts, the consistency among models is analyzed. Finally, a verification process for an example model is demonstrated.
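The token-firing semantics underlying any such Petri-net formalism can be sketched in its plain place/transition form (without the colour annotations and extensions that ECPN adds):

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds a token."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Fire the transition: consume one token per input place,
    produce one token per output place; returns a new marking."""
    inputs, outputs = transition
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m
```

For example, a component's state transition from `idle` to `busy` is the transition `(['idle'], ['busy'])`; firing it from the marking `{'idle': 1}` moves the token to `busy`, after which the transition is no longer enabled.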
Abstract: Modeling a manufacturing system enables one to identify the effects of key design parameters on the system performance and, as a result, to make correct decisions. This paper proposes a manufacturing system modeling approach using a spreadsheet model based on queueing network theory, in which a static capacity planning model and a stochastic queueing model are integrated. The model was used to improve the existing system utilization in relation to product design. The model incorporates parameters such as utilization, cycle time, throughput, and batch size. The study also showed that the developed model is valid enough to apply: the maximum relative error is 10%, far below the limit value of 32%. Therefore, the model developed in this study is a valuable alternative for evaluating a manufacturing system.
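The stochastic part of such a spreadsheet model typically rests on standard queueing relations; a minimal sketch using M/M/1 formulas and Little's law (an assumed discipline, since the abstract does not name one) is:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 workstation metrics: utilization rho,
    cycle time W = 1/(mu - lambda), and WIP via Little's law."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable station: utilization >= 1")
    rho = arrival_rate / service_rate
    cycle_time = 1.0 / (service_rate - arrival_rate)
    wip = arrival_rate * cycle_time     # Little's law: L = lambda * W
    return {"utilization": rho, "cycle_time": cycle_time,
            "throughput": arrival_rate, "wip": wip}
```

For a station processing 4 jobs/hour with 2 jobs/hour arriving, this gives utilization 0.5, a cycle time of 0.5 hours, and an average WIP of 1 job; the same formulas are easy to replicate cell-by-cell in a spreadsheet.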
Abstract: This paper provides a framework that simultaneously incorporates reliability, as a sign of disruption in distribution systems, and partial covering theory, as a response to limited coverage radii and economic preferences, into the traditional literature on capacitated facility location problems. As a result, we develop a bi-objective model based on discrete scenarios for expected cost minimization and demand coverage maximization through a three-echelon supply chain network, providing multiple capacity levels for the provider-side layers and imposing a gradual coverage function for the distribution centers (DCs). Additionally, besides aggregating the objectives to solve the model with the LINGO software, a branch of the LP-Metric method called the Min-Max approach is proposed, and different aspects of the corresponding model are explored.
Abstract: In competitive electricity markets all over the world, the adoption of a suitable transmission pricing model is a problem, as the transmission segment still operates as a monopoly. Transmission pricing is an important tool to promote investment in various transmission services in order to provide economic, secure, and reliable electricity to bulk and retail customers. Nodal pricing based on SRMC (Short Run Marginal Cost) has been found extremely useful by researchers for sending correct economic signals. The marginal prices must be determined as part of the solution to an optimization problem, i.e., maximizing the social welfare. The need to maximize the social welfare subject to a number of system operational constraints is a major challenge from computational and societal points of view. The purpose of this paper is to present a nodal transmission pricing model based on SRMC by developing new mathematical expressions for real and reactive power marginal prices using a GA-Fuzzy based optimal power flow framework. The impacts of selecting different social welfare functions on power marginal prices are analyzed and verified against results reported in the literature. Network revenues for two different power systems are determined using the expressions derived in this paper for real and reactive power marginal prices.
Abstract: Cognitive models allow predicting some aspects of the utility and usability of human-machine interfaces (HMI), and simulating the interaction with these interfaces. Prediction is based on a task analysis, which investigates what a user is required to do, in terms of actions and cognitive processes, to achieve a task. Task analysis facilitates the understanding of the system's functionalities. Cognitive models belong to the analytical approaches, which do not involve users during the development process of the interface. This article presents a study of the evaluation of human-machine interaction with a contextual assistant's interface using the ACT-R and GOMS cognitive models. The present work shows how these techniques may be applied in HMI evaluation, design, and research by emphasizing first the task analysis and second the execution time of the task. In order to validate and support our results, an experimental study of user performance was conducted at the DOMUS laboratory during interaction with the contextual assistant's interface. The results of our models show that the GOMS and ACT-R models give good and excellent predictions, respectively, of user performance at the task level as well as the object level. Therefore, the simulated results are very close to the results obtained in the experimental study.
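Execution-time prediction in the GOMS family is often done with the Keystroke-Level Model, which sums fixed operator times over the action sequence produced by the task analysis; the sketch below uses the classic Card, Moran, and Newell operator estimates as an illustration, not the authors' parameterization:

```python
# Keystroke-Level Model operator times in seconds (classic estimates):
# K = keystroke, P = point with mouse, H = home hands, M = mental preparation
KLM_TIMES = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35}

def klm_estimate(sequence):
    """Predict execution time for an operator sequence such as 'MPK'."""
    return sum(KLM_TIMES[op] for op in sequence)
```

For instance, "mentally prepare, point at a button, click it" is the sequence `"MPK"`, predicting 1.35 + 1.1 + 0.2 = 2.65 seconds; such per-task predictions are what get compared against the measured user performance.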
Abstract: This work presents a multiple objective linear programming (MOLP) model based on the desirability function approach for solving the aggregate production planning (APP) decision problem, built upon Masud and Hwang's model. The proposed model minimises total production costs, carrying or backordering costs, and rates of change in labor levels. An industrial case demonstrates the feasibility of applying the proposed model to APP problems with three scenarios of inventory levels. The proposed model yields an efficient compromise solution and the overall levels of DM satisfaction with the multiple combined response levels. There has been a trend toward solving complex planning problems using various metaheuristics. Therefore, in this paper, the multi-objective APP problem is also solved by hybrid metaheuristics based on the hunting search (HuSIHSA) and firefly (FAIHSA) mechanisms on the improved harmony search algorithm, and the resulting solutions are then compared. It is observed that the FAIHSA can be used as a successful alternative solution mechanism for solving APP problems over the three scenarios. Furthermore, the FAIHSA provides a systematic framework for facilitating the decision-making process, enabling a decision maker to interactively modify the desirability function approach and the related model parameters, with proper selection of control parameters, until a good optimal solution is obtained.
Abstract: The study of transport coefficients in electronic devices is currently carried out with analytical and empirical models. This study requires several simplifying assumptions, generally necessary to obtain analytical expressions for studying the different characteristics of electronic silicon-based devices. Further progress in the development, design, and optimization of silicon-based devices necessarily requires new theory and modeling tools. In our study, we use the PSO (Particle Swarm Optimization) technique as a computational tool to develop analytical approaches for studying the transport phenomena of electrons in crystalline silicon as a function of temperature and doping concentration. Good agreement between our results and measured data has been found. The optimized analytical models can also be incorporated into circuit simulators to study Si-based devices without impact on computational time and data storage.
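The PSO technique itself can be sketched in its basic global-best form; the one-dimensional objective, bounds, and inertia/acceleration coefficients below are illustrative placeholders, not the transport-model fitting problem of the paper:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimize f over a 1-D interval with global-best PSO.
    w, c1, c2 are standard inertia and acceleration coefficients."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest, pbest_val = pos[:], [f(x) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            v = f(pos[i])
            if v < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = pos[i], v
                if v < gbest_val:          # update global best
                    gbest, gbest_val = pos[i], v
    return gbest, gbest_val
```

In the paper's setting, `f` would measure the mismatch between an analytical mobility model and measured transport data over temperature and doping; here a quadratic test function suffices to show the swarm converging.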
Abstract: In this paper we present a novel approach to human body configuration based on the silhouette. We propose to address this problem within the Bayesian framework. We use an effective model-based MCMC (Markov Chain Monte Carlo) method to solve the configuration problem, in which the best configuration is defined as the MAP (maximum a posteriori) solution in the Bayesian model. This model-based MCMC utilizes the human body model to drive the MCMC sampling over the solution space. It converts the original high-dimensional space into a restricted subspace constructed from the human model and uses a hybrid sampling algorithm. We choose an explicit human model and carefully select the likelihood functions to represent the best configuration solution. The experiments show that this method obtains accurate configurations and saves time for different humans from multiple views.
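The MAP-seeking sampler at the core of such a method can be illustrated with a plain Metropolis chain on a one-dimensional log-posterior; the model-driven proposals and hybrid sampling of the paper are replaced here by a simple Gaussian random walk, so this is a sketch of the sampling principle only:

```python
import math
import random

def metropolis(log_target, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis sampler over a 1-D log-density.
    Returns the chain of samples; the MAP estimate can be taken
    as the sample with the highest log_target value."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)          # Gaussian proposal
        accept = math.exp(min(0.0, log_target(prop) - log_target(x)))
        if rng.random() < accept:                  # Metropolis rule
            x = prop
        samples.append(x)
    return samples
```

Running the chain on the standard normal log-density `-x*x/2` concentrates the samples around the mode at 0; in the body-configuration setting, `log_target` would combine the silhouette likelihood with the body-model prior over the restricted subspace.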
Abstract: This paper presents an intrusion detection system with a hybrid neural network model based on RBF and Elman networks. It is used for anomaly detection and misuse detection. The model has a memory function, so it can effectively detect discrete and related aggressive behavior. The RBF network is a real-time pattern classifier, and the Elman network provides memory of former events. The intrusion detection system based on the hybrid model is evaluated on the DARPA data set, and ROC curves are used to display the test results intuitively. The experiments prove that this hybrid-model intrusion detection system can effectively improve the detection rate and reduce the false-alarm and failure rates.
Abstract: The termination mechanism is an indispensable part of the emergency management mechanism. Despite its importance in both theory and practice, it is an almost brand-new research field. The concept of the termination mechanism is first proposed in this paper, and its design and implementation, which help guarantee the effectiveness and integrity of emergency management, are then discussed. Starting with an introduction to the problems caused by absent or incorrect termination, the essence of the termination mechanism is analyzed, a model based on Optimal Stopping Theory is constructed, and the termination index is given. The model can be applied to find the best termination time point. The termination decision should be considered not only in the termination stage but throughout the whole emergency management process, which makes it a dynamic decision-making process. Besides, the main subjects and the procedure of termination are illustrated after the termination time point is given. Finally, some future work is discussed.
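The flavor of Optimal Stopping Theory can be conveyed with the classic 1/e ("secretary") rule: observe a fraction of the sequence without stopping, then stop at the first value exceeding the best seen so far. This is a textbook illustration of a stopping index, not the paper's termination index:

```python
import math

def stop_index(observations, explore_frac=1 / math.e):
    """Classic 1/e stopping rule: skip the first explore_frac of the
    sequence, then stop at the first value beating the best seen.
    Falls back to the last index if nothing better appears."""
    n = len(observations)
    cutoff = int(n * explore_frac)
    best_seen = max(observations[:cutoff]) if cutoff else float("-inf")
    for i in range(cutoff, n):
        if observations[i] > best_seen:
            return i
    return n - 1
```

For the sequence `[3, 1, 4, 1, 5, 9, 2, 6]` the rule observes the first two values (best 3) and stops at index 2, where 4 first exceeds that benchmark; a termination index for emergencies would replace the raw values with a measure of how settled the situation has become.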
Abstract: Simplified coupled engine block-crankshaft models based on beam theory provide an efficient substitute for engine simulation in the design process. These models require an accurate definition of the main bearing stiffness. In this paper, an investigation of this stiffness is presented. The clearance effect is studied using a smooth bearing model; it is manifested at low shaft displacements. The hydrodynamic assessment model shows that the oil film has no stiffness at low loads and is infinitely rigid at high loads. The deformation stiffness is determined using a suitable finite element model based on real CAD geometries. As a result, a main bearing behaviour law is proposed. This behaviour law takes into account the clearance, the hydrodynamic sustentation, and the deformation stiffness. It properly ensures the transition from the low-rigidity configuration to the high-rigidity configuration.