Abstract: Transportation is vital to modern life. The
transportation system plays many roles, from economic development
to post-disaster aid such as rescue operations in the first hours
and days after an earthquake. In the post-earthquake response
phase, the transportation system is the basis for ground
operations, including rescue and relief efforts and the delivery
of food to victims. Partial or complete obstruction of this system
brings such operations to a halt. Bridges are among the most
important elements of the transportation network: the failure of a
bridge, in the most optimistic case, severs the connection between
two regions, and in a densely developed network it severs the
connections among many. In this paper, to evaluate the
vulnerability and estimate the damage level of Tehran's bridges,
the HAZUS method, developed by the Federal Emergency Management
Agency (FEMA) with the aid of the National Institute of Building
Sciences (NIBS), is used for the first time in Iran. In this
method, fragility curves are used to evaluate the probability of
collapse. Iran lies on a seismic belt and is therefore vulnerable
to earthquakes, so studying the probability of collapse of
bridges, an important part of the transportation system, during
earthquakes is of great importance. The purpose of this study is
to provide fragility curves for the Gisha Bridge, one of the
longest steel bridges in Tehran and an important lifeline element.
In addition, the damage probability for this bridge under a
specific scenario earthquake is calculated. The fragility curves
show that, for the considered scenario, the probability of
complete collapse of the bridge is 8.6%.
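HAZUS-type fragility curves give the probability of reaching or exceeding a damage state as a lognormal CDF of a ground-motion intensity measure. A minimal sketch in Python (the median and dispersion values below are illustrative assumptions, not parameters from this study):

```python
import math

def fragility(im: float, median: float, beta: float) -> float:
    """Probability of reaching or exceeding a damage state given an
    intensity measure `im`, for a lognormal fragility curve with the
    given median capacity and log-standard deviation `beta`."""
    # Standard normal CDF expressed via the error function.
    z = math.log(im / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative values only: median spectral acceleration 1.2 g and
# beta 0.6 for the "complete" damage state, evaluated at Sa = 0.55 g.
p_complete = fragility(im=0.55, median=1.2, beta=0.6)
print(round(p_complete, 3))
```

By construction the curve passes through probability 0.5 at the median capacity and rises monotonically with the intensity measure.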
Abstract: Knowledge of the nature of loading is very important in
order to account for the total behavior of a structure under
vibration, shock, fatigue, etc. Fatigue accounts for some 90% of
failures, and fatigue loadings are often very complex. In this
paper, a study of a double through-crack at a hole in a plate
subjected to fatigue loading is presented. Various loading modes
are studied under the same applied load. The fatigue life is
computed and the effect of the stress ratio is highlighted. This
work is conducted on aluminum alloy 2024 T351, which is used in
many aerospace and aeronautics applications. The
constant-amplitude fatigue crack growth behavior is studied using
the AFGROW code with the Forman model. The fatigue crack growth
rate and fatigue life for the different loading modes are compared
while varying other geometrical parameters such as the thickness
and the dimensions of the notch hole.
Abstract: Non-destructive evaluation of the condition of
in-service power transformers is necessary to avoid catastrophic
failures, and Dissolved Gas Analysis (DGA) is one of the most
important methods. Traditional, statistical, and intelligent DGA
approaches have been adopted for accurate classification of
incipient fault sources. Unfortunately, there are often not enough
faulty patterns for sufficient training of intelligent systems.
Bootstrapping is expected to alleviate this shortcoming and to
yield algorithms with better classification success rates. In this
paper, the performance of artificial neural network (ANN),
K-nearest neighbour, and support vector machine methods using
bootstrapped data is detailed, and it is shown that while the
success rate of the ANN algorithms improves remarkably, the other
methods do not benefit as much from the enlarged data space. For
assessment, two databases are employed: IEC TC10 and a dataset
collected from data reported in papers. The high average test
success rate demonstrates the remarkable outcome.
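Bootstrapping in this sense simply resamples the scarce faulty patterns with replacement to enlarge the training set. A minimal sketch (the gas-ratio features and fault labels below are hypothetical placeholders, not entries from the IEC TC10 database):

```python
import random

def bootstrap(samples, n_resamples, seed=0):
    """Draw `n_resamples` samples with replacement from a small
    labeled dataset to enlarge the training space."""
    rng = random.Random(seed)
    return [rng.choice(samples) for _ in range(n_resamples)]

# Hypothetical DGA patterns: (gas-ratio features, fault label).
faulty = [((0.1, 2.3, 0.7), "arcing"),
          ((0.4, 1.1, 0.2), "overheating")]
enlarged = bootstrap(faulty, n_resamples=100)
print(len(enlarged))  # 100
```

Each resampled set is then used to train one classifier; repeating the procedure gives the enlarged data space the abstract refers to.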
Abstract: This paper focuses on a technique for identifying the geological boundaries of the ground strata in front of a tunnel excavation site using the first-order adjoint method based on optimal control theory. A geological boundary is defined here as the interface between layers with different elastic moduli. In tunnel excavation, it is important to predict the ground conditions ahead of the cutting face, since excavating into weak strata or fault fracture zones may prolong the construction work and cause casualties. A theory for determining the geological boundaries of the ground numerically is investigated, employing excavation blasts and their vibration waves as the observation references. Following optimal control theory, a performance function defined as the square sum of the residuals between the computed and observed velocities is minimized, and the boundary layer is determined by this minimization. An elastic analysis governed by the Navier equation is carried out, treating the ground as an elastic body with linear viscous damping. To identify the boundary, the gradient of the performance function with respect to the geological boundary is calculated using the adjoint equation. The weighted gradient method is applied in the minimization algorithm. To solve the governing and adjoint equations, the Galerkin finite element method and the average acceleration method are employed for the spatial and temporal discretizations, respectively. With the presented method, the boundaries of three different strata can be identified. For the numerical studies, the Suemune tunnel excavation site is employed. First, the blasting force is identified in order to improve the accuracy of the analysis; the geological boundaries are then identified using the estimated blasting force. With this identification procedure, numerical analysis results that closely match the observation data were obtained.
Abstract: Many algorithms are available for sorting unordered elements, the most important being Bubble sort, Heap sort, Insertion sort, and Shell sort. These algorithms have their own pros and cons. Shell sort, an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted, minimizing complexity and time compared to insertion sort. It improves the efficiency of insertion sort by quickly shifting values toward their destination. Its average sort time is O(n^1.25), while the worst-case time is O(n^1.5). The algorithm performs a series of iterations, in each of which it swaps some elements of the array in such a way that in the last iteration, when the value of h is one, the number of swaps is reduced. Donald L. Shell invented a formula to calculate the value of 'h'. This work aims to identify improvements to the conventional Shell sort algorithm. The "Enhanced Shell Sort algorithm" improves the way the value of 'h' is calculated. It has been observed that by applying this algorithm, the number of swaps can be reduced by up to 60 percent compared to the existing algorithm, and in some cases the enhancement was found to be faster than the existing algorithms.
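For reference, the conventional Shell sort that the enhancement targets can be sketched as follows. The gap sequence used here is Shell's original halving sequence n/2, n/4, ..., 1; the paper's enhanced formula for 'h' is not reproduced:

```python
def shell_sort(a):
    """Shell sort with the classic gap sequence h = n//2, n//4, ..., 1.
    Returns the sorted list and the number of element moves."""
    a = list(a)
    n = len(a)
    moves = 0
    h = n // 2
    while h >= 1:
        # Gapped insertion sort: each h-spaced subsequence is sorted.
        for i in range(h, n):
            key = a[i]
            j = i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]  # shift instead of swapping pairwise
                j -= h
                moves += 1
            a[j] = key
        h //= 2
    return a, moves

print(shell_sort([35, 33, 42, 10, 14, 19, 27, 44]))
```

Counting the moves, as above, is how one would compare the conventional gap formula against an enhanced one.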
Abstract: With the extensive inclusion of documents, especially
text, in business systems, data mining no longer covers the full
scope of Business Intelligence. Data mining cannot extract useful
details from large collections of unstructured and semi-structured
written material in natural language. The most pressing issue is
how to draw potential business intelligence from text. To gain
competitive advantage for the business, it is necessary to develop
a new, powerful tool, text mining, to expand the scope of business
intelligence.
In this paper, we work out the strengths of text mining in
extracting business intelligence from the huge amount of textual
information sources within business systems. We apply text mining
to each stage of Business Intelligence systems to show that text
mining is a powerful tool for expanding the scope of BI. After
reviewing basic definitions and some related technologies, we
discuss their relationship to text mining and the benefits they
bring. Some examples and applications of text mining are also
given. The motivation is to develop a new approach to effective
and efficient textual information analysis, and thereby to expand
the scope of Business Intelligence with this powerful tool.
Abstract: The available algorithms for blind estimation, namely the constant modulus algorithm (CMA) and the decision-directed algorithm (DDA/DFE), suffer from convergence to local minima. Moreover, if the channel drifts considerably, any DDA loses track of the channel, so their usage is limited under varying channel conditions. The primary limitation in such cases is the requirement of overhead bits in the transmit framework, which wastes bandwidth. Such arrangements also fail to use channel state information (CSI), an important aid in improving the quality of reception. The main objective of this work is to reduce the overhead imposed by pilot symbols, which otherwise reduces the system throughput. We also formulate an arrangement based on dynamic Artificial Neural Network (ANN) topologies that not only lowers the overhead but also facilitates the use of CSI. A 2×2 Multiple Input Multiple Output (MIMO) system is simulated and the performance with different channel estimation schemes is evaluated. A new semi-blind approach based on a dynamic ANN is proposed for channel tracking under varying channel conditions, and its performance is compared with perfectly known CSI and least squares (LS) based estimation.
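As a point of reference for the LS baseline mentioned above, a least-squares channel estimate from known pilot symbols can be sketched for the flat-fading scalar case (the pilot values and channel gain below are illustrative; a real 2×2 MIMO system would estimate a channel matrix rather than a scalar):

```python
def ls_channel_estimate(pilots, received):
    """Least-squares estimate of a flat-fading channel gain h from
    known pilot symbols x and received samples y = h*x + noise."""
    num = sum(y * x.conjugate() for x, y in zip(pilots, received))
    den = sum(abs(x) ** 2 for x in pilots)
    return num / den

# Hypothetical setup: true channel 0.8 - 0.6j, QPSK pilots, no noise.
h = 0.8 - 0.6j
x = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
y = [h * s for s in x]
print(ls_channel_estimate(x, y))
```

With noise-free samples the estimate recovers h exactly; the pilot symbols that make this possible are precisely the overhead a semi-blind scheme tries to reduce.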
Abstract: Nonlinear damping behavior is usually ignored in the
design of a miniature moving-coil loudspeaker. But when the
loudspeaker operates in air, the damping parameter varies with the
voice-coil displacement owing to viscous air flow. The present
paper presents an identification model, posed as an inverse
problem, to identify the nonlinear damping parameter in the lumped
parameter model of the loudspeaker. Theoretical results for the
nonlinear damping are verified using a laser displacement
measurement scanner. These results indicate that the nonlinearity
of the damping parameter differs greatly between operation in air
and in vacuum. It is believed that the results of the present work
can be applied to the diagnosis and sound quality improvement of
miniature loudspeakers.
Abstract: Never has a revolution affected all aspects of
humanity as the communication revolution during the past two
decades. This revolution, with all its advances and utilities, swept the
world thus becoming an integral part of our lives, hence giving way
to emerging applications at the social, economic, political, and
educational levels. More specifically, such applications have changed
the delivery system through which learning is acquired by students.
Interaction with educators, accessibility to content, and creative
delivery options are but a few facets of the new learning experience
now being offered through the use of technology in the educational
field. With varying success, developing countries have tried to
keep pace with the use of educational technology in more advanced
parts of the world. One such country is the small, oil-rich state
of Kuwait, which has tried to adopt the e-educational model;
however, this effort has yet to be evaluated. This study aimed to
fill that research void. The study explored
students' acceptance of incorporating communication technologies in
higher education in Kuwait. Students' responses to survey questions
presented an overview of the e-learning experience in this country,
and drew a framework through which implications and suggestions
for future research were discussed to better serve the advancement of
e-education in developing countries.
Abstract: Risk response planning is important for software project risk management (SPRM). In CMMI, risk management is a process area at the third capability maturity level, which provides a framework for software project risk identification, assessment, response planning, and control. However, CMMI-based SPRM currently lacks quantitative supporting tools, especially for implementing software project risk response planning. In this paper, an economic optimization model for selecting risk reduction actions in the software project risk response planning phase is presented. Furthermore, an example taken from the Chinese software industry illustrates the application of this method. The research provides a risk decision method for project risk managers that can be used in the implementation of CMMI-based SPRM.
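The paper's economic model is not reproduced here, but the core selection step, choosing risk-reduction actions to maximize expected loss reduction within a cost budget, can be sketched as a small combinatorial optimization (all action names, costs, and values below are hypothetical):

```python
from itertools import combinations

def select_actions(actions, budget):
    """Choose a subset of risk-reduction actions that maximizes the
    expected loss reduction subject to a cost budget (brute force,
    fine for the handful of actions typical of one project)."""
    best, best_value = (), 0.0
    for r in range(1, len(actions) + 1):
        for subset in combinations(actions, r):
            cost = sum(a[1] for a in subset)
            value = sum(a[2] for a in subset)
            if cost <= budget and value > best_value:
                best, best_value = subset, value
    return [a[0] for a in best], best_value

# Hypothetical actions: (name, cost, expected loss reduction).
acts = [("prototype early", 20, 50.0),
        ("add code reviews", 10, 30.0),
        ("hire consultant", 40, 60.0)]
names, value = select_actions(acts, budget=50)
print(names, value)
```

For larger action sets the same selection problem is a 0/1 knapsack and would be solved with dynamic programming or an ILP solver instead of enumeration.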
Abstract: This research work is concerned with the eigenvalue problem for the integral operators obtained by linearization of a nonlocal evolution equation. Section II.A describes the nature of the problem and the objective of the project. The problem is related to the "stable solution" of the evolution equation, the so-called "instanton", which describes the interface between two stable phases. The instanton and its asymptotic behavior are analyzed in section II.C by imposing the Green function and making use of a probability kernel; as a result, a classical theorem that is important for the instanton is proved. Section III is devoted to a study of the integral operators related to interface dynamics, concerning the analysis of the Cauchy problem for the evolution equation with initial data close to the different phases in different regions of space.
Abstract: In this work, experimental tie-line results and
solubility (binodal) curves were obtained for the ternary systems
(water + acetic acid + methyl isobutyl ketone (MIBK)) and (water +
lactic acid + methyl isobutyl ketone) at T = 294.15 K and
atmospheric pressure. The consistency of the experimental tie-line
values was checked with the Othmer-Tobias and Hand correlations.
To assess the extraction effectiveness of the solvents, the
distribution and selectivity curves were plotted. In addition, the
experimental tie-line data were correlated with the NRTL model.
The interaction parameters for the NRTL model were retrieved from
the experimental results by combining the homotopy method with
genetic algorithms.
Abstract: In this paper we present high-performance dynamically
allocated multi-queue (DAMQ) buffer schemes for fault-tolerant
system-on-chip applications that require an interconnection
network. Two virtual channels share the same buffer space. Fault
tolerance mechanisms for interconnection networks are becoming a
critical design issue for large massively parallel computers, and
they are also important for high-performance SoCs as system
complexity keeps increasing rapidly. On the message switching
layer, we make improvements to boost system performance when
faults are involved in component communication. In the proposed
scheme, when a node or a physical channel is deemed faulty, the
previous-hop node terminates the buffer occupancy of messages
destined for the failed link. Buffer usage decisions are made at
the switching layer without interaction with higher abstraction
layers, so buffer space is quickly released to messages destined
for other healthy nodes. The buffer space is therefore used
efficiently when faults occur at some nodes.
Abstract: At present, intelligent planning in the Graphplan framework is a focus of artificial intelligence research, and Creating or Destroying Objects Planning (CDOP) remains one of the unsolved problems, and difficulties, of this field. In this paper, we study this planning problem and put forward the idea of transforming objects into propositions, based on which we offer an algorithm for Creating or Destroying Objects in the Graphplan framework (CDOGP). Compared to Graphplan, the new algorithm can solve not only all the problems that Graphplan can, but also a subset of CDOP problems. We introduce the idea of object-proposition for the first time, and we emphasize the representation of creating or destroying objects operators and an algorithm in the Graphplan framework. In addition, we analyze the complexity of this algorithm.
Abstract: Given that the optimization of business processes is a
crucial requirement to navigate, survive, and even thrive in
today's volatile business environment, this paper presents a
framework for selecting the best-fit optimization package for
solving complex business problems. The complexity of the problem
and/or the use of unsuitable optimization software can lead to
biased solutions. Accordingly, the proposed framework identifies a
number of relevant factors (e.g. decision variables, objective
functions, and modeling approach) to be considered during the
evaluation and selection process. The application domain, problem
specifications, and available accredited optimization approaches
are also taken into account. The output of the framework is a
recommendation of one or two optimization software packages
believed to provide the best results for the underlying problem.
In addition, a set of guidelines and recommendations on how
managers can conduct an effective optimization exercise is
discussed.
Abstract: In this article, we present our research work in
human-machine interaction. The research consists in controlling
the workspace with the eyes. We present some of our results, in
particular the detection of the eyes and the recognition of mouse
actions. With this system, a handicapped user becomes able to
interact with the machine in a more intuitive way in diverse
applications and contexts. To test our application, we chose to
work in real time on video captured by a camera placed in front of
the user.
Abstract: In molecular biology, microarray technology is widely and successfully used to measure gene activity efficiently. For less studied organisms, methods to design custom-made microarray probes are available. One design criterion is to select probes with minimal melting temperature variance, thus ensuring similar hybridization properties. If the microarray application focuses on the investigation of metabolic pathways, it is not necessary to cover the whole genome; it is more efficient to cover each metabolic pathway with a limited number of genes. Firstly, an approach is presented which minimizes the overall melting temperature variance of the selected probes for all genes of interest. Secondly, the approach is extended with the additional constraint of covering all pathways with a limited number of genes while minimizing the overall variance. The new optimization problem is solved by a bottom-up programming approach, which reduces the complexity to make it computationally feasible. As an example, the new method is applied to the selection of microarray probes covering all fungal secondary metabolite gene clusters of Aspergillus terreus.
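The underlying selection step, choosing one probe per gene so that the overall melting temperature variance is minimal, can be sketched by exhaustive search for a toy instance (the paper's bottom-up approach avoids this exponential enumeration; the Tm values below are hypothetical):

```python
from itertools import product

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def select_probes(candidates):
    """Pick one probe melting temperature per gene such that the
    overall Tm variance of the selection is minimal (brute force
    over all combinations, feasible only for a few genes)."""
    best, best_var = None, float("inf")
    for choice in product(*candidates):
        v = variance(choice)
        if v < best_var:
            best, best_var = choice, v
    return best, best_var

# Hypothetical candidate Tm values (degrees C) for three genes.
tms = [[59.0, 61.5, 64.0], [59.5, 63.0], [60.0, 66.0]]
print(select_probes(tms))
```

The search space grows as the product of the candidate counts per gene, which is why a complexity-reducing formulation is needed for genome-scale probe sets.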
Abstract: This study examines the determinants of corporate cash holdings of non-financial quoted firms in Nigeria, using a sample of fifty-four non-financial firms listed on the Nigerian Stock Exchange for the period 1995-2009. Data were sourced from the annual reports of the sampled firms and analyzed using the Generalized Method of Moments (GMM). The study finds evidence supporting a target adjustment model, and that firms cannot instantaneously adjust towards the target cash level because adjustment is costly. The results also show a significant negative relationship between cash holdings and firm size, net working capital, return on assets, and bank relationship, and a positive relationship with growth opportunities, leverage, inventories, accounts receivable, and financial distress. Furthermore, there is no significant relationship between cash holdings and cash flow. This study finds that most of the variables relevant for explaining cash holdings in developed countries are also relevant in the Nigerian setting.
Abstract: Geometric errors in the manufacturing process can be
reduced by optimally positioning the fixture elements so as to
stiffen the workpiece. In this paper, we propose a new fixture
layout optimization method, N-3-2-1, for large metal sheets that
combines a genetic algorithm with finite element analysis. The
objective function is to minimize the sum of the nodal deflections
normal to the surface of the workpiece. Two different kinds of
case studies are presented, and the optimal positions of the
fixturing elements are obtained for each case.
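The genetic-algorithm half of such a method can be illustrated with a tiny real-coded GA. Here a quadratic bowl stands in for the finite element deflection evaluation, and all parameter choices (population size, mutation scale, the optimum at (0.3, 0.7)) are illustrative assumptions, not values from the paper:

```python
import random

def genetic_minimize(fitness, bounds, pop=30, gens=60, seed=1):
    """Tiny real-coded genetic algorithm: elitism, tournament
    selection, midpoint crossover, gaussian mutation. Minimizes
    `fitness` over a box given by `bounds`."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=fitness)
        nxt = [scored[0], scored[1]]  # elitism: keep two best unchanged
        while len(nxt) < pop:
            # Tournament selection of two parents.
            a = min(rng.sample(scored, 3), key=fitness)
            b = min(rng.sample(scored, 3), key=fitness)
            # Midpoint crossover plus gaussian mutation, clipped to bounds.
            child = [min(max((x + y) / 2 + rng.gauss(0, 0.1), lo), hi)
                     for (x, y), (lo, hi) in zip(zip(a, b), bounds)]
            nxt.append(child)
        P = nxt
    return min(P, key=fitness)

# Stand-in for an FEA deflection evaluation: a quadratic bowl whose
# minimum at (0.3, 0.7) plays the role of the optimal fixture position.
defl = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best = genetic_minimize(defl, bounds=[(0, 1), (0, 1)])
print([round(c, 2) for c in best])
```

In the actual method, each fitness evaluation would run a finite element analysis of the sheet with the candidate fixture layout and return the summed normal deflections.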
Abstract: This paper addresses a new challenge: customer
satisfaction in mobile customer relationship management (mCRM). It
presents a conceptualization of mCRM in terms of its unique
customer satisfaction characteristics and develops an empirical
framework for customer satisfaction in mCRM. A single-case study
is applied as the methodology. To gain an overall view of the
empirical case, this paper draws on internal and important company
information gathered during the investigation. Interviews with the
company's main informants are the key data source, through which
the issues are identified and the proposed framework is built. The
study supports the development of customer satisfaction in mCRM,
links the theoretical framework to practice, and provides
directions for future research. The paper is therefore useful for
industry, as it helps practitioners understand how customer
satisfaction shapes the mCRM structure and increases competitive
advantage. Finally, it contributes to practice by linking a
theoretical framework of customer satisfaction in mCRM to a
practical real-world case.