Abstract: Solving the nonlinear dynamic equilibrium equations of base-isolated structures with a conventional monolithic approach, i.e. an implicit single-step time integration method combined with an iteration procedure, and simulating the dynamic behavior of seismic isolators with existing nonlinear analytical models, such as differential equation models, can require significant computational effort. In order to reduce the numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit, conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations within each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as in the case of the widely used differential equation model. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model adopted in this work.
Furthermore, the low stiffness of the base isolation system with lead rubber bearings yields a critical time step considerably larger than the time step of the imposed ground acceleration, thus avoiding stability problems in the proposed mixed method.
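The two integrators combined in this partitioned scheme can be illustrated in isolation. The sketch below is a deliberately simplified, assumption-laden illustration on a linear single-degree-of-freedom oscillator, not the paper's coupled base-isolation implementation: the explicit central difference method (stable only for a time step below roughly 2/omega_n, which is why a flexible isolation system with a low natural frequency helps) and the implicit Newmark constant average acceleration method, which for a linear system reduces to a single solve per step with no iteration.

```python
import numpy as np

def central_difference(m, c, k, p, u0, v0, dt):
    """Explicit central-difference integration (conditionally stable, dt < 2/omega_n)."""
    n = len(p)
    u = np.zeros(n)
    u[0] = u0
    a0 = (p[0] - c * v0 - k * u0) / m
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0          # fictitious displacement at t = -dt
    k_hat = m / dt**2 + c / (2 * dt)
    for i in range(n - 1):
        p_hat = p[i] - (k - 2 * m / dt**2) * u[i] - (m / dt**2 - c / (2 * dt)) * u_prev
        u_prev, u[i + 1] = u[i], p_hat / k_hat
    return u

def newmark_avg_accel(m, c, k, p, u0, v0, dt):
    """Implicit Newmark constant-average-acceleration method (unconditionally stable).
    For a linear system k_eff is constant, so each step is one solve, no iteration."""
    n = len(p)
    u, v = np.zeros(n), np.zeros(n)
    u[0], v[0] = u0, v0
    a = (p[0] - c * v0 - k * u0) / m
    k_eff = k + 4 * m / dt**2 + 2 * c / dt
    for i in range(n - 1):
        p_eff = (p[i + 1] + m * (4 * u[i] / dt**2 + 4 * v[i] / dt + a)
                 + c * (2 * u[i] / dt + v[i]))
        u[i + 1] = p_eff / k_eff
        a_new = 4 * (u[i + 1] - u[i]) / dt**2 - 4 * v[i] / dt - a
        v[i + 1] = v[i] + 0.5 * dt * (a + a_new)
        a = a_new
    return u

# Free vibration of an undamped SDOF with a 1 s natural period (illustrative values)
m, k = 1.0, (2 * np.pi)**2
dt, nsteps = 0.01, 101                                # exactly one period of motion
p = np.zeros(nsteps)
u_cd = central_difference(m, 0.0, k, p, 1.0, 0.0, dt)
u_nm = newmark_avg_accel(m, 0.0, k, p, 1.0, 0.0, dt)
```

Both integrators should return the oscillator close to its initial displacement after one full period, confirming the step sizes and start-up conditions are consistent.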
Abstract: This article considers the problem of evaluating the
infinite-time (or finite-time) ruin probability under a given
compound Poisson surplus process by approximating the claim size
distribution by a finite mixture of exponentials, i.e. a
hyperexponential distribution. It restates the infinite-time (or
finite-time) ruin probability as a solvable ordinary differential
equation (or partial differential equation). The findings are
illustrated through a simulation study.
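The hyperexponential machinery is easiest to see in its one-component special case: for a single exponential claim distribution, the ODE for the infinite-time ruin probability admits the classical Cramer-Lundberg closed form. The sketch below (parameter values are illustrative, not from the article) checks that closed form against a crude Monte Carlo estimate of the ruin probability.

```python
import math, random

def ruin_prob_exponential(u, lam, beta, c):
    """Infinite-time ruin probability for the compound Poisson surplus process
    with Exp(beta) claims: psi(u) = (lam/(c*beta)) * exp(-(beta - lam/c) * u).
    Requires the net profit condition c > lam/beta."""
    return (lam / (c * beta)) * math.exp(-(beta - lam / c) * u)

def ruin_prob_mc(u, lam, beta, c, horizon=100.0, n_paths=10000, seed=1):
    """Monte Carlo estimate of the finite-time ruin probability on [0, horizon];
    for a large horizon this approximates the infinite-time probability."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)            # next claim arrival time
            if t > horizon:
                break
            claims += rng.expovariate(beta)      # claim size ~ Exp(beta)
            if u + c * t - claims < 0:           # ruin can only occur at claim instants
                ruined += 1
                break
    return ruined / n_paths

# Illustrative parameters: claim rate 1, mean claim 1, premium rate 1.5, initial surplus 2
exact = ruin_prob_exponential(2.0, 1.0, 1.0, 1.5)
estimate = ruin_prob_mc(2.0, 1.0, 1.0, 1.5)
```

The general hyperexponential case replaces the single exponent with a sum of exponentials whose rates solve the characteristic equation of the associated ODE.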
Abstract: Power dissipation increases exponentially during test mode as compared to normal operation of the circuit. In extreme cases, test power is more than twice the power consumed in normal operation mode. The test vector generation scheme is a key factor in deciding how power-hungry a circuit is during testing. The test vector count and the consequent leakage current are functions of the test vector generation scheme. A fault-based test vector count optimization is presented in this work. It helps in reducing both the test vector count and the leakage current. In the presented scheme, test vectors are reduced by extracting essential child vectors. The scheme has been tested experimentally using stuck-at fault models, and the results confirm the reduction in test vector count.
Abstract: This paper describes the availability analysis of the milling system of a rice milling plant using a probabilistic approach. The subsystems under study are special-purpose machines. The availability analysis of the system is carried out to determine the effect of the failure and repair rates of each subsystem on the overall performance (i.e. steady-state availability) of the system concerned. Further, on the basis of the effect of repair rates on system availability, maintenance repair priorities are suggested. The problem is formulated as a Markov birth-death process, assuming exponential distributions for failure and repair rates. The first-order differential equations associated with the transition diagram are developed using the mnemonic rule. These equations are solved using normalizing conditions and a recursive method to derive the steady-state availability expression of the system. The findings of the paper are presented and discussed with the plant personnel to help adopt a suitable maintenance policy to increase the productivity of the rice milling plant.
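The steady-state availability computation described above can be sketched on a toy system. The example below is a hypothetical two-subsystem series system with illustrative failure and repair rates (not the plant's data): it builds the Markov generator, solves the balance equations with the normalizing condition, and checks the result against the independent-components product formula.

```python
import numpy as np

# Hypothetical two-subsystem series system: the system is up only when both
# subsystems are up. Failure rates lam[i] and repair rates mu[i] are
# illustrative, with exponentially distributed times as in the abstract.
lam = np.array([0.02, 0.05])    # failures per hour
mu  = np.array([0.50, 0.40])    # repairs per hour
n = 2
S = 1 << n                      # 4 states; bit i = 1 means subsystem i is up

# Build the generator matrix Q of the birth-death (Markov) process
Q = np.zeros((S, S))
for s in range(S):
    for i in range(n):
        if s & (1 << i):                    # subsystem i up -> it can fail
            Q[s, s ^ (1 << i)] += lam[i]
        else:                               # subsystem i down -> it is repaired
            Q[s, s ^ (1 << i)] += mu[i]
    Q[s, s] = -Q[s].sum()

# Steady state: solve pi Q = 0 subject to sum(pi) = 1
A_mat = Q.T.copy()
A_mat[-1, :] = 1.0                          # replace one equation by normalization
b = np.zeros(S)
b[-1] = 1.0
pi = np.linalg.solve(A_mat, b)

availability = pi[S - 1]                    # probability both subsystems are up
product_form = np.prod(mu / (lam + mu))     # independent-components check
```

With independent repairs the Markov solution must reproduce the product of per-subsystem availabilities, which is a useful sanity check before moving to repair-crew constraints.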
Abstract: In this paper, the problem of a mixed-mode crack embedded in an infinite medium made of a functionally graded piezoelectric material (FGPM), with crack surfaces subjected to electro-mechanical loadings, is investigated. Eringen's non-local theory of elasticity is adopted to formulate the governing electro-elastic equations. The properties of the piezoelectric material are assumed to vary exponentially along a plane perpendicular to the crack. Using the Fourier transform, three integral equations are obtained in which the unknown variables are the jumps of the mechanical displacements and electric potentials across the crack surfaces. To solve the integral equations, the unknowns are directly expanded as a series of Jacobi polynomials, and the resulting equations are solved using the Schmidt method. In contrast to classical solutions based on the local theory, it is found that no mechanical stress or electric displacement singularities are present at the crack tips when the nonlocal theory is employed. A direct benefit is the ability to use the calculated maximum stress as a fracture criterion. The primary objective of this study is to investigate the effects of the crack length, the material gradient parameter describing the FGPM, and the lattice parameter on the mechanical stress and electric displacement fields near the crack tips.
Abstract: Big Data (BD) is associated with a new generation of technologies and architectures which can harness the value of extremely large volumes of very varied data through real-time processing and analysis. It involves changes in (1) data types, (2) accumulation speed, and (3) data volume. This paper presents the main concepts related to the BD paradigm and introduces architectures and technologies for BD and BD sets. The integration of BD with the Hadoop framework is also underlined. BD has attracted a lot of attention in the public sector due to newly emerging technologies that make network access widely available, while the volume of different types of data has increased exponentially. Some applications of BD in the public sector in Romania are briefly presented.
Abstract: Plastic extrusion has been an important plastic production process since the 19th century. However, in the plastic extrusion process, wide variation in temperature along the extrudate usually leads to scrap formation on the side of finished products. To avoid this, the temperature distribution along the extrudate must be well understood. This work developed an analytical model that predicts the temperature distribution over the billet (the polymer melt) along the extrudate during the extrusion process, with the limitation that the polymers in question do not include biopolymers such as DNA. The model was solved and simulated. Results for two different plastic materials (polyvinyl chloride and polycarbonate) using a self-developed MATLAB code and commercially developed software (ANSYS) were generated and compared. It was observed that there is thermodynamic heat transfer from the billet's entry into the die down to its exit. The plots indicate a natural exponential decay of temperature with time and along the die length, the temperature being 413 K and 474 K for polyvinyl chloride and polycarbonate respectively at the entry, and 299.3 K and 328.8 K at the exit, for a surrounding temperature of 298 K. The extrusion model was validated by comparing the MATLAB code simulation with a commercially available ANSYS simulation, and the results agree favourably. This work concludes that the developed mathematical model and the self-generated MATLAB code are reliable tools for predicting the temperature distribution along the extrudate in the plastic extrusion process.
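The exponential decay reported above can be sketched with a lumped cooling law. The snippet below is a minimal illustration, not the paper's analytical model: it uses the abstract's PVC entry temperature (413 K) and surrounding temperature (298 K), while the decay constant k is an assumed illustrative value.

```python
import math

def billet_temperature(t, T0, T_amb, k):
    """Lumped exponential cooling: T(t) = T_amb + (T0 - T_amb) * exp(-k * t)."""
    return T_amb + (T0 - T_amb) * math.exp(-k * t)

# Entry and ambient temperatures from the abstract (PVC case);
# k = 0.05 per second is an illustrative assumption, not a fitted value.
T0, T_amb, k = 413.0, 298.0, 0.05
profile = [billet_temperature(t, T0, T_amb, k) for t in range(0, 101, 10)]
```

The profile starts at the entry temperature and decays monotonically toward the surrounding temperature, matching the qualitative behavior the abstract describes.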
Abstract: We study four models of a three-server queueing system with Bernoulli-schedule optional server vacations. Customers arriving at the system one by one in a Poisson process receive identical exponential service from three parallel servers according to a first-come, first-served queue discipline. In model A, all three servers may be allowed a vacation at one time; in model B, at most two of the three servers may be allowed a vacation at one time; in model C, at most one server is allowed a vacation; and in model D, no server is allowed a vacation. We study the steady-state behavior of the four models and obtain steady-state probability generating functions for the queue size at a random point in time for all states of the system. In model D, a known result for a three-server queueing system without server vacations is recovered.
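The known no-vacation result recovered as model D is the classical M/M/c queue-length distribution. The sketch below (with illustrative arrival and service rates, not values from the paper) computes it for c = 3 and checks it by normalization and against the standard mean-queue-length formula.

```python
from math import factorial

def mm_c_probs(lam, mu, c, nmax=200):
    """Steady-state queue-length distribution of an M/M/c queue (no vacations),
    i.e. the classical result recovered as model D. Requires rho = lam/(c*mu) < 1."""
    a = lam / mu                  # offered load
    rho = a / c                   # server utilization
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    # p_n = p0 a^n/n! for n <= c, geometric tail p_c rho^(n-c) beyond
    return [p0 * a**n / factorial(n) if n <= c
            else p0 * a**c / factorial(c) * rho**(n - c)
            for n in range(nmax + 1)]

# Illustrative rates: arrivals at 2 per unit time, each server serves 1 per unit time
p = mm_c_probs(lam=2.0, mu=1.0, c=3)
mean_queue = sum((n - 3) * pn for n, pn in enumerate(p) if n > 3)   # E[queue length]
```

For these rates the closed form gives a mean queue length of 8/9, so the truncated distribution can be verified exactly.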
Abstract: Hepatitis is one of the most common and dangerous diseases that affect humankind, exposing millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for the diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system comprises three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an "indirect approach" is used for fuzzy system modeling, implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first propose a Type-I fuzzy system, which achieved an accuracy of approximately 90.9%. In the proposed system, the process of diagnosis faces vagueness and uncertainty in the final decision. Thus, the imprecise knowledge was managed by using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy logic has the ability to diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty.
Abstract: Per capita energy usage in any country increases exponentially with its development. As a result, countries' dependence on fossil fuels for energy generation is also increasing tremendously, creating economic and environmental concerns. Tropical countries receive a considerable amount of solar radiation throughout the year, so the use of solar energy with different energy storage and conversion methodologies is a viable solution to minimize the ever-increasing demand on depleting fossil fuels. The salinity-gradient solar pond is one such solar energy application. This paper reports the characteristics and performance of a thermally insulated, experimental salinity-gradient solar pond built at the premises of the University of Kelaniya, Sri Lanka. Particular attention is given to the evolution of the three-layer structure that exists in the stable state of a salinity-gradient solar pond over a long period of time, under different environmental conditions. The operational procedures required to maintain long-term thermal stability are also reported in this article.
Abstract: Speed dispersion is closely related to traffic safety. In this paper, several indicating parameters (the standard speed deviation, the coefficient of variation, the deviation between V85 and V15, the mean speed deviation, and the speed difference between adjacent cars) are applied to investigate the characteristics of speed dispersion, where V85 and V15 are the 85th and 15th percentile speeds, respectively. Their relationships are fully investigated, and the results show that there is a positive (linear) relation between the mean speed and the deviation between V85 and V15, and a negative (quadratic) relation between traffic flow and the standard speed deviation. The mean speed deviation grows exponentially with mean speed, while the absolute speed deviation between adjacent cars grows linearly with the headway. The results provide basic information for traffic management.
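The dispersion measures named above are straightforward to compute from spot-speed data. The snippet below is a minimal sketch on invented illustrative speeds (not the paper's data), showing how V85, V15 and the other indicators are obtained.

```python
import numpy as np

# Illustrative spot speeds (km/h); these values are made up for demonstration.
speeds = np.array([52, 55, 58, 60, 61, 63, 64, 66, 68, 70,
                   72, 75, 78, 80, 84], dtype=float)

# V85 and V15 are the 85th and 15th percentile speeds used in the abstract
v85 = np.percentile(speeds, 85)
v15 = np.percentile(speeds, 15)

dispersion = {
    "std_dev": speeds.std(ddof=1),                    # standard speed deviation
    "coef_var": speeds.std(ddof=1) / speeds.mean(),   # coefficient of variation
    "v85_v15_gap": v85 - v15,                         # deviation between V85 and V15
}
```

Studies of the kind the abstract describes then regress these indicators against mean speed, flow, and headway to obtain the reported linear, quadratic, and exponential relationships.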
Abstract: In the process of recovering oil from weak sandstone formations, the strength of the sandstone around the wellbore is weakened by the increase of effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, a phenomenon termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, there has been a need for a reliable study tool to understand the mechanism of sanding. One method of studying sand production is the widely recognized Discrete Element Method (DEM) code Particle Flow Code (PFC3D), which represents sand as granular individual elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D and a simple Darcy flow in understanding the sand production behavior of a weak sandstone. An isotropic triaxial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond model of PFC3D using a 10 m high, 10 m diameter solid cylindrical model. The effect of the confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and the sample material properties obtained from the triaxial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failures using the same model geometry. The fluid flow network comprised every four particles connected with tetrahedral flow pipes around a central pore or flow domain.
Parametric studies included the effects of confining stress and fluid pressure, as well as validating the flow rate-permeability relationship to verify Darcy's law. The effect of model size scaling on sanding was also investigated using a 4 m high, 2 m diameter model. The parallel bond model successfully reproduced the sample's strength of 4.4 MPa, showing a sharp peak strength before strain softening, similar to the behavior of real cemented sandstones. There appears to be an exponentially increasing relationship for the bigger model, but a curvilinear one for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. In the parametric studies, the flow rate has a linear relationship with permeability at constant pressure head. The higher the fluid flow pressure, the higher the number of broken bonds and hence sanding. The DEM code PFC3D is a promising tool for studying the micromechanical behavior of cemented sandstones.
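The flow rate-permeability validation mentioned above rests on Darcy's law, under which flow rate is linear in permeability at constant pressure head. A minimal sketch (values purely illustrative, unrelated to the PFC3D model):

```python
def darcy_flow_rate(k, A, dp, mu, L):
    """Darcy's law: Q = k * A * dp / (mu * L), with permeability k (m^2),
    cross-section A (m^2), pressure drop dp (Pa), dynamic viscosity mu (Pa s),
    and sample length L (m)."""
    return k * A * dp / (mu * L)

# Doubling permeability at constant pressure head should double the flow rate
q1 = darcy_flow_rate(k=1e-13, A=3.14, dp=1e6, mu=1e-3, L=4.0)
q2 = darcy_flow_rate(k=2e-13, A=3.14, dp=1e6, mu=1e-3, L=4.0)
```

Recovering this proportionality from the simulated pipe network is the check the parametric study uses to confirm the flow model behaves as Darcy flow.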
Abstract: Augmented and virtual reality are quickly becoming
a hotbed of activity, with millions of dollars being spent on R & D
and companies such as Google and Microsoft rushing to stake their
claim. Augmented reality (AR) is, however, marching ahead due to
the spread of the ideal AR device, the smartphone. Despite its
potential, there remains a deep digital divide between developed
and developing countries. The Technology Acceptance Model (TAM)
and Hofstede's cultural dimensions also predict that the behavioural
intention to adopt AR in India will be large. This paper takes a
quantitative approach, collecting 340 survey responses to AR
scenarios and analyzing them statistically. The survey responses
show that the Intention to Use, Perceived Usefulness and Perceived
Enjoyment dimensions are high among the urban population in India.
This, along with the exponential growth in smartphone adoption,
indicates that India is on the cusp of a boom in the AR sector.
Abstract: Magnetic Resonance Imaging Contrast Agents
(MRI-CM) are significant in clinical and biological imaging as
they have the ability to alter the normal tissue contrast, thereby
affecting the signal intensity to enhance the visibility and detectability
of images. Superparamagnetic Iron Oxide (SPIO) nanoparticles
coated with dextran or carboxydextran are currently available for
clinical MR imaging of the liver. Most SPIO contrast agents are
T2-shortening agents, and Resovist (Ferucarbotran) is a clinically
tested, organ-specific SPIO agent with a low-molecular-weight
carboxydextran coating. The enhancement effect of
Resovist depends on its relaxivity which in turn depends on factors
like magnetic field strength, concentrations, nanoparticle properties,
pH and temperature. Therefore, this study was conducted to
investigate the impact of field strength and different contrast
concentrations on enhancement effects of Resovist. The study
explored the MRI signal intensity of Resovist in the physiological
range of plasma from T2-weighted spin echo sequence at three
magnetic field strengths: 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4,
r2=95), and 3 T (r1=3.3, r2=160) and the range of contrast
concentrations by mathematical simulation. The relaxivities r1 and r2
(L mmol-1 s-1) were obtained from a previous study, and the selected
concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were
simulated using TR/TE ratio as 2000 ms /100 ms. According to the
reference literature, with increasing magnetic field strength, the
r1 relaxivity tends to decrease, while r2 does not show any
systematic relationship with the selected field strengths. In parallel,
the results of this study revealed that the signal intensity of Resovist
is higher at low concentrations than at high concentrations. The
highest signal intensity was observed at the low field strength
of 0.47 T. The maximum signal intensities for 0.47 T, 1.5 T and 3 T
were found at the concentration levels of 0.05, 0.06 and 0.05 mmol/L,
respectively. Furthermore, at concentrations above these values, the
signal intensity decreased exponentially. An inverse relationship was
found between field strength and T2 relaxation time: as the field
strength increased, the T2 relaxation time decreased accordingly.
However, the resulting T2 relaxation times were not significantly
different between 0.47 T and 1.5 T in this study. Moreover, a linear
correlation of the transverse relaxation rate (1/T2, s-1) with the
concentration of Resovist was observed. From these results it can be
concluded that the concentration of SPIO nanoparticle contrast agents
and the field strength of MRI are two important parameters affecting
the signal intensity of a T2-weighted SE sequence; both should
therefore be considered carefully in MR imaging.
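The simulated behavior can be reproduced qualitatively with the standard spin-echo signal equation, with the contrast agent entering through its relaxivities. The sketch below uses the abstract's 1.5 T relaxivities, TR/TE of 2000 ms/100 ms, and a subset of the stated concentrations; the baseline plasma relaxation times T10 and T20 are illustrative assumptions, not values from the study.

```python
import numpy as np

# Spin-echo signal model: S = M0 * (1 - exp(-TR/T1)) * exp(-TE/T2),
# with the contrast agent entering as 1/Ti = 1/Ti0 + ri * C.
r1, r2 = 7.4, 95.0       # L mmol^-1 s^-1 at 1.5 T (values from the abstract)
T10, T20 = 1.4, 1.5      # s, assumed baseline plasma relaxation times
TR, TE = 2.0, 0.1        # s (2000 ms / 100 ms, as in the abstract)

def se_signal(C, M0=1.0):
    """Relative T2-weighted spin-echo signal at agent concentration C (mmol/L)."""
    T1 = 1.0 / (1.0 / T10 + r1 * C)
    T2 = 1.0 / (1.0 / T20 + r2 * C)
    return M0 * (1.0 - np.exp(-TR / T1)) * np.exp(-TE / T2)

# Subset of the concentrations listed in the abstract (mmol/L)
conc = np.array([0.05, 0.06, 0.07, 0.1, 0.5, 1.0, 3.0])
signal = se_signal(conc)
```

With a long TE, the T2-shortening term dominates, so the signal peaks at the lowest concentration and falls off roughly exponentially at higher concentrations, consistent with the reported trend.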
Abstract: As a convenient and noninvasive sensing approach,
automatic limb girth measurement has been applied to detect the
intention behind human motion from muscle deformation.
The sensing validity has been established in preliminary studies
but still needs more fundamental investigation, especially of kinetic
contraction modes. Based on novel fabric strain sensors, a soft
and smart limb girth measurement system was developed by the
authors' group, which can measure the limb girth in motion.
Experiments were carried out on elbow isometric flexion and elbow
isokinetic flexion (biceps’ isokinetic contractions) of 90°/s, 60°/s, and
120°/s for 10 subjects (2 canoeists and 8 ordinary people). After
removal of the natural circumferential increment due to elbow
position, the joint torque was found not to be uniformly sensitive
to the limb circumferential strain; its sensitivity declines as the
elbow joint angle rises, regardless of the angular speed. Moreover,
the maximum joint torque was found to be an exponential function of
the joint's angular speed. This research contributes substantially to
the application of automatic limb girth measurement during kinetic
contractions and is useful for predicting the contraction level of
voluntary skeletal muscles.
Abstract: Fading noise degrades the performance of cellular
communication, most notably in femto- and pico-cells in 3G and 4G
systems. When the wireless channel consists of a small number of
scattering paths, the statistics of fading noise are not analytically
tractable and pose a serious challenge to developing closed
canonical forms that can be analysed and used in the design of
efficient and optimal receivers. In this context, noise is multiplicative
and is referred to as stochastically local fading. In many analytical
investigations of multiplicative noise, exponential or Gamma
statistics are invoked. More recent advances by the author of this
paper utilized Poisson-modulated weighted generalized Laguerre
polynomials with controlling parameters and uncorrelated-noise
assumptions. In this paper, we investigate the statistics of a
multidiversity, stochastically local-area fading channel when the channel
consists of randomly distributed Rayleigh and Rician scattering
centers with a coherent Nakagami-distributed line of sight component
and an underlying doubly stochastic Poisson process driven by a
lognormal intensity. These combined statistics form a unifying triply
stochastic filtered marked Poisson point process model.
Abstract: We consider the problem of stabilization of an unstable
heat equation in a 2-D, 3-D and generally n-D domain by deriving a
generalized backstepping boundary control design methodology. To
stabilize the systems, we design boundary backstepping controllers
inspired by the 1-D unstable heat equation stabilization procedure.
We assume that one side of the boundary is hinged and the other
side is controlled for each direction of the domain. Thus, controllers
act on two boundaries for 2-D domain, three boundaries for 3-D
domain, and "n" boundaries for an n-D domain. The main idea of the
design is to derive "n" controllers, one for each dimension, by
using "n" kernel functions. Thus, we obtain "n" controllers for the
n-dimensional case. We use a transformation to change the system
into an exponentially stable "n"-dimensional heat equation. The
transformation used in this paper is of a generalized Volterra/Fredholm
type, with "n" kernel functions for the n-D domain instead of the one
kernel function of the 1-D design.
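As a reminder of the 1-D procedure that this design generalizes, the standard backstepping construction of Krstic and Smyshlyaev (stated here from the general literature, not from this paper's n-D derivation) stabilizes the unstable heat equation $u_t = u_{xx} + \lambda u$ on $(0,1)$ via a Volterra transformation with an explicit kernel:

```latex
w(x,t) = u(x,t) - \int_0^x k(x,y)\, u(y,t)\, dy, \qquad
k(x,y) = -\lambda y \,
  \frac{I_1\!\left(\sqrt{\lambda\,(x^2 - y^2)}\right)}
       {\sqrt{\lambda\,(x^2 - y^2)}},
```

where $I_1$ is the modified Bessel function of the first kind. This maps the plant to the exponentially stable target $w_t = w_{xx}$ with $w(0,t) = w(1,t) = 0$, and the boundary control is read off as $u(1,t) = \int_0^1 k(1,y)\, u(y,t)\, dy$.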
Abstract: E-retailing is the sale of goods that takes place
over the Internet. The Internet has shrunk the entire world. World
e-retailing is growing at an exponential rate in the Americas, Europe
and Asia. However, e-retailing requires expensive investment in
hardware, software, and security systems. Cloud computing
technology is internet-based computing for the management and
delivery of applications and services. Cloud-based e-retailing
application models allow enterprises to lower their costs with their
effective implementation of e-retailing activities. In this paper, we
describe the concept of cloud computing and present the architecture
of cloud computing, combining the features of e-retailing. In
addition, we propose a strategy for implementing cloud computing
with e-retailing. Finally, we explain the benefits from the
architecture.
Abstract: The purpose of this project is to propose a quick and
environmentally friendly alternative to measure the quality of oils
used in the food industry. There is evidence that repeated and
indiscriminate use of oils in food processing causes physicochemical
changes, with the formation of potentially toxic compounds that can
affect the health of consumers and cause organoleptic changes. In
order to assess the quality of oils, non-destructive optical techniques
such as Interferometry offer a rapid alternative to the use of reagents,
using only the interaction of light on the oil. Through this project, we
used interferograms of samples of oil placed under different heating
conditions to establish the changes in their quality. These
interferograms were obtained by means of a Mach-Zehnder
interferometer using a beam from a 10 mW HeNe laser at
632.8 nm. Each interferogram was captured and analyzed, and its
full width at half-maximum (FWHM) was measured using the AmCap
and ImageJ software. The FWHM values were organized into three
groups. The average of the FWHMs of group A shows an almost
linear behavior; it is therefore probable that the exposure time is
not relevant when the oil is kept at constant temperature. Group B
exhibits an approximately exponential trend when the temperature
rises between 373 K and 393 K. Results of Student's t-test show, at
the 95% confidence level (0.05), variation in the molecular
composition of both samples.
Furthermore, we found a correlation between the Iodine Indexes
(Physicochemical Analysis) and the Interferograms (Optical
Analysis) of group C. Based on these results, this project highlights
the importance of the quality of the oils used in food industry and
shows how Interferometry can be a useful tool for this purpose.
Abstract: The complexity of current systems has reached a degree
that requires conception and design issues to be addressed, taking into
account environmental, operational, social, legal and financial
aspects. Therefore, one of the main challenges is the way complex
systems are specified and designed. The exponentially growing
effort, cost and time invested in the modeling phase of complex
systems emphasize the need for a paradigm, a framework and an
environment to handle system model complexity. To that end, it is
necessary to understand the expectations of the model's human user
and their limits. This paper presents a generic framework for
designing complex systems, highlights the requirements a system
model must fulfill to meet human user expectations, and suggests a
graph-based formalism for modeling complex systems. Finally, a set of
transformations are defined to handle the model complexity.