Abstract: This paper presents an experimental and computational study of the axial collapse, between two parallel plates, of aluminum shells having a combined tube-frusta geometry. The bottom two-thirds of each shell's length was a frustum and the remaining top one-third was a tube. The shells were compressed to identify their modes of collapse and the associated energy absorption capability. An axisymmetric finite element model of the collapse process is presented and analysed using the non-linear FE code FORGE2. Six-noded isoparametric triangular elements were used to discretize the deforming shell. The shell material was idealized as rigid visco-plastic. To validate the computational model, the experimental and computed deformed shapes and the corresponding load-compression and energy-compression curves were compared. With the help of the obtained results, the progress of the axisymmetric mode of collapse is presented, analysed and discussed.
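The constitutive constants are not stated in the abstract; purely as a point of reference, a rigid visco-plastic idealization of the kind typically used with FORGE2 relates the equivalent flow stress to the equivalent strain rate through a power law of the form

\[
\bar{\sigma} = K\,\dot{\bar{\varepsilon}}^{\,m},
\]

where \(\bar{\sigma}\) is the equivalent flow stress, \(\dot{\bar{\varepsilon}}\) the equivalent strain rate, \(K\) the material consistency and \(m\) the strain-rate sensitivity index; the specific law and values used in the paper may differ.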
Abstract: The problem of agricultural-soil pollution is closely
linked to the production of ecologically pure foodstuffs and to human health. An important task, therefore, is to rehabilitate agricultural
soils with the help of state-of-the-art biotechnologies, based on the use of metal-accumulating plants. In this work, on the basis of
literature data and the results of prior research from this laboratory, plants were selected for which the growing technology is well
developed and which are widespread locally: sugar sorghum (Sorghum saccharatum), sudangrass (Sorghum sudanense (Piper.)
Stapf.), and sunflower (Helianthus annuus L.). I report on laboratory experiments designed to study the influence of synthetic indole-3-acetic acid, and of the extracellular indole-3-acetic acid released by the plant-growth-promoting rhizobacterium Azospirillum brasilense Sp245, on the growth of, and arsenic accumulation by, these plants.
Abstract: Text categorization is the problem of classifying text
documents into a set of predefined classes. After a preprocessing
step, the documents are typically represented as large sparse vectors.
When training classifiers on large collections of documents, both the
time and memory restrictions can be quite prohibitive. This justifies
the application of feature selection methods to reduce the
dimensionality of the document-representation vector. In this paper,
three feature selection methods are evaluated: Random Selection,
Information Gain (IG) and Support Vector Machine feature selection
(called SVM_FS). We show that the best results were obtained with the SVM_FS method for a relatively small dimension of the feature vector. We also present a novel method to better correlate the SVM kernel parameters (polynomial or Gaussian kernel).
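As a rough illustration of the idea behind SVM-based feature selection (the paper's exact SVM_FS procedure and settings are not given in the abstract, so the scikit-learn pipeline below is only an assumed, minimal sketch), features can be ranked by the magnitude of the weights of a linear SVM trained on the full sparse representation, and only the top-ranked ones retained:

```python
# Minimal sketch of SVM-weight-based feature selection for text categorization.
# Assumes scikit-learn; the actual SVM_FS method of the paper may differ.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = ["grain prices rise", "new football season", "stock market falls"]  # toy corpus
labels = [0, 1, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)                # large sparse document vectors

svm = LinearSVC().fit(X, labels)           # linear SVM on the full vocabulary
scores = np.abs(svm.coef_).max(axis=0)     # importance = largest |weight| per feature
top = np.argsort(scores)[::-1][:2]         # keep the k best features (k = 2 here)

X_reduced = X[:, top]                      # reduced representation for final training
print([vec.get_feature_names_out()[i] for i in top])
```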
Abstract: Mechanical buckling analysis of rectangular plates
with a central circular cutout is performed in this paper. The finite element method is used to study the effects of plate-support
conditions, aspect ratio, and hole size on the mechanical buckling
strength of the perforated plates subjected to linearly varying loading.
Results show that increasing the hole size does not necessarily reduce
the mechanical buckling strength of the perforated plates. It is also
concluded that the clamped boundary condition increases the
mechanical buckling strength of the perforated plates more than the
simply-supported boundary condition and the free boundary
conditions enhance the mechanical buckling strength of the
perforated plates more effectively than the fixed boundary conditions.
Furthermore, for the bending cases, the critical buckling load of perforated plates with free edges is less than that of perforated plates with fixed edges.
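For context, the classical relation for the critical buckling load of a solid (unperforated) rectangular plate under in-plane loading, against which perforated-plate results are commonly normalized, is a textbook baseline rather than a result of the paper:

\[
N_{cr} = k\,\frac{\pi^{2} D}{b^{2}}, \qquad D = \frac{E t^{3}}{12\,(1-\nu^{2})},
\]

where \(b\) is the plate width, \(t\) its thickness, \(E\) and \(\nu\) the elastic constants, and \(k\) a buckling coefficient that depends on the boundary conditions, the aspect ratio and the shape of the linearly varying load.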
Abstract: Digital broadcasting has been an area of active research, development, innovation and business-model development in recent years. This paper presents a survey of the characteristics of the digital terrestrial television broadcasting (DTTB) standards and of the implementation status of DTTB worldwide, showing the standards adopted. It is clear that only the developed countries and some of the developing ones will be able to meet the analogue-to-digital broadcasting migration deadline set by the ITU, because of the challenges the remaining countries face in digitizing their terrestrial broadcasting. The challenges to keeping the DTTB migration plan on track are also discussed in this paper; they include financing, the technology gap, alignment of policies with DTTB technology, etc. The reported performance comparisons for the different standards are also presented. The interesting part is that the results of many comparative studies depend to a large extent on the objective behind such studies; hence, counter-claims are common.
Abstract: Copolymerization of ethylene with 1-hexene was
carried out using two ansa-fluorenyl titanium derivative complexes.
The effect of the substituents on the catalytic activity, monomer reactivity ratios and polymer properties was investigated. It was found that the presence of t-Bu groups on the fluorenyl ring led to remarkable catalytic activity and produced polymer with high molecular weight. However, these catalysts produce polymer with a narrow molecular weight distribution, indicating the characteristic behaviour of single-site metallocene catalysts. Based on ¹³C NMR analysis, we observed that the monomer reactivity ratios were affected by the catalyst structure. The r_H values of complex 2 were lower than those of complex 1, which might result from the higher steric hindrance leading to a reduction of the 1-hexene insertion step.
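The monomer reactivity ratios referred to above are the standard copolymerization quantities, restated here only to fix notation (not as data from the paper):

\[
r_{E} = \frac{k_{EE}}{k_{EH}}, \qquad r_{H} = \frac{k_{HH}}{k_{HE}},
\]

where \(k_{XY}\) is the rate constant for addition of monomer \(Y\) to a growing chain ending in monomer \(X\) (\(E\) = ethylene, \(H\) = 1-hexene); a lower \(r_{H}\) therefore indicates a reduced tendency towards consecutive 1-hexene insertion.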
Abstract: Time series forecasting is an important and widely studied topic in the research of system modeling. This paper describes how a hybrid PSO-RLSE neuro-fuzzy learning approach is applied to the problem of time series forecasting. The PSO algorithm is used to update the premise parameters of the proposed prediction system, and the RLSE is used to update the consequent parameters. Thanks to the hybrid learning (HL) approach for the neuro-fuzzy system, the prediction performance is excellent and learning converges much faster than with the other compared approaches. In the experiments, we use the well-known Mackey-Glass chaotic time series. According to the experimental results, the prediction performance and accuracy in time series forecasting by the proposed approach are much better than those of the other compared approaches, as shown in Table IV.
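To make the division of labour in the hybrid learning concrete, the sketch below shows one possible shape of the PSO-RLSE loop: PSO perturbs the premise (membership-function) parameters, while a recursive least-squares estimator fits the consequent parameters for each candidate. This is only an assumed outline; the paper's actual neuro-fuzzy architecture, particle update rules and settings are not specified in the abstract.

```python
# Assumed sketch of a hybrid PSO-RLSE update; not the paper's exact formulation.
import numpy as np

def rlse_step(theta, P, phi, y):
    """One recursive least-squares update of the consequent parameters theta."""
    phi_col = phi.reshape(-1, 1)                             # regressor from fuzzy firing strengths
    gain = (P @ phi_col) / (1.0 + phi_col.T @ P @ phi_col)   # Kalman-type gain, shape (n, 1)
    innovation = y - phi @ theta                             # one-step prediction error
    theta = theta + gain.ravel() * innovation
    P = P - gain @ (phi_col.T @ P)                           # covariance update
    return theta, P

def evaluate_particle(premise_params, data):
    """Fit consequents by RLSE for one PSO particle and return its forecasting error."""
    n = len(premise_params)                                  # toy: one consequent weight per premise
    theta, P = np.zeros(n), 1e4 * np.eye(n)
    err = 0.0
    for x, y in data:
        phi = np.exp(-(x - premise_params) ** 2)             # toy Gaussian firing strengths
        theta, P = rlse_step(theta, P, phi, y)
        err += (y - phi @ theta) ** 2
    return err

data = [(t, np.sin(t)) for t in np.linspace(0, 6, 50)]       # toy series to forecast
print(evaluate_particle(np.array([0.0, 2.0, 4.0]), data))    # fitness of one particle
# A PSO loop would then move each particle's premise_params towards the best-scoring ones.
```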
Abstract: This paper is a reflection on how to select proper indicators to assess the progress of regional contexts towards a knowledge-based society. Taking the first research methodologies elaborated at an international level (World Bank, OECD, etc.) as a reference point, this work intends to identify a set of knowledge-economy indicators suitable for adequately understanding in which manner, and to what extent, territorial development dynamics are correlated with the knowledge base of the local society considered. After a critical survey of the variables used within other approaches adopted by international or national organizations, the paper elaborates a framework of variables, named Regional Knowledge Economy Indicators (ReKEI), needed to describe the knowledge-based relations of subnational socio-economic contexts. The realization of this framework has a double purpose: an analytical one, consisting in highlighting regional differences in the governance of knowledge-based processes, and an operative one, consisting in providing some reference parameters that contribute to increasing the effectiveness of economic policies aimed at enlarging the knowledge bases of local societies.
Abstract: The main objective of this study was to remove and recover Ni, Cu and Fe from a mixed metal system using sodium hypophosphite as a reducing agent and nickel powder as seeding material. The metal systems studied consisted of Ni-Cu, Ni-Fe and Ni-Cu-Fe solutions. Experiments were conducted in a 5 L batch reactor with 100 mg/l of each respective metal. It was found that the metals were reduced to their elemental form with removal efficiencies of over 80%. The removal efficiency decreased in the order Fe > Ni > Cu. The metal powder obtained contained 97-99% Ni and was almost spherical and porous. Size enlargement by aggregation was the dominant particulate process.
Abstract: Cardiovascular diseases, principally atherosclerosis, are responsible for 30% of world deaths. Atherosclerosis is due to the formation of plaque. The fatty plaque may be at risk of rupture, leading typically to stroke and heart attack. The plaque is usually associated with a high degree of lumen reduction, called a stenosis. It is increasingly recognized that the initiation and progression of the disease and the occurrence of clinical events are a complex interplay between the local biomechanical environment and the local vascular biology. The aim of this study is to investigate the flow behavior through a stenosed artery. A physical experiment was performed using an artery model and a blood-analogue fluid. The axisymmetric model constructed consists of contraction and expansion regions that follow the mathematical form of a cosine function. A 30% diameter reduction was used in this study. The flow field was measured using particle image velocimetry (PIV). Spherical particles of 20 μm diameter were seeded in a water-glycerol-NaCl mixture. The steady-flow Reynolds number was 250. The area of interest is the region after the stenosis, where flow separation occurs. The velocity field was measured and the velocity gradient was investigated. There was a high particle concentration in the recirculation zone. The high velocity gradient formed immediately after the stenosis throat created a lift force that enhanced particle migration to the flow-separation area.
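For reference, the Reynolds number quoted above follows the standard definition, presumably based on the unstenosed tube diameter (a textbook definition restated here for clarity, not taken from the paper):

\[
Re = \frac{\rho\,\bar{u}\,D}{\mu} = 250,
\]

where \(\rho\) and \(\mu\) are the density and dynamic viscosity of the blood-analogue fluid, \(\bar{u}\) the mean inlet velocity and \(D\) the unstenosed tube diameter.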
Abstract: Static analysis of source code is used for auditing web applications to detect vulnerabilities. In this paper, we propose a new algorithm to analyze PHP source code for detecting potential LFI and RFI vulnerabilities. In our approach, we first define some patterns for finding functions that have the potential to be abused because of unhandled user inputs. More precisely, we use regular expressions as a fast and simple method to define patterns for the detection of vulnerabilities. Since inclusion functions can also be used in a safe way, many false positives (FPs) can occur. The first cause of these FPs could be that the function does not use a user-supplied variable as an argument. So, we extract a list of user-supplied variables to be used for detecting vulnerable lines of code. On the other hand, as a vulnerability can spread among variables, e.g. through multi-level assignment, we also try to extract the hidden user-supplied variables. We use the resulting list to decrease the false positives of our method. Finally, as there exist some ways to prevent the vulnerability of inclusion functions, we also define patterns to detect them and further decrease our false positives.
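As a rough illustration of the kind of pattern matching described (the actual regular expressions, taint-propagation rules and sanitization checks of the proposed algorithm are not given in the abstract, so the snippet below is only an assumed, simplified sketch), a scanner can flag PHP inclusion calls whose argument depends on a user-supplied variable:

```python
# Assumed, simplified sketch of regex-based LFI/RFI candidate detection in PHP source.
import re

# Inclusion functions whose arguments may be abused if user input reaches them.
INCLUDE_CALL = re.compile(r'\b(include|include_once|require|require_once)\s*\(?\s*([^;]+);')
# Direct user-supplied sources.
USER_SOURCE = re.compile(r'\$_(GET|POST|REQUEST|COOKIE)\b')
# Simple assignments, used to propagate "user-supplied" status to other variables.
ASSIGNMENT = re.compile(r'(\$\w+)\s*=\s*([^;]+);')

def scan(php_code):
    tainted = set()
    for var, rhs in ASSIGNMENT.findall(php_code):
        if USER_SOURCE.search(rhs) or any(t in rhs for t in tainted):
            tainted.add(var)                                 # multi-level assignment spreads taint
    findings = []
    for lineno, line in enumerate(php_code.splitlines(), 1):
        m = INCLUDE_CALL.search(line)
        if m and (USER_SOURCE.search(m.group(2)) or any(t in m.group(2) for t in tainted)):
            findings.append((lineno, line.strip()))          # potential LFI/RFI line
    return findings

print(scan('$page = $_GET["p"];\n$f = $page . ".php";\ninclude($f);'))
```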
Abstract: With the development of technology, the growing demand for fast and safe passenger transport, together with problems such as air pollution, traffic congestion, a growing population and the high cost of private vehicle usage, has led many cities around the world, large and small, to start building rail systems as a means of urban transport in order to ensure economic and environmental sustainability and more efficient use of land in the city. The implementation phase of rail systems costs much more than that of other public transport systems. However, their long-term social and economic returns have made these systems the most popular investment tool for planned and developing cities.
In our country, the purposes, goals and policies of transportation plans lack integrity, and the problems are not clearly identified. In addition, undefined and incomplete assessment of transportation systems and insufficient financial analysis are the most important causes of failure. Addressing rail systems and the other transportation systems as a whole is seen as the main factor for increasing efficiency; the fact that applications in our country are not yet integrated in this way is what has led to the present problem.
Abstract: In the closed quantum system, if the control system is
strongly regular and all other eigenstates are directly coupled to the
target state, the control system can be asymptotically stabilized at the
target eigenstate by the Lyapunov control based on the state error.
However, if the control system is not strongly regular, or if even one eigenstate is not directly coupled to the target state, the situation becomes more complicated. In this paper, we propose an implicit Lyapunov control method based on the state error to solve the convergence problems in these two degenerate cases. At the same time, we extend the target state from an eigenstate to an arbitrary pure state. In particular, the proposed method is also applicable to control systems with multiple control Hamiltonians. On this basis, the convergence of the control systems is analyzed using the LaSalle invariance principle. Furthermore, the relation between the implicit Lyapunov functions based on the state distance and on the state error is investigated. Finally, numerical simulations are carried out to verify the effectiveness of the proposed implicit Lyapunov control method, and the control effect of the implicit Lyapunov control method based on the state distance is compared with that based on the state error.
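For orientation, the two Lyapunov functions being compared are of the standard state-distance and state-error types; in the notation commonly used for such schemes (the paper's exact implicit constructions, which additionally depend on implicitly defined perturbations, are not reproduced here), they read

\[
V_{1}(\psi) = \tfrac{1}{2}\left(1 - \left|\langle \psi_{f} \mid \psi \rangle\right|^{2}\right),
\qquad
V_{2}(\psi) = \tfrac{1}{2}\,\lVert \psi - \psi_{f} \rVert^{2} = 1 - \operatorname{Re}\langle \psi_{f} \mid \psi \rangle,
\]

where \(\psi\) is the system state and \(\psi_{f}\) the target pure state; \(V_{1}\) measures the state distance and \(V_{2}\) the state error.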
Abstract: The purpose of our study was to compare the characteristics of spontaneous re-epithelisation with those of assisted re-epithelisation. In order to assess re-epithelisation of injured skin, we devised and designed a burn-wound model on Wistar rat skin. Our aim was to create standardised, easily reproducible and quantifiable skin lesions involving the entire epidermis and the superficial dermis. We then applied the above-mentioned therapeutic strategies to compare the regeneration of epidermis and dermis and the local and systemic parameter changes under the different conditions. We enhanced the re-epithelisation process under the moist atmosphere of a polyurethane wound dressing modified with helium non-thermal plasma, and with the aid of direct cold-plasma treatment, respectively. We followed the changes in systemic parameters (hematologic and biochemical) and in local features (oxidative stress markers and skin histology) under the above-mentioned conditions. Re-epithelisation is just one part of the skin regeneration process, which recruits cellular components with the aid of epidermal-dermal interaction via signal molecules.
Abstract: There is a complex transport-environment situation in the cities of the world. For the analysis and prevention of environmental problems, an accurate calculation of hazardous-substance concentrations at each point of the investigated area is required. In the turbulent atmosphere of the city, the well-known methods of mathematical statistics cannot be applied to these tasks with a satisfactory level of accuracy. Therefore, the apparatus of mathematical physics is more appropriate for solving this class of problems. In such models, because of the difficulty involved, the influence of the uneven land surface on the streams of air masses in the turbulent atmosphere of the city is, as a rule, not taken into account. In this paper, the influence of the surface roughness, which can be quite large, is shown mathematically. The analysis of this problem identified, under certain conditions, the possibility of areas appearing in the atmosphere with pressure tending to infinity, i.e. the so-called "wall effect".
Abstract: This paper presents a novel template-based method to
detect objects of interest from real images by shape matching. To
locate a target object that has a similar shape to a given template
boundary, the proposed method integrates three components: contour
grouping, partial shape matching, and boundary verification. In the
first component, low-level image features, including edges and
corners, are grouped into a set of perceptually salient closed contours
using an extended ratio-contour algorithm. In the second component,
we develop a partial shape matching algorithm to identify the
fractions of detected contours that partly match given template
boundaries. Specifically, we represent template boundaries and
detected contours using landmarks, and apply a greedy algorithm to
search the matched landmark subsequences. For each matched
fraction between a template and a detected contour, we estimate an affine transform that maps the whole template onto a hypothesized boundary. In the third component, we provide an efficient algorithm based on oriented edge lists to determine the target boundary from the hypothesized boundaries by checking each of them against the image edges. We evaluate the proposed method on recognizing and localizing 12 template leaves in a data set of real images with cluttered backgrounds, illumination variations, occlusions and image noise. The experiments demonstrate the high performance of our proposed method.
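As an illustration of the transform-estimation step (the exact estimator used in the paper is not specified in the abstract, so the least-squares fit below is only an assumed sketch), an affine transform mapping the matched template landmarks onto the matched contour landmarks can be obtained in closed form:

```python
# Assumed sketch: least-squares affine transform from matched landmark pairs.
import numpy as np

def fit_affine(src, dst):
    """Fit A, t with dst ~ src @ A.T + t from corresponding 2-D landmarks."""
    src = np.asarray(src, float)                   # matched template landmarks, shape (k, 2)
    dst = np.asarray(dst, float)                   # matched contour landmarks,  shape (k, 2)
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solves X @ M ~ dst
    return M[:2].T, M[2]                           # 2x2 linear part A and translation t

def apply_affine(points, A, t):
    """Map the whole template through the estimated transform (hypothesized boundary)."""
    return np.asarray(points, float) @ A.T + t

A, t = fit_affine([[0, 0], [1, 0], [0, 1]], [[2, 1], [4, 1], [2, 3]])
print(A, t)   # expect roughly [[2, 0], [0, 2]] and [2, 1]
```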
Abstract: Stochastic models of biological networks are well established in systems biology, where the computational treatment of such models often focuses on solving the so-called chemical master equation via stochastic simulation algorithms. In contrast, the development of storage-efficient model representations that are directly suitable for computer implementation has received significantly less attention. Instead, a model is usually described in terms of a stochastic process or a "higher-level paradigm" with a graphical representation, such as, e.g., a stochastic Petri net. A serious problem then arises from the exponential growth of the model's state space, which is in fact a main reason for the popularity of stochastic simulation, since simulation suffers less from the state-space explosion than non-simulative numerical solution techniques. In this paper we present transition class models for the representation of biological network models, a compact mathematical formalism that circumvents state-space explosion. Transition class models can also serve as an interface between different higher-level modeling paradigms, stochastic processes and the implementation coded in a programming language. Besides, the compact model representation provides the opportunity to apply non-simulative solution techniques while preserving the possible use of stochastic simulation. Illustrative examples of transition class representations are given for an enzyme-catalyzed substrate conversion and a part of the bacteriophage λ lysis/lysogeny pathway.
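To make the formalism tangible, a transition class can be coded as a triple of (propensity function, state-update function, rate constant) defined directly on the integer state vector. The toy enzyme-catalyzed conversion E + S ⇌ ES → E + P below, driven by a basic Gillespie simulation, is only an assumed, minimal sketch of such a representation, not the paper's implementation:

```python
# Assumed sketch: transition-class representation of E + S <-> ES -> E + P
# driven by a basic Gillespie stochastic simulation.
import random

# state vector x = (E, S, ES, P); each class = (propensity(x), update(x), rate constant)
classes = [
    (lambda x: x[0] * x[1], lambda x: (x[0] - 1, x[1] - 1, x[2] + 1, x[3]), 0.01),  # binding
    (lambda x: x[2],        lambda x: (x[0] + 1, x[1] + 1, x[2] - 1, x[3]), 0.1),   # unbinding
    (lambda x: x[2],        lambda x: (x[0] + 1, x[1], x[2] - 1, x[3] + 1), 0.5),   # conversion
]

def gillespie(x, t_end):
    t = 0.0
    while t < t_end:
        rates = [c * a(x) for a, _, c in classes]          # propensity of each class
        total = sum(rates)
        if total == 0:
            break
        t += random.expovariate(total)                     # time to next reaction
        r, acc = random.uniform(0, total), 0.0
        for rate, (_, update, _) in zip(rates, classes):
            acc += rate
            if r <= acc:
                x = update(x)                              # fire the selected transition class
                break
    return x

print(gillespie((10, 100, 0, 0), t_end=50.0))              # final (E, S, ES, P) counts
```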
Abstract: A new nonlinear PID controller and its stability
analysis are presented in this paper. A nonlinear function is deduced
from the similarities between the control effort and the electric-field
effect of a capacitor. The conventional linear PID controller can be
modified into a nonlinear one by this function. To analyze the stability of the nonlinear PID-controlled system, an idea of energy equivalence is adopted to avoid the conservativeness that usually arises from some traditional theorems and criteria. The energy equivalence is naturally related to the concepts of passivity and T-passivity. As a result, an engineering guideline for the parameter design of the nonlinear PID controller is obtained. An inverted pendulum system is tested to verify the nonlinear PID control scheme.
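The abstract does not give the specific capacitor-inspired nonlinear function, so the sketch below only illustrates the general structure it describes: a conventional PID law whose error terms are first passed through a nonlinear gain (here an arbitrary, assumed saturating function standing in for the one derived in the paper):

```python
# Assumed sketch of a nonlinear PID structure; the capacitor-derived function f()
# used in the paper is not given in the abstract, so a generic shaping function is used.
import math

def f(e, scale=1.0):
    """Placeholder nonlinear gain: roughly linear for small errors, saturating for large ones."""
    return scale * math.tanh(e / scale)

class NonlinearPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += f(e) * self.dt                     # integral of the shaped error
        derivative = (f(e) - f(self.prev_error)) / self.dt  # derivative of the shaped error
        self.prev_error = e
        return self.kp * f(e) + self.ki * self.integral + self.kd * derivative

pid = NonlinearPID(kp=5.0, ki=0.5, kd=1.0, dt=0.01)
u = pid.update(setpoint=0.0, measurement=0.2)               # control effort for one step
print(u)
```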
Abstract: The recent advances in computational fluid dynamics
(CFD) can be useful in observing the detailed hemodynamics in
cerebral aneurysms, not only for understanding their formation and rupture but also for clinical evaluation and treatment. However, important hemodynamic quantities are difficult to measure in vivo. In the present study, an approximate model of a normal middle cerebral artery (MCA), along with two cases consisting of broad and narrow saccular aneurysms, is analyzed. The models are generated in ANSYS WORKBENCH and transient analysis is performed in ANSYS CFX. The results obtained for the three cases are compared and agree well with the available literature.
Abstract: Many scientific and engineering problems require the solution of large systems of linear equations of the form Ax = b in an effective manner. LU-decomposition offers a good choice for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. Here, the so-called Omega calculus is used as a computational method for solving problems via their corresponding Diophantine relations. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside the polyhedron. Then the Mathematica program DiophantineGF.m is run. This program calculates the generating function, from which it is possible to find the number of solutions of the system of Diophantine equalities, which in fact gives the lower bound on the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well. Keywords: generating function, lattice points in polyhedron, lower bound of processor elements, system of Diophantine equations, Omega calculus.
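For reference, the linear-system step that the processor array is meant to implement is the standard LU-based solve (a textbook identity recalled here to fix notation, not a result of the paper):

\[
A = LU, \qquad Ax = b \;\Longrightarrow\; Ly = b \ \text{(forward substitution)}, \quad Ux = y \ \text{(back substitution)},
\]

where \(L\) is lower triangular and \(U\) upper triangular, so the \(O(n^{3})\) factorization dominates the cost that the processor-array design, whose lower bound on processing elements is being sought, aims to parallelize.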