Abstract: Based on the interaction of inflation and
unemployment, the expected rate of inflation in Croatia is
estimated. The interaction between inflation and unemployment is
described by a model based on three first-order differential (or
difference) equations: the Phillips relation, the adaptive
expectations equation and the monetary-policy equation. The resulting
equation is a second-order differential (difference) equation which
describes the time path of inflation. Data on the rates of inflation
and unemployment are used for parameter estimation. On the basis of
the estimated time paths, a stability and convergence analysis is
carried out for the rate of inflation.
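The three first-order relations described above can be iterated directly as difference equations. The sketch below is illustrative only: the parameter values are assumed for the example, not the estimates from the Croatian data.

```python
# Illustrative discretization of the three-equation model (assumed
# parameters, not the paper's estimates): an expectations-augmented
# Phillips relation, adaptive expectations, and a monetary-policy rule.

def simulate(a=0.06, b=1.0, g=1.0, j=0.3, k=0.3, m=0.02,
             pi0=0.05, U0=0.04, steps=200):
    """Iterate the three first-order difference equations.

    a, b, g : Phillips relation coefficients
    j       : adaptive-expectations adjustment speed
    k       : monetary-policy feedback, m = money growth rate
    """
    pi, U = pi0, U0
    path = []
    for _ in range(steps):
        p = a - b * U + g * pi      # Phillips relation (actual inflation)
        pi = pi + j * (p - pi)      # adaptive expectations
        U = U - k * (m - p)         # monetary-policy equation
        path.append(p)
    return pi, U, path

pi_T, U_T, path = simulate()
# With these assumed parameters the system is stable: expected inflation
# converges to the money growth rate m, unemployment to its natural rate a/b.
```

Eliminating p and U from the three relations yields the single second-order difference equation in inflation that the abstract refers to; the simulation traces the same time path.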
Abstract: The sand production problem has led researchers to make various attempts to understand the phenomenon. The generally accepted concept is that sanding occurs because the in-situ stress conditions, and the induced changes in stress, result in failure of the reservoir sandstone during hydrocarbon production from wellbores. Using a hypothetical cased (perforated) well, an approach to the problem is presented here based on finite element numerical modelling techniques. In addition to the examination of the erosion problem, the influence of certain key parameters is studied in order to ascertain their effect on the failure and subsequent erosion process. The major variables investigated include drawdown, perforation depth, and the erosion criterion. Also included is the determination of the optimal mud pressure for given operational and reservoir conditions. The improved understanding of the interplay between parameters enables the choice of optimal values to minimize sanding during oil production.
Abstract: In recent years, most regions of the world have been
exposed to degradation and erosion caused by increasing
population and overuse of land resources. Understanding the
most important factors in soil erosion and sediment yield is
key for decision making and planning. In this study,
sediment yield and soil erosion were estimated, and the priority of
the different soil erosion factors used in the MPSIAC method of soil
erosion estimation was evaluated, in the AliAbad watershed in the
southwest of Isfahan Province, Iran. Information layers for the
parameters were created using GIS techniques. Then, a
multivariate procedure was applied to estimate sediment yield and
to find the most important soil erosion factors in the model. The
results showed that land use, geology, and land and soil cover are the
most important factors describing the soil erosion estimated by the
MPSIAC model.
Abstract: This paper examines the predictability of stock returns in
developed and emerging markets by testing for long memory in stock
returns using a wavelet approach. The wavelet-based maximum likelihood
estimator of the fractional integration parameter is superior to the
conventional Hurst exponent and the Geweke and Porter-Hudak
estimator in terms of asymptotic properties and mean squared error.
We use 4-year moving windows to estimate the fractional integration
parameter. The evidence suggests that stock returns may not be predictable
in the developed countries of the Asia-Pacific region. However,
predictability of stock returns in some developing countries of the
region, such as Indonesia, Malaysia and the Philippines, cannot be ruled
out. Stock returns in the Thai stock market appear not to be
predictable after the political crisis in 2008.
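The moving-window procedure can be sketched in a few lines. As a stand-in for the paper's wavelet-based ML estimator, the sketch uses the much simpler aggregated-variance Hurst estimator, and simulated i.i.d. noise in place of real return series; both substitutions are assumptions for illustration only.

```python
# Moving-window long-memory estimation, sketched with the simple
# aggregated-variance Hurst estimator (NOT the paper's wavelet-based
# estimator) on simulated i.i.d. "returns".
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32)):
    """Aggregated-variance estimator: Var(block means) ~ c * m**(2H - 2)."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]   # fit log-log scaling law
    return 1.0 + slope / 2.0

rng = np.random.default_rng(0)
returns = rng.standard_normal(5000)            # placeholder for daily returns
window = 1000                                  # roughly four trading years
estimates = [hurst_aggvar(returns[s:s + window])
             for s in range(0, len(returns) - window + 1, window)]
# H near 0.5 in every window suggests no long memory, i.e. no predictability.
```

For i.i.d. data the estimates cluster around H = 0.5; persistent long memory, as reported for some developing markets, would show up as H consistently above 0.5.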
Abstract: The characterization of κ-carrageenan could provide a
better understanding of its functions in biological, medical and
industrial applications. Chemical and physical analyses of
carrageenan from the seaweed Euchema cottonii L. were carried out to
provide information on its properties and on the effects of Co-60
γ-irradiation on its thermochemical characteristics. The structural and
morphological characteristics of κ-carrageenan were determined using
scanning electron microscopy (SEM), while the composition, molecular
weight and thermal properties were determined using attenuated total
reflectance Fourier transform infrared spectroscopy (ATR-FTIR), gel
permeation chromatography (GPC), thermogravimetric analysis
(TGA) and differential scanning calorimetry (DSC). Further chemical
analysis was performed using hydrogen-1 nuclear magnetic resonance (1H
NMR), and functional characteristics in terms of biocompatibility
were evaluated using a cytotoxicity test.
Abstract: The Chinese Postman Problem (CPP) is one of the
classical problems in graph theory and is applicable in a wide range
of fields. With the rapid development of hybrid systems and model-based
testing, the Chinese Postman Problem with Time Dependent Travel
Times (CPPTDT) has become more realistic than the classical problem.
In earlier work, we proposed the first integer programming
formulation for the CPPTDT, namely the circuit formulation,
on the basis of which some polyhedral results were investigated and a
cutting plane algorithm was designed. However, that formulation has a
main drawback: it can only solve the special
instances in which all circuits pass through the origin. Therefore, this
paper proposes a new integer programming formulation for solving
all general instances of the CPPTDT. Moreover, the size of the circuit
formulation, which was too large, is reduced dramatically here. This
makes it possible to design more efficient algorithms for the CPPTDT
in future research.
Abstract: This is a study of the numerical simulation of the convection-diffusion transport of a chemical species in steady flow through a small-diameter tube, which is lined with a very thin layer made up of retentive and absorptive materials. The species may be subject to a first-order kinetic reversible phase exchange with the wall material and irreversible absorption into the tube wall. Owing to the velocity shear across the tube section, the chemical species may spread out axially along the tube at a rate much larger than that given by molecular diffusion; this process is known as dispersion. While the long-time dispersion behavior, well described by the Taylor model, has been extensively studied in the literature, the early development of the dispersion process is by contrast much less investigated. By early development we mean a span of time, after the release of the chemical into the flow, that is shorter than or comparable to the diffusion time scale across the tube section. To understand the early development of the dispersion, the governing equations along with the reactive boundary conditions are solved numerically using the Flux Corrected Transport Algorithm (FCTA). The computation has enabled us to investigate the combined effects of the reversible and irreversible wall reactions on the early development of the dispersion coefficient. One of the results shows that the dispersion coefficient may approach its steady-state limit in a short time under the following conditions: (i) a high value of the Damkohler number (say Da ≥ 10); (ii) a small but non-zero value of the absorption rate (say Γ* ≤ 0.5).
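For orientation, the long-time Taylor limit referred to above has a standard closed form in the passive, non-reactive case (quoted here as background; the paper itself treats the reactive case numerically):

```latex
% Long-time (Taylor-Aris) dispersion coefficient for Poiseuille flow
% in a tube of radius $a$, mean speed $U$, molecular diffusivity $D_m$:
\[
  D_{\mathrm{eff}} \;=\; D_m\!\left(1 + \frac{\mathrm{Pe}^{2}}{48}\right),
  \qquad \mathrm{Pe} = \frac{aU}{D_m},
\]
% so the shear-induced contribution a^2 U^2 / (48 D_m) dominates when Pe >> 1.
```

The early-time regime studied in the paper is precisely the transient before the dispersion coefficient settles onto this limit, modified further by the wall reactions.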
Abstract: This paper presents an algorithm which extends the rapidly-exploring random tree (RRT) framework to deal with changes in the task environment. This algorithm, called the Retrieval RRT Strategy (RRS), combines a support vector machine (SVM) with RRT and plans the robot motion in the presence of changes in the surrounding environment. The algorithm consists of two levels. At the first level, the SVM is built and selects a proper path from the bank of RRTs for a given environment. At the second level, a real path is planned by the RRT planners for the given environment. The suggested method is applied to the control of KUKA™, a commercial 6-DOF robot manipulator, and its feasibility and efficiency are demonstrated via the co-simulation of MatLab™ and RecurDyn™.
Abstract: Liver segmentation is the first significant step in
liver diagnosis from computed tomography (CT) images. It separates the
liver structure from the other abdominal organs. Sophisticated filtering
techniques are indispensable for proper segmentation. In this paper, we
employ 3D anisotropic diffusion as a preprocessing step. While
removing image noise, this technique preserves the significant parts
of the image, typically edges, lines and other details that are important
for its interpretation. The segmentation task is performed
by thresholding with automatic threshold-value selection, and
finally the false liver regions are eliminated using 3D connected
components. The results show that by employing 3D anisotropic filtering,
better liver segmentation can be achieved even though a simple
segmentation technique is used.
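The filter-threshold-label pipeline can be sketched on a synthetic volume. Two simplifications are assumed for brevity: a Gaussian filter stands in for the paper's edge-preserving 3D anisotropic diffusion, and a fixed threshold replaces automatic threshold selection.

```python
# Sketch of the pipeline on a synthetic 3D volume: denoise, threshold,
# keep the largest 3D connected component (discarding false regions).
import numpy as np
from scipy import ndimage

vol = np.zeros((40, 40, 40))
vol[5:30, 5:30, 5:30] = 1.0        # large "liver" region
vol[34:38, 34:38, 34:38] = 1.0     # small false region
noisy = vol + 0.2 * np.random.default_rng(1).standard_normal(vol.shape)

# Stand-in for 3D anisotropic diffusion (Gaussian blur is NOT edge-preserving)
smoothed = ndimage.gaussian_filter(noisy, sigma=1.0)
mask = smoothed > 0.5                                  # thresholding
labels, n = ndimage.label(mask)                        # 3D connected components
sizes = ndimage.sum(mask, labels, range(1, n + 1))
largest = labels == (np.argmax(sizes) + 1)             # drop false regions
```

Keeping only the largest component removes the small spurious region while the main structure survives, which mirrors the false-region elimination step described above.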
Abstract: A great deal of research work in the field of information
systems security has been based on a positivist paradigm. Applying
the reductionism of the positivist paradigm to information
security means missing the bigger picture, and this lack of holism
could be one of the reasons why security is still overlooked,
comes as an afterthought, or is perceived from a purely technical
perspective. We need to reshape our thinking and attitudes towards
security, especially in a complex and dynamic environment such as e-
Business, to develop a holistic understanding of e-Business security in
relation to its context, considering all the stakeholders in
the problem area. In this paper we argue for the suitability of, and
need for, a more inductive, interpretive approach and qualitative
research methods to investigate e-Business security. Our discussion is
based on a holistic framework of enquiry, the nature of the research
problem, the underlying theoretical lens and the complexity of the
e-Business environment. Finally, we present a research strategy for
developing a holistic framework for understanding e-Business
security problems in the context of developing countries, based on an
interdisciplinary inquiry which considers their needs and
requirements.
Abstract: How to efficiently assign system resources to route
client demand through Gateway servers is a tricky predicament. In this
paper, we present an enhanced proposal for the autonomous performance of
Gateway servers under highly dynamic traffic loads. We devise a
methodology to calculate queue length and waiting time using
Gateway server information, in order to reduce response-time variance
in the presence of bursty traffic.
The most widespread consideration is performance: because
Gateway servers must offer cost-effective and highly available
services over the long term, they have to be scaled to meet
the expected load. Performance measurements can be the basis for
performance modeling and prediction. With the help of performance
models, performance metrics (such as buffer size and waiting
time) can be determined during the development process.
This paper describes the queue models that can be
applied to the estimation of queue length, in order to estimate the
final value of the memory size. Both simulation and experimental
studies using synthesized workloads, and analysis of real-world Gateway
servers, demonstrate the effectiveness of the proposed system.
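The abstract does not say which queue models are applied; as an illustration of how queue length and waiting time feed into buffer sizing, the classical M/M/1 formulas for a single Gateway server are:

```python
# Classical M/M/1 steady-state metrics (illustrative; the paper's actual
# queue models are not specified in the abstract). Queue length Lq bounds
# the buffer (memory) size; Wq is the mean waiting time in queue.

def mm1_metrics(lam, mu):
    """Arrival rate lam and service rate mu in requests/s; requires lam < mu."""
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number in system
    Lq = rho ** 2 / (1 - rho)      # mean queue length (excluding one in service)
    W = 1 / (mu - lam)             # mean time in system
    Wq = rho / (mu - lam)          # mean waiting time in queue
    return L, Lq, W, Wq

L, Lq, W, Wq = mm1_metrics(lam=80.0, mu=100.0)   # 80% utilization
```

Little's law (L = lam * W) links the two metrics, so measuring either the queue length or the waiting time at a Gateway server determines the other.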
Abstract: The objective of this research is to optimize the parameters
for cutting a stair-shaped workpiece by CNC wire-cut EDM
(WEDM). The experimental material is SKD-11 steel: stair-shaped
workpieces with step heights of 10, 20, 30 and 40 mm and a
constant thickness of 10 mm, cut on a Sodick CNC wire-cut EDM model
AD325L.
The experiments follow a 3^k full factorial design
with two factors at three levels, giving nine runs with two replicates.
The two selected factors are servo voltage (SV) and servo feed rate (SF),
and the response is the cutting thickness error. The study is divided
into two experiments. The first experiment determines the significant
factor at a 95% confidence level; the SV factor is found to be
significant. In this first experiment the smallest
cutting thickness error of the workpieces is 17 microns, at an SV value
of 46 volts, and the results also show that the lower the SV value, the
smaller the thickness error of the workpiece. The second experiment is
therefore carried out to reduce the cutting thickness error as far as
possible by lowering SV. Its results show that the
significant factor at the 95% confidence level is again the SV
factor, and the smallest cutting thickness error of the workpieces is
reduced to 11 microns, at an SV value of 36 volts.
Abstract: A green design for assembly model is presented to
integrate design evaluation and assembly and disassembly sequence
planning by evaluating the three activities in one integrated model. For
an assembled product, an assembly sequence planning model is
required for assembling the product at the start of the product life cycle.
A disassembly sequence planning model is needed for disassembling
the product at the end. In a green product life cycle, it is important to
plan how a product can be disassembled, reused, or recycled, before
the product is actually assembled and produced. Given a product
requirement, there may be several alternative designs for
the same product. In the different design cases, the assembly and
disassembly sequences for producing the product can be different. In
this research, a new model is presented to concurrently evaluate the
design and plan the assembly and disassembly sequences. First, the
components are represented by using graph based models. Next, a
particle swarm optimization (PSO) method with a new encoding
scheme is developed. In the new PSO encoding scheme, a particle is
represented by a position matrix defining an assembly sequence and a
disassembly sequence. The assembly and disassembly sequences can
be simultaneously planned with an objective of minimizing the total of
assembly costs and disassembly costs. The test results show that the
presented method is feasible and efficient for solving the integrated
design evaluation and assembly and disassembly sequence planning
problem. An example product is implemented and illustrated in this
paper.
Abstract: The great majority of electric installations belong
to the first and second categories. In order to ensure a high level of
reliability of their electric feeder system, two power supply sources
are envisaged: one principal, the other a reserve, generally a cold
reserve (a diesel generator set).
While the principal source is in operation, its monitoring can be
thorough and reliable; for the reserve source, which normally stands
idle, preventive maintenance is scheduled at fixed time intervals
(periodicity) and for well-defined durations, so that this source is
always available in case of failure of the principal source.
The chosen periodicity of preventive maintenance of the reserve
source directly influences the reliability of the electric
feeder system. On the basis of semi-Markovian processes, the
influence of the periodicity of the preventive maintenance of the
reserve source is studied and the optimal periodicity is given.
Abstract: In this work, we present, for the first time to our
knowledge, an efficient digital watermarking scheme for MPEG Audio
Layer 3 (MP3) files that operates directly in the compressed data
domain, manipulating the time and subband/channel domains. In
addition, it does not need the original signal to detect the watermark.
Our scheme was implemented taking special care for the efficient
usage of two limited computing resources: time and
space. It offers the industrial user watermark
embedding and detection in a time comparable to the real
playing time of the original audio file, depending on the MPEG
compression, while the end user/audience perceives no artifacts
or delays when hearing the watermarked audio file. Furthermore, it
overcomes the vulnerability of algorithms operating in the PCM data
domain to compression/recompression attacks,
as it places the watermark in the scale-factor domain and not in the
digitized audio data. The strength of our scheme, which allows it
to be used with success in both authentication and copyright
protection, relies on the fact that ownership of the audio file is
established not simply by detecting the bit pattern that comprises the
watermark itself, but by showing that the legal owner knows a
hard-to-compute property of the watermark.
Abstract: In the context of channel coding, Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the bits sent through a noisy channel. To ensure reliable transmission, we apply a map to the bits, called a code. This code induces artificial correlations between the bits to send, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used by most decoding algorithms, such as Belief Propagation or Gallager-B. GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of GBP over the other algorithms is the freedom in the construction of this graph. In this article, we describe a particular construction for specific graph topologies that yields good GBP performance. Moreover, we investigate the behavior of GBP considered as a dynamical system in order to understand how it evolves over time and with the noise power of the channel. To this end we use classical measures and introduce a new measure, called the hypersphere method, which makes it possible to determine the size of the attractors.
Abstract: For many chemical and biological processes, understanding the mixing phenomenon and flow behavior in a stirred tank is of major importance. A three-dimensional numerical study was performed using the software Fluent to study the flow field in a stirred tank with a Rushton turbine. In this work, we first studied the flow generated in the tank by the Rushton turbine. Then, we studied the effect of varying the turbine's submergence on the thermodynamic quantities defining the flow field. Four submergences were considered, while maintaining the same rotational speed (N = 250 rpm). This work aims to optimize the aeration performance of a Rushton turbine in a stirred tank.
Abstract: Truss spars are used for oil exploitation in deep and ultra-deep water when crude oil storage is not needed. A linear hydrodynamic analysis of the truss spar under random sea wave loads is necessary for determining its behaviour. This understanding is important not only for the design of the mooring lines, but also for optimising the truss spar design. In this paper a linear hydrodynamic analysis of a truss spar is carried out in the frequency domain. The hydrodynamic forces are calculated using the modified Morison equation and diffraction theory. The added mass and drag coefficients of the truss section are computed by a transmission matrix from the normal acceleration and velocity components acting on each element, and those of the hull section by strip theory. The stiffness of the truss spar can be separated into two components: hydrostatic stiffness and mooring line stiffness. The platform response amplitudes are then obtained by solving the equation of motion. This equation is non-linear due to the viscous damping term and is therefore linearised by an iteration method [1]. Finally, the RAOs and significant response amplitudes are computed and the results are compared with experimental data.
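For reference, one common relative-velocity form of the modified Morison equation mentioned above, per unit length of a slender member (quoted as standard background, not taken from this paper), is:

```latex
% D: member diameter, rho: water density, u: wave particle velocity,
% x: structural displacement, C_a: added-mass and C_d: drag coefficients.
\[
  f \;=\; \rho \frac{\pi D^{2}}{4}\,\dot{u}
  \;+\; \rho\, C_a \frac{\pi D^{2}}{4}\,(\dot{u}-\ddot{x})
  \;+\; \tfrac{1}{2}\,\rho\, C_d\, D\,(u-\dot{x})\,\lvert u-\dot{x}\rvert .
\]
```

The quadratic drag term is the source of the non-linearity that the paper removes by iterative linearisation before solving in the frequency domain.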
Abstract: In production planning (PP) periods with excess capacity
and growing demand, a manufacturer has two options for using the excess capacity. First, it can perform more changeovers and thus reduce lot sizes, inventories, and inventory costs. Second, it can produce in excess of the period's demand and build additional inventory that can be used to satisfy future demand increments, thus
delaying the purchase of the next machine that is required to meet the growth in demand. In this study we propose an enhanced supply
chain planning model with flexible planning capability. In addition, a 3D supply chain planning system is illustrated.
Abstract: Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of function minimization, where no explicit model of the data is assumed. Due to the high dimensionality of the data, linearization of the training problem through the use of orthogonal basis functions is not desirable; the focus is therefore on function minimization over an arbitrary basis. A number of methods based on local gradient and Hessian matrices are discussed, and modifications of many first- and second-order training methods are considered. Using share-rate data, it is shown experimentally that conjugate gradient and quasi-Newton methods outperform gradient descent methods. The Levenberg-Marquardt algorithm is of special interest in financial forecasting.
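The comparison can be sketched on a toy problem. A convex quadratic stands in for a network's training loss (an assumption for illustration; the paper trains actual networks on share-rate data), minimized by plain gradient descent and by SciPy's conjugate-gradient and BFGS (quasi-Newton) routines.

```python
# Toy comparison of first- and second-order minimizers on a convex
# quadratic 0.5 w'Aw - b'w, whose exact minimizer is w* = A^{-1} b.
import numpy as np
from scipy import optimize

A = np.array([[3.0, 1.0], [1.0, 2.0]])    # symmetric positive definite
b = np.array([1.0, 1.0])

def f(w):
    return 0.5 * w @ A @ w - b @ w

def grad(w):
    return A @ w - b

w_star = np.linalg.solve(A, b)             # exact minimizer for reference

# Plain gradient descent with a fixed step size
w = np.zeros(2)
for _ in range(500):
    w = w - 0.1 * grad(w)

# Conjugate gradient and quasi-Newton (BFGS) via scipy
res_cg = optimize.minimize(f, np.zeros(2), jac=grad, method="CG")
res_bfgs = optimize.minimize(f, np.zeros(2), jac=grad, method="BFGS")
```

On a quadratic, conjugate gradient and BFGS reach the minimizer in a handful of iterations, while fixed-step gradient descent needs many; this gap is the behavior the abstract reports on real training problems.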