Abstract: The Time-Domain Boundary Element Method (TD-BEM)
is a well-known numerical technique that properly handles
dynamic analyses of media of infinite extent.
However, when such analyses also involve nonlinear behavior,
very complex numerical procedures arise in the TD-BEM,
which may make its application prohibitive. To avoid this
drawback and still model nonlinear infinite media, the present work
couples two BEM formulations, aiming to achieve the best of both
worlds. In this context, the regions expected to behave nonlinearly
are discretized by the Domain Boundary Element Method (D-BEM),
which has a simpler mathematical formulation but cannot handle
infinite-domain analyses; the TD-BEM is employed as an
effective non-reflective boundary. An iterative procedure, based on
a relaxed renewal of the variables at the common interfaces, is
adopted for the coupling of the TD-BEM and D-BEM.
The focus is on elastoplastic models, and each BEM formulation
is allowed to use a different time step in the coupled analysis.
Abstract: In this paper, we introduce a mobile agent framework
with proactive load balancing for ambient intelligence (AmI) environments.
One of the main obstacles in AmI is scalability: the
openness of an AmI environment imposes dynamic resource
requirements on agencies. To mitigate this scalability problem, our
framework proposes a load balancing module that proactively analyzes
the resource consumption of network bandwidth and preferred agencies
in order to suggest the optimal communication method to its user. The
framework formulates an AmI environment as consisting
of three main components: (1) mobile devices, (2) hosts or agencies,
and (3) a directory service center (DSC). A preliminary implementation
was conducted with NetLogo, and the experimental results show that
the proposed approach improves system performance by
minimizing network utilization, providing users with responsive
services.
Abstract: This paper presents a cold flow simulation study of a small gas turbine combustor performed using a laboratory-scale test rig. The main objective of this investigation is to obtain physical insight into the main vortex, which is responsible for the efficient mixing of fuel and air. Such models are necessary for the prediction and optimization of real gas turbine combustors. An air swirler can control combustor performance by assisting the fuel-air mixing process and by producing a recirculation region that acts as a flame holder and influences residence time. Thus, proper selection of a swirler is needed to enhance combustor performance and to reduce NOx emissions. Three different axial air swirlers were used, distinguished by their vane angles of 30°, 45°, and 60°. The three-dimensional, viscous, turbulent, isothermal flow characteristics of the combustor model operating at room temperature were simulated with a Reynolds-Averaged Navier-Stokes (RANS) code. The model geometry was created with a solid modeler, and the meshing was done with the GAMBIT preprocessing package. Finally, the solution and analysis were carried out in the FLUENT solver. This serves to demonstrate the capability of the code for the design and analysis of real combustors. The effects of the swirlers and of the mass flow rate were examined. Details of the complex flow structure, such as vortices and recirculation zones, were obtained from the simulation model. The computational model predicts a major recirculation zone in the central region immediately downstream of the fuel nozzle and a second recirculation zone in the upstream corner of the combustion chamber. It is also shown that changes in swirler angle have significant effects on the combustor flowfield as well as on pressure losses.
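The vane angle governs the strength of the swirl that sets up the central recirculation zone. As background only (the abstract does not give the authors' swirler geometry), the widely used geometric swirl number of a flat-vane axial swirler can be sketched as follows; the hub and outer diameters below are illustrative values, not the paper's:

```python
import math

def swirl_number(vane_angle_deg, hub_dia, outer_dia):
    """Geometric swirl number of a flat-vane axial swirler:
    S = (2/3) * ((1 - r**3) / (1 - r**2)) * tan(theta),
    where r is the hub-to-tip diameter ratio and theta the vane angle."""
    r = hub_dia / outer_dia
    theta = math.radians(vane_angle_deg)
    return (2.0 / 3.0) * ((1.0 - r ** 3) / (1.0 - r ** 2)) * math.tan(theta)

# Illustrative geometry: hub 0.5, tip 1.0 (hypothetical dimensions)
for angle in (30, 45, 60):
    print(angle, round(swirl_number(angle, 0.5, 1.0), 3))
```

The monotonic growth of S with vane angle is consistent with the abstract's observation that swirler angle changes have significant effects on the flowfield; S above roughly 0.6 is conventionally associated with strong swirl and vortex breakdown.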
Abstract: Reducing the energy consumption of embedded systems requires careful memory management. It has been shown that Scratch-Pad Memories (SPMs) are small, low-cost, energy-efficient memories managed directly at the software level. In this paper, the focus is on heuristic methods for SPM management. A method is efficient if the number of accesses to the SPM is as large as possible and if all available space (i.e. bits) is used. A Tabu Search (TS) approach to memory management is proposed, which is, to the best of our knowledge, a new alternative to the best known existing heuristic (BEH). Experiments performed on benchmarks show that the Tabu Search method is as efficient as BEH (in terms of energy consumption), but BEH requires a sorting step that can be computationally expensive for large amounts of data. TS is easy to implement and, since no sorting is necessary, the corresponding sorting time is saved. In addition, in a dynamic setting where the maximum capacity of the SPM is not known in advance, the TS heuristic performs better than BEH.
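To make the idea concrete, a minimal Tabu Search sketch for the allocation problem the abstract describes is shown below: choose which data blocks to place in the SPM so that the capacity is respected and the number of SPM accesses is maximal. This is an illustration only, not the authors' implementation; the block sizes, move set (single flips), and tabu tenure are assumptions.

```python
def tabu_search_spm(blocks, capacity, iters=200, tenure=5):
    """Tabu Search sketch for SPM allocation: pick data blocks so that the
    total size fits the SPM capacity and the number of accesses is maximal.
    blocks: list of (size, access_count) pairs; no pre-sorting is needed."""
    n = len(blocks)
    sol = [False] * n                          # start with an empty SPM

    def size(s):
        return sum(blocks[i][0] for i in range(n) if s[i])

    def value(s):
        return sum(blocks[i][1] for i in range(n) if s[i])

    best, best_val = sol[:], 0
    tabu = {}                                  # block index -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n):                     # neighbourhood: flip one block in/out
            nb = sol[:]
            nb[i] = not nb[i]
            if size(nb) <= capacity:
                v = value(nb)
                # aspiration criterion: a tabu move is allowed if it beats the best
                if tabu.get(i, -1) < it or v > best_val:
                    candidates.append((v, i, nb))
        if not candidates:
            break                              # no admissible move left
        v, i, nb = max(candidates)             # best admissible neighbour
        sol = nb
        tabu[i] = it + tenure                  # forbid reversing this move for a while
        if v > best_val:
            best, best_val = nb[:], v
    return best, best_val
```

Note that, unlike a sort-based greedy heuristic, this scheme never orders the blocks; only local flips and a short-term memory are used, which is the property the abstract exploits against BEH.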
Abstract: The various types of frequent pattern discovery
problems, namely the frequent itemset, sequence, and graph mining
problems, are solved in different ways that are, however, similar
in certain respects. The main approaches to discovering such patterns
can be classified into two classes: level-wise
methods and database projection-based methods.
Level-wise algorithms generally use clever indexing structures
for discovering the patterns. In this paper, a new level-wise
approach is proposed for efficiently discovering frequent
sequences and tree-like patterns. Because level-wise
algorithms spend a lot of time on subpattern testing, the
new approach introduces the idea of using automaton theory to solve
this problem.
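To illustrate the automaton idea in its simplest form (a sketch, not the paper's algorithm): a candidate sequence can be viewed as a linear automaton with one state per matched item, and a database transaction is fed through it to test subpattern containment in a single pass.

```python
def is_subsequence(pattern, sequence):
    """Treat the pattern as a linear automaton: one state per matched item.
    Feeding the sequence through it advances the state on a match;
    reaching the accepting state means the pattern occurs as a
    (not necessarily contiguous) subsequence."""
    state = 0                          # number of pattern items matched so far
    for item in sequence:
        if state < len(pattern) and item == pattern[state]:
            state += 1                 # transition to the next state
    return state == len(pattern)       # accepting state reached
```

In level-wise mining this test is executed for every candidate against every transaction, which is why replacing naive subpattern checks with automaton-based ones pays off.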
Abstract: This paper presents a particle swarm
optimization algorithm with reduction for global optimization problems.
Particle swarm optimization is inspired by collective
motion, such as that of bird flocks or fish schools, and is a multi-point
search algorithm that finds the best solution using multiple particles.
Particle swarm optimization is flexible enough to adapt to a wide range
of optimization problems. When an objective function has many
local minima in a complicated landscape, the particles may fall into a
local minimum. To avoid local minima, a large number of particles is
initially prepared and their positions are updated by particle swarm
optimization. The particles are then sequentially reduced, based on
their evaluation values, until a predetermined number remain, and
particle swarm optimization continues until the termination condition
is met. To show the effectiveness of the proposed algorithm, we compare
the minima obtained on test functions with those of existing algorithms.
Furthermore, the influence of the initial number of particles on the
best value obtained by our algorithm is discussed.
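The reduction scheme described above can be sketched as follows. This is a minimal interpretation, not the authors' code: the update rule is the standard inertia-weight PSO, and the reduction schedule (drop the worst particle at regular intervals until the target count is reached) is an assumption.

```python
import random

def pso_with_reduction(f, dim, n_init=40, n_final=10, iters=100,
                       bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5, seed=1):
    """Start with many particles; periodically drop the worst particle
    (by personal-best value) until n_final remain, then continue plain PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_init)]
    vel = [[0.0] * dim for _ in range(n_init)]
    pbest = [p[:] for p in pos]                      # personal bests
    pval = [f(p) for p in pos]
    g = min(range(n_init), key=lambda i: pval[i])    # global best
    gbest, gval = pbest[g][:], pval[g]
    drop_every = max(1, iters // max(1, n_init - n_final))
    for t in range(iters):
        for i in range(len(pos)):
            for d in range(dim):
                # standard inertia-weight velocity update
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
        # reduction step: remove the worst particle until n_final remain
        if len(pos) > n_final and t % drop_every == 0:
            worst = max(range(len(pos)), key=lambda i: pval[i])
            for lst in (pos, vel, pbest, pval):
                lst.pop(worst)
    return gbest, gval
```

Starting wide and shrinking the swarm is what lets the early search escape local minima while the late search stays cheap.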
Abstract: The occurrence and removal of trace organic
contaminants in the aquatic environment has become a focus of
environmental concern. For the selective removal of carbamazepine
from loaded waters, molecularly imprinted polymers (MIPs) were
synthesized with carbamazepine as the template. The parameters varied
were the type of monomer, crosslinker, and porogen, the ratio of
starting materials, and the synthesis temperature. The best results
were obtained with a template-to-crosslinker ratio of 1:20, toluene as
porogen, and methacrylic acid (MAA) as monomer. The MIPs were then
capable of recovering 93% of the carbamazepine from a 10⁻⁵ M landfill
leachate solution that also contained caffeine and salicylic acid. By
comparison, carbamazepine recoveries of 75% were achieved using a
non-imprinted polymer (NIP) synthesized under the same conditions but
without the template. In solutions containing landfill leachate,
93-96% of the carbamazepine was adsorbed, compared with an uptake of
73% by activated carbon. The best solvent for desorption was
acetonitrile, for which the amount of solvent necessary and the effect
of dilution with water were tested. Selected MIPs were tested for their
reusability and showed good results for at least five cycles. Adsorption
isotherms were prepared with carbamazepine solutions in the
concentration range of 0.01 M to 5×10⁻⁶ M. The heterogeneity index
indicated a comparatively homogeneous binding site distribution.
Abstract: This paper explores the development of an optimized method and apparatus for retrieving an extended high dynamic range from a digital negative image. Architectural photo imaging can benefit from the high dynamic range imaging (HDRI) technique for preserving and presenting sufficient luminance in shadow and highlight clipped image areas. The HDRI technique that requires multiple exposure images as the source of HDRI rendering may not be effective in terms of time efficiency during the acquisition process and post-processing stage, considering its numerous potential imaging variables and technical limitations during the multiple exposure process. This paper explores an experimental method and apparatus that aims to expand the dynamic range of a digital negative image in an HDRI environment. The method and apparatus explored are based on a single RAW image acquisition used for HDRI post-processing. This optimization avoids and minimizes the conventional HDRI photographic errors caused by differing physical conditions during the photographing process and by the misalignment of multiply exposed image sequences. The study observes the characteristics and capabilities of the RAW image format, used as a digital negative, for the retrieval of an extended high dynamic range in an HDRI environment.
Abstract: Ever since the industrial revolution began, our ecosystem
has changed, and indeed the negatives outweigh the positives.
Industrial waste is usually released into all kinds of bodies of water,
such as rivers or the sea. Tempeh waste is one example of a waste that
carries many hazardous and unwanted substances that affect the
surrounding environment. Tempeh is a popular fermented food in
Asia which is rich in nutrients and active substances. Tempeh liquid
waste in particular can cause air pollution and, if it penetrates
the soil, will contaminate the groundwater, making the water
unfit for consumption. Moreover, bacteria will
thrive in the polluted water, and they are often responsible for causing
many kinds of diseases. The treatments used for this chemical waste are
biological treatments such as constructed wetlands and activated
sludge. These kinds of treatment are able to reduce both physical and
chemical parameters such as temperature, TSS, pH, BOD,
COD, NH3-N, NO3-N, and PO4-P. These treatments are implemented
before the waste is released into the water. The result is a
comparison between constructed wetlands and activated sludge,
along with a determination of which method is better suited to reduce
the physical and chemical substances of the waste.
Abstract: This paper addresses the fundamental requirements for
starting an online business. It covers the process of ideation,
conceptualization, formulation, and implementation of new venture
ideas on the Web. Using Facebook as an illustrative example, we learn
how to turn an idea into a successful electronic business and how to
execute a business plan with IT skills, management expertise, a good
entrepreneurial attitude, and an understanding of Internet culture. The
personality traits and characteristics of a successful e-commerce
entrepreneur are discussed with reference to Facebook's founder,
Mark Zuckerberg. Facebook is a social and e-commerce success. It
provides a trusted environment in which participants can conduct
business with a social experience. People are able to discuss products
before, during, and after the sale within the Facebook environment. The
paper also highlights the challenges and opportunities for e-commerce
entrepreneurial startups of going public and of entering the Chinese
market.
Abstract: This article demonstrates the development of a
controlled release system of an NSAID drug, diclofenac
sodium, employing different ratios of ethyl cellulose.
Diclofenac sodium and ethyl cellulose in different proportions
were processed by microencapsulation based on a phase
separation technique to formulate microcapsules. The
prepared microcapsules were then compressed into tablets to
obtain controlled release oral formulations. In-vitro evaluation
was performed by a dissolution test of each preparation,
conducted in 900 ml of phosphate buffer solution of pH 7.2,
maintained at 37 ± 0.5 °C and stirred at 50 rpm. Samples were
collected at predetermined time intervals (0, 0.5, 1.0, 1.5, 2, 3,
4, 6, 8, 10, 12, 16, 20, and 24 h), and the drug concentration in
the collected samples was determined by UV spectrophotometry
at 276 nm. The physical characteristics of the diclofenac sodium
microcapsules were within the accepted range: they were
off-white, free flowing, and spherical in shape. The release
profile of diclofenac sodium from the microcapsules was found
to be directly proportional to the proportion of ethyl cellulose
and the coat thickness. The in-vitro release pattern showed that
with drug-to-polymer ratios of 1:1 and 1:2, the percentage of
drug released in the first hour was 16.91% and 11.52%,
respectively, compared with only 6.87% for the 1:3 ratio within
the same time. The release mechanism followed the Higuchi
model. The tablet formulation (F2) of the present study was
found comparable in release profile to the marketed brand
Phlogin-SR, and the microcapsules showed an extended release
beyond 24 h. Further, a good correlation was found between drug
release and the proportion of ethyl cellulose in the microcapsules.
Microencapsulation based on coacervation was found to be a good
technique for controlling the release of diclofenac sodium in
controlled release formulations.
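The Higuchi model mentioned above states that cumulative release grows with the square root of time, Q(t) = kH·√t. A minimal least-squares fit of this model to dissolution data can be sketched as below; the time points and release values in the test are illustrative, not the paper's measurements.

```python
import math

def fit_higuchi(times, release):
    """Least-squares fit of the Higuchi model Q(t) = kH * sqrt(t):
    cumulative release proportional to the square root of time.
    Returns the rate constant kH and the coefficient of determination R^2."""
    # closed-form slope of a no-intercept regression of Q on sqrt(t)
    num = sum(q * math.sqrt(t) for t, q in zip(times, release))
    den = sum(times)
    k = num / den
    pred = [k * math.sqrt(t) for t in times]
    ss_res = sum((q - p) ** 2 for q, p in zip(release, pred))
    mean_q = sum(release) / len(release)
    ss_tot = sum((q - mean_q) ** 2 for q in release)
    return k, 1.0 - ss_res / ss_tot
```

A high R² under this model is what supports the abstract's claim of diffusion-controlled release from the ethyl cellulose matrix.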
Abstract: A homologous series of aromatic esters,
4-n-alkanoyloxybenzylidene-4′-bromoanilines (nABBA),
consisting of two 1,4-disubstituted phenyl cores and a Schiff
base central linkage, was synthesized. The members differ in
the number of carbon atoms in the terminal
alkanoyloxy chain (CnH2n-1COO-, n = 2, 6, 18). The molecular
structure of the nABBA compounds was confirmed by infrared
spectroscopy, nuclear magnetic resonance (NMR)
spectroscopy, and electron-ionization mass spectrometry
(EI-MS). The mesomorphic properties were studied using
differential scanning calorimetry and polarizing optical
microscopy.
Abstract: This paper considers robust exponential stability issues for a class of uncertain switched neutral systems whose delays switch according to the switching rule. The system under consideration includes both stable and unstable subsystems. The uncertainties considered in this paper are norm-bounded and possibly time-varying. Based on a multiple Lyapunov functional approach and a dwell-time technique, the time-dependent switching rule is designed depending on the so-called average dwell time of the stable subsystems as well as on the ratio of the total activation times of the stable and unstable subsystems. It is shown that, by suitably controlling the switching between the stable and unstable modes, robust stabilization of the switched uncertain neutral system can be achieved. Two simulation examples are given to demonstrate the effectiveness of the proposed method.
Abstract: One of the significant factors in improving the
accuracy of Land Surface Temperature (LST) retrieval is a correct
understanding of the directional anisotropy of thermal radiance. In
this paper, the multiple scattering effect between heterogeneous
non-isothermal surfaces is described rigorously using the
concept of the configuration factor, based on which a directional
thermal radiance model is built and the directional radiant character
of an urban canopy is analyzed. The model is applied to a simple urban
canopy with a row structure to simulate the change in Directional
Brightness Temperature (DBT). The results show that the DBT is
increased by the multiple scattering effects, whereas the range of
variation of the DBT is smoothed. The temperature differences, spatial
distribution, and emissivity of the components can all lead to changes
in the DBT. A "hot spot" phenomenon occurs when the proportion of
high-temperature components in the field of view reaches its maximum;
conversely, a "cool spot" phenomenon occurs when the proportion of
low-temperature components reaches its maximum. The "spot" effect
disappears only when the proportion of every component remains
constant. The model built in this paper can be used for the study of
directional effects on emissivity, for LST retrieval over urban areas,
and for the adjacency effect of thermal remote sensing pixels.
Abstract: In this study, an inland metropolitan area, Gwangju, in Korea was selected to assess the amplification potential of earthquake motion and to provide information for regional seismic countermeasures. A geographic information system (GIS)-based expert system was implemented for reliably predicting the spatial geotechnical layers across the entire region of interest by building a geo-knowledge database. In particular, the database consists of existing boring data gathered from prior geotechnical projects and surface geo-knowledge data acquired from site visits. For practical application of the geo-knowledge database to estimating the earthquake hazard potential related to site amplification effects in the study area, seismic zoning maps of geotechnical parameters, such as bedrock depth and site period, were created within the GIS framework. In addition, seismic zonation of site classification was also performed to determine the site amplification coefficients for seismic design at any site in the study area.
Keywords: Earthquake hazard, geo-knowledge, geographic information system, seismic zonation, site period.
Abstract: The design of the landing gear is one of the fundamental aspects of aircraft design. The need for light weight, high strength, and stiffness, coupled with techno-economic feasibility, is key to the acceptability of any landing gear construction. In this paper, an approach to analyzing two different landing gear designs for an unmanned aerial vehicle (UAV) using advanced CAE techniques is applied. Different landing conditions have been considered for both models. The maximum principal stresses for each model, along with the factor of safety, are calculated for every loading condition. A conclusion is drawn about the better geometry.
Abstract: The ability of agricultural and decorative plants to
absorb and detoxify TNT and RDX has been studied. All eight tested
plants, grown hydroponically, were able to absorb these explosives
from water solutions: Alfalfa > Soybean > Chickpea > Chickling vetch
> Ryegrass > Mung bean > China bean > Maize. Unlike
TNT, RDX did not exhibit a negative influence on seed germination
and plant growth. Moreover, some plants exposed to RDX-containing
solution increased their biomass by 20%. A study of
the fate of absorbed [1-14C]-TNT revealed the distribution of the
label between low- and high-molecular-mass compounds, both in the
roots and in the above-ground parts of the plants, prevailing in the
latter. The content of 14C in low-molecular-mass compounds is much
higher in the plant roots than in the above-ground parts. In contrast,
high-molecular-mass compounds are more intensively labeled in the
above-ground parts of soybean. Most (up to 70%) of the TNT
metabolites, formed either by enzymatic reduction or by oxidation,
are found in high-molecular insoluble conjugates. Activation of the
enzymes responsible for the reduction, oxidation, and conjugation of
TNT, such as nitroreductase, peroxidase, phenoloxidase, and
glutathione S-transferase, has been demonstrated. Among these
enzymes, only nitroreductase was shown to be induced in alfalfa
exposed to RDX. The increase in malate dehydrogenase activity in
plants exposed to both explosives indicates an intensification of the
tricarboxylic acid cycle, which generates the reducing equivalents
NAD(P)H necessary for the functioning of the nitroreductase. A
hypothetical scheme of TNT metabolism in plants is proposed.
Abstract: Stipples are desired for pattern fills and
transparency effects. However, some graphics standards, including
OpenGL ES 1.1 and 2.0, omit this feature. We present the details of
providing line stipples and polygon stipples by combining
texture mapping and alpha blending functions. We start from the
OpenGL-specified stipple-related API functions. The details of the
mathematical transformations needed to obtain the correct texture
coordinates are explained. Then, the overall algorithm is presented,
and its implementation results follow. We accomplished both line
and polygon stipples and verified the results with conformance test
routines.
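The core of the texture-based emulation can be sketched in two steps, shown below as plain Python rather than GL calls (a sketch of the mapping, not the authors' implementation): the 16-bit OpenGL line-stipple pattern becomes a 16-texel 1D alpha texture, and each fragment's distance along the line becomes a repeating texture coordinate, scaled by the stipple repeat factor.

```python
def stipple_to_alpha_texture(pattern):
    """Expand a 16-bit OpenGL-style line-stipple pattern into a 16-texel
    1D alpha texture (1.0 = draw, 0.0 = skip). Bit 0 is applied first,
    matching glLineStipple's bit order."""
    return [1.0 if (pattern >> i) & 1 else 0.0 for i in range(16)]

def stipple_texcoord(pixel_distance, factor):
    """Texture coordinate for a fragment at the given pixel distance from
    the line start; with a repeating texture, the pattern recurs every
    16 * factor pixels, as glLineStipple's repeat factor requires."""
    return (pixel_distance / float(factor)) / 16.0
```

With the texture set to repeat and alpha testing or blending enabled, texels with alpha 0.0 discard the skipped segments, reproducing the stippled appearance.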
Abstract: This paper explores the sense of place in the Vredefort Dome World Heritage site, South Africa, as an essential input for the formulation of spatial planning proposals for the area. Intangible aspects such as personal and symbolic meanings of sites are currently not integrated in spatial planning in South Africa. This may have a detrimental effect on local inhabitants who have a long history with the site and have built up a strong place identity. Involving local inhabitants at an early stage of the planning process and incorporating their attitudes and opinions in future interventions in the area may also contribute to the acceptance of the legitimacy of future policy. An interdisciplinary and mixed-method research approach was followed in this study in order to identify possible ways to anchor spatial planning proposals in the identity of the place. In essence, the qualitative study revealed that inhabitants reflect a deep and personal relationship with and within the area, which contributes significantly to their sense of emotional security and self-identity. Results include a strong conservation-orientated attitude with regard to the natural rural character of the site, especially in the inner core.
Abstract: This paper presents a comparative study of coded
data methods for assessing the benefit of concealing natural data
that constitute a trade secret. The influence of the number
of replicates (rep), the treatment effects (τ), and the standard
deviation (σ) on the efficiency of each transformation method is
investigated. The experimental data are generated via computer
simulations under the specified process conditions with a completely
randomized design (CRD). Three data transformation methods are
considered: the Box-Cox, arcsine, and logit methods. The differences
in the F statistic between coded data and natural data (Fc-Fn) and the
hypothesis testing results were determined. The experimental results
indicate that the Box-Cox results differ significantly from the natural
data for smaller numbers of replicates and seem improper when
a negative lambda parameter is assigned. On the other hand, the
arcsine and logit transformations are more robust and clearly
provide more precise numerical results. In addition, alternative
ways to select the lambda in the power transformation are also
offered to achieve more appropriate outcomes.
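For reference, the three transformations compared in the study have the standard textbook forms sketched below (generic definitions, not the paper's simulation code): the Box-Cox power transform with parameter lambda, and the arcsine and logit transforms for proportions.

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform for x > 0:
    (x**lam - 1) / lam, with the log form as the lam -> 0 limit."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def arcsine_transform(p):
    """Arcsine (angular) transform for proportions p in [0, 1]."""
    return math.asin(math.sqrt(p))

def logit(p):
    """Logit transform for proportions p in (0, 1)."""
    return math.log(p / (1.0 - p))
```

The lambda sensitivity of the Box-Cox form, visible directly in the formula, is what makes the choice of lambda (and the problem with negative values noted above) decisive for that method.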