Abstract: In this study, mechanically alloyed Al 2024 powder is
densified by conventional sintering and by equal channel angular
pressing (ECAP) with and without back pressure. The powder was
encapsulated in an aluminium can for consolidation through ECAP.
The properties obtained in the compacts by conventional sintering
route and by ECAP are compared. The effect of conventional
sintering and ECAP on consolidation behaviour of powder,
microstructure, density and hardness is discussed. Back-pressure-assisted
ECAP at room temperature yields a nearly fully dense compact
(97% of theoretical density). Nanoindentation
was used to determine the modulus of the consolidated
compacts.
Abstract: The aim of this article is to demonstrate the utility of a novel simulation approach, the convolution method, for predicting the blood concentration of a drug from dissolution data of salbutamol sulphate microparticulate formulations with different release patterns (1:1, 1:2 and 1:3, drug:polymer). Dissolution analysis employed USP (2007) Apparatus II with 900 ml of double-distilled water stirred at 50 rpm. From the dissolution data, blood drug concentrations were predicted, and the predicted concentration-time data were in turn used to calculate the pharmacokinetic parameters Cmax, Tmax and AUC. Convolution is a useful biowaiver technique; however, realizing its full utility requires applying it under conditions where biorelevant dissolution media are used.
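The convolution step described above can be sketched numerically: the incremental drug input inferred from dissolution is convolved with a unit impulse response. The sketch below assumes a one-compartment model with first-order elimination, and the dose, elimination rate, volume of distribution and dissolution profile are all hypothetical illustration values, not the study's data.

```python
import numpy as np

def predict_blood_conc(diss_frac, dose_mg, ke_per_h, vd_l, dt_h):
    """Convolution method: predict blood concentration from dissolution data.

    diss_frac : cumulative fraction dissolved at each time step (0..1),
                used here as a surrogate for the fraction absorbed.
    Assumes a one-compartment model with first-order elimination, so the
    unit impulse response is c_delta(t) = exp(-ke*t) / Vd.
    """
    # incremental drug input (mg) entering the system in each interval
    input_rate = dose_mg * np.diff(diss_frac, prepend=0.0)
    t = np.arange(len(diss_frac)) * dt_h
    unit_response = np.exp(-ke_per_h * t) / vd_l      # (mg/L) per mg input
    # discrete convolution of input with the unit impulse response
    conc = np.convolve(input_rate, unit_response)[:len(t)]
    return t, conc

# hypothetical fast-releasing profile sampled every 0.5 h
frac = np.array([0.0, 0.35, 0.60, 0.78, 0.90, 0.97, 1.0])
t, c = predict_blood_conc(frac, dose_mg=8.0, ke_per_h=0.17, vd_l=150.0, dt_h=0.5)
cmax, tmax = c.max(), t[np.argmax(c)]
auc = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))   # trapezoidal AUC
print(f"Cmax={cmax:.4f} mg/L at Tmax={tmax:.1f} h, AUC={auc:.3f} mg*h/L")
```

The predicted curve then yields Cmax, Tmax and AUC directly, which is the comparison the abstract describes.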
Abstract: Rooted in the study of the social functioning of space in architecture, Space Syntax (SS) and the more recent Network Pattern (NP) research demonstrate the 'spatial structures' of the city, i.e. the hierarchical patterns of streets, junctions and alley ends. Applying SS and NP models, planners can conceptualize the real city's patterns. Although both models yield the optimal path of the city, their underlying displays of the city's spatial configuration differ. The Axial Map analyzes the topological, non-distance-based connectivity structure, whereas the Central-Node Map and the Shortcut-Path Map analyze the metrical, distance-based structures. This research contrasts and combines them to understand the various forms of the city's structures. It concludes that, while they reveal different spatial structures, the Space Syntax and Network Pattern urban models complement each other. Combined, they simulate both the global access structure and the locally compact structures of the city, namely the central nodes and the shortcuts.
Abstract: High performance Resistive Random Access Memory
(RRAM) based on HfOx has been prepared and its temperature
instability has been investigated in this work. With increasing
temperature, it is found that: the leakage current at the high resistance
state increases, which can be explained by a higher density of traps
inside the dielectric (related to trap-assisted tunneling), leading to a
smaller On/Off ratio; the set and reset voltages decrease, which may be
attributed to higher oxygen ion mobility, in addition to the
reduced potential barrier to create/recover oxygen ions (or oxygen
vacancies); and temperature has a more serious impact on RRAM
retention degradation than electrical bias does.
Abstract: The paper presents Romanian realities and perspectives on reaching the sustainable development model in the context of the recent accession to the European Union, based on an analysis of the indicators listed in the EU Sustainable Development Strategy. The analysis of the economic-social potential for sustainable development and of the environmental aspects shows that the objectives stipulated in the renewed EU Sustainable Development Strategy of 2006 can be reached, but extra effort must be put in to overcome the existing substantial gaps in several areas relative to the developed countries of the EU. The paper's conclusions show that even if sustainable development is not an easy target to reach in Romania, there are resources and a growing potential which, if used rationally, can lead to sustainable development.
Abstract: Polymeric microreactors have emerged as a new
generation of carriers that hold tremendous promise in the areas of
cancer therapy, controlled drug delivery and pollutant
removal. The present work reports a simple and convenient
methodology for the synthesis of polystyrene and polycaprolactone
microreactors. An aqueous suspension of carboxylated (1 μm)
polystyrene latex particles was mixed with toluene solution and then
frozen with liquid nitrogen. The frozen particles were incubated at
-20°C and characterized for the formation of voids on the surface of
the polymer microspheres by Field Emission Scanning Electron
Microscopy. The hollow particles were then incubated overnight at
40°C with unfunctionalized quantum dots (QDs) in a 5:1 ratio. The
QD-encapsulated polystyrene microcapsules were characterized by
fluorescence microscopy.
Likewise, poly(ε-caprolactone) microreactors were prepared by
micro-volcanic rupture of freeze-dried microspheres synthesized
by emulsification of the polymer with aqueous poly(vinyl alcohol) and
freezing with liquid nitrogen. The microreactors were examined with a Field
Emission Scanning Electron Microscope for size and morphology.
The current study is an attempt to create hollow polymer particles which
can be employed for the microencapsulation of nanoparticles and drug
molecules.
Abstract: The objective of the present research manuscript is to
perform parametric, nonparametric, and decision tree analysis to
evaluate two treatments that are being used for breast cancer patients.
Our study is based on utilizing real data which was initially used in
“Tamoxifen with or without breast irradiation in women of 50 years
of age or older with early breast cancer" [1], and the data is supplied
to us by N.A. Ibrahim “Decision tree for competing risks survival
probability in breast cancer study" [2]. Certain aspects of our
findings agree with the published results. However, in this
manuscript, we focus on the relapse time of breast cancer patients instead
of survival time, and parametric analysis is applied instead of
semi-parametric decision tree analysis, to provide more precise
recommendations on the effectiveness of the two treatments with respect
to the recurrence of breast cancer.
Abstract: Circular tubes have been widely used as structural
members in engineering applications, and their collapse behavior
has therefore been studied for many decades, with a focus on energy
absorption characteristics. To predict the collapse behavior of such members,
one could rely on finite element codes or experiments.
These tools are helpful and highly accurate, but costly and require
extensive running time. An approximate model of the tube
collapse mechanism is therefore an alternative for the early design stage. This
paper aims to develop a closed-form solution for a thin-walled
circular tube subjected to bending. It extends the model of Elchalakani et
al. (Int. J. Mech. Sci. 2002; 44:1117-1143) to include the
rate of energy dissipation of the rolling hinge in the circumferential
direction. The 3-D geometrical collapse mechanism was analyzed by
adding oblique hinge lines along the tube axis within the
length of the plastically deforming zone. The model is based on the
principle of energy rate conservation; accordingly, the rates of internal
energy dissipation were calculated for each hinge line,
defined in terms of a velocity field. Inextensional deformation and
perfectly plastic material behavior were assumed in the derivation of
the deformation energy rate. The analytical results were compared with
experimental results obtained from a number of
tubes with various D/t ratios. Good agreement between analysis
and experiment was achieved.
Abstract: Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utility. AIMD (Additive Increase Multiplicative Decrease) is the best algorithm among the set of linear algorithms because it exhibits both good efficiency and good fairness. Our control model is based on the assumptions of the original AIMD algorithm; we show that both the efficiency and the fairness of AIMD can be improved. We call our approach New AIMD. We present experimental results with TCP that match the expectations of our theoretical analysis.
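The classical AIMD rule the abstract builds on can be sketched as a minimal two-flow simulation (this illustrates the original algorithm's convergence to a fair share, not the proposed New AIMD; the flow starting points and capacity are hypothetical):

```python
def aimd(flows, capacity, alpha=1.0, beta=0.5, steps=200):
    """Classic AIMD: each flow adds alpha per round; on a congestion
    signal (aggregate demand exceeds capacity) every flow multiplies
    its window by beta. Flows converge toward an equal share."""
    w = list(flows)
    for _ in range(steps):
        if sum(w) > capacity:           # congestion signal
            w = [x * beta for x in w]   # multiplicative decrease
        else:
            w = [x + alpha for x in w]  # additive increase
    return w

# two flows starting far apart converge toward a fair share of 100 units
final = aimd([5.0, 80.0], capacity=100.0)
print(final)
```

Because additive increase preserves the gap between flows while multiplicative decrease shrinks it, the difference between the two windows halves at every congestion event, which is the fairness property the linear-algorithm comparison refers to.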
Abstract: Larval rearing and seed production of most tetra fishes (Family: Characidae) are critical due to their small larvae and limited numbers of spawning attempts. In the present study, the effect of different live foods on the growth and survival of neon tetra, Paracheirodon innesi, larvae (length 3.1 ± 0.012 mm, weight 0.048 ± 0.00015 mg) and early fry (length 6.44 ± 0.025 mm, weight 0.64 ± 0.003 mg, 13 days old) was determined in two experiments. Experiment I reared the larvae using mixed green water and Infusoria, whereas in Experiment II early fry were fed mixed zooplankton, decapsulated Artemia cysts and Artemia nauplii. The larvae fed on mixed green water showed significant (p
Abstract: The handful of propagation textbooks that discuss radio frequency (RF) propagation models tend merely to list the models and discuss them rather briefly; this may well be frustrating for the first-time modeller who has no idea how these models were derived. This paper provides an introduction to modelling the radio channel. For the modelling exercise discussed here, signal strength field measurements were first conducted (at 469 MHz); to be precise, the paper concerns empirically/statistically modelling the radio channel, and presents the results obtained from empirically modelling the environments in question. The paper proposes three propagation models, corresponding to three experimental environments, derived by making full use of statistical measures. The first two models were derived via simple linear regression analysis, whereas the third was derived using multiple regression analysis (with five predictors). As implied by the title, both indoor and outdoor environments were examined; two of the environments, however, are neither entirely indoor nor entirely outdoor, while the third is completely indoor.
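The simple-linear-regression step can be illustrated with the standard log-distance model, PL(d) = PL(d0) + 10 n log10(d/d0), fitted by least squares. The measurement values below are synthetic illustration data, not the paper's 469 MHz results:

```python
import math

def fit_path_loss(distances_m, path_loss_db, d0=1.0):
    """Least-squares fit of the log-distance model
    PL(d) = PL(d0) + 10*n*log10(d/d0),
    returning the intercept PL(d0) and the path-loss exponent n."""
    x = [10.0 * math.log10(d / d0) for d in distances_m]
    y = list(path_loss_db)
    xbar = sum(x) / len(x)
    ybar = sum(y) / len(y)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
    intercept = ybar - slope * xbar
    return intercept, slope   # slope is the exponent n

# synthetic measurements generated from n = 3 with a 40 dB intercept
d = [1, 2, 5, 10, 20, 50]
pl = [40 + 30 * math.log10(di) for di in d]
pl0, n = fit_path_loss(d, pl)
print(pl0, n)
```

With real field data the regression residuals (shadowing) would also be reported; the multiple-regression variant in the paper simply extends this to several predictors.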
Abstract: The international literature emphasizes concern regarding the phenomenon of aggression in hospitals. This paper focuses on the aggressive interactions occurring within an emergency triage area and involving three groups of protagonists: the professionals, the patients and their carers. Data were collected using an observation grid integrating the variables identified in the literature. The observations took place around the clock, for three weeks, at the rate of one week per month. In this research 331 aggressive interactions were recorded and analyzed using SPSS software. This research is one of the very few continuous observation surveys in the literature. It shows the various human factors at play in the emergence of aggressive interactions. The data may be used both for primary prevention, through the analysis of interaction modes, and for secondary prevention, by integrating the relevant results into situational prevention.
Abstract: Automatic methods of detecting changes through
satellite imaging are the object of growing interest, especially
because of numerous applications linked to analysis of the Earth's
surface or the environment (monitoring vegetation, updating maps,
risk management, etc.). This work implemented spatial analysis
techniques using images with different spatial and spectral
resolutions acquired on different dates. The work was based on the
principle of control charts, setting the upper and lower limits beyond
which a change is declared. The a contrario approach was then
applied, testing different thresholds at which the
difference calculated between two pixels is significant. Finally,
labeled images giving a particularly low difference were
considered, so that the number of "false changes" could be estimated
with respect to a given limit.
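The control-chart thresholding step can be sketched as follows; this is an illustrative reconstruction, not the authors' implementation, using hypothetical pixel values and a control-limit factor k = 2:

```python
import statistics

def change_mask(img_a, img_b, k=2.0):
    """Control-chart style change detection: compute per-pixel
    differences, set control limits at mean +/- k*stddev, and flag
    pixels whose difference falls outside those limits."""
    diffs = [b - a for a, b in zip(img_a, img_b)]
    mu = statistics.mean(diffs)
    sigma = statistics.pstdev(diffs)
    lower, upper = mu - k * sigma, mu + k * sigma
    return [not (lower <= d <= upper) for d in diffs]

# flat scene (flattened pixel rows) with one strongly changed pixel
before = [10, 12, 11, 10, 12, 11, 10, 11]
after_ = [11, 12, 10, 60, 11, 12, 11, 10]
mask = change_mask(before, after_, k=2.0)
print(mask)
```

Pixels flagged by the control limits would then be fed to the a contrario significance test described above.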
Abstract: Evolutionary Programming (EP) represents a
methodology of Evolutionary Algorithms (EA) in which mutation is
considered as a main reproduction operator. This paper presents a
novel EP approach for Artificial Neural Networks (ANN) learning.
The proposed strategy consists of two components: a self-adaptive
component, which contains phenotype information, and a dynamic
component, which is described by the genotype. Self-adaptation is achieved by the addition of
a value, called the network weight, which depends on a total number
of hidden layers and an average number of neurons in hidden layers.
The dynamic component changes its value depending on the fitness
of a chromosome, exposed to mutation. Thus, the mutation step size
is controlled by two components, encapsulated in the algorithm,
which adjust it according to the characteristics of a predefined ANN
architecture and the fitness of a particular chromosome. The
comparative analysis of the proposed approach and the classical EP
(Gaussian mutation) showed that a significant acceleration of
the evolution process is achieved by using both phenotype and
genotype information in the mutation strategy.
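One way to read the two-component step-size control is sketched below. This is our illustrative interpretation only: the scaling formulas for the network weight and the fitness-dependent dynamic factor are assumptions, not the authors' exact expressions.

```python
import random

def mutate(chromosome, fitness, hidden_layers, avg_neurons, base_sigma=0.1):
    """Illustrative self-adaptive EP mutation: the Gaussian step size is
    scaled by a 'network weight' derived from the ANN architecture
    (phenotype component, assumed inverse to network size) and damped
    as chromosome fitness improves (genotype component, assumed form)."""
    network_weight = 1.0 / (hidden_layers * avg_neurons)  # assumed form
    dynamic = 1.0 / (1.0 + fitness)                       # assumed form
    sigma = base_sigma * network_weight * dynamic
    return [g + random.gauss(0.0, sigma) for g in chromosome]

random.seed(0)
parent = [0.5, -0.2, 1.0]
child = mutate(parent, fitness=4.0, hidden_layers=2, avg_neurons=5)
print(child)
```

Under this reading, larger networks and fitter chromosomes both receive smaller perturbations, which is the behavior the abstract attributes to the combined phenotype/genotype control.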
Abstract: A Web-services-based grid infrastructure is evolving to be readily available in the near future. In this approach, Web services are encapsulated into the existing Grid services class; in practice there is not much difference between the existing Web and grid infrastructures, and Grid services have emerged as stateful Web services. In this paper, we present the key components of a Web-services-based grid and show how resource discovery is performed on such a grid, considering resource discovery to be a critical service to be provided by any type of grid.
Abstract: We developed a non-contact method for the in-situ
monitoring of the thermal forming of glass and Si foils to optimize
the manufacture of mirrors for high-resolution space x-ray
telescopes. Their construction requires precise and light-weight
segmented optics with angular resolution better than 5 arcsec. We
used 75x25 mm Desag D263 glass foils 0.75 mm thick and 0.6 mm
thick Si foils. The glass foils were shaped by free slumping on a
frame at viscosities in the range of 10^9.3-10^12 dPa·s, and the Si foils by
forced slumping above 1000°C. Using a Nikon D80 digital camera,
we took snapshots of a foil's shape every 5 min during its isothermal
heat treatment. The results obtained can be used in computer
simulations; by comparing the measured and simulated data, we can
more precisely define the material properties of the foils and optimize
the forming technology.
Abstract: This paper presents a heuristic to solve the large-size 0-1 multi-constrained knapsack problem (01MKP), which is NP-hard. Many researchers have used heuristic operators to identify the redundant constraints of a linear programming problem before applying the regular solution procedure. We use the intercept matrix to identify the zero-valued variables of the 01MKP, known as redundant variables. In this heuristic, the dominance property of the intercept matrix of constraints is first exploited to reduce the search space for finding optimal or near-optimal solutions of the 01MKP; second, the solution is improved using the pseudo-utility ratio based on a surrogate constraint of the 01MKP. The heuristic is tested on benchmark problems of sizes up to 2500 taken from the literature, and the results are compared with the optimum solutions. The space and computational complexity of solving the 01MKP with this approach are also presented. The encouraging results, especially for relatively large test problems, indicate that this heuristic can successfully be used to find good solutions for highly constrained NP-hard problems.
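The pseudo-utility step can be illustrated with a minimal greedy sketch: rank items by profit over a surrogate-weighted sum of resource use, then pack feasibly in that order. This is only the spirit of that step (with unit surrogate multipliers and toy data assumed), not the paper's full intercept-matrix heuristic.

```python
def greedy_mkp(profits, weights, capacities):
    """Greedy sketch of the pseudo-utility step for the 0-1
    multi-constrained knapsack. weights[j][i] is the consumption of
    resource j by item i; unit surrogate multipliers are assumed, so
    each item's pseudo-utility is profit / total resource use."""
    m, n = len(capacities), len(profits)
    ratio = [profits[i] / sum(weights[j][i] for j in range(m))
             for i in range(n)]
    order = sorted(range(n), key=lambda i: ratio[i], reverse=True)
    used = [0.0] * m
    chosen = [0] * n
    for i in order:  # pack items in decreasing pseudo-utility, if feasible
        if all(used[j] + weights[j][i] <= capacities[j] for j in range(m)):
            chosen[i] = 1
            used = [used[j] + weights[j][i] for j in range(m)]
    return chosen, sum(p * c for p, c in zip(profits, chosen))

profits = [60, 100, 120, 40]
weights = [[10, 20, 30, 5],    # resource 1 consumption per item
           [5, 25, 20, 10]]    # resource 2 consumption per item
chosen, value = greedy_mkp(profits, weights, [50, 40])
print(chosen, value)
```

The paper's heuristic additionally prunes redundant (zero-valued) variables via the intercept matrix before this ranking stage, shrinking the search space.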
Abstract: The study of tourist activities and the mapping of their routes in space and time has become an important issue in tourism management. Here we represent space-time paths for the tourism industry by visualizing individual tourist activities and the paths followed using a 3D Geographic Information System (GIS). Considerable attention has been devoted to measuring accessibility to shopping, eating, walking and other services at the tourist destination. It turns out that GIS is a useful tool for studying the spatial behavior of tourists in an area. GIS is especially advantageous for space-time potential path area measures, in particular for the accurate visualization of possible paths through existing city road networks. This study applies space-time concepts, with a detailed street network map obtained from Google Maps, to measure tourist paths both spatially and temporally. These paths are further determined based on data obtained from map questionnaires regarding the trip activities of 40 individuals. The analysis of the data makes it possible to determine the locations of the more popular paths. The results can be visualized using 3D GIS to show the areas and potential activity opportunities accessible to tourists during their travel time.
Abstract: Dengue disease is an infectious vector-borne viral
disease that is commonly found in tropical and sub-tropical regions,
especially in urban and semi-urban areas, around the world and
including Malaysia. There is no currently available vaccine or
chemotherapy for the prevention or treatment of dengue disease.
Therefore prevention and treatment of the disease depend on vector
surveillance and control measures. Disease risk mapping has been
recognized as an important tool in the prevention and control
strategies for diseases. The choice of statistical model used for
relative risk estimation is important as a good model will
subsequently produce a good disease risk map. Therefore, the aim of
this study is to estimate the relative risk for dengue disease based
initially on the most common statistic used in disease mapping called
Standardized Morbidity Ratio (SMR) and one of the earliest
applications of Bayesian methodology called Poisson-gamma model.
This paper begins by providing a review of the SMR method, which
we then apply to dengue data of Perak, Malaysia. We then fit an
extension of the SMR method, which is the Poisson-gamma model.
Both sets of results are displayed and compared using graphs, tables and
maps. The analysis shows that the latter method gives
better relative risk estimates than the SMR. The
Poisson-gamma model is demonstrated to overcome the
problem the SMR faces when there are no observed dengue cases in certain
regions. However, covariate adjustment in this model is difficult and
there is no possibility of allowing for spatial correlation between risks in
adjacent areas. These drawbacks have motivated many
researchers to propose alternative methods for estimating the
risk.
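The two estimators compared above can be sketched in a few lines. The case counts, expected counts and Gamma hyperparameters below are illustrative, not taken from the Perak data:

```python
def smr(observed, expected):
    """Standardized Morbidity Ratio per region: SMR_i = O_i / E_i.
    Degenerate when O_i = 0: the estimate collapses to exactly 0."""
    return [o / e for o, e in zip(observed, expected)]

def poisson_gamma_rr(observed, expected, a=1.0, b=1.0):
    """Poisson-gamma relative risk: with O_i ~ Poisson(E_i * r_i) and a
    Gamma(a, b) prior on r_i, the posterior mean is (O_i + a)/(E_i + b),
    shrinking unstable estimates toward the prior mean a/b
    (hyperparameters a, b chosen here purely for illustration)."""
    return [(o + a) / (e + b) for o, e in zip(observed, expected)]

obs = [0, 4, 12]         # observed dengue cases per region (hypothetical)
exp_ = [2.0, 5.0, 8.0]   # expected cases from reference rates (hypothetical)
print(smr(obs, exp_))              # the zero-count region gets SMR = 0
print(poisson_gamma_rr(obs, exp_)) # the same region keeps a positive risk
```

This shows the behavior the abstract highlights: for a region with no observed cases the SMR is exactly zero, while the Poisson-gamma posterior mean remains positive and is shrunk toward the prior.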
Abstract: This article presents the simulation, parameterization and optimization of an electromagnet with a C-shaped configuration, intended for the study of the magnetic properties of materials. The electromagnet studied consists of a C-shaped yoke, which provides self-shielding that minimizes losses of magnetic flux density, two poles of high magnetic permeability, and power coils wound on the poles. The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with a 4 cm² square cross section and a length of 5 cm. We sought a set of parameters that achieve a uniform magnetic flux density of 1×10^4 Gauss or above in the column, with the system operating at room temperature and a current consumption not exceeding 5 A. By means of a magnetostatic analysis using the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial electromagnet configuration, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The effect of the magnetic permeability of the soft magnetic materials used in the pole system, such as low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mu-metal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a strong dependence on the factors described above. The magnetic field achieved in the column was uniform and its magnitude ranged between 1.5×10^4 Gauss and 1.9×10^4 Gauss depending on the pole material used, with the possibility of increasing the magnetic field by choosing a suitable cap geometry, introducing a cooling system for the coils, and adjusting the spacing between the poles.
This makes the device a versatile and scalable tool to generate the magnetic field necessary to perform magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect, Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.