Abstract: The mineral with the chemical formula MgAl2O4 is called "spinel". Ferrites that crystallize in the spinel structure are known as spinel ferrites or ferrospinels. The spinel structure has an fcc cage of oxygen ions, with the metallic cations distributed among the tetrahedral (A) and octahedral (B) interstitial voids (sites). The X-ray diffraction (XRD) intensity of each Bragg plane is sensitive to the distribution of cations over the interstitial voids of the spinel lattice, which leads to a method for determining the distribution of cations in spinel oxides through XRD intensity analysis. A computer program for XRD intensity analysis has been developed in the C language and tested against a real experimental situation by synthesizing the spinel ferrite materials Mg0.6Zn0.4AlxFe2-xO4 and characterizing them by X-ray diffractometry. The compositions Mg0.6Zn0.4AlxFe2-xO4 (x = 0.0 to 0.6) were prepared by the ceramic method and their powder X-ray diffraction patterns were recorded. The authenticity of the program was thus checked by comparing the theoretically calculated data from the computer simulation with the experimental ones. Further, the deduced cation distributions were used to fit the magnetization data using the localized canting of spins approach, to explain the "recovery" of the collinear spin structure upon Al3+ substitution in Mg-Zn ferrites, which otherwise exhibit A-site magnetic dilution and a non-collinear spin structure. Since the distribution of cations in spinel ferrites plays a very important role in their electrical and magnetic properties, it is essential to determine the cation distribution in the spinel lattice.
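The core of such an intensity analysis can be sketched as follows. This is a toy Python illustration, not the authors' C program: the site list is a reduced, illustrative subset of the full Fd-3m spinel cell, and the scattering factors are hypothetical constants (real factors vary with sin θ/λ, and real intensities include multiplicity, Lorentz-polarization and temperature corrections).

```python
import numpy as np

# Hypothetical average scattering factors (illustrative constants only).
f_ion = {"Mg": 10.0, "Zn": 28.0, "Al": 11.0, "Fe": 24.0, "O": 8.0}

# Minimal spinel site list (fractional coordinates); the full Fd-3m cell
# has 8 A, 16 B and 32 O positions -- only a small subset is shown here.
A_SITES = [(0.0, 0.0, 0.0), (0.25, 0.25, 0.25)]
B_SITES = [(0.625, 0.625, 0.625), (0.625, 0.875, 0.875)]
O_SITES = [(0.375, 0.375, 0.375), (0.375, 0.875, 0.875)]

def structure_factor(hkl, occ_A, occ_B):
    """F_hkl = sum_j f_j * exp(2*pi*i*(h x + k y + l z)), with site-averaged
    scattering factors given by the fractional cation occupancies."""
    h, k, l = hkl
    fA = sum(f_ion[el] * c for el, c in occ_A.items())
    fB = sum(f_ion[el] * c for el, c in occ_B.items())
    F = 0j
    for f, sites in ((fA, A_SITES), (fB, B_SITES), (f_ion["O"], O_SITES)):
        for x, y, z in sites:
            F += f * np.exp(2j * np.pi * (h * x + k * y + l * z))
    return F

# Example cation distribution for x = 0.2 (an assumption): Zn on A, Al/Fe on B.
occ_A = {"Mg": 0.6, "Zn": 0.4}
occ_B = {"Al": 0.1, "Fe": 0.9}   # per-site fractions on the B sublattice
I_220 = abs(structure_factor((2, 2, 0), occ_A, occ_B)) ** 2
I_400 = abs(structure_factor((4, 0, 0), occ_A, occ_B)) ** 2
ratio = I_220 / I_400
```

In a fit, the occupancies would be varied until the calculated intensity ratios of cation-sensitive planes match the observed ones.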
Abstract: How to coordinate the behaviors of agents through learning is a challenging problem within multi-agent domains. Because of its complexity, recent work has focused on how coordinated strategies can be learned. Here we are interested in using reinforcement learning techniques to learn the coordinated actions of a group of agents without requiring explicit communication among them. However, traditional reinforcement learning methods are based on the assumption that the environment can be modeled as a Markov Decision Process, which usually does not hold when multiple agents coexist in the same environment. Moreover, to effectively coordinate each agent's behavior so as to achieve the goal, it is necessary to augment the state of each agent with information about the other agents. However, as the number of agents in a multi-agent environment increases, the state space of each agent grows exponentially, causing a combinatorial explosion. Profit sharing is a reinforcement learning method that allows agents to learn effective behaviors from their experiences even in non-Markovian environments. In this paper, to remedy the drawback of the original profit sharing approach, which needs much memory to store each state-action pair during the learning process, we first present an on-line rational profit sharing algorithm. We then integrate the advantages of a modular learning architecture with the on-line rational profit sharing algorithm and propose a new modular reinforcement learning model. The effectiveness of the technique is demonstrated on the pursuit problem.
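The episodic credit assignment that profit sharing performs can be sketched as follows. This is a minimal illustration with a hypothetical environment interface, not the paper's on-line rational variant or modular architecture; the geometrically decreasing credit (ratio below 0.5) is one common way to satisfy the rationality condition.

```python
import random
from collections import defaultdict

def profit_sharing_episode(env_step, start, n_actions, q, max_steps=100,
                           gamma=0.3, eps=0.1, rng=random):
    """Run one episode; on success, reinforce the visited (state, action)
    pairs with a geometrically decreasing credit (profit sharing)."""
    episode, state = [], start
    for _ in range(max_steps):
        if rng.random() < eps:                     # exploration
            a = rng.randrange(n_actions)
        else:                                      # greedy w.r.t. accumulated credit
            a = max(range(n_actions), key=lambda a: q[(state, a)])
        episode.append((state, a))
        state, reward, done = env_step(state, a)
        if done:
            credit = reward
            for s, a in reversed(episode):         # later pairs get larger credit
                q[(s, a)] += credit
                credit *= gamma
            return True
    return False                                   # no reward: nothing reinforced
```

Because reinforcement happens only at episode end and needs no state-transition model, the rule also applies in non-Markovian settings; the on-line rational variant in the paper additionally avoids storing the whole episode.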
Abstract: A microchannel with two inlets and two outlets was tested as a potential reactor for carrying out a two-phase catalytic phase transfer reaction with phase separation at the exit of the microchannel. The catalytic phase transfer reaction between benzyl chloride and sodium sulfide was chosen as a model reaction. The effect of operational time on the conversion was studied. By utilizing multiphase parallel flow inside the microchannel reactor with the aid of a guideline structure, the catalytic phase transfer reaction followed by phase separation could be ensured. The organic phase could be separated completely from one exit, and part of the aqueous phase was separated in pure form and could be reused with only a slight effect on the catalytic phase transfer reaction.
Abstract: Long Term Evolution (LTE) is a 4G wireless broadband technology developed by the Third Generation Partnership Project (3GPP) in release 8, and it ensures the competitiveness of the Universal Mobile Telecommunications System (UMTS) for the next 10 years and beyond. The concepts for LTE systems were introduced in 3GPP release 8, with the objective of a high-data-rate, low-latency and packet-optimized radio access technology. In this paper, the performance of different TCP variants over an LTE network is investigated. The performance of TCP over LTE is affected mostly by the links of the wired network and the total bandwidth available at the serving base station. This paper describes an NS-2 based simulation analysis of TCP-Vegas, TCP-Tahoe, TCP-Reno, TCP-NewReno, TCP-SACK, and TCP-FACK, with full modeling of all traffic of the LTE system. The evaluation of network performance with all TCP variants is mainly based on throughput, average delay and packet loss. The analysis of TCP performance over LTE shows that all TCP variants achieve similar throughput, with TCP-Vegas performing best among the variants.
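The three evaluation metrics can be computed from any per-packet trace; the record format below is a simplification for illustration, not the actual NS-2 trace syntax.

```python
def tcp_metrics(records, duration):
    """records: (send_time, recv_time_or_None, size_bytes) per packet;
    returns the three metrics used above: throughput (bit/s), average
    end-to-end delay (s) and number of lost packets."""
    delivered = [(s, r, b) for s, r, b in records if r is not None]
    lost = len(records) - len(delivered)
    throughput = sum(b for _, _, b in delivered) * 8 / duration
    avg_delay = (sum(r - s for s, r, _ in delivered) / len(delivered)
                 if delivered else float("nan"))
    return throughput, avg_delay, lost
```

In practice each NS-2 trace line would be parsed into one such record before aggregation.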
Abstract: Obtaining labeled data in supervised learning is often difficult and expensive, so a learning algorithm trained on a small amount of data tends to overfit. As a result, some researchers have focused on using unlabeled data, which need not follow the same generative distribution as the labeled data, to construct high-level features for improving performance on supervised learning tasks. In this paper, we investigate the impact of the relationship between unlabeled and labeled data on classification performance. Specifically, we apply different unlabeled datasets, with varying degrees of relation to the labeled data, to a handwritten digit classification task based on the MNIST dataset. Our experimental results show that the higher the degree of relation between unlabeled and labeled data, the better the classification performance. Although unlabeled data drawn from a generative distribution completely different from that of the labeled data yields the lowest classification performance, we still achieve high classification performance. This expands the applicability of supervised learning algorithms aided by unsupervised learning.
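One common way to build a high-level feature from unlabeled data is the self-taught pattern sketched below in plain NumPy: cluster the unlabeled set, then represent labeled examples by their distances to the learned centroids. The abstract does not specify its feature-construction method, so this is only an illustrative stand-in.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on the (possibly unlabeled) data; returns centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(0)
    return C

def encode(X, C):
    """High-level feature: squared distances to the unlabeled-data centroids."""
    return ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)

def predict_1nn(Xtr, ytr, Xte):
    """1-nearest-neighbour classification in the encoded feature space."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return ytr[d.argmin(1)]
```

When the unlabeled data share the labeled data's distribution, the centroids align with the classes and the encoded features separate them well; unrelated unlabeled data yield less informative features, mirroring the trend reported above.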
Abstract: Banishing hunger from the face of the earth has been a frequently expressed goal in various international, national and regional level conferences since 1974. Providing food security has become an important issue across the world, particularly in developing countries. In a developing country like India, where the growth rate of the population exceeds that of food grain production, food security is a question of great concern. According to the International Food Policy Research Institute's Global Hunger Index, 2011, India ranks 67th among the 81 countries of the world with the worst food security status. After the Green Revolution, India became a food surplus country. Its production increased from 74.23 million tonnes in 1966-67 to 257.44 million tonnes in 2011-12. But after achieving self-sufficiency in food during the last three decades, the country is now facing new challenges due to increasing population, climate change, and stagnation in farm productivity. Therefore, the main objective of the present paper is to examine the food security situation at the national level and further to explain the paradox of food insecurity in a food surplus state of India, i.e., Punjab, at the micro level. In order to achieve these objectives, secondary data collected from the Ministry of Agriculture and the Agriculture Department of Punjab State were analyzed. The results of the study showed that despite having surplus food production, the country is still facing a food insecurity problem at the micro level. Within the Kandi belt of Punjab State, the area adjacent to the plains is food secure while the area along the hills falls in the food insecure zone.
The present paper is divided into the following three sections: (i) Introduction, (ii) Analysis of the food security situation at the national level as well as the micro level (Kandi belt of Punjab State), and (iii) Concluding Observations.
Abstract: Gene expression profiling is rapidly evolving into a powerful technique for investigating tumor malignancies. Researchers are overwhelmed with the microarray-based platforms and methods that give them the freedom to conduct large-scale gene expression profiling measurements. Simultaneously, investigations into cross-platform integration methods have started gaining momentum due to their underlying potential to help comprehend a myriad of broad biological issues in tumor diagnosis, prognosis, and therapy. However, comparing results from different platforms remains a challenging task, as various inherent technical differences exist between microarray platforms. In this paper, we explain a simple ratio-transformation method that can provide some common ground between the cDNA and Affymetrix platforms for cross-platform integration. The method is based on the characteristic data attributes of the Affymetrix and cDNA platforms. In this work, we considered seven childhood leukemia patients and their gene expression levels on each platform. With a dataset of 822 differentially expressed genes from both platforms, we applied a specific ratio treatment to the Affymetrix data, which subsequently showed an improvement in the relationship with the cDNA data.
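The essence of such a ratio transformation can be sketched as follows. This is a simplified NumPy illustration, not the paper's exact treatment: the per-gene median reference is an assumption, and the simulated data merely mimic the two platforms' characteristic attributes (absolute intensities for Affymetrix versus log-ratios for cDNA).

```python
import numpy as np

def ratio_transform(affy, reference=None):
    """Turn absolute Affymetrix intensities into per-gene log-ratios,
    mimicking the two-channel cDNA measurement."""
    if reference is None:
        # Hypothetical choice: pool a reference as the per-gene median
        # intensity across patients.
        reference = np.median(affy, axis=1, keepdims=True)
    return np.log2(affy / reference)
```

After the transformation, both platforms express each gene on a comparable log-ratio scale, which is what improves their observed relationship.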
Abstract: In this article, a modification of the fuzzy ART network algorithm, aimed at making it supervised, is carried out. It consists of searching for the comparison, training and vigilance parameters that give the minimum quadratic distances between the outputs of the training base and those obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP giving the most powerful network. The modification consists in having the fuzzy ARTMAP learn a base of examples not just once, as is customary, but as many times as its architecture keeps evolving or the objective error has not been reached. In this way, we do not need to worry about the values to impose on the eight (08) parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we carried out a classification of an image of the Bay of Algiers taken by SPOT XS. We use as evaluation criteria the training duration, the mean square error (MSE) at the control step, and the rate of correct classification per class. The results of this study, presented as curves, tables and images, show that the modified fuzzy ARTMAP presents the best quality/computing-time compromise.
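For reference, the unmodified fuzzy ART building blocks that the parameter search above tunes (choice function with parameter α, vigilance test with parameter ρ, learning rate β) can be sketched as:

```python
import numpy as np

def complement_code(x):
    """Fuzzy ART input coding: I = (x, 1 - x)."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, 1.0 - x])

def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
    """One presentation: rank categories by the choice function, apply the
    vigilance test, update on resonance. Returns (category, weights)."""
    order = sorted(range(len(weights)),
                   key=lambda j: -np.minimum(I, weights[j]).sum()
                                 / (alpha + weights[j].sum()))
    for j in order:
        m = np.minimum(I, weights[j])          # fuzzy AND of input and prototype
        if m.sum() / I.sum() >= rho:           # vigilance (match) test
            weights[j] = beta * m + (1 - beta) * weights[j]
            return j, weights
    weights.append(I.copy())                   # no resonance: commit new category
    return len(weights) - 1, weights
```

The modification described in the abstract amounts to searching over ρ, α and β (and the ARTMAP counterparts) while re-presenting the training base until the architecture stabilizes or the target error is met.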
Abstract: Efficient preprocessing is essential for the automatic recognition of handwritten documents. In this paper, techniques for segmenting words in handwritten Arabic text are presented. First, connected components (CCs) are extracted, and the distances among different components are analyzed. The statistical distribution of these distances is then obtained to determine an optimal threshold for word segmentation. Meanwhile, an improved projection-based method is also employed for baseline detection. The proposed method has been successfully tested on the IFN/ENIT database, consisting of 26,459 Arabic words handwritten by 411 different writers, and the results were promising and very encouraging for more accurate detection of the baseline and segmentation of words for further recognition.
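The thresholding step can be illustrated with a small sketch. Otsu's between-class-variance criterion stands in here for whatever statistical analysis of the distance distribution the paper actually uses, and components are simplified to horizontal intervals.

```python
def otsu_threshold(gaps):
    """Pick the gap threshold that best separates intra-word from inter-word
    distances by maximizing the between-class variance (Otsu's criterion)."""
    xs = sorted(gaps)
    best_t, best_var = xs[0], -1.0
    for i in range(1, len(xs)):
        lo, hi = xs[:i], xs[i:]
        w0, w1 = len(lo) / len(xs), len(hi) / len(xs)
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, (xs[i - 1] + xs[i]) / 2
    return best_t

def segment_words(cc_boxes, threshold):
    """Group connected-component extents (x0, x1) into words whenever the
    horizontal gap to the previous component is below the threshold."""
    boxes = sorted(cc_boxes)
    words, current = [], [boxes[0]]
    for prev, box in zip(boxes, boxes[1:]):
        if box[0] - prev[1] <= threshold:
            current.append(box)
        else:
            words.append(current)
            current = [box]
    words.append(current)
    return words
```

Sorting by x works for right-to-left Arabic as well, since only gap sizes matter for the grouping.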
Abstract: This paper concerns the experimental and numerical investigation of the energy absorption and axial tearing behaviour of aluminium 6060 circular thin-walled tubes under static axial compression. The tubes were received in the T66 heat treatment condition with a fixed outer diameter of 42 mm, a thickness of 1.5 mm and a length of 120 mm. The primary variables are the conical die angles (15°, 20° and 25°). Numerical simulations are carried out with the ANSYS/LS-DYNA software tool to investigate the effect of friction between the tube and the die.
Abstract: Perth will run out of available sustainable natural
water resources by 2015 if nothing is done to slow usage rates,
according to a Western Australian study [1]. Alternative water
technology options need to be considered for the long-term
guaranteed supply of water for agricultural, commercial, domestic
and industrial purposes. Seawater is an alternative source of water for
human consumption, because seawater can be desalinated and
supplied in large quantities at very high quality.
While seawater desalination is a promising option, the technology
requires a large amount of energy which is typically generated from
fossil fuels. The combustion of fossil fuels emits greenhouse gases
(GHG) and is implicated in climate change. In addition to
environmental emissions from electricity generation for desalination,
greenhouse gases are emitted in the production of chemicals and
membranes for water treatment. Since Australia is a signatory to the
Kyoto Protocol, it is important to quantify greenhouse gas emissions
from desalinated water production.
A life cycle assessment (LCA) has been carried out to determine
the greenhouse gas emissions from the production of 1 gigalitre (GL)
of water from the new plant. In this LCA analysis, a new desalination
plant that will be installed in Bunbury, Western Australia, and known
as the Southern Seawater Desalination Plant (SSDP), was taken as a
case study. The system boundary of the LCA mainly consists of three
stages: seawater extraction, treatment and delivery. The analysis
found that the equivalent of 3,890 tonnes of CO2 could be emitted
from the production of 1 GL of desalinated water. This LCA analysis
has also identified that the reverse osmosis process would cause the
most significant greenhouse emissions as a result of the electricity
used, if this is generated from fossil fuels.
Abstract: The aim of this paper is to investigate the influence of market share and diversification on the performance of non-life insurers. The underlying relationships have been investigated in different industries and different disciplines (economics, management...), yet no consistency exists in either the magnitude or the statistical significance of the relationship between market share (and diversification as well) on one side and companies' performance on the other. Moreover, the direction of the relationship is also somewhat questionable: while some authors find this relationship to be positive, others reveal a negative association. In order to test the influence of market share and diversification on companies' performance in the Croatian non-life insurance industry for the period from 1999 to 2009, we designed an empirical model in which we included the following independent variables: firms' profitability from previous years, market share, diversification and control variables (i.e. ownership, industrial concentration, GDP per capita, inflation). Using the two-step generalized method of moments (GMM) estimator, we found evidence of a positive and statistically significant influence of both market share and diversification on insurers' profitability.
Abstract: The resistive-inductive-capacitive behavior of long interconnects driven by CMOS gates is presented in this paper. The analysis is based on the π-model of an RLC load and is developed for submicron devices. Accurate analytical expressions for the output load voltage, the propagation delay and the short-circuit power dissipation have been proposed after solving a system of differential equations that accurately describes the behavior of the circuit. The effects of the coupling capacitance between input and output and of the short-circuit current on these performance parameters are also incorporated in the proposed model. The proposed delay and short-circuit power dissipation estimates are in very good agreement with SPICE simulations, with an average relative error of less than 6%.
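The kind of step response the analytical expressions capture can be reproduced numerically. The sketch below integrates a single series R-L section into the load capacitance: a crude stand-in for the full π-model with coupling and short-circuit effects, with hypothetical R, L, C values.

```python
import numpy as np

def rlc_step_response(R, L, C, Vdd=1.8, dt=1e-13, t_end=2e-9):
    """Forward-Euler integration of a supply step Vdd through a series R-L
    into the load capacitance C; returns the load voltage over time."""
    n = int(t_end / dt)
    i, v = 0.0, 0.0
    vs = np.empty(n)
    for k in range(n):
        di = (Vdd - R * i - v) / L     # inductor current equation
        dv = i / C                     # capacitor charging equation
        i += di * dt
        v += dv * dt
        vs[k] = v
    return vs

def delay_50(vs, Vdd=1.8, dt=1e-13):
    """Propagation delay: first crossing of 50% of the supply voltage."""
    idx = np.argmax(vs >= 0.5 * Vdd)
    return idx * dt
```

With underdamped parameters the waveform exhibits the ringing that distinguishes RLC from pure RC behavior, which is why RC-only delay models underestimate long-interconnect effects.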
Abstract: Term extraction, a key data-preparation step in text mining, extracts terms, i.e. relevant collocations of words, attached to specific concepts (e.g. genetic-algorithms and decision-trees are terms associated with the concept "Machine Learning"). In this paper, the task of extracting interesting collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as interesting/not interesting. From these examples, the ROGER algorithm learns a numerical function inducing a ranking on the collocations. This ranking is optimized using genetic algorithms, maximizing the trade-off between the false positive and true positive rates (the area under the ROC curve). This approach uses a particular representation for the word collocations, namely the vector of values of the standard statistical interestingness measures for that collocation. As this representation is general (across corpora and natural languages), generality tests were performed by applying the ranking function learned from an English corpus in biology to a French corpus of curricula vitae, and vice versa, showing good robustness of the approach compared to the state-of-the-art Support Vector Machine (SVM).
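The two ingredients, the AUC fitness and its evolutionary optimization, can be sketched as follows. A (1+1)-style random search over a linear scoring function stands in for ROGER's actual genetic operators, which are not reproduced here; each x would be the vector of interestingness measures for one collocation.

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the rank statistic: the probability
    that a positive example is scored above a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def evolve_ranker(X, y, gens=200, sigma=0.3, seed=0):
    """(1+1)-style evolution of a linear scoring function w . x that
    maximizes AUC -- a simplified stand-in for ROGER's genetic search."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    def fitness(w):
        return auc([sum(wi * xi for wi, xi in zip(w, x)) for x in X], y)
    best = fitness(w)
    for _ in range(gens):
        cand = [wi + rng.gauss(0, sigma) for wi in w]   # mutate
        f = fitness(cand)
        if f >= best:                                   # keep if no worse
            w, best = cand, f
    return w, best
```

Since AUC depends only on the ordering of scores, it is a natural fitness for learning a ranking rather than a classification boundary.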
Abstract: Prediction of the viscosity of natural gas is important in energy industries such as natural gas storage and transportation. In this study, the viscosity of different compositions of natural gas is modeled by an artificial neural network (ANN) based on the back-propagation method. A reliable database including more than 3841 experimental viscosity data points is used for testing and training the ANN. The designed neural network can predict the natural gas viscosity from the pseudo-reduced pressure and pseudo-reduced temperature with an AARD of 0.221%. The accuracy of the designed ANN has been compared to other published empirical models, and the comparison indicates that the proposed method provides accurate results.
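The back-propagation core of such an ANN can be sketched in a few lines. This is a toy NumPy version with hypothetical sizes, not the paper's actual topology or training schedule; the two inputs stand for pseudo-reduced pressure and temperature.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.05, epochs=5000, seed=0):
    """Minimal one-hidden-layer MLP (tanh hidden units, linear output)
    trained with plain full-batch back-propagation on mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)               # forward pass
        out = h @ W2 + b2
        err = out - y                          # dMSE/dout (up to a factor)
        gW2 = h.T @ err / len(X)               # backward pass
        gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)       # tanh derivative
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1         # gradient descent update
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2
```

For the real task, the 3841 points would be split into training and test sets and the inputs normalized before fitting.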
Abstract: Network management techniques have long been of
interest to the networking research community. The queue size plays a critical role in network performance. An adequate queue size maintains Quality of Service (QoS) requirements within the limited network capacity for as many users as possible. Appropriate estimation of the queuing model parameters is crucial both for the initial size estimation and during the process of resource allocation. An accurate resource allocation model for the management system increases network utilization. The present paper demonstrates the results of an empirical study of memory allocation for packet-based services.
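As a concrete instance of queue-size estimation, consider the classic M/M/1/K model below. The paper's empirical model is not specified here; this only illustrates how a loss-based QoS target fixes the "adequate" queue size from the estimated arrival and service rates.

```python
def mm1k_blocking(lam, mu, K):
    """Blocking (packet-loss) probability of an M/M/1/K queue with arrival
    rate lam, service rate mu and system capacity K."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1 - rho) * rho ** K / (1 - rho ** (K + 1))

def min_queue_size(lam, mu, loss_target):
    """Smallest capacity K whose blocking probability meets the QoS loss
    target -- the 'adequate queue size' for the estimated parameters."""
    K = 1
    while mm1k_blocking(lam, mu, K) > loss_target:
        K += 1
    return K
```

Sizing the buffer this way ties memory allocation directly to a measurable QoS requirement instead of a fixed heuristic.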
Abstract: Light is one of the most important qualitative and symbolic factors and has a special position in architecture and urban development with regard to its practical function. The main function of light, whether natural or artificial, is to light up the environment and the constructed forms, which is called lighting. However, light is also used to redefine urban spaces through architectural ingenuity, with regard to three factors: the aesthetic, the conceptual and the symbolic. In architecture and urban development, light has a function beyond lighting up the environment, and designers consider it one of the basic design components. The present research aims at studying the function of light and color in the architectural view and their effects on buildings.
Abstract: Cross sections for the production of As radionuclides in the interaction of natGe with 14-30 MeV protons have been deduced by off-line γ-ray spectroscopy, in order to find optimal reaction channels leading to radiotracers for positron emission tomography. The experimental results were compared with previous results and with those estimated by the compound nucleus reaction model.
Abstract: The computer, among the most important inventions of the twentieth century, has become an increasingly important component of our everyday lives. Computer games, too, have become increasingly popular day by day, owing to their realistic virtual environments, their audio and visual features, and the roles they offer players. The present study investigates the metaphors students hold for computer games, in an effort to fill a gap in the literature. To determine middle school students' metaphorical images of the concept 'computer game', students were asked to complete the sentence 'Computer game is like/similar to… because…'. The metaphors created by the students were grouped into six categories according to the source of the metaphor. Ordered by the number of metaphors they included, these categories were 'computer game as a means of entertainment', 'computer game as a beneficial means', 'computer game as a basic need', 'computer game as a source of evil', 'computer game as a means of withdrawal', and 'computer game as a source of addiction'.
Abstract: Traditional software product and process metrics are neither suitable nor sufficient for measuring the complexity of software components, which is ultimately necessary for quality and productivity improvement within organizations adopting CBSE. Researchers have proposed a wide range of complexity metrics for software systems. However, these metrics are not sufficient for components and component-based systems, being restricted to module-oriented and object-oriented systems. This study proposes to measure the complexity of JavaBean software components as a reflection of their quality, so that a component can be adopted accordingly or made more reusable. The proposed metric involves only the design issues of the component and does not consider packaging and deployment complexity. In this way, the complexity of software components can be kept within certain limits, which in turn helps enhance quality and productivity.