Abstract: Most of the losses in a power system occur in the distribution sector, which has therefore always received particular attention. Among the important factors contributing to increased losses in the distribution system is the presence of reactive power flows. The most common way to compensate reactive power in the system is to use parallel (shunt) capacitors. In addition to reducing losses, the advantages of capacitor placement include releasing network capacity at peak load and improving the voltage profile. The point that must be considered in capacitor placement is the optimal location and sizing of the capacitors in order to maximize these advantages.
In this paper, a new technique is offered for the placement and sizing of fixed capacitors in a radial distribution network on the basis of the Genetic Algorithm (GA). Existing optimization methods for capacitor placement mostly address loss reduction and the voltage profile simultaneously, while the capacitor cost and load changes have not been considered as influential terms in the objective function. In this article, a holistic approach is taken to find the optimal solution to this problem, one that includes all the parameters of the distribution network: cost, bus voltage and load changes. Such a formulation requires a vast search over all possible solutions, so in this article we use the Genetic Algorithm (GA) as a powerful method for this optimization.
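As a hedged illustration of how a GA can search capacitor placements (a toy sketch, not the authors' formulation: the bus count, bank sizes and the loss-plus-cost fitness below are all assumed):

```python
import random

# Hypothetical setup: 5 candidate buses, 0-4 capacitor banks per bus,
# 150 kvar per bank. All numbers are illustrative assumptions.
N_BUSES, MAX_BANKS, KVAR_PER_BANK = 5, 4, 150
random.seed(0)

# Toy objective standing in for a load-flow study: a quadratic "loss" term
# minimized at an assumed per-bus optimum, plus a linear capacitor cost.
TARGET = [2, 3, 1, 0, 4]
COST_PER_BANK = 0.05

def fitness(ind):
    loss = sum((g - t) ** 2 for g, t in zip(ind, TARGET))
    cost = COST_PER_BANK * sum(ind)
    return -(loss + cost)          # GA maximizes fitness

def tournament(pop, k=3):
    # Pick the fittest of k randomly sampled individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover of the two parents' bank counts.
    cut = random.randrange(1, N_BUSES)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    # Replace each gene with a random bank count with small probability.
    return [random.randint(0, MAX_BANKS) if random.random() < rate else g
            for g in ind]

pop = [[random.randint(0, MAX_BANKS) for _ in range(N_BUSES)]
       for _ in range(30)]
for _ in range(60):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in pop]
best = max(pop, key=fitness)
print(best, [g * KVAR_PER_BANK for g in best])
```

In a real study the fitness would come from a radial load-flow solver; here it is deliberately minimal so the GA machinery stays visible.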
Abstract: This research paper is based upon the simulation of the gradient of mathematical functions and scalar fields using MATLAB. Scalar fields, their gradients, contours and meshes/surfaces are simulated using the related MATLAB tools and commands for convenient presentation and understanding. Different mathematical functions and scalar fields are examined here by taking their gradients, visualizing the results in 3D with different color shadings and using other relevant commands. In this way the visualized outputs of the required functions help us analyze and understand the gradient better than a purely theoretical study does.
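The same workflow can be sketched outside MATLAB; the NumPy fragment below samples a scalar field on a mesh grid and approximates its gradient numerically (np.gradient plays the role of MATLAB's gradient command; the field f(x, y) = x^2 + y^2 is our own example, not one from the paper):

```python
import numpy as np

# Sample the scalar field on a mesh grid (meshgrid is the NumPy
# counterpart of MATLAB's meshgrid).
x = np.linspace(-2.0, 2.0, 81)
y = np.linspace(-2.0, 2.0, 81)
X, Y = np.meshgrid(x, y, indexing="xy")

F = X**2 + Y**2                      # example field f(x, y) = x^2 + y^2

# np.gradient differentiates along each array axis: rows vary with y,
# columns with x, so the returned order is (dF/dy, dF/dx).
dFdy, dFdx = np.gradient(F, y, x)

# Analytically grad f = (2x, 2y); central differences are exact for
# quadratics at interior points, so the check is tight.
print(float(dFdx[40, 60]), 2 * x[60])
# Contours and surfaces would follow with matplotlib's contour/plot_surface,
# mirroring MATLAB's contour and surf commands.
```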
Abstract: The source voltage of a high-power fuel cell shows strong load dependence at comparatively low voltage levels. In order to provide a voltage of 750 V on the DC link for feeding electrical energy into the mains via a three-phase inverter, a step-up converter with a large step-up ratio is required. The output voltage of this DC/DC converter must be stable during variations of the load current and of the fuel-cell voltage. This paper presents the methods and results of calculating the efficiency and the realization expense of DC/DC converter circuits that meet these requirements.
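As a back-of-the-envelope illustration (not taken from the paper), the ideal boost-converter relation V_out = V_in / (1 - D) shows how the duty cycle D must track the sagging fuel-cell voltage to hold a 750 V DC link; the input-voltage range below is an assumption:

```python
# Ideal boost converter: V_out = V_in / (1 - D), so D = 1 - V_in / V_out.
# Real converters with large step-up ratios deviate from this (losses,
# coupled-inductor or multi-stage topologies), so treat this as a sketch.
V_OUT = 750.0

def duty_cycle(v_in, v_out=V_OUT):
    """Duty cycle an ideal boost needs to reach v_out from v_in."""
    if not 0 < v_in < v_out:
        raise ValueError("ideal boost requires 0 < v_in < v_out")
    return 1.0 - v_in / v_out

# Assumed fuel-cell operating range as the load varies.
for v_in in (150.0, 250.0, 350.0):
    print(f"V_in = {v_in:5.1f} V -> D = {duty_cycle(v_in):.3f}")
```

The rising duty cycle at low input voltage is exactly why a large, well-controlled step-up ratio is the design driver here.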
Abstract: One of the basic concepts in marketing is the concept of meeting customers' needs. Since customer satisfaction is essential for the lasting survival and development of a business, screening and observing customer satisfaction and recognizing its underlying factors must be one of the key activities of every business. The purpose of this study is to recognize the drivers that affect customer satisfaction in a business-to-business setting in order to improve marketing activities. We conducted a survey in which 93 business customers of a diesel generator manufacturer in Iran participated and expressed their views on, and satisfaction with, the supplier's services related to its products. We first developed the measures for the drivers of satisfaction by means of investigative research (feedback from executives and customers of the sponsoring firm). Then, based on these measures, we created a mail survey and asked the respondents to give their opinion about the sponsoring firm, a supplier of diesel generators and similar products. Furthermore, the survey asked the participants to state their functional areas and their company characteristics.
In conclusion, we found that there are three drivers of customer satisfaction: reliability, information about the product, and commercial features. Buyers/users from different functional areas attribute different degrees of importance to the last two drivers. For instance, people from the buying and management areas believe that commercial features are more important than information about products, whereas people in the engineering, maintenance and production areas believe that having information about products is more important than the commercial aspects. Marketing experts should consider the attitudes of customers regarding product information and commercial features to improve market share.
Abstract: Following the laser ablation studies leading to a theory of nuclei confinement by a Debye layer mechanism, we present here numerical evaluations for the known stable nuclei, where the Coulomb repulsion is included as a rather minor component, especially for larger nuclei. In this research paper the physical conditions required for the formation and stability of nuclei, particularly endothermic nuclei with larger mass numbers, whose formation is an open astrophysical question, have been investigated. Using the Debye layer mechanism, the nuclear surface energy, the Fermi energy and the Coulomb repulsion energy, it is possible to find conditions under which the process of nucleation was permitted in the early universe. Our numerical calculations indicate that about 200 seconds after the big bang, at a temperature of about 100 keV and in the subrelativistic region with a nucleon density nearly equal to the normal nuclear density, all endothermic and exothermic nuclei had been formed.
Abstract: Bone material is heterogeneous and hierarchical in nature; therefore an appropriate size of bone specimen is required to analyze its tensile properties at a particular hierarchical level. The tensile properties of cortical bone are important for investigating the effects of drug treatment, disease and aging, as well as for the development of computational and analytical models. In the present study the tensile properties of buffalo and goat femoral and tibial cortical bone are analyzed using sub-size tensile specimens. Femoral cortical bone was found to be stronger in tension than tibial cortical bone, and the tensile properties obtained using sub-size specimens show close resemblance to the tensile properties of full-size cortical specimens. A two-dimensional finite element (FE) model was also applied to simulate the tensile behavior of the sub-size specimens. Good agreement between the experiments and the FE model was obtained for sub-size tensile specimens of cortical bone.
Abstract: Manufacturing components from fiber-reinforced thermoplastics requires three steps: heating the matrix, forming and consolidating the composite, and finally cooling the matrix. For the heating process, a pre-determined temperature distribution through the layers and the thickness of the pre-consolidated sheets is recommended to enable the forming mechanism. Thus, a design of the heating process for forming composites with thermoplastic matrices is necessary. To obtain a constant temperature through the thickness and width of the sheet, the heating process was analyzed with the help of the finite element method. The simulation models were validated by experiments with resistance thermometers as well as with an infrared camera. Based on the finite element simulation, heating methods for infrared radiators were developed. When using the numerical simulation, many iteration loops are required to determine the process parameters. Hence, a model for calculating the relevant process parameters was initiated by applying regression functions.
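A minimal sketch of that last step, assuming made-up simulation data: fit a regression function from a process parameter to the simulated outcome, then invert it to pick the parameter for a target value, instead of re-running finite element iteration loops.

```python
import numpy as np

# Hypothetical data: radiator power (kW) vs. simulated sheet temperature
# (deg C). Both columns are invented for illustration.
power = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
temp = np.array([95.0, 140.0, 183.0, 224.0, 263.0])

# Quadratic regression function fitted to the simulation results.
a, b, c = np.polyfit(power, temp, 2)
predict = np.poly1d([a, b, c])

# Invert the regression for a target temperature, keeping only real
# roots inside the fitted parameter range.
target = 200.0
roots = np.roots([a, b, c - target])
feasible = [float(r.real) for r in roots
            if abs(r.imag) < 1e-9 and power.min() <= r.real <= power.max()]
print(feasible, predict(feasible[0]))
```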
Abstract: This paper presents experimental as well as simulated performance studies of transcritical CO2 heat pumps for simultaneous water cooling and heating; the effects of the water mass flow rates and water inlet temperatures of both the evaporator and the gas cooler on the cooling and heating capacities, the system COP and the water outlet temperatures are investigated. The study shows that both the water mass flow rate and the inlet temperature have a significant effect on system performance. Test results show that the effect of the evaporator water mass flow rate on system performance and water outlet temperatures is more pronounced (the COP increases by 0.6 per 1 kg/min) than that of the gas cooler water mass flow rate (the COP increases by 0.4 per 1 kg/min), and that the effect of the gas cooler water inlet temperature is more significant (the COP decreases by 0.48 over the given range) than that of the evaporator water inlet temperature (the COP increases by 0.43 over the given range). Comparisons of the experimental values with the simulated results show maximum deviations of 5% for the cooling capacity, 10% for the heating capacity and 16% for the system COP. This study offers useful guidelines for selecting the appropriate water mass flow rate to obtain the required system performance.
Abstract: The equivalence class subset algorithm is a powerful tool for solving a wide variety of constraint satisfaction problems and is based on the use of a decision function which has a very high but not perfect accuracy. Perfect accuracy is not required in the decision function, as even a suboptimal solution contains valuable information that can be used to help find an optimal solution. In the hardest problems, the decision function can break down, leading to a suboptimal solution with more equivalence classes than necessary, which can be viewed as a mixture of good decisions and bad decisions. By choosing a subset of the decisions made in reaching a suboptimal solution, an iterative technique can lead to an optimal solution through a series of steadily improved suboptimal solutions. The goal is to reach an optimal solution as quickly as possible. Various techniques for choosing the decision subset are evaluated.
Abstract: We introduce a novel approach to measuring how humans learn, based on techniques from information theory, and apply it to the oriental game of Go. We show that the total amount of information observable in human strategies, called the strategic information, remains constant across populations of players of differing skill levels for well-studied patterns of play. This is despite the very large amount of knowledge required to progress from the recreational players at one end of our spectrum to the very best and most experienced players in the world at the other, and it contrasts with the idea that having more knowledge might imply more 'certainty' about which move to play next. We show this is true from very local up to medium-sized board patterns, across a variety of different moves, using 80,000 game records. Consequences for theoretical and practical AI are outlined.
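One plausible way to make "information observable in human strategies" concrete (our own toy formulation, not necessarily the paper's estimator) is the Shannon entropy of the empirical next-move distribution for a given board pattern:

```python
import math
from collections import Counter

def move_entropy(moves):
    """Shannon entropy (bits) of the empirical next-move distribution."""
    counts = Counter(moves)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total)
                for n in counts.values())

# Toy game records: next moves observed in one local pattern, for two
# player populations (the coordinates and frequencies are invented).
novice_moves = ["D4", "Q16", "D4", "C3", "Q4", "D4", "K10", "C3"]
expert_moves = ["D4", "D4", "Q16", "D4", "C3", "D4", "Q16", "C3"]
print(move_entropy(novice_moves), move_entropy(expert_moves))
```

Comparing such entropies across skill bands, pooled over many patterns and large game collections, is the kind of measurement the constancy claim concerns.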
Abstract: Reliability Centered Maintenance (RCM) is one of the most widely used methods in modern power systems for scheduling maintenance cycles and determining inspection priorities. In order to apply the RCM method to the Smart Grid, a precedence study of the rearranged system structure should be performed, owing to the introduction of additional installations such as renewable and sustainable energy resources, energy storage devices and advanced metering infrastructure. This paper proposes a new method to evaluate the maintenance and inspection priority of power system facilities in the Smart Grid using the Risk Priority Number. In order to calculate this risk index, the reliability block diagram of the Smart Grid system must be analyzed. Finally, a feasible technical method is discussed for estimating the risk potential as part of the RCM procedure.
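For context, the Risk Priority Number is conventionally computed FMEA-style as severity x occurrence x detection, each rated on a 1-10 scale; the facility names and ratings below are invented for illustration, not taken from the paper:

```python
def rpn(severity, occurrence, detection):
    """FMEA-style Risk Priority Number: product of three 1-10 ratings."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("ratings must be in 1..10")
    return severity * occurrence * detection

# Hypothetical Smart Grid facilities with assumed ratings.
facilities = {
    "transformer":     rpn(9, 3, 4),
    "storage battery": rpn(6, 5, 5),
    "smart meter":     rpn(3, 6, 2),
}

# Higher RPN -> higher inspection priority.
for name, score in sorted(facilities.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} RPN = {score}")
```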
Abstract: The oleaginous yeast Lipomyces starkeyi was grown in the presence of dairy industry wastewaters (DIW). The yeast was able to degrade the organic components of the DIW and to produce a significant fraction of its biomass as triglycerides.
When using DIW from Ricotta cheese production or residual whey as the growth medium, L. starkeyi could be cultured without dilution or external organic supplements. In contrast, the yeast could only partially degrade the DIW from Mozzarella cheese production, owing to the accumulation of a metabolic product beyond the threshold of toxicity. In this case, a dilution of the DIW was required to obtain a more efficient degradation of the carbon compounds and a higher yield of oleaginous biomass.
The fatty acid distribution of the microbial oils obtained showed a prevalence of oleic acid and is compatible with the production of a second-generation biodiesel offering good resistance to oxidation as well as excellent cold performance.
Abstract: The lack of security obstructs a large-scale deployment of the multicast communication model. Therefore, a host of research works have been carried out to deal with the issues of securing multicast, such as confidentiality, authentication, non-repudiation, integrity and access control. Many applications, such as broadcasting stock quotes and videoconferencing, require authenticating the source of the received traffic, and hence source authentication is a required component in the overall multicast security architecture. In this paper, we propose a new and efficient source authentication protocol which guarantees non-repudiation for multicast flows and tolerates packet loss. We have simulated our protocol using NS-2, and the simulation results show that the protocol achieves improvements over protocols in the same category.
Abstract: Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross-correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these fast neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and each one is then tested separately by a single fast neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time with the same number of fast neural networks. In contrast to using only fast neural processors, the speed-up ratio increases with the size of the input image when fast neural processors and image decomposition are combined. Moreover, the problem of local sub-image normalization in the frequency domain is solved. The effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain. The overall speed-up ratio of the detection process is further increased because the normalization of the weights is done offline.
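The core frequency-domain trick can be sketched as follows (a generic FFT cross-correlation with arbitrary sizes and random data, not the paper's network): multiplying the FFT of a sub-image by the conjugate FFT of a weight kernel and inverse-transforming yields the circular cross-correlation at every shift at once, replacing sliding-window dot products.

```python
import numpy as np

# Random stand-ins for one sub-image and one neuron's weight matrix.
rng = np.random.default_rng(1)
sub_image = rng.standard_normal((16, 16))
weights = rng.standard_normal((16, 16))

# Circular cross-correlation via the FFT:
#   xcorr = IFFT( FFT(image) * conj(FFT(weights)) )
freq = np.fft.fft2(sub_image) * np.conj(np.fft.fft2(weights))
xcorr_fft = np.real(np.fft.ifft2(freq))

# Sanity check at zero shift: the plain elementwise dot product.
print(xcorr_fft.shape, np.isclose(xcorr_fft[0, 0],
                                  np.sum(sub_image * weights)))
```

For an N x N window the FFT route costs O(N^2 log N) per sub-image instead of O(N^4) for exhaustive spatial correlation, which is the source of the speed-up the abstract describes.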
Abstract: This paper discusses an intelligent system to be installed in ambulances, providing professional support to the paramedics on board. A video conferencing device over mobile 4G services enables specialists to virtually attend the patient being transferred to the hospital. The data centre holds detailed databases on the patients' past medical history and on hospitals and their specialists. It also hosts various software modules that, given the symptoms of the patient, compute the shortest low-traffic path to the closest hospital with the required facilities on a real-time basis.
Abstract: The evaluation of conversational agents or chatterbot question answering systems is a major research area that needs much attention. Before the rise of domain-oriented conversational agents based on natural language understanding and reasoning, evaluation was never a problem, as information retrieval-based metrics were readily available for use. However, as chatterbots became more domain-specific, evaluation became a real issue. This is especially true when understanding and reasoning are required to cater for a wider variety of questions and, at the same time, to achieve high-quality responses. This paper discusses the inappropriateness of the existing measures for response quality evaluation, and the call for new standard measures and related considerations is brought forward. As a short-term solution for evaluating the response quality of conversational agents, and to demonstrate the challenges in evaluating systems of different natures, this research proposes a black-box approach using observation, a classification scheme and a scoring mechanism to assess and rank three example systems: AnswerBus, START and AINI.
Abstract: Antioxidant compounds are needed by the food, beverage and pharmaceutical industries. For this purpose, an appropriate method is required to measure the antioxidant properties of various types of samples. The spectrophotometric method usually used has some weaknesses, including high cost, long sample preparation time and low sensitivity. Among the alternative methods developed to overcome these weaknesses is an antioxidant biosensor based on the superoxide dismutase (SOD) enzyme. Therefore, this study was carried out to measure the activity of SOD originating from Deinococcus radiodurans and to determine its kinetic properties. A carbon paste electrode modified with ferrocene and immobilized SOD exhibited anode and cathode current peaks at potentials of +400 and +300 mV, respectively, for both pure SOD and the SOD of D. radiodurans. This indicated that the current generated came from the catalytic dismutation of superoxide by SOD. The optimum conditions for SOD activity were pH 9 and a temperature of 27.5 °C for D. radiodurans SOD, and pH 11 and a temperature of 20 °C for pure SOD. The kinetics of the superoxide dismutation reaction catalyzed by SOD followed the Lineweaver-Burk model, with the apparent KM value of D. radiodurans SOD being smaller than that of pure SOD. The results showed that D. radiodurans SOD had a higher enzyme-substrate affinity and specificity than pure SOD. It is concluded that D. radiodurans SOD has great potential as the biological recognition component of an antioxidant biosensor.
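The Lineweaver-Burk analysis mentioned above linearizes Michaelis-Menten kinetics as 1/v = (KM/Vmax)(1/[S]) + 1/Vmax, so KM and Vmax fall out of a straight-line fit of 1/v against 1/[S]; the sketch below uses synthetic data, not the study's measurements:

```python
import numpy as np

# Synthetic Michaelis-Menten data with known parameters, so the fit
# can be checked against the truth.
KM_TRUE, VMAX_TRUE = 0.5, 2.0
S = np.array([0.2, 0.5, 1.0, 2.0, 5.0])      # substrate concentrations
v = VMAX_TRUE * S / (KM_TRUE + S)            # reaction rates

# Lineweaver-Burk: straight-line fit of 1/v vs. 1/[S].
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
v_max = 1.0 / intercept                      # intercept = 1 / Vmax
k_m = slope * v_max                          # slope = KM / Vmax
print(k_m, v_max)                            # recovers KM_TRUE, VMAX_TRUE
```

A smaller fitted KM, as reported for D. radiodurans SOD, indicates higher enzyme-substrate affinity, which is exactly the comparison the abstract draws.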
Abstract: Intelligent systems are required in order to quickly and accurately analyze the enormous quantities of data in the Internet environment. In intelligent systems, information extraction processes can be divided into supervised learning and unsupervised learning. This paper investigates intelligent clustering by unsupervised learning. Intelligent clustering is a clustering system which determines the clustering model for data analysis and evaluates the results by itself. Such a system can build a clustering model more rapidly, objectively and accurately than a human analyst. The methodology for the automatic clustering intelligent system is a multi-agent system that comprises a clustering agent and a cluster performance evaluation agent. The agents exchange information about clusters with each other, and the system determines the optimal cluster number through this information. Experiments using data sets from the UCI Machine Learning Repository are performed in order to prove the validity of the system.
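A toy stand-in for the two agents (our own sketch, with synthetic data and a simple separation-over-compactness score rather than whatever evaluation criterion the system actually uses): a "clustering agent" runs k-means for several candidate cluster counts and an "evaluation agent" scores each result, and the best-scoring count is kept.

```python
import numpy as np

# Synthetic data: three well-separated 2D clusters.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(c, 0.3, size=(40, 2))
                  for c in (0.0, 5.0, 10.0)])

def init_centers(X, k):
    # Deterministic farthest-point initialization.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

def kmeans(X, k, iters=50):
    centers = init_centers(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1),
                           axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

def evaluate(X, labels, centers):
    # Separation / compactness: larger is better.
    within = np.mean(np.linalg.norm(X - centers[labels], axis=1))
    spread = min(np.linalg.norm(a - b) for i, a in enumerate(centers)
                 for b in centers[i + 1:])
    return spread / within

best_k = max(range(2, 6), key=lambda k: evaluate(data, *kmeans(data, k)))
print(best_k)
```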
Abstract: Like any sentient organism, a smart environment relies first and foremost on sensory data captured from the real world. The sensory data come from sensor nodes of different modalities deployed at different locations, forming a Wireless Sensor Network (WSN). Embedding smart sensors in humans has been a research challenge due to the limitations imposed by these sensors, from restricted computational capabilities to limited power. In this paper, we first propose a practical WSN application that would enable blind people to see what their neighboring partners can see. The challenge is that the actual mapping from input images to brain patterns is too complex and not well understood. We also study the connectivity problem in 3D/2D wireless sensor networks and propose distributed, efficient algorithms to accomplish the required connectivity of the system. We provide a new connectivity algorithm, CDCA, to connect disconnected parts of a network using cooperative diversity. Through simulations, we analyze the connectivity gains and energy savings provided by this novel form of cooperative diversity in WSNs.
Abstract: This paper presents a technique for diagnosing abdominal aortic aneurysms in magnetic resonance imaging (MRI) images. First, our technique segments the aorta in the MRI images. This is a required step for determining the volume of the aorta, which is an important step in diagnosing an abdominal aortic aneurysm. Our proposed technique can detect the volume of the aorta in MRI images using a new external energy for the snakes model, calculated from Laws' texture measures. The new external energy increases the capture range of the snakes model considerably more than the old external energies of snakes models. Second, our technique diagnoses the abdominal aortic aneurysm with a Bayesian classifier, a classification model based on statistical theory. The features for classifying abdominal aortic aneurysms were derived from the contour of the aorta images resulting from the segmentation by our snakes model, i.e., area, perimeter and compactness. We also compare the proposed technique with the traditional snakes model. In our experiments, 30 images were used for training, and 20 images were tested and compared with expert opinion. The experimental results show that our technique achieves an accuracy greater than 95%.
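For illustration, the three contour features can be computed directly once a segmentation is available; here compactness is taken as the common perimeter^2 / (4*pi*area) form (the paper does not spell out its definition), so a perfect circle scores exactly 1 and deformed cross-sections score higher.

```python
import math

def contour_features(area, perimeter):
    """Area, perimeter and (assumed) compactness of a closed contour."""
    compactness = perimeter ** 2 / (4.0 * math.pi * area)  # 1.0 for a circle
    return {"area": area, "perimeter": perimeter,
            "compactness": compactness}

# A circular aortic cross-section of radius r (units arbitrary)...
r = 12.0
circle = contour_features(math.pi * r * r, 2.0 * math.pi * r)

# ...versus a made-up distorted (possibly aneurysmal) contour.
bulged = contour_features(410.0, 95.0)
print(circle["compactness"], bulged["compactness"])
```

Feature vectors like these, one per segmented image, are what a Bayesian classifier would then be trained on.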