Abstract: The speech signal conveys information about the
identity of the speaker. The area of speaker identification is
concerned with extracting the identity of the person speaking the
utterance. As speech interaction with computers becomes more
pervasive in activities such as telephone calls, financial transactions,
and information retrieval from speech databases, the utility of
automatically identifying a speaker based solely on vocal
characteristics grows. This paper focuses on text-dependent speaker
identification, which deals with detecting a particular speaker from a
known population. The system prompts the user to provide a speech
utterance, identifies the user by comparing the codebook of that
utterance with those stored in the database, and lists the speakers
most likely to have produced the utterance. The speech signal is
recorded for N speakers, and features are then extracted. Feature
extraction is done by computing LPC coefficients, the AMDF, and the
DFT. A neural network is trained with these features as input
parameters, and the features are stored in templates for later
comparison. The features of the speaker to be identified are extracted
and compared with the stored templates using the back-propagation
algorithm. Here, the trained network corresponds to the output, and
the input is the extracted features of the speaker to be identified. The
network adjusts its weights and the best match is found to identify the
speaker. The number of epochs required to reach the target determines
the network performance.
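One of the features mentioned above, the AMDF, can be sketched minimally as follows; the frame length, lag range, and test tone are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def amdf(frame, max_lag):
    """Average Magnitude Difference Function of a speech frame.

    For each lag k, averages |x[n] - x[n+k]| over the frame; the AMDF
    dips near lags corresponding to the pitch period, so its minima
    serve as pitch-related features.
    """
    n = len(frame)
    return np.array([
        np.mean(np.abs(frame[:n - k] - frame[k:])) for k in range(1, max_lag + 1)
    ])

# A pure tone with a period of 50 samples: the AMDF is near zero at lag 50.
t = np.arange(400)
tone = np.sin(2 * np.pi * t / 50)
d = amdf(tone, 80)
print(int(np.argmin(d)) + 1)  # 50
```

In a full system, minima of the AMDF over short frames would be combined with the LPC and DFT features before training the network.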
Abstract: Traffic Management and Information Systems, which rely on a system of sensors, aim to describe urban traffic in real time using a set of parameters and to estimate those parameters. Though the state of the art focuses on data analysis, little has been done in the direction of prediction. In this paper, we describe a machine learning system for traffic flow management and control applied to a traffic flow prediction problem. The new algorithm is obtained by using the Random Forests algorithm as the weak learner inside the AdaBoost algorithm. We show that our algorithm performs relatively well on real data and, according to the Traffic Flow Evaluation model, makes it possible to estimate and predict whether there is congestion at a given time on road intersections.
Abstract: In recent years there has been a renewal of interest in the
relation between Green IT and Cloud Computing. The growing use of
computers on cloud platforms has markedly increased energy consumption,
putting upward pressure on the electricity costs of cloud data centers. This
paper proposes an effective mechanism to reduce energy utilization in
cloud computing environments. We present initial work on the
integration of resource and power management that aims at reducing
power consumption. Our mechanism relies on recalling virtualization
services dynamically according to users' virtualization requests and
temporarily shutting down physical machines after their tasks finish in
order to conserve energy. Given the estimated energy consumption, the
proposed approach has the potential to positively impact power
consumption. The experimental results show that energy can indeed be
saved by powering off idle physical machines in cloud platforms.
Abstract: The primary barrier of a membrane-type LNG containment system consists of corrugated 304L stainless steel. Type 304L is an austenitic stainless steel that exhibits different material behavior owing to phase transformation during plastic work. Even though corrugated primary barriers are subjected to significant amounts of pre-strain due to press working, quantitative data on the effect of pre-straining on mechanical behavior at cryogenic temperatures are not available. In this study, tensile tests dependent on pre-strain level and pre-strain temperature are carried out to investigate the mechanical behavior. In addition, constitutive equations with material parameters are suggested for a verification study.
Abstract: This paper presents a reliability-based approach to select appropriate wind turbine types for a wind farm considering site-specific wind speed patterns. An actual wind farm in the northern region of Iran, with one year of registered wind speed data, is studied in this paper. An analytic approach based on the total probability theorem is utilized to model the probabilistic behavior of both turbines' availability and wind speed. Well-known probabilistic reliability indices such as loss of load expectation (LOLE), expected energy not supplied (EENS) and incremental peak load carrying capability (IPLCC) for wind power integration in the Roy Billinton Test System (RBTS) are examined. The most appropriate turbine type, achieving the highest reliability level, is chosen for the studied wind farm.
Abstract: The world's population continues to grow at a quarter of a million people per day, increasing the consumption of energy. This has confronted the world with an energy crisis. In response, the principles of renewable energy have gained popularity, and much advancement has been made in developing wind and solar energy farms across the world. These energy farms are not enough to meet the world's energy requirement, which has attracted investors to procure new substitute sources of energy. Among these sources, extraction of energy from the waves is considered the best option: the world's oceans contain enough energy to meet the world's requirement, and significant advancements in design and technology are being made to turn waves into a continuous source of energy. One major hurdle in launching wave energy devices in a developing country like Pakistan is the initial cost. A simple, reliable and cost-effective wave energy converter (WEC) is required to meet the nation's energy needs. This paper presents a novel design, proposed by team SAS, for harnessing wave energy. The paper has three major sections. The first gives a brief and concise view of ocean wave creation, propagation and the energy carried by waves. The second explains the design of SAS-2: a gear chain mechanism is used for transferring energy from the buoy to a rotary generator. The third explains the manufacturing of a scaled-down model of SAS-2; many modifications were made during the troubleshooting stage. The design of SAS-2 is simple and requires very little maintenance. SAS-2 is producing electricity at Clifton, and its initial cost is very low. This has proved SAS-2 to be a cost-effective and reliable means of harnessing wave energy for developing countries.
Abstract: Wheat gluten hydrolyzates (WGHs) and anchovy fine
powder hydrolyzates (AFPHs) were produced at 300 MPa using
combinations of Flavourzyme 500MG (F), Alcalase 2.4L (A),
Marugoto E (M) and Protamex (P), and then were compared to those
produced at ambient pressure concerning the contents of soluble solid
(SS), soluble nitrogen and electrophoretic profiles. The contents of SS
in the WGHs and AFPHs increased up to 87.2% according to the
increase in enzyme number both at high and ambient pressure. Based
on SS content, the optimum enzyme combinations for one-, two-,
three- and four-enzyme hydrolysis were determined as F, FA, FAM
and FAMP, respectively. Similar trends were found for the contents of
total soluble nitrogen (TSN) and TCA-soluble nitrogen (TCASN). The
contents of SS, TSN and TCASN in the hydrolyzates together with
electrophoretic mobility maps indicate that the high-pressure
treatment of this study accelerated protein hydrolysis compared to
ambient-pressure treatment.
Abstract: Users now expect a higher level of DSP (Digital
Signal Processing) software quality than ever before. Prevention and
detection of defects are critical elements of software quality
assurance. In this paper, principles and rules for the prevention and
detection of defects are suggested; these are not universal guidelines,
but are useful for both novice and experienced DSP software
developers.
Abstract: The multi-agent system approach has proven to be an effective and appropriate level of abstraction for constructing whole models of a diversity of biological problems, integrating aspects found in both "micro" and "macro" approaches to modeling this type of phenomenon. Taking these considerations into account, this paper presents the important computational characteristics to be gathered into a novel bioinformatics framework built upon a multi-agent architecture. The version of the tool presented herein allows studying and exploring complex problems belonging principally to structural biology, such as protein folding. The bioinformatics framework is used as a virtual laboratory to explore a minimalist model of protein folding as a test case. In order to show the laboratory concept of the platform as well as its flexibility and adaptability, we studied the folding of two particular sequences, one of 45-mer and another of 64-mer, both described by an HP model (only hydrophobic and polar residues) on a coarse-grained 2D square lattice. As discussed in this work, these two sequences were chosen as stress tests for the platform, in order to determine the tools to be created or improved so as to meet the needs of computing and analyzing a given difficult sequence. The underlying philosophy is that the continuous study of sequences itself yields important features to be added to the platform, continually improving its efficiency, as demonstrated herein.
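The HP lattice model used as a test case can be sketched minimally; the energy convention (-1 per non-bonded H-H lattice contact) is the standard one for this model, while the function name and the toy 4-mer conformation are illustrative:

```python
# Minimal 2D square-lattice HP model energy function. The sequence
# uses 'H' (hydrophobic) and 'P' (polar); a conformation is a
# self-avoiding walk given as lattice coordinates. Energy is -1 per
# H-H pair adjacent on the lattice but not consecutive in the chain.

def hp_energy(sequence, coords):
    assert len(sequence) == len(coords)
    pos = {c: i for i, c in enumerate(coords)}
    assert len(pos) == len(coords), "conformation must be self-avoiding"
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):  # each contact counted once
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

# A 4-mer 'HHHH' folded into a unit square has one non-bonded H-H contact.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hp_energy('HHHH', square))  # -1
```

Folding algorithms on this model search the space of self-avoiding walks for conformations minimizing this energy.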
Abstract: Investment in a constructed facility represents a cost in
the short term that returns benefits only over the long term use of the
facility. Thus, the costs occur earlier than the benefits, and the owners
of facilities must obtain the capital resources to finance the costs of
construction. A project cannot proceed without adequate
financing, and the cost of providing adequate financing can be
quite large. For these reasons, attention to project finance is an
important aspect of project management. Finance is also a concern to
the other organizations involved in a project such as the general
contractor and material suppliers. Unless an owner immediately and
completely covers the costs incurred by each participant, these
organizations face financing problems of their own. At a more
general level, project finance is only one aspect of the general
problem of corporate finance. If numerous projects are considered
and financed together, then the net cash flow requirements constitute
the corporate financing problem for capital investment. Whether
project finance is performed at the project or at the corporate level
does not alter the basic financing problem. In this paper, we will first
consider facility financing from the owner's perspective, with due
consideration for its interaction with other organizations involved in a
project. Later, we discuss the problems of construction financing
which are crucial to the profitability and solvency of construction
contractors. The objective of this paper is to present the steps utilized
to determine the best combination of minimum project financing.
The proposed model considers financing, schedule, and maximum net
area. The proposed model is called Project Financing and Schedule
Integration using Genetic Algorithms "PFSIGA". This model is
intended to determine additional steps (maximum net area) for any
project with a subproject. An illustrative example demonstrates the
features of this technique. Model verification and testing are also
considered.
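The genetic-algorithm machinery that PFSIGA builds on can be sketched minimally; the bitstring encoding and the toy one-max fitness below are illustrative assumptions, not the paper's actual financing-and-schedule objective:

```python
import random

# Minimal genetic-algorithm skeleton: tournament selection, one-point
# crossover, and bit-flip mutation over a bitstring population. A real
# PFSIGA-style model would replace `fitness` with a score combining
# financing cost, schedule, and net area.
random.seed(0)

def fitness(bits):
    return sum(bits)  # toy objective: maximize the number of ones

def evolve(n_bits=20, pop_size=30, generations=40, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection of size 3
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, n_bits)  # one-point crossover
            child = [bit ^ (random.random() < p_mut)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the optimum of 20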
Abstract: Today, the Internet based communication has widen
the opportunity of event monitoring system in the medical field.
There is always a need of analyzing and designing secure and reliable
mobile communication between the hospital and biomedical
engineers mobile units. This study has been carried out to find
possible solution using SIP-based event notification for alerting the
technical staff about the Biomedical Device (BMD) status and
Patients treatment session. The Session Initiation Protocol (SIP) can
be used to create a medical event notification system. SIP can work
on a variety of devices. Its adoption as the protocol of choice for third
generation wireless networks allows for a robust and scalable
environment. One of the advantages of SIP is that it supports personal
mobility through the separation of user addressing and device
addressing. The solution for Telemed alert notification system is
based on SIP - Specific Event Notification. The aim of this project is
to extend mobility service to the hospital technicians who are using
Telemedicine system.
Abstract: The group mutual exclusion (GME) problem is an
interesting generalization of the mutual exclusion problem. In the
group mutual exclusion, multiple processes can enter a critical
section simultaneously if they belong to the same group. In the
extended group mutual exclusion, each process is a member of
multiple groups at the same time. As a result, after the process by
selecting a group enter critical section, other processes can select the
same group with its belonging group and can enter critical section at
the moment, so that it avoids their unnecessary blocking. This paper
presents a quorum-based distributed algorithm for the extended
group mutual exclusion problem. The message complexity of our
algorithm is O(4Q ) in the best case and O(5Q) in the worst case,
where Q is a quorum size.
Abstract: This paper explains a project based learning method where autonomous mini-robots are developed for research, education and entertainment purposes. In case of remote systems wireless sensors are developed in critical areas, which would collect data at specific time intervals, send the data to the central wireless node based on certain preferred information would make decisions to turn on or off a switch or control unit. Such information transfers hardly sums up to a few bytes and hence low data rates would suffice for such implementations. As a robot is a multidisciplinary platform, the interfacing issues involved are discussed in this paper. The paper is mainly focused on power supply, grounding and decoupling issues.
Abstract: Perspective of food security in 21 century showed
shortage of food that production is faced to vital problem. Food
security strategy is applied longtime method to assess required food.
Meanwhile, nanotechnology revolution changes the world face.
Nanotechnology is adequate method utilize of its characteristics to
decrease environmental problems and possible further access to food
for small farmers. This article will show impact of production and
adoption of nanocrops on food security. Population is researchers of
agricultural research center of Esfahan province. The results of study
show that there was a relationship between uses, conversion,
distribution, and production of nanocrops, operative human
resources, operative circumstance, and constrains of usage of
nanocrops and food security. Multivariate regression analysis by
enter model shows that operative circumstance, use, production and
constrains of usage of nanocrops had positive impact on food security
and they determine in four steps 20 percent of it.
Abstract: The technical realization of data transmission using
glass fiber began after the development of diode laser in year 1962.
The erbium doped fiber amplifiers (EDFA's) in high speed networks
allow information to be transmitted over longer distances without
using of signal amplification repeaters. These kinds of fibers are
doped with erbium atoms which have energy levels in its atomic
structure for amplifying light at 1550nm. When a carried signal wave
at 1550nm enters the erbium fiber, the light stimulates the excited
erbium atoms which pumped with laser beam at 980nm as additional
light. The wavelength and intensity of the semiconductor lasers
depend on the temperature of active zone and the injection current.
The present paper shows the effect of the diode lasers temperature
and injection current on the optical amplification. From the results of
in- and output power one may calculate the max. optical gain by
erbium doped fiber amplifier.
Abstract: Mathematical, graphical and intuitive models are often
constructed in the development process of computational systems.
The Unified Modeling Language (UML) is one of the most popular
modeling languages used by practicing software engineers. This
paper critically examines UML models and suggests an augmented
use case view with the addition of new constructs for modeling
software. It also shows how a use case diagram can be enhanced. The
improved modeling constructs are presented with examples for
clarifying important design and implementation issues.
Abstract: The main aim of this research is to investigate a novel technique for implementing a more natural and intelligent conversation system. Conversation systems are designed to converse like a human as much as their intelligent allows. Sometimes, we can think that they are the embodiment of Turing-s vision. It usually to return a predetermined answer in a predetermined order, but conversations abound with uncertainties of various kinds. This research will focus on an integrated natural language processing approach. This approach includes an integrated knowledge-base construction module, a conversation understanding and generator module, and a state manager module. We discuss effectiveness of this approach based on an experiment.
Abstract: An advanced composite flywheel rotor consisting of
intra and inter hybrid rims was designed to optimally increase the energy capacity, and was manufactured using filament winding with
in-situ curing. The flywheel has recently attracted considerable attention from many investigators since it possesses great potential in
many energy storage applications, including electric utilities, hybrid or
electric automobiles, and space vehicles. In this investigation, a comprehensive study was conducted with the intent to implement
composites in high performance flywheel applications.The inner two
intra-hybrid rims (rims 1 and 2) were manufactured as a whole part
through continuous filament winding under in-situ curing conditions,
and so were the outer two rims (rims 3 and 4). The outer surface of rim
2 and the inner surface of rim 3 were CNC-tapered for press-fitting. Machined rims were finally press-fitted using a hydraulic press with a
maximum compressive force of approximately 1000 ton.
Abstract: The necessity of updating the numerical models inputs, because of geometrical and resistive variations in rivers subject to solid transport phenomena, requires detailed control and monitoring activities. The human employment and financial resources of these activities moves the research towards the development of expeditive methodologies, able to evaluate the outflows through the measurement of more easily acquirable sizes. Recent studies highlighted the dependence of the entropic parameter on the kinematical and geometrical flow conditions. They showed a meaningful variability according to the section shape, dimension and slope. Such dependences, even if not yet well defined, could reduce the difficulties during the field activities, and also the data elaboration time. On the basis of such evidences, the relationships between the entropic parameter and the geometrical and resistive sizes, obtained through a large and detailed laboratory experience on steady free surface flows in conditions of macro and intermediate homogeneous roughness, are analyzed and discussed.
Abstract: The turbulent mixing of coolant streams of different
temperature and density can cause severe temperature fluctuations in
piping systems in nuclear reactors. In certain periodic contraction
cycles these conditions lead to thermal fatigue. The resulting aging
effect prompts investigation in how the mixing of flows over a sharp
temperature/density interface evolves. To study the fundamental
turbulent mixing phenomena in the presence of density gradients,
isokinetic (shear-free) mixing experiments are performed in a square
channel with Reynolds numbers ranging from 2-500 to 60-000.
Sucrose is used to create the density difference. A Wire Mesh Sensor
(WMS) is used to determine the concentration map of the flow in the
cross section. The mean interface width as a function of velocity,
density difference and distance from the mixing point are analyzed
based on traditional methods chosen for the purposes of
atmospheric/oceanic stratification analyses. A definition of the
mixing layer thickness more appropriate to thermal fatigue and based
on mixedness is devised. This definition shows that the thermal
fatigue risk assessed using simple mixing layer growth can be
misleading and why an approach that separates the effects of large
scale (turbulent) and small scale (molecular) mixing is necessary.