Abstract: In this study, the thermal lens (TL) technique was applied to study the effect of particle size on the thermal diffusivity of cadmium sulphide (CdS) nanofluids prepared by the γ-radiation method and containing particles of different sizes. In the TL experimental set-up, a diode laser of wavelength 514 nm and an intensity-stabilized He-Ne laser were used as the excitation source and the probe beam, respectively. The experimental results showed that the thermal diffusivity of the CdS nanofluid increases as the particle size increases.
Abstract: This paper presents a reliability-based approach to selecting appropriate wind turbine types for a wind farm, considering site-specific wind speed patterns. An actual wind farm in the northern region of Iran, with one year of registered wind speed data, is studied. An analytic approach based on the total probability theorem is used to model the probabilistic behavior of both the turbines' availability and the wind speed. Well-known probabilistic reliability indices, such as loss of load expectation (LOLE), expected energy not supplied (EENS) and incremental peak load carrying capability (IPLCC), are examined for wind power integration in the Roy Billinton Test System (RBTS). The turbine type achieving the highest reliability level is chosen as the most appropriate for the studied wind farm.
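The total-probability evaluation behind indices such as LOLE and EENS can be sketched as follows; the unit capacities, forced outage rates, and load profile are invented toy values, not the studied system's data:

```python
import itertools

# Hypothetical illustration of the total-probability approach: enumerate
# generator on/off states, weight each state by its probability, and
# accumulate the loss-of-load and energy-not-supplied contributions.
units = [(40.0, 0.05), (40.0, 0.05), (30.0, 0.08)]  # (capacity MW, forced outage rate)
hourly_load = [70.0, 85.0, 95.0, 60.0]              # toy load profile (MW)

lole = 0.0   # expected hours of load loss over the profile
eens = 0.0   # expected energy not supplied (MWh)
for state in itertools.product([0, 1], repeat=len(units)):  # 1 = unit available
    prob = 1.0
    capacity = 0.0
    for (cap, q), up in zip(units, state):
        prob *= (1.0 - q) if up else q
        capacity += cap if up else 0.0
    for load in hourly_load:
        deficit = load - capacity
        if deficit > 0:
            lole += prob          # this hour contributes prob hours of outage
            eens += prob * deficit

print(round(lole, 4), round(eens, 4))
```

A real study would replace the enumeration with a capacity outage probability table and a full-year load model, but the weighting logic is the same.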
Abstract: The goal of this project is to design a system that recognizes voice commands. Most voice recognition systems contain two main modules: feature extraction and feature matching. In this project, the MFCC algorithm is used to simulate the feature extraction module; using this algorithm, the cepstral coefficients are calculated on the mel frequency scale. The VQ (vector quantization) method is used to reduce the amount of data and decrease the computation time. In the feature matching stage, the Euclidean distance is applied as the similarity criterion. Because of the high accuracy of the algorithms used, the accuracy of this voice command system is high: with at least five repetitions of each command in a single training session, and two repetitions in each testing session, a zero error rate in command recognition is achieved.
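As a sketch of the matching stage, the Euclidean-distance comparison against per-command VQ codebooks might look like the following; the codebooks and two-dimensional "MFCC" frames are toy values invented for illustration:

```python
import math

# Each command has a VQ codebook of centroids; an utterance is matched to
# the command whose codebook gives the lowest average Euclidean distortion.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def avg_distortion(frames, codebook):
    # For every feature frame, take the distance to its nearest centroid.
    return sum(min(euclidean(f, c) for c in codebook) for f in frames) / len(frames)

def recognize(frames, codebooks):
    return min(codebooks, key=lambda name: avg_distortion(frames, codebooks[name]))

codebooks = {
    "start": [(1.0, 0.0), (0.9, 0.2)],
    "stop":  [(0.0, 1.0), (0.1, 0.8)],
}
utterance = [(0.95, 0.1), (1.05, 0.05)]   # toy 2-D "MFCC" frames
print(recognize(utterance, codebooks))
```

In a real system the frames would be 12- to 13-dimensional MFCC vectors and the codebooks would be trained with an algorithm such as LBG.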
Abstract: Investment in a constructed facility represents a cost in the short term that returns benefits only over the long-term use of the facility. Thus, the costs occur earlier than the benefits, and the owners of facilities must obtain the capital resources to finance the costs of construction. A project cannot proceed without adequate financing, and the cost of providing adequate financing can be quite large. For these reasons, attention to project finance is an important aspect of project management. Finance is also a concern to the other organizations involved in a project, such as the general contractor and material suppliers. Unless an owner immediately and completely covers the costs incurred by each participant, these organizations face financing problems of their own. At a more general level, project finance is only one aspect of the general problem of corporate finance. If numerous projects are considered and financed together, then the net cash flow requirements constitute the corporate financing problem for capital investment. Whether project finance is performed at the project or at the corporate level does not alter the basic financing problem. In this paper, we first consider facility financing from the owner's perspective, with due consideration for its interaction with other organizations involved in a project. Later, we discuss the problems of construction financing, which are crucial to the profitability and solvency of construction contractors. The objective of this paper is to present the steps used to determine the best combination for minimum project financing. The proposed model considers financing, schedule, and maximum net area, and is called Project Financing and Schedule Integration using Genetic Algorithms (PFSIGA). This model is intended to determine more steps (maximum net area) for any project with a subproject. An illustrative example demonstrates the features of this technique. Model verification and testing are also taken into consideration.
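The genetic-algorithm core of a model like PFSIGA can be sketched generically; the financing sources, costs, required capital, penalty weight, and GA parameters below are all invented for illustration and do not come from the paper:

```python
import random

# Minimal GA sketch: a bit string selects which hypothetical financing
# sources to use; fitness is total financing cost plus a penalty when the
# raised capital falls short of the requirement.
random.seed(1)

sources = [(100.0, 0.08), (80.0, 0.06), (120.0, 0.10), (60.0, 0.05)]  # (amount, cost rate)
required = 200.0  # capital the project must raise (made up)

def fitness(bits):
    raised = sum(a for (a, _), b in zip(sources, bits) if b)
    cost = sum(a * r for (a, r), b in zip(sources, bits) if b)
    if raised < required:           # infeasible: heavy penalty
        return cost + 10.0 * (required - raised)
    return cost                     # minimize total financing cost

def evolve(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in sources] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(sources))
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if random.random() < 0.1:            # bit-flip mutation
                i = random.randrange(len(sources))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```

The actual PFSIGA model additionally encodes the schedule and net-area objectives; this sketch shows only the generic select-crossover-mutate loop.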
Abstract: The necessity of updating numerical models' inputs, because of the geometrical and resistive variations in rivers subject to solid transport phenomena, requires detailed control and monitoring activities. The human and financial resources these activities demand move the research towards the development of expeditious methodologies able to evaluate the outflows through the measurement of more easily acquired quantities. Recent studies highlighted the dependence of the entropic parameter on the kinematic and geometrical flow conditions, showing a meaningful variability according to the section shape, dimension and slope. Such dependences, even if not yet well defined, could reduce the difficulties of the field activities, as well as the data elaboration time. On the basis of such evidence, the relationships between the entropic parameter and the geometrical and resistive quantities, obtained through a large and detailed laboratory campaign on steady free-surface flows in conditions of macro and intermediate homogeneous roughness, are analyzed and discussed.
Abstract: The turbulent mixing of coolant streams of different temperature and density can cause severe temperature fluctuations in the piping systems of nuclear reactors. In certain periodic contraction cycles these conditions lead to thermal fatigue. The resulting aging effect prompts investigation into how the mixing of flows over a sharp temperature/density interface evolves. To study the fundamental turbulent mixing phenomena in the presence of density gradients, isokinetic (shear-free) mixing experiments are performed in a square channel with Reynolds numbers ranging from 2,500 to 60,000. Sucrose is used to create the density difference. A Wire Mesh Sensor (WMS) is used to determine the concentration map of the flow in the cross section. The mean interface width as a function of velocity, density difference and distance from the mixing point is analyzed using traditional methods chosen for the purposes of atmospheric/oceanic stratification analyses. A definition of the mixing layer thickness more appropriate to thermal fatigue, based on mixedness, is devised. This definition shows that the thermal fatigue risk assessed from simple mixing layer growth can be misleading, and why an approach that separates the effects of large-scale (turbulent) and small-scale (molecular) mixing is necessary.
Abstract: The advent of multi-million-gate Field Programmable Gate Arrays (FPGAs) with hardware support for multiplication opens an opportunity to recreate a significant portion of the front end of the human cochlea using this technology. In this paper we describe the implementation of the cochlear filter and show that it is entirely suited to implementation on a single XC3S500 FPGA device. The filter gave a good fit to real-time data while using hardware efficiently.
Abstract: Seismic qualification testing of equipment to be mounted on the upper storeys of buildings is very demanding in terms of floor spectra, which are characterized by high acceleration amplitudes within a narrow frequency band. This article presents a method that makes it possible to cover specified required response spectra beyond the shaking table's capability by amplifying the acceleration amplitudes in an appropriate frequency range using a physical intermediate structure mounted on the platform of the shaker.
Abstract: Weather disasters have recently been frequent in Taiwan, causing loss of life and property. Excessive population concentration and a lack of integrated planning have exposed the Taiwanese coastal zone directly to the impacts of climate change. Compared with many countries that have already established legislation, competent authorities and national adaptation strategies, the capacity of coastal management in Taiwan to adapt to climate change is still insufficient. It is therefore necessary to establish a complete institutional arrangement for coastal management under climate change, in order to protect the environment and sustain socio-economic development. This paper first reviews the impact of climate change on the Taiwanese coastal zone. Secondly, the development of the Taiwanese institutional arrangement for coastal management is introduced, followed by an analysis of four dimensions of that arrangement: legal basis, competent authority, scientific and financial support, and international cooperation. The results show that the Taiwanese government should: 1) integrate the climate change issue into the Coastal Act, the Wetland Act and the territorial planning Act, and pass them; 2) establish a high-level competent authority for coastal management; 3) set up a climate change disaster coordination platform; 4) link scientific information to decision makers; 5) establish a climate change adjustment fund; 6) participate actively in international climate change organizations and meetings; and 7) cooperate with neighboring countries to exchange experiences.
Abstract: This research was conducted, for the first time, on the southeastern coasts of the Caspian Sea in order to evaluate the performance of osteichthyes cooperatives through a production (catch) function. Using one of the indirect valuation methods, the contributory factors in the catch were identified and inserted into the function as independent variables. To carry out this research, the performance of 25 osteichthyes fishing cooperatives involved in fishing in the Miankale wildlife refuge region during the 2009 utilization year was examined. The contributory factors in the catch were divided into groups of economic, ecological and biological factors. In the function, the catch rate of the cooperatives was entered as the dependent variable, and fourteen partial variables, grouped into nine general variables, served as independent variables. After estimating the function, seven variables proved significant at the 99 percent confidence level. The results of the function estimation indicated that human resources (fisherman quantity) had the greatest positive effect on the catch rate, with an influence coefficient of 1.7, while weather conditions had the greatest negative effect on the catch rate of the cooperatives, with an influence coefficient of -2.07. Moreover, factors such as members' share, experience and fisherman training, and fishing effort played the main roles in the catch rate of the cooperatives, with influence coefficients of 0.81, 0.5 and 0.21, respectively.
Abstract: This paper describes a CMOS four-quadrant multiplier intended for use in a front-end receiver, which exploits the square-law characteristic of the MOS transistor in the saturation region. The circuit is based on 0.35 µm CMOS technology and was simulated using HSPICE software. The mixer's third-order intermodulation performance was evaluated; its power consumption is 271 µW from a single 1.2 V power supply. One of the features of the proposed design is limiting the stack to two MOS transistors, which allows the supply voltage, and hence the power consumption, to be reduced. This technique provides a GHz bandwidth response with low power consumption.
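One classical way a square-law characteristic yields four-quadrant multiplication is the quarter-square identity (a+b)^2 - (a-b)^2 = 4ab; the paper does not state that its circuit uses exactly this arrangement, so the following is a generic numerical illustration with made-up device constants:

```python
# Numerical illustration (not the paper's circuit) of the square-law
# "quarter-square" principle: a drain current proportional to
# (Vgs - Vt)^2 lets sums and differences of two inputs be squared and
# subtracted to recover their product.
k, vt = 0.5, 0.4   # transconductance factor, threshold voltage (made up)

def drain_current(vgs):
    """Ideal saturation-region square law: I = k*(Vgs - Vt)^2."""
    return k * (vgs - vt) ** 2

def multiply(a, b, bias=2.0):
    # Each difference cancels the squared bias terms; the second difference
    # cancels the remaining linear terms, isolating 8*k*a*b.
    i1 = drain_current(bias + a + b) - drain_current(bias + a - b)
    i2 = drain_current(bias - a + b) - drain_current(bias - a - b)
    return (i1 - i2) / (8 * k)

print(multiply(0.3, -0.2))
```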
Abstract: This study applied the Theory of Planned Behaviour (TPB) to explain knowledge sharing behaviour among academic staff at a Public Higher Education Institution (HEI) in Malaysia. The main objectives of this study are to identify the components that influence knowledge sharing behaviour and to determine the levels of knowledge sharing behaviour among academic staff. A total of 200 respondents participated in answering the questionnaires. The findings revealed that knowledge sharing behaviour is perceived and implemented among academic staff at the Public HEI in Malaysia, but is not openly or strongly practiced. The findings are discussed, and recommendations for future research are also addressed.
Abstract: High-speed networks provide real-time variable bit rate service with diversified traffic flow characteristics and quality requirements. Variable bit rate traffic has stringent delay and packet loss requirements, and the burstiness of the correlated traffic makes dynamic buffer management highly desirable for satisfying the Quality of Service (QoS) requirements. This paper presents an algorithm that optimizes an adaptive buffer allocation scheme based on the loss of consecutive packets in the data stream and the buffer occupancy level. The buffer is designed to allow the input traffic to be partitioned into different priority classes, and the algorithm controls the thresholds dynamically based on the input traffic behavior. An input packet is allowed to enter the buffer only if the occupancy level is less than the threshold value for that packet's priority. The thresholds are varied dynamically at runtime based on the packet loss behavior. The simulation is run for two priority classes of input traffic: real-time and non-real-time. The simulation results show that Adaptive Partial Buffer Sharing (ADPBS) outperforms Static Partial Buffer Sharing (SPBS) and a First In First Out (FIFO) queue under the same traffic conditions.
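The admission rule of partial buffer sharing can be sketched minimally; the buffer size, thresholds, arrival pattern, and crude service model below are invented, and the dynamic threshold adaptation is omitted for brevity:

```python
# Toy sketch of threshold-based partial buffer sharing: low-priority
# packets are admitted only while buffer occupancy stays below their
# threshold, reserving headroom for high-priority traffic.
BUFFER_SIZE = 10

def admit(occupancy, priority, thresholds):
    """priority 0 = real-time (high), 1 = non-real-time (low)."""
    return occupancy < thresholds[priority]

def run(arrivals, thresholds):
    buffer = 0
    dropped = [0, 0]
    for priority in arrivals:
        if admit(buffer, priority, thresholds):
            buffer += 1
            if buffer == BUFFER_SIZE:   # crude service model: drain when full
                buffer = 0
        else:
            dropped[priority] += 1
    return dropped

# High-priority packets may use the whole buffer; low-priority only 60%.
arrivals = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0]
print(run(arrivals, thresholds=[BUFFER_SIZE, 6]))
```

In the adaptive scheme, the low-priority threshold would itself be raised or lowered at runtime according to observed consecutive-packet losses.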
Abstract: The necessity of accurate and timely field data is shared among organizations engaged in fundamentally different activities, from public services to commercial operations. Basically, there are three major components in the process of qualitative research: data collection, interpretation and organization of data, and the analytic process. Significant technological advances have been made in mobile devices (mobile phones, PDAs, tablets, laptops, etc.), resources that can potentially be applied to the data collection activity of field research in order to improve this process.
This paper presents and discusses the main features of a mobile phone based solution for field data collection, composed of three main modules: a survey editor, a server web application and a client mobile application. The data gathering process begins with the survey creation module, which enables the production of tailored questionnaires. The field workforce receives the questionnaire(s) on their mobile phones, collects the interview responses and sends them back to a server for immediate analysis.
Abstract: In this paper, a tooth shape optimization method for
cogging torque reduction in Permanent Magnet (PM) motors is
developed using the Reduced Basis Technique (RBT) coupled with
Finite Element Analysis (FEA) and Design of Experiments (DOE)
methods. The primary objective of the method is to reduce the
enormous number of design variables required to define the tooth
shape. RBT is a weighted combination of several basis shapes. The
aim of the method is to find the best combination using the weights
for each tooth shape as the design variables. A multi-level design
process is developed to find suitable basis shapes or trial shapes at
each level that can be used in the reduced basis technique. Each level
is treated as a separate optimization problem until the required
objective – minimum cogging torque – is achieved. The process is
started with geometrically simple basis shapes that are defined by
their shape co-ordinates. Taguchi's experimental design method
is used to build the approximation model and to perform
optimization. This method is demonstrated on the tooth shape
optimization of an 8-pole/12-slot PM motor.
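The reduced-basis idea, a candidate shape expressed as a weighted sum of basis shapes so that only the weights are design variables, can be sketched as follows; the basis profiles and weights are invented toy values:

```python
# Minimal sketch of the reduced-basis idea: a tooth profile is a weighted
# combination of a few basis shapes, so the optimizer searches over the
# weights w_i instead of over every shape coordinate.

def combine(weights, basis_shapes):
    # shape(x_j) = sum_i w_i * basis_i(x_j), point by point
    n = len(basis_shapes[0])
    return [sum(w * shape[j] for w, shape in zip(weights, basis_shapes))
            for j in range(n)]

# Three toy basis profiles sampled at five points along the tooth edge.
basis = [
    [1.0, 1.0, 1.0, 1.0, 1.0],    # flat reference profile
    [0.0, 0.5, 1.0, 0.5, 0.0],    # bump in the middle
    [0.0, 0.25, 0.5, 0.75, 1.0],  # linear taper
]
profile = combine([1.0, 0.4, -0.2], basis)
print(profile)
```

With three basis shapes, the optimization runs over three weights rather than five (or, in practice, hundreds of) shape coordinates.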
Abstract: The increasing volume of information on the
internet creates a growing need to develop new (semi-)automatic
methods for retrieval of documents and ranking them according to
their relevance to the user query. In this paper, after a brief review
on ranking models, a new ontology based approach for ranking
HTML documents is proposed and evaluated in various
circumstances. Our approach is a combination of conceptual,
statistical and linguistic methods. This combination preserves the precision of ranking without losing speed. Our approach exploits natural language processing techniques to extract phrases from documents and the query, and to stem the words. Then
an ontology based conceptual method will be used to annotate
documents and expand the query. To expand a query, the spreading activation algorithm is improved so that the expansion can be done flexibly and along various aspects. The annotated documents and the
expanded query will be processed to compute the relevance degree
exploiting statistical methods. The outstanding features of our
approach are (1) combining conceptual, statistical and linguistic
features of documents, (2) expanding the query with its related
concepts before comparing to documents, (3) extracting and using
both words and phrases to compute relevance degree, (4) improving
the spread activation algorithm to do the expansion based on
weighted combination of different conceptual relationships and (5)
allowing variable document vector dimensions. A ranking system
called ORank is developed to implement and test the proposed
model. The test results will be included at the end of the paper.
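The spreading-activation expansion can be illustrated with a minimal sketch; the toy ontology, relation weights, decay factor, and threshold below are invented and do not reproduce the paper's improved algorithm:

```python
# Sketch of weighted spreading activation for query expansion: activation
# flows from the query concepts to related concepts, attenuated by each
# relationship's weight and a global decay, and stops below a threshold.

# Toy ontology: concept -> list of (neighbor, relation weight)
ontology = {
    "car":     [("vehicle", 0.9), ("engine", 0.6)],
    "vehicle": [("transport", 0.7)],
    "engine":  [("fuel", 0.5)],
}

def spread(seeds, graph, decay=0.8, threshold=0.2):
    activation = dict(seeds)          # initial query-concept activations
    frontier = list(seeds)
    while frontier:
        concept, level = frontier.pop()
        for neighbor, weight in graph.get(concept, []):
            new = level * weight * decay
            if new > activation.get(neighbor, 0.0) and new > threshold:
                activation[neighbor] = new
                frontier.append((neighbor, new))
    return activation

print(spread([("car", 1.0)], ontology))
```

The expanded query is then the set of activated concepts with their activation levels as weights; the paper's improvement additionally weights different conceptual relationship types.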
Abstract: Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, their major disadvantage is that they need a higher order (more coefficients) than an IIR counterpart with comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in digital filter design. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, thereby reducing the arithmetic complexity needed to compute the filter output. Consequently, the system characteristics, i.e. power consumption, area usage, and processing time, are also reduced. The proposed algorithm is most powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high-order digital FIR filters. Here, DA eliminates the need for multipliers when implementing the multiply and accumulate (MAC) unit, and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output by minimizing the number of non-zero coefficients.
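The coefficient-reduction idea can be illustrated with a minimal sketch; the thresholding rule and the filter taps below are invented and are not the paper's algorithm:

```python
# Illustration of coefficient reduction: small taps are zeroed, and the
# output sum then skips the zero coefficients, cutting the number of
# multiply-accumulate operations per output sample.

def prune(taps, threshold):
    return [t if abs(t) >= threshold else 0.0 for t in taps]

def fir(taps, x):
    # y[n] = sum_k h[k] * x[n-k], skipping zero-valued taps
    nz = [(k, h) for k, h in enumerate(taps) if h != 0.0]
    return [sum(h * x[n - k] for k, h in nz if n - k >= 0)
            for n in range(len(x))]

taps = [0.02, -0.01, 0.25, 0.5, 0.25, -0.01, 0.02]
sparse = prune(taps, threshold=0.05)
print(sum(1 for t in taps if t), "->", sum(1 for t in sparse if t))
print(fir(sparse, [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
```

A real design would verify that the pruned response still meets the passband/stopband specification; the paper's contribution lies in choosing which coefficients to modify so that the frequency and phase response are preserved.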
Abstract: In order to perform on-line measurement and detection of PD signals, a total solution comprising an HFCT, an A/D converter and a complete software package is proposed. The software package includes compensation of the HFCT's contribution, filtering and noise reduction using the wavelet transform, and soft calibration routines. The results have shown good performance and high accuracy.
Abstract: Mycophenolic acid (MPA) is a secondary metabolite of Penicillium brevicompactum with antibiotic and immunosuppressive properties. In this study, a fermentation process was established for the production of mycophenolic acid by Penicillium brevicompactum MUCL 19011 in shake flasks. The maximum MPA production, product yield and productivity were 1.379 g/L, 18.6 mg/g glucose and 4.9 mg/L·h, respectively. Glucose consumption, biomass and MPA production profiles were investigated over the fermentation time. It was found that MPA production starts after approximately 180 hours and reaches a maximum at 280 h. In the next step, the effects of methionine and acetate concentrations on MPA production were evaluated. The maximum MPA production, product yield and productivity (1.763 g/L, 23.8 mg/g glucose and 6.30 mg/L·h, respectively) were obtained using 2.5 g/L methionine in the culture medium. Further addition of methionine had no additional positive effect on MPA production. Finally, the results showed that the addition of acetate to the culture medium had no observable effect on MPA production.
Abstract: We have developed a microfluidic device system for the continuous production of nanoparticles, and we have clarified the relationship between the mixing performance of the reactors and the particle size. First, we evaluated the mixing performance of the reactors by carrying out the Villermaux–Dushman reaction and determined the experimental conditions for producing AgCl nanoparticles. Next, we produced AgCl nanoparticles and evaluated the mixing performance and the particle size. We found that as the mixing performance improves, the size of the produced particles decreases and the particle size distribution becomes sharper. We produced AgCl nanoparticles with a size of 86 nm using the microfluidic device that had the best mixing performance among the three reactors tested in this study; the coefficient of variation (Cv) of the size distribution of the produced nanoparticles was 26.1%.