Abstract: Among all geo-hydrological relationships, the rainfall-runoff
relationship is of utmost importance in any hydrological
investigation and in water resource planning. Spatial variation and the lag
time involved in obtaining areal estimates for the basin as a whole can
affect parameterization at the design stage as well as at the planning
stage. In conventional hydrological data processing, the spatial aspect
is either ignored or interpolated at the sub-basin level. Temporal
variation, when analysed for different stages, can provide clues to its
spatial effectiveness. The interplay of space-time variation at the pixel
level can provide a better understanding of basin parameters.
The sustenance of design structures for different return periods and their
spatial auto-correlations should be studied at different geographical
scales for better management and planning of water resources.
In order to understand the relative effect of spatio-temporal
variation in a hydrological data network, a detailed geo-hydrological
analysis of the Betwa river catchment, falling in the Lower Yamuna Basin, is
presented in this paper. Moreover, exact estimates of the
availability of water in the Betwa river catchment, especially in the
wake of the recent Betwa-Ken linkage project, need thorough scientific
investigation for better planning. Therefore, an attempt in this
direction is made here to analyse the existing hydrological and
meteorological data with the help of SPSS, GIS and MS-EXCEL
software. A comparison of spatial and temporal correlations at the
sub-catchment level in the upper Betwa reaches has been made to
demonstrate the representativeness of rain gauges. First, flows at
different locations are used to derive correlation and regression
coefficients. Then, long-term normal water yield estimates based on
pixel-wise regression coefficients of the rainfall-runoff relationship have
been mapped. The areal values obtained from these maps can
definitely improve upon estimates based on point-based
extrapolations or areal interpolations.
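The pixel-wise regression step described above can be sketched briefly; the gridded rainfall and runoff series below are synthetic stand-ins (the grid size, runoff coefficient, and noise level are assumptions for illustration, not Betwa data):

```python
import numpy as np

# Synthetic gridded data: T years of annual rainfall and runoff on an R x C
# pixel grid (all values are illustrative, in mm/year).
rng = np.random.default_rng(0)
T, R, C = 30, 4, 5
true_slope = 0.6                                   # assumed runoff coefficient
rainfall = rng.uniform(600.0, 1200.0, size=(T, R, C))
runoff = true_slope * rainfall - 100.0 + rng.normal(0.0, 5.0, size=(T, R, C))

# Pixel-wise least-squares slope and intercept of runoff on rainfall,
# computed in closed form and vectorized over the whole grid at once.
x_mean = rainfall.mean(axis=0)
y_mean = runoff.mean(axis=0)
slope = ((rainfall - x_mean) * (runoff - y_mean)).sum(axis=0) \
        / ((rainfall - x_mean) ** 2).sum(axis=0)
intercept = y_mean - slope * x_mean

# Long-term normal water yield per pixel from the fitted relationship;
# mapping these values gives the pixel-wise yield map described above.
normal_yield = slope * x_mean + intercept
print(slope.round(2))
```

Mapping `normal_yield` pixel by pixel, rather than extrapolating point estimates, is the areal improvement the abstract refers to.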
Abstract: The purpose of this paper is to shed light on the
controversial subject of tax incentives to promote regional
development. Although extensive research has been conducted, a
review of the literature gives an inconclusive answer to whether
economic incentives are effective. One reason is that for
some researchers “effective” means the significant location of new
firms in targeted areas, while for others it means the creation of jobs
regardless of whether new firms arrive in significant numbers. We
present this dichotomy by analyzing a tax incentive program via both
alternatives: location and job creation. The contribution of the paper
is to inform policymakers about the potential opportunities and
pitfalls when designing incentive strategies. This is particularly
relevant, given that both the US and Europe have been promoting
incentives as a tool for regional economic development.
Abstract: The objective of this study is to investigate fire
behaviors, experimentally and numerically, in a scaled version of an
underground station. The effect of ventilation velocity on the fire is
examined. Fire experiments are conducted by burning 10 ml of
isopropyl alcohol fuel in a pool with dimensions of 5 cm x 10 cm x 4
mm at the center of a 1/100 scaled underground station model. The
commercial CFD program FLUENT was used in the numerical
simulations, with the k-ω SST turbulence model for the air flow
simulations and the non-premixed combustion model for the
combustion simulations. This study showed that, when the ventilation
velocity is increased from 1 m/s to 3 m/s, the maximum temperature
in the station is found to be lower for the ventilation velocity of 1 m/s.
The reason for this experimental result lies in the relative dominance
of the oxygen supply effect over the cooling effect. Without the
piston effect, the maximum temperature
occurs above the fuel pool. However, when the ventilation velocity is
increased, the flame is tilted in the direction of ventilation and the
location of the maximum temperature moves along the flow direction.
The velocities measured experimentally at different locations in the
station are well matched by the CFD simulation results, and the
predicted general flow pattern agrees satisfactorily with the smoke
visualization tests. The backlayering in velocity is also well predicted
by the CFD simulation. However, all over the station, the CFD
simulations predicted higher temperatures than the experimental
measurements.
Abstract: Low temperature (LT) is one of the most important abiotic
stresses causing loss of yield in wheat (Triticum aestivum L.). Four
major genes in wheat, with the dominant alleles designated
Vrn-A1, Vrn-B1, Vrn-D1 and Vrn4, are known to have
large effects on the vernalization response, but their effects on cold
hardiness are ambiguous. Poor cold tolerance has restricted winter
wheat production in regions of high winter stress [9]. It is known
that nearly all wheat chromosomes [5], or at least 10 of the
21 chromosome pairs, are important in winter hardiness [15]. The
objective of the present study was to clarify the role of each chromosome
in cold tolerance. For this purpose we used 20 isogenic lines of
wheat; in each of these lines, only a single chromosome from the
'Bezostaya' variety (a winter habit cultivar) was substituted into the
'Cappelle Desprez' variety. The plant material was grown under
controlled conditions at 20 °C with a 16 h day length at the Karaj
Agricultural Research Station, in a moderately cold area of Iran, in
2006-07, and an acclimation period of about four weeks was completed
in a cold room at 4 °C. The cold hardiness of these isogenic lines was
measured by LT50 (the temperature at which 50% of the plants are
killed by freezing stress). The experimental design was a randomized
complete block design (RCBD) with three replicates. The results
showed that chromosome 5A had a major effect on freezing
tolerance, while chromosomes 1A and 4A had lesser effects on this
trait. Further studies are essential to understand the importance of
each chromosome in controlling cold hardiness in wheat.
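As a small illustration of the LT50 measure defined above, survival fractions at a set of test temperatures can be interpolated to the 50% point; the data below are hypothetical, and linear interpolation is one common choice (the study itself may have used probit or another fitting method):

```python
# Hypothetical freeze-test results for one isogenic line: test temperatures
# in °C and the fraction of plants surviving at each temperature.
temps = [-3.0, -6.0, -9.0, -12.0, -15.0]
survival = [1.00, 0.90, 0.65, 0.30, 0.05]

def lt50(temps, survival):
    """Estimate LT50 (the temperature at which 50% of plants are killed)
    by linear interpolation between the two bracketing test temperatures."""
    pairs = list(zip(temps, survival))
    for (t1, s1), (t2, s2) in zip(pairs, pairs[1:]):
        if s1 >= 0.5 >= s2:
            return t1 + (0.5 - s1) * (t2 - t1) / (s2 - s1)
    raise ValueError("50% survival is not bracketed by the test temperatures")

print(round(lt50(temps, survival), 2))
```

A lower (more negative) LT50 indicates greater cold hardiness, which is how the substitution lines would be ranked against one another.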
Abstract: This research aimed to study the correlation between
the work satisfaction and the organization core values of officers in the
Waterworks Authority, Bangkean Branch. The sample group of the study
was 112 officers who worked in the Waterworks Authority,
Bangkean Branch. Questionnaires were employed as the research tool,
while percentage, mean, standard deviation, t-test, one-way
ANOVA, and the Pearson product-moment correlation were the
statistics used in this study. The researcher found that the overall and
individual aspects of work satisfaction, namely work characteristics,
work progress, and colleagues, significantly correlated with the
organization core value in the aspect of perception of choice of work at
the .05, .01, and .01 levels, respectively. Such aspects also correlated
with income at the .05 level, indicating low and moderately low levels
of correlation in the same, same, opposite, and same directions,
correspondingly.
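A minimal sketch of the Pearson product-moment correlation used above; the paired ratings are invented for illustration and are not the study's data:

```python
import math

# Hypothetical paired scores for a handful of officers: a work-satisfaction
# rating and a core-value rating (illustrative values only).
x = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 3.7, 4.2]
y = [3.0, 4.3, 2.9, 3.5, 4.6, 3.1, 3.6, 4.0]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

print(round(pearson_r(x, y), 3))
```

The sign of r gives the direction of the relationship and its magnitude the strength, which is how the "low" and "moderately low" correlations above are classified.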
Abstract: In this study, a general methodology is presented to
predict the performance of a continuous near-critical fluid extraction
process to remove compounds from aqueous solutions using hollow
fiber membrane contactors. A comprehensive 2D mathematical
model was developed to study the Porocritical extraction process. The
system studied in this work is a membrane based extractor of ethanol
and acetone from aqueous solutions using near-critical CO2.
Predictions of extraction percentages obtained by simulations have
been compared to the experimental values reported by Bothun et al.
[5]. Simulations of extraction percentage of ethanol and acetone
show an average difference of 9.3% and 6.5% with the experimental
data, respectively. The more accurate predictions for the extraction of
acetone could be explained by a better estimation of the transport
properties in the aqueous phase, which controls the extraction of this
solute.
Abstract: The problem addressed herein is the efficient management of the intense Grid/Cluster computation involved when the preconditioned Bi-CGSTAB Krylov method is employed for the iterative solution of the large, sparse linear system arising from the discretization of the Modified Helmholtz-Dirichlet problem by the Hermite Collocation method. Taking advantage of the Collocation matrix's red-black ordered structure, we organize the whole computation efficiently and map it onto a pipeline architecture with master-slave communication. The implementation, through MPI programming tools, is realized on a SUN V240 cluster interconnected through 100 Mbps and 1 Gbps Ethernet networks, and its performance is presented through the speedup measurements included.
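The preconditioned Bi-CGSTAB iteration at the heart of that computation can be sketched in a few lines; the sketch below uses SciPy on a small 1D modified-Helmholtz stand-in for the Hermite Collocation matrix, with an incomplete-LU preconditioner in place of the paper's red-black pipeline organization:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

# Small 1D stand-in for the collocation system: a discretized modified
# Helmholtz operator (-u'' + k^2 u) on a uniform grid (illustrative only).
n, h, k2 = 200, 1.0 / 201.0, 4.0
A = diags([-1.0, 2.0 + k2 * h * h, -1.0], [-1, 0, 1],
          shape=(n, n), format="csc")
b = np.full(n, h * h)

# Incomplete-LU factorization used as the preconditioner M ~ A^{-1}.
ilu = spilu(A)
M = LinearOperator((n, n), ilu.solve)

x, info = bicgstab(A, b, M=M)   # info == 0 signals convergence
print(info, float(np.linalg.norm(A @ x - b)))
```

With a good preconditioner the Krylov iteration converges in very few steps; the paper's contribution lies in organizing exactly this kind of iteration across a master-slave MPI pipeline.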
Abstract: The mixing of pollutants and sediments in the near-shore regions of natural water bodies depends heavily on characteristics such as the strength and frequency of flow instability. In the present paper, the instability of natural convection induced by the absorption of solar radiation in littoral regions is considered. Spectral analysis is conducted on the quasi-steady state flow to reveal the power and frequency modes of the instability at various positions. The results indicate that the power of the instability, the number of frequency modes, the prominence of higher frequency modes, and the highest frequency mode increase with the offshore distance and/or the Rayleigh number. Harmonic modes are present at relatively low Rayleigh numbers. For a given offshore distance, the position with the strongest power of instability is located adjacent to the sloping bottom, while the frequency modes are the same over the local depth. As the Rayleigh number increases, the unstable region extends toward the shore.
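The kind of spectral analysis described above, extracting the power and frequency modes of the instability from a quasi-steady time series, can be sketched as follows; the signal is synthetic (a fundamental at 0.5 Hz plus one harmonic and noise) and merely stands in for a measured flow quantity at one position:

```python
import numpy as np

# Synthetic time series at one position: fundamental mode, one harmonic,
# and measurement noise (all parameters are illustrative assumptions).
rng = np.random.default_rng(1)
fs, T = 100.0, 60.0                       # sampling rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 1.0 * t)
signal += 0.05 * rng.standard_normal(t.size)

# One-sided power spectrum via the FFT; its peaks mark the frequency modes.
power = np.abs(np.fft.rfft(signal)) ** 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[np.argmax(power[1:]) + 1]    # skip the zero-frequency bin
print(round(float(peak), 2))
```

Repeating this analysis at several positions and Rayleigh numbers is how the trends in instability power, mode count, and highest frequency would be mapped out.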
Abstract: Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, the testing set, and the learning rate are key elements: the collections of input/output patterns used to train the network and to assess its performance, and the rate at which weight adjustments are made. This paper describes a proposed back-propagation neural net classifier that performs cross-validation for the original neural network, in order to optimize classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated by means of five data sets: contact-lenses, cpu, weather symbolic, weather, and labor-neg-data. It is shown that, compared to the existing neural network, the training time is more than 10 times faster when the data set is larger than cpu or the network has many hidden units, while the accuracy ('percent correct') was the same for all data sets except contact-lenses, which is the only one with missing attributes. For contact-lenses, the accuracy with the proposed neural network was on average around 0.3% less than with the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
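The cross-validation loop underlying the proposed classifier can be sketched independently of the network itself; the majority-class model below is a deliberately trivial stand-in for the back-propagation classifier, since the fold-splitting and scoring logic is what carries over:

```python
import random

def k_fold_indices(n_samples, k, seed=42):
    """Split sample indices into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(train_fn, eval_fn, data, k=5):
    """Train on k-1 folds, evaluate on the held-out fold, average the score."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_rows = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        test_rows = [data[j] for j in test_idx]
        model = train_fn(train_rows)
        scores.append(eval_fn(model, test_rows))
    return sum(scores) / k

# Toy stand-in for the neural classifier: always predict the majority class.
data = [(x, int(x > 0)) for x in range(-10, 10)]
majority = lambda rows: max({y for _, y in rows},
                            key=[y for _, y in rows].count)
accuracy = lambda m, rows: sum(y == m for _, y in rows) / len(rows)
print(round(cross_validate(majority, accuracy, data, k=5), 2))
```

Substituting the back-propagation training and evaluation functions for `majority` and `accuracy` gives the cross-validated classifier the abstract describes.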
Abstract: The present paper develops and validates a numerical procedure for the calculation of turbulent combusting flow in converging and diverging ducts; through simulation of the heat transfer processes, the amount of production and spread of the NOx pollutant has been measured. A marching integration solution procedure employing the TDMA is used to solve the discretized equations. The turbulence model is the Prandtl mixing length method, and the combustion process is modeled using the Arrhenius and eddy dissipation methods. The thermal mechanism has been utilized for modeling the formation of the nitrogen oxides. The finite difference method and the Genmix numerical code are used for the numerical solution of the equations. Our results indicate the important influence of the limiting diverging angle of the diffuser on the pressure recovery coefficient. Moreover, owing to the strong dependence of the NOx pollutant on the maximum temperature in the domain, the NOx level is also at its maximum under this condition.
Abstract: The problem of mapping tasks onto a computational grid with the aim of minimizing the power consumption and the makespan, subject to the constraints of deadlines and architectural requirements, is considered in this paper. To solve this problem, we propose a solution from cooperative game theory based on the concept of the Nash Bargaining Solution. The proposed game-theoretic technique is compared against several traditional techniques. The experimental results show that when the deadline constraints are tight, the proposed technique achieves superior performance and reports competitive performance relative to the optimal solution.
Abstract: In this paper, we first show a relationship between two
stabilizing controllers, which presents an extended feedback system
using two stabilizing controllers. Then, we apply this relationship to
the two-stage compensator design. We consider single-input
single-output plants, but we do not assume the coprime
factorizability of the model. Thus, the results of this paper
are based on the factorization approach only, so that they can be
applied to numerous linear systems.
Abstract: This article analyzes the historical formation of the
policy of interethnic and interconfessional accord in Kazakhstan and
the features of its development at the present time.
The ethnic and confessional policy successfully pursued by
Kazakhstan at present is regarded as a major factor in promoting
the country's stability.
Abstract: This paper discusses a new, systematic approach to
the synthesis of an NP-hard class of non-regenerative Boolean
networks, described by FON[FOFF]={mi}[{Mi}], where for every
mj[Mj]∈{mi}[{Mi}], there exists another mk[Mk]∈{mi}[{Mi}], such
that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n), (where
'n' represents the number of distinct primary inputs). The method
automatically ensures exact minimization for certain important
self-dual functions with 2^(n-1) points in their one-sets. The elements meant for
grouping are determined from a newly proposed weighted incidence
matrix. Then the binary value corresponding to the candidate pair is
correlated with the proposed binary value matrix to enable direct
synthesis. We recommend algebraic factorization operations as a post
processing step to enable reduction in literal count. The algorithm
can be implemented in any high level language and achieves best
cost optimization for the problem dealt with, irrespective of the
number of inputs. For other cases, the method is iterated to
subsequently reduce it to a problem of O(n-1), O(n-2), ..., and then
solved. In addition, it leads to optimal results for problems exhibiting
higher degree of adjacency, with a different interpretation of the
heuristic, and the results are comparable with other methods.
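The pairing condition HD(mj, mk) = O(n) that drives the grouping can be illustrated with a short sketch; the 3-input one-set below is hypothetical, and the exact-distance test here merely stands in for the weighted incidence matrix machinery of the method:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary strings (minterms)."""
    return sum(x != y for x, y in zip(a, b))

def candidate_pairs(minterms, n):
    """Pairs of minterms at Hamming distance exactly n, i.e. the
    complementary pairing that the synthesis method exploits."""
    return [(a, b) for i, a in enumerate(minterms)
            for b in minterms[i + 1:] if hamming(a, b) == n]

# Hypothetical one-set for n = 3 inputs: each minterm and its bitwise
# complement sit at the maximum Hamming distance n.
ms = ["000", "111", "011", "100"]
print(candidate_pairs(ms, 3))
```

Each such pair is a candidate for the grouping step, after which the binary value matrix and algebraic factorization described above take over.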
In terms of literal cost, at the technology independent stage, the
circuits synthesized using our algorithm enabled net savings over
AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-
Products or ESOP forms) and AND-OR-EXOR logic by 45.57%,
41.78% and 41.78% respectively for the various problems.
Circuit level simulations were performed for a wide variety of
case studies at 3.3V and 2.5V supply to validate the performance of
the proposed method and the quality of the resulting synthesized
circuits at two different voltage corners. Power estimation was
carried out for a 0.35 micron TSMC CMOS process technology. In
comparison with AOI logic, the proposed method enabled mean
power savings of 42.46%. With respect to AND-EXOR logic, the
proposed method yielded power savings to the tune of 31.88%, while
in comparison with AND-OR-EXOR level networks, average power
savings of 33.23% were obtained.
Abstract: Food mileage is one of the important issues concerning environmental sustainability. In this research we have utilized a prototype platform with iterative user-centered testing. With these findings we successfully demonstrate the use of persuasive methods in context to influence users' attitudes towards the sustainability concept.
Abstract: For the past many decades, human beings have been
suffering from a plethora of natural disasters. The occurrence of
disasters is a frequent process, and conceptual myths about them
change as more and more advancements are made. Although we are
living in a technological era, in developing countries like Pakistan
disasters are shaped by socially constructed roles. The need is to
understand the most vulnerable group of society, i.e. females; their
issues are complex in nature because of their undermined gender
status in society. There is a need to identify the major issues
regarding females and to enhance the achievement of the Millennium
Development Goals (MDGs). Gender issues are of great concern all
around the globe, including in Pakistan. Here, female visibility in
society is low, in everyday life as well as during disasters, and there
is a failure to understand the double burden women carry, including
productive and reproductive care. Women contribute a great deal to
society, so we need to make them more disaster resilient. For this,
non-structural measures like awareness, training and education must
be carried out. In both rural and urban settings, in any disaster such
as an earthquake or flood, elements like gender perspective, age,
physical health, and demographic issues contribute towards
vulnerability. In Pakistan, gender issues in disasters were of little
concern before the 2005 earthquake and the 2010 floods. Significant
achievements were made after the 2010 floods, when a gender and
child cell was created to provide all facilities to women and girls.
The aim of the study is to highlight all the facilities necessary in a
disaster to build coping mechanisms in females, from basic rights up
to an advanced level, including education.
Abstract: By using the method of coincidence degree theory and constructing a suitable Lyapunov functional, several sufficient conditions are established for the existence and global exponential stability of anti-periodic solutions for Cohen-Grossberg shunting inhibitory neural networks with delays. An example is given to illustrate the feasibility of our results.
Abstract: The objective of this research is to investigate the
advantages of using large-diameter 0.7 inch prestressing strands in
pretensioning applications. The advantages of large-diameter strands
are mainly beneficial in heavy construction applications. Bridges and
tunnels are subjected to higher daily traffic with an exponential
increase in trucks' ultimate weight, which raises the demand for higher
structural capacity of bridges and tunnels. In this research, precast
prestressed I-girders were considered as a case study. The flexural
capacities of girders fabricated using 0.7 inch strands and different
concrete strengths were calculated and compared to the capacities of
girders fabricated using 0.6 inch strands and equivalent concrete
strengths. The effect of the bridge deck concrete strength on the
composite deck-girder section capacity was investigated because of
its possible effect on the final section capacity. Finally, bridge
cross-sections of girders designed using regular 0.6 inch strands were
compared with those using the large-diameter 0.7 inch strands. The
research findings showed that the structural advantages of 0.7 inch
strands allow for using fewer bridge girders, reduced material
quantities, and lighter-weight members. The structural advantages of
0.7 inch strands are maximized when high strength concrete (HSC) is
used in girder fabrication and concrete with a minimum 5 ksi
compressive strength is used in pouring the bridge decks. The use of
0.7 inch strands in the bridge industry can partially contribute to the
improvement of bridge conditions, minimize construction costs, and
reduce the construction duration of projects.
Abstract: We report the results of a lattice Boltzmann
simulation of magnetohydrodynamic damping of sidewall convection
in a rectangular enclosure filled with a porous medium. In particular
we investigate the suppression of convection when a steady magnetic
field is applied in the vertical direction. The left and right vertical
walls of the cavity are kept at constant but different temperatures
while both the top and bottom horizontal walls are insulated. The
effects of the controlling parameters involved in the heat transfer and
hydrodynamic characteristics are studied in detail. The heat and mass
transfer mechanisms and the flow characteristics inside the enclosure
depend strongly on the strength of the magnetic field and the Darcy
number. The average Nusselt number decreases with rising values of
the Hartmann number, while it increases with increasing values of
the Darcy number.
Abstract: Nanotechnology is the science of creating, using and
manipulating objects which have at least one dimension in the range of
0.1 to 100 nanometers. In other words, nanotechnology is the
reconstruction of a substance using its individual atoms, arranging
them in a way that is desirable for our purpose.
The main reason that nanotechnology has been attracting
attention is the unique properties that objects show when they are
formed at the nano-scale. The differing characteristics that nano-scale
materials show compared to their naturally existing forms are both
useful for creating high quality products and dangerous when they come
into contact with the body or spread into the environment.
In order to control and lower the risks of such nano-scale particles,
the following three main topics should be considered:
1) First of all, these materials can cause long-term diseases that
may show their effects on the body years after penetrating human
organs, and since this science has only recently been developed at an
industrial scale, not enough information is available about their
hazards to the body.
2) Second, these particles can easily spread into the
environment and remain in air, soil or water for a very long time, in
addition to having a high ability to penetrate the skin and cause new
kinds of diseases.
3) Third, to protect the body and the environment against
the danger of these particles, the protective barriers must be finer than
these small objects, and such defenses are hard to accomplish.
This paper will review, discuss and assess the risks that humans and
the environment face as this new science develops at a high rate.