Abstract: The Kumamoto area of Kyushu, Japan covers 1,041 km2 and has a population of about one million. It is the largest area in Japan that depends entirely on groundwater for its drinking water, with annual local groundwater use of about 200 MCM. The main groundwater recharge area is understood to lie in the paddy-field zone in the middle reaches of the Shira River Basin, where infiltration of irrigation water exceeds 100 mm/day. However, as the paddy-rice planting area shrank through urbanization and an acreage-reduction policy, the groundwater balance deteriorated. Since 2004, Kumamoto City and four companies have therefore provided financial support to increase recharge by ponding water in the fields. In this paper, the author reports on the recovery of groundwater achieved by this recharge and estimates its efficiency by statistical methods.
Abstract: In North America, most power distribution systems employ a four-wire multi-grounded neutral (MGN) design. Multi-grounded three-phase four-wire distribution systems have inherent characteristics under unbalanced conditions that make the mechanism of voltage swell and voltage sag in MGN feeders difficult to understand; this paper explains those characteristics. The simulation tool used in this paper is MATLAB under Windows. The paper introduces an equivalent model of a full-scale multi-grounded distribution system implemented in MATLAB. The results are expected to help utility engineers understand the impact of MGN on distribution system operations.
Abstract: In order to develop forest management strategies for tropical forest in Malaysia, surveying forest resources and monitoring the forest area affected by logging activities are essential. Tremendous effort has gone into the classification of land cover related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technology such as GIS. Indeed, classification is a compulsory step in any remote sensing research. The main objective of this paper is therefore to assess the classification accuracy of a forest map classified from Landsat TM data using different numbers of reference points (200 and 388). The comparison was made between an observation approach (200 reference points) and a combined interpretation and observation approach (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land and agricultural crop/mixed horticulture, could be identified from differences in their spectral responses. Results showed that the overall accuracy from 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. However, when the 200 reference points were increased to 388 in the confusion matrix, the accuracy improved slightly from 83.5% to 89.17%, with the kappa statistic increasing from 0.7502459 to 0.8026135. The accuracy of this classification suggests that the strategy for selecting training areas, the interpretation approach and the number of reference points used are important for achieving a better classification result.
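The overall accuracy and kappa statistic discussed above can be computed directly from a confusion matrix. The following sketch shows the standard calculation on a made-up two-class matrix (an illustration only, not the study's data):

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix.

    cm[i][j] = number of reference points of class i mapped to class j.
    """
    total = sum(sum(row) for row in cm)
    # Observed agreement: fraction of points on the diagonal.
    po = sum(cm[i][i] for i in range(len(cm))) / total
    # Chance agreement: sum of products of marginal row/column totals.
    pe = sum(
        sum(cm[i]) * sum(row[i] for row in cm)
        for i in range(len(cm))
    ) / total ** 2
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Hypothetical two-class matrix, not taken from the study.
cm = [[50, 5],
      [10, 35]]
acc, kappa = accuracy_and_kappa(cm)
print(acc, round(kappa, 4))  # → 0.85 0.6939
```

The same formula explains why adding reference points can raise kappa: both the observed and the chance agreement are re-estimated from the enlarged matrix.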
Abstract: Amid the constantly fluctuating conditions prevailing today, changeability is a strategic key factor for a manufacturing company seeking success in international markets. To cope with turbulence and a growing level of unpredictability, the focus here is not only on the flexible design of production systems but in particular on the employee as an enabler of change. It is important to enable employees of manufacturing companies to participate actively in change events and change decisions. To this end, a learning factory has been created, intended to serve the development of change-promoting competences and to sensitize employees to the necessity of change.
Abstract: A computer model of Quantum Theory (QT) has been developed by the author. The major goal of the computer model was to support and demonstrate as large a scope of QT as possible. This includes simulations of the major QT (Gedanken) experiments such as, for example, the famous double-slit experiment.
Besides the anticipated difficulties with (1) transforming exacting mathematics into a computer program, two further types of problems showed up, namely (2) areas where QT provides a complete mathematical formalism, but where the equations for concrete applications are not solvable at all, or only with extremely high effort; and (3) QT rules which are formulated in natural language and which do not seem to be translatable into precise mathematical expressions, nor into a computer program.
The paper lists problems in all three categories and also describes the possible solutions or circumventions developed for the computer model.
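As a minimal illustration of the kind of simulation involved, the textbook two-slit interference pattern for ideal point slits can be computed by summing the complex amplitudes of the two paths. This is a generic sketch, not the author's model; the slit separation, wavelength and screen distance are arbitrary values:

```python
import cmath
import math

def intensity(x, d=1e-3, wavelength=500e-9, L=1.0):
    """Interference intensity at screen position x for two ideal point slits.

    d: slit separation, L: slit-to-screen distance (metres).
    """
    # Path lengths from each slit to the screen point.
    r1 = math.hypot(L, x - d / 2)
    r2 = math.hypot(L, x + d / 2)
    k = 2 * math.pi / wavelength
    # Superpose the two complex amplitudes and take the squared modulus.
    amp = cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)
    return abs(amp) ** 2

# Central fringe (x = 0) is a maximum: both paths have equal length.
print(round(intensity(0.0), 6))  # → 4.0
```

Near a dark fringe (path difference of half a wavelength) the two amplitudes cancel and the intensity drops to essentially zero.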
Abstract: Testable software has two inherent properties: observability and controllability. Observability facilitates observation of the internal behavior of software to the required degree of detail. Controllability allows the creation of difficult-to-achieve states prior to the execution of various tests. In this paper, we describe COTT, a Controllability and Observability Testing Tool, for creating testable object-oriented software. COTT provides a framework that helps the user instrument object-oriented software to build the required controllability and observability. During testing, the tool facilitates the creation of difficult-to-achieve states required for testing difficult-to-test conditions and the observation of internal details of execution at the unit, integration and system levels. The execution observations are logged in a test log file, which is used for post-analysis and to generate test coverage reports.
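The two properties can be illustrated in miniature. The sketch below is a generic stand-in, not COTT's actual API: a decorator provides observability by logging each call and its result, and a state-injection method provides controllability by forcing a hard-to-reach internal state before a test:

```python
import functools

test_log = []  # stands in for a test log file

def observable(fn):
    """Observability: record name, arguments and result of each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        test_log.append((fn.__name__, args, result))
        return result
    return wrapper

class Stack:
    def __init__(self):
        self._items = []

    @observable
    def push(self, item):
        self._items.append(item)
        return len(self._items)

    def force_state(self, items):
        """Controllability: set up a difficult-to-achieve state directly."""
        self._items = list(items)

s = Stack()
s.force_state([1, 2])   # start the test from a prepared state
s.push(3)
print(test_log[-1][0], test_log[-1][2])  # → push 3
```

A post-analysis tool can then read the log to reconstruct the execution and derive coverage, which is the role the abstract assigns to the test log file.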
Abstract: The purpose of this study is to find a natural gait for a biped robot, similar to that of a human being, by analyzing the COG (Center Of Gravity) trajectory of human gait. Human gait naturally maintains stability while using minimum energy. This paper seeks a natural gait pattern for a biped robot that uses minimum energy while maintaining stability, by analyzing the human gait pattern measured from gait images in the sagittal plane and the COG trajectory in the frontal plane. It is not possible to apply the torques of human joints directly to those of a biped robot, because the two have different degrees of freedom; nonetheless, a human and a 5-link biped robot are kinematically similar. We therefore generate the gait pattern of the 5-link biped robot using a genetic algorithm (GA) that adapts the gait pattern using the human's ZMP (Zero Moment Point) and the joint torques measured from the human gait pattern. The proposed algorithm creates a fluent gait pattern for the biped robot, similar to a human's, and minimizes energy consumption, because the gait pattern of the 5-link biped robot model is derived from the human joint torques in the sagittal plane and the ZMP trajectory in the frontal plane. This paper demonstrates the superiority of the proposed algorithm by evaluating two 5-link biped robots, one using a gait pattern generated in the conventional way with inverse kinematics and one generated by the proposed method considering visual naturalness and efficiency.
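The GA-based search can be sketched in miniature. The toy below evolves a small parameter vector to minimize a stand-in "energy" cost; the actual method scores candidate gaits against the measured human torques and ZMP trajectory, which is far beyond an abstract-sized example. The target vector, population size and mutation scale here are arbitrary assumptions:

```python
import random

random.seed(0)

# Hypothetical gait parameters that would minimize "energy" (made up).
TARGET = [0.3, -0.2, 0.5]

def energy(params):
    """Toy cost: squared distance to the hypothetical optimum."""
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def evolve(pop_size=30, generations=50, sigma=0.1):
    # Random initial population of candidate parameter vectors.
    population = [[random.uniform(-1, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    best = min(population, key=energy)
    for _ in range(generations):
        # Elitism: keep the best, fill the rest with mutated copies of it.
        population = [best] + [
            [p + random.gauss(0, sigma) for p in best]
            for _ in range(pop_size - 1)
        ]
        best = min(population, key=energy)
    return best

best = evolve()
print(round(energy(best), 4))  # evolved cost is close to zero
```

In the full method the fitness would additionally penalize ZMP excursions that threaten stability, so the GA trades energy against balance rather than minimizing a single quadratic.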
Abstract: Bridges are one of the main components of transportation networks. They should remain functional before and after an earthquake for emergency services. We therefore need to assess the seismic performance of bridges under different seismic loadings. The fragility curve is one of the most popular tools in seismic evaluation. Fragility curves are conditional probability statements which give the probability of a bridge reaching or exceeding a particular damage level for a given intensity level. In this study, the seismic performance of a two-span simply supported concrete bridge is assessed. Due to the usual lack of empirical data, an analytical fragility curve was developed from the results of dynamic analyses of the bridge subjected to different time histories in a near-fault area.
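Fragility curves of this kind are commonly modeled as a lognormal conditional probability. The sketch below evaluates P(damage ≥ state | intensity) for a generic lognormal curve; the median and dispersion values are arbitrary illustrations, not the paper's fitted parameters:

```python
import math

def fragility(im, median, beta):
    """P(damage >= state | intensity measure im), lognormal fragility model.

    median: intensity at 50% probability of exceedance;
    beta: lognormal standard deviation (dispersion).
    """
    z = math.log(im / median) / beta
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical curve: median PGA 0.4 g, dispersion 0.6.
print(fragility(0.4, 0.4, 0.6))           # at the median → 0.5
print(round(fragility(0.8, 0.4, 0.6), 3)) # higher shaking, higher probability
```

Fitting consists of estimating the median and dispersion from the damage states observed in the suite of dynamic analyses.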
Abstract: In today's new technology era, clusters have become a necessity for modern computing and data applications, since many applications take a long time (even days or months) for computation. Although parallelization speeds up computation, the time required for many applications can still be substantial. Thus, the reliability of the cluster becomes a very important issue, and the implementation of a fault tolerance mechanism becomes essential. The difficulty of designing a fault tolerant cluster system increases with the variety of possible failures. The most important requirement is that an algorithm which handles a simple failure in the system must also tolerate more severe failures. In this paper, we implement the watchdog timer concept in a parallel environment to take care of failures. Implementing this simple algorithm in our project lets us handle different types of failures; consequently, we found that the reliability of the cluster improves.
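The watchdog-timer idea can be sketched as follows: each node periodically "kicks" its watchdog, and a monitor declares the node failed if no kick arrives within the timeout. This is a simplified single-process sketch under assumed timeout values; a real cluster implementation distributes the monitor and triggers recovery on expiry:

```python
import time

class Watchdog:
    """Declares failure if heartbeat() is not called within `timeout` seconds."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_beat = time.monotonic()

    def heartbeat(self):
        # Called by the monitored node while it is healthy.
        self.last_beat = time.monotonic()

    def expired(self):
        # Polled by the monitor to detect a silent (failed) node.
        return time.monotonic() - self.last_beat > self.timeout

wd = Watchdog(timeout=0.2)
wd.heartbeat()
print(wd.expired())        # node just reported in → False
time.sleep(0.3)            # node goes silent longer than the timeout
print(wd.expired())        # → True
```

The appeal of the mechanism is that it is failure-type agnostic: a crashed, hung, or partitioned node all look the same to the monitor, namely as a missing heartbeat.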
Abstract: We address the problem of creating a seismic alert system, based upon artificial neural networks trained using the well-known back-propagation and genetic algorithms, in order to issue an alarm to the population of a specific city about an imminent earthquake greater than magnitude 4.5 on the Richter scale, thereby helping to avoid disasters and human losses. In lieu of using the propagation wave, we employed the magnitude of the earthquake to establish a correlation between the magnitudes recorded in a monitored area and in the city where we want to issue the alarm. To measure the accuracy of the proposed method, we used a database provided by CIRES, which contains the records of 2,500 earthquakes originating in the State of Guerrero and Mexico City. In particular, we applied the proposed method to generate a warning for Mexico City, employing the magnitudes recorded in the State of Guerrero.
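The correlation step can be illustrated with a single linear neuron trained by gradient descent, a toy stand-in for the back-propagation network. The magnitude pairs below are fabricated (constructed with slope 0.8), not CIRES records:

```python
# Fabricated training pairs: (magnitude at source region, magnitude at city).
data = [(4.0, 3.2), (4.5, 3.6), (5.0, 4.0), (5.5, 4.4), (6.0, 4.8)]

w = 0.0          # single weight of a linear neuron, y_hat = w * x
lr = 0.01        # learning rate

for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = -2 * sum(x * (y - w * x) for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # → 0.8 (the slope used to fabricate the data)

# Alert rule: warn if the predicted city magnitude exceeds a threshold.
def alert(source_magnitude, threshold=4.5):
    return w * source_magnitude > threshold
```

A multi-layer network trained the same way can capture a nonlinear source-to-city relationship; the GA mentioned in the abstract would search the weight space globally where gradient descent alone stalls.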
Abstract: Every commercial bank optimises its asset portfolio
depending on the profitability of assets and chosen or imposed
constraints. This paper proposes and applies a stylized model for
optimising banks' asset and liability structure, reflecting profitability
of different asset categories and their risks as well as costs associated
with different liability categories and reserve requirements. The level
of detail for asset and liability categories is chosen to create a
suitably parsimonious model and to include the most important
categories in the model. It is shown that the most appropriate
optimisation criterion for the model is the maximisation of the ratio
of net interest income to assets. The maximisation of this ratio is
subject to several constraints. Some are accounting identities or
dictated by legislative requirements; others vary depending on the
market objectives of a particular bank. The model predicts a variable amount of assets allocated to loan provision.
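The ratio-maximisation idea can be sketched with a deliberately crude brute-force search, a stand-in for the paper's constrained optimisation. The rates and the single liquidity constraint below are made up for illustration: allocate a fixed balance sheet between loans and liquid securities so that net interest income over assets is maximised.

```python
# Hypothetical annual rates, not taken from the paper.
LOAN_RATE = 0.07        # interest earned on loans
BOND_RATE = 0.03        # interest earned on liquid securities
DEPOSIT_COST = 0.02     # interest paid on the deposits funding the assets

ASSETS = 100.0          # fixed balance-sheet size
MIN_LIQUID = 0.2        # constraint: at least 20% held in liquid securities

def ratio(loans):
    """Net interest income / assets for a given loan allocation."""
    bonds = ASSETS - loans
    income = loans * LOAN_RATE + bonds * BOND_RATE - ASSETS * DEPOSIT_COST
    return income / ASSETS

# Brute force over feasible loan allocations (step 1.0).
feasible = [x for x in range(0, 101) if (ASSETS - x) / ASSETS >= MIN_LIQUID]
best_loans = max(feasible, key=ratio)
print(best_loans, round(ratio(best_loans), 4))  # → 80 0.042
```

With only one binding constraint the optimum sits at the constraint boundary, which mirrors the model's prediction that loan provision varies with the constraints a particular bank faces.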
Abstract: In this paper, an improved technique for contingency ranking using an artificial neural network (ANN) is presented. The proposed approach applies multi-layer perceptrons, trained by backpropagation, to contingency analysis. Severity indices for dynamic stability assessment are presented; these indices are based on the concept of coherency and on three dot products of the system variables. It is well known that some indices work better than others for a particular power system. This paper, along with test results from several different systems, demonstrates that combining indices with an ANN provides better ranking than a single index. The presented results are obtained through the use of the power system simulator PSS/E and MATLAB 6.5 software.
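The benefit of combining indices can be illustrated with a simple composite ranking, a generic sketch using fabricated index values rather than the paper's coherency and dot-product definitions: normalize each index across the contingencies so no single scale dominates, then rank by the averaged score.

```python
# Fabricated severity indices for four contingencies (rows) under three
# different index definitions (columns); higher means more severe.
indices = {
    "line 1-2 outage": [0.9, 0.4, 0.7],
    "line 2-3 outage": [0.2, 0.1, 0.3],
    "gen 1 outage":    [0.6, 0.8, 0.9],
    "gen 2 outage":    [0.4, 0.3, 0.2],
}

def composite_rank(indices):
    n = len(next(iter(indices.values())))
    # Min-max normalize each index column across contingencies.
    cols = [[v[j] for v in indices.values()] for j in range(n)]
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]

    def score(vals):
        return sum((v - l) / (h - l) for v, l, h in zip(vals, lo, hi)) / n

    # Most severe contingency first.
    return sorted(indices, key=lambda k: score(indices[k]), reverse=True)

print(composite_rank(indices))
```

In the paper's approach the fixed averaging above is replaced by an ANN, which learns system-specific weights for the indices instead of treating them equally.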
Abstract: The Beshar River is an aquatic ecosystem located next to the city of Yasuj in southern Iran. The river has been contaminated by industrial and other activities in the region, including effluent from a sugar factory, the Imam Sajjad hospital, drainage from agricultural farms, Yasuj urban surface runoff and effluent from wastewater treatment plants, especially the Yasuj wastewater treatment plant. In order to evaluate the effects of these pollutants on the quality of the Beshar River, five monitoring stations were selected along its course. The first station is located upstream of Yasuj near the Dehnow village; stations 2 to 4 are located east, south and west of the city; and the fifth station is located downstream of Yasuj. Several water quality parameters were sampled, including pH, dissolved oxygen, biological oxygen demand (BOD), temperature, conductivity, turbidity, total dissolved solids and discharge or flow measurements. Water samples from the five stations were collected and analyzed during 2008 to 2010 to determine the following physicochemical parameters: EC, pH, TDS, TH, NO2, DO, BOD5 and COD. The study shows that the BOD5 value is at a minimum at station 1 (1.7 ppm), increases downstream through stations 2 to 4 to a maximum (11.6 ppm), and then decreases at station 5. The DO value is at a maximum at station 1 (8.45 ppm), decreases downstream through stations 2 to 4 to a minimum (3.1 ppm), and then increases at station 5. BOD and TDS are highest, and DO lowest, at the fourth station, marking it as more polluted than the other stations. The study also shows that the average values of the water quality parameters in the first year of sampling (2008) indicate better quality than in the third year (2010), because of the recent drought in the region and increasing pollution. As the course of the Beshar River after the fifth station passes through a mountainous area with a steeper slope and higher flow velocity, the physicochemical parameters improve at the fifth station due to pollutant degradation and dilution. Finally, the point and nonpoint pollutant sources of the Beshar River were determined and compared with the monitoring results.
Abstract: The article is devoted to Kazakh repatriates and their migration to Kazakhstan as their historical homeland, and also addresses the problem of migrants' adaptation in the republic, particularly in Almaty oblast (region). The authors used up-to-date statistics and materials of the Department of the Migration Committee to analyze the newcomers' numbers and the distribution of repatriates in this oblast. Having studied this region, they were able to identify the main reasons why the Kazakh diaspora in Central Asia, Iran, Afghanistan and Turkey is eager to return to its historic homeland, along with the repatriates' adaptation to the republic.
Abstract: Fluidized beds have found widespread application over the last decade in industries involving gas-solid mixtures, for instance in catalytic cracking in petrochemical industries or as driers in food industries. The high heat and mass transfer capacity of fluidized beds has made the device very popular. In order to achieve higher efficiency, particular attention has been paid to beds with pulsating air flow. In this paper, a fluidized bed device with pulsating flow was designed and constructed. The particles used during the tests range in size from 40 to 100 μm. The purpose of the experiments is to investigate the air flow regime, observe the particles' movement and measure the pressure loss along the bed. The effects of pulsation are evaluated by comparing the results for continuous and pulsating flow at various gas speeds. Moreover, the experiment is numerically simulated using the Fluent software, and the numerical results are compared with the experimental ones.
Abstract: Human activity is a major concern in a wide variety of
applications, such as video surveillance, human computer interface
and face image database management. Detecting and recognizing
faces is a crucial step in these applications. Furthermore, major
advancements and initiatives in security applications in the past years
have propelled face recognition technology into the spotlight. The
performance of existing face recognition systems declines significantly
if the resolution of the face image falls below a certain level.
This is especially critical in surveillance imagery where often, due to
many reasons, only low-resolution video of faces is available. If these
low-resolution images are passed to a face recognition system, the
performance is usually unacceptable. Hence, resolution plays a key
role in face recognition systems. In this paper we introduce a new low-resolution face recognition system based on a mixture of expert neural networks. To produce the low-resolution input images, we down-sampled the 48 × 48 ORL images to 12 × 12 using the nearest-neighbor interpolation method; applying the bicubic interpolation method then yields enhanced images, which are given to the Principal Component Analysis feature extractor. Comparison with some of the most closely related methods indicates that the proposed model yields an excellent recognition rate for low-resolution face recognition, namely 100% on the training set and 96.5% on the test set.
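The down-sampling step described above (48 × 48 to 12 × 12 by nearest neighbor) amounts to keeping every fourth pixel. A minimal sketch on a plain 2-D list follows; the bicubic re-enlargement, which needs weighted interpolation, is omitted here:

```python
def nn_downsample(img, factor):
    """Nearest-neighbor down-sampling: keep every `factor`-th pixel."""
    return [row[::factor] for row in img[::factor]]

# Synthetic 48 x 48 "image" whose pixel value encodes its position.
img = [[r * 48 + c for c in range(48)] for r in range(48)]
small = nn_downsample(img, 4)
print(len(small), len(small[0]))  # → 12 12
print(small[1][1])                # pixel taken from row 4, column 4 → 196
```

The 12 × 12 result simulates the low-resolution surveillance input; the subsequent bicubic enlargement smooths it before PCA feature extraction.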
Abstract: IT infrastructures are becoming more and more complex. Accordingly, in the first industrial IT systems the P2P paradigm has replaced the traditional client-server approach, and methods of self-organization are gaining more and more importance. It is known from past work that regular structures like grids in particular may significantly improve system behavior and performance. This contribution introduces a new algorithm, based on a biological analogy, which enables the growth of several regular structures on top of anarchically grown P2P or social network structures.
Abstract: Obtaining labeled data in supervised learning is often difficult and expensive, and the trained learning algorithm thus tends to overfit due to the small number of training data. As a result, some researchers have focused on using unlabeled data, which need not follow the same generative distribution as the labeled data, to construct high-level features for improving performance on supervised learning tasks. In this paper, we investigate the impact of the relationship between unlabeled and labeled data on classification performance. Specifically, we apply different unlabeled data sets, with different degrees of relation to the labeled data, to a handwritten digit classification task based on the MNIST dataset. Our experimental results show that the higher the degree of relation between unlabeled and labeled data, the better the classification performance. Although unlabeled data drawn from a completely different generative distribution than the labeled data yields the lowest classification performance, we still achieve high classification performance. This leads to expanding the applicability of supervised learning algorithms using unsupervised learning.
Abstract: Banishing hunger from the face of the earth has been a frequently expressed goal at various international, national and regional conferences since 1974. Providing food security has become an important issue across the world, particularly in developing countries. In a developing country like India, where the growth rate of the population exceeds that of food grain production, food security is a question of great concern. According to the International Food Policy Research Institute's Global Hunger Index, 2011, India ranks 67th among the 81 countries of the world with the worst food security status. After the Green Revolution, India became a food surplus country; its production increased from 74.23 million tonnes in 1966-67 to 257.44 million tonnes in 2011-12. But after achieving self-sufficiency in food during the last three decades, the country now faces new challenges due to increasing population, climate change and stagnation in farm productivity. Therefore, the main objective of the present paper is to examine the food security situation at the national level and further to explain the paradox of food insecurity, at the micro level, in a food surplus state of India, i.e. Punjab. In order to achieve these objectives, secondary data collected from the Ministry of Agriculture and the Agriculture Department of Punjab State were analyzed. The results showed that despite surplus food production the country still faces a food insecurity problem at the micro level. Within the Kandi belt of Punjab State, the area adjacent to the plains is food secure while the area along the hills falls in the food insecure zone.
The present paper is divided into three sections: (i) Introduction, (ii) Analysis of the food security situation at the national level as well as the micro level (Kandi belt of Punjab State), and (iii) Concluding Observations.
Abstract: Perth will run out of available sustainable natural
water resources by 2015 if nothing is done to slow usage rates,
according to a Western Australian study [1]. Alternative water
technology options need to be considered for the long-term
guaranteed supply of water for agricultural, commercial, domestic
and industrial purposes. Seawater is an alternative source of water for
human consumption, because seawater can be desalinated and
supplied in large quantities to a very high quality.
While seawater desalination is a promising option, the technology
requires a large amount of energy which is typically generated from
fossil fuels. The combustion of fossil fuels emits greenhouse gases
(GHG) and is implicated in climate change. In addition to
environmental emissions from electricity generation for desalination,
greenhouse gases are emitted in the production of chemicals and
membranes for water treatment. Since Australia is a signatory to the
Kyoto Protocol, it is important to quantify greenhouse gas emissions
from desalinated water production.
A life cycle assessment (LCA) has been carried out to determine
the greenhouse gas emissions from the production of 1 gigalitre (GL)
of water from the new plant. In this LCA analysis, a new desalination
plant that will be installed in Bunbury, Western Australia, and known
as Southern Seawater Desalinization Plant (SSDP), was taken as a
case study. The system boundary of the LCA mainly consists of three
stages: seawater extraction, treatment and delivery. The analysis
found that the equivalent of 3,890 tonnes of CO2 could be emitted
from the production of 1 GL of desalinated water. This LCA analysis has also identified that the reverse osmosis process would cause the most significant greenhouse emissions, as a result of the electricity used, if that electricity is generated from fossil fuels.
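The headline figure can be re-expressed per unit of delivered water. The quick check below is unit conversion only, using the 3,890 t CO2-e per GL result stated in the abstract:

```python
# LCA result from the abstract: tonnes of CO2-equivalent per gigalitre.
TONNES_CO2E_PER_GL = 3890

KL_PER_GL = 1_000_000   # 1 GL = 10^6 kL (1 kL = 1 m^3)
KG_PER_TONNE = 1000

# Emission intensity per kilolitre of desalinated water.
kg_per_kl = TONNES_CO2E_PER_GL * KG_PER_TONNE / KL_PER_GL
print(kg_per_kl)  # → 3.89 kg CO2-e per kL
```

That is roughly 3.9 kg of CO2-equivalent for each cubic metre of water delivered, which is why the electricity source for the reverse osmosis stage dominates the result.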