Abstract: The sustainable development and utilization of water resources is crucial. The ecological environment and water resources systems form the foundation of the existence and development of the social economy, and the urban ecological support system depends on these resources as well. This research studies the vulnerability, criticality, and risk of climate change with respect to water supply and demand in the main administrative districts of the Taijiang Area (Tainan City). Based on the two scenarios set in this paper and various factors (indexes), this research adopts two kinds of weights (equal and AHP) to conduct the calculation and establish the water supply and demand risk map for the target year 2039. According to the risk analysis based on equal weights, only one district belongs to a high-grade district (Grade 4). Based on the AHP weights, 16 districts belong to a high-grade or higher-grade district (Grades 4 and 5), and among them, two districts belong to the highest grade (Grade 5). These results show that the risk level of water supply and demand in cities is higher than that in towns. The government generally gives more attention to the adjustment strategy in the “cities." However, it should also provide proper adjustment strategies for the “towns" to be able to cope with the risks of water supply and demand.
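Weighting by AHP, as used above, typically derives criterion weights from a pairwise comparison matrix. A minimal sketch, with a hypothetical 3-criterion judgement matrix (not the paper's data), illustrates the principal-eigenvector computation via power iteration:

```python
# Sketch: deriving AHP criterion weights as the principal eigenvector
# of a pairwise comparison matrix, via power iteration.
# The 3x3 matrix below is a hypothetical example, not the paper's data.

def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of a pairwise matrix,
    normalized so the weights sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        # multiply the matrix by the current weight vector
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]
    return w

# Hypothetical judgements: criterion A is 3x as important as B, 5x as C.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)
```

The resulting weights are then multiplied with the (normalized) factor scores per district to obtain the composite risk grade.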
Abstract: Some communication systems are highly complex, such as the multifunction array radar (MFAR), in which many functions are integrated and performed simultaneously: the classic tracking and surveillance functions as well as functions related to communication, countermeasures, and calibration. All these functions are divided into tasks to be executed. The task scheduler is a key element of the radar, since it plans and distributes the energy and time resources shared and used by all tasks. This paper presents schedulers based on the use of multiple queues. Several schedulers have been designed and studied, and a comparative analysis of the different schedulers has been carried out. The tests and experiments were performed by means of software system simulation. Finally, a suitable set of radar characteristics was selected to evaluate the behavior of the task scheduler.
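A multiple-queue scheduler of the kind described can be sketched as follows; the two queues, task names, dwell durations, and the fixed-priority policy are illustrative assumptions, not the schedulers actually designed in the paper:

```python
# Sketch of a multiple-queue task scheduler: tasks wait in separate
# queues (tracking served before surveillance here, a hypothetical
# policy) and are executed until the dwell-time budget is exhausted.

from collections import deque

def schedule(queues, time_budget):
    """Pop tasks from the queues in priority order while they fit in
    the remaining time budget; return the executed task names."""
    executed = []
    remaining = time_budget
    for q in queues:              # queues listed highest priority first
        while q and q[0][1] <= remaining:
            name, duration = q.popleft()
            remaining -= duration
            executed.append(name)
    return executed

tracking = deque([("track-1", 2), ("track-2", 2)])
surveillance = deque([("scan-A", 3), ("scan-B", 3)])
plan = schedule([tracking, surveillance], time_budget=8)
```

With a budget of 8 time units, both tracking tasks and the first scan fit; the second scan stays queued for the next scheduling interval.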
Abstract: Since actuator capacity is limited, in the real application of active control systems under severe earthquakes it is conceivable that the actuators saturate; hence actuator saturation should be considered as a constraint in the design of optimal controllers. In this paper, the optimal design of active controllers for nonlinear structures considering actuator saturation is studied. The proposed method for designing optimal controllers is based on defining an optimization problem whose objective is to minimize the maximum displacement of the structure when an actuator of limited capacity is used. To this end, a single-degree-of-freedom (SDF) structure with bilinear hysteretic behavior has been simulated under white-noise ground accelerations of different amplitudes. An active tendon control mechanism, comprising prestressed tendons and an actuator, and an instantaneous optimal control algorithm based on the extended nonlinear Newmark method have been used. To achieve the best results, the weights corresponding to displacement, velocity, acceleration, and control force in the performance index have been optimized by the Distributed Genetic Algorithm (DGA). Results show the effectiveness of the proposed method in accounting for actuator saturation. Based on the numerical simulations, it can also be concluded that the actuator capacity and the average value of the required control force are two important factors in designing nonlinear controllers that account for actuator saturation.
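As a much simplified illustration of actuator saturation (not the paper's extended Newmark/bilinear hysteretic model), the sketch below simulates a linear SDF oscillator under a sinusoidal base acceleration, with a velocity-feedback control force clamped to an assumed actuator capacity:

```python
# Minimal sketch, assuming illustrative parameters: a linear SDF
# oscillator under a pulse of near-resonant base acceleration, with a
# velocity-feedback control force clamped to the actuator capacity.
# This is NOT the paper's controller, only a demonstration of clamping.

import math

def simulate(u_max=0.0, gain=50.0, dt=0.001, steps=5000):
    m, c, k = 1.0, 0.2, 100.0          # mass, damping, stiffness
    x, v = 0.0, 0.0
    peak = 0.0
    for i in range(steps):
        t = i * dt
        ag = math.sin(10.0 * t) if t < 2.0 else 0.0   # base acceleration
        f = max(-u_max, min(u_max, -gain * v))        # saturated actuator
        a = (-m * ag - c * v - k * x + f) / m
        v += a * dt                                   # semi-implicit Euler
        x += v * dt
        peak = max(peak, abs(x))
    return peak

peak_uncontrolled = simulate(u_max=0.0)
peak_controlled = simulate(u_max=2.0)
```

Even a saturated force reduces the peak displacement here; the paper's point is that the achievable reduction is governed by the actuator capacity relative to the required control force.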
Abstract: The service sector continues to grow, and the percentage of GDP accounted for by service industries keeps increasing. The growth and importance of service to an economy is not just a phenomenon of advanced economies; service now accounts for the majority of world gross domestic product. However, the performance evaluation of new service development generally involves uncertain and imprecise data. This paper presents a 2-tuple fuzzy linguistic computing approach for dealing with heterogeneous information and avoiding information loss during the integration of subjective evaluations. Based on a group decision-making scenario, the proposed method assists business managers in measuring the performance of new service development; it handles the integration of heterogeneous information and effectively avoids information loss.
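The 2-tuple fuzzy linguistic representation referred to above translates a crisp aggregation value into a label plus a symbolic offset, which avoids the loss caused by rounding to a single label. A minimal sketch, assuming a linguistic scale indexed 0..g:

```python
# Sketch of the 2-tuple linguistic representation: a crisp aggregation
# value beta in [0, g] becomes a pair (label index, symbolic offset
# alpha in [-0.5, 0.5)), so no information is lost in the translation.

import math

def to_two_tuple(beta):
    """Translate beta into (nearest label index, symbolic offset)."""
    i = math.floor(beta + 0.5)   # nearest label, ties rounding up
    return i, beta - i

def from_two_tuple(i, alpha):
    """Inverse translation back to a crisp value."""
    return i + alpha

# e.g. an aggregated assessment of 3.25 on a 7-label scale (indices 0..6)
label, alpha = to_two_tuple(3.25)
```

The round trip is lossless, which is exactly the property the abstract claims for integrating heterogeneous subjective evaluations.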
Abstract: Variable-speed drives are growing and diversifying. Their expansion depends on progress in different branches of science, such as power systems, microelectronics, and control methods. Artificial intelligence comprises hard computing and soft computing, and it has found wide application in nonlinear systems such as motor drives, because it offers human-like intelligence without human sentiments such as anger. Artificial intelligence is used for various purposes, such as approximation, control, and monitoring. Because artificial-intelligence techniques can serve as controllers for any system without requiring a mathematical model of the system, they have been used in electrical drive control. In this manner, the efficiency and reliability of drives increase, while their volume, weight, and cost decrease.
Abstract: Today, numerical simulation is a powerful tool for solving various hydraulic engineering problems. The aim of this research is the numerical solution of the shallow water equations using the finite volume method for simulations of dam break over wet and dry beds. In order to solve the Riemann problem, Roe's approximate solver is used. To evaluate the numerical model, simulations were performed in 1D and 2D. In 1D, two dam-break tests over a dry bed (with and without friction) were studied. The results showed that structural failure around the dam and damage to the downstream constructions are greater for a frictionless bed than for a bed with friction. In 2D, two tests for wet and dry beds were performed. Generally, in the wet-bed case waves propagate toward the canal sides, while in the dry-bed case this is not significant. Therefore, damage to storage facilities and agricultural lands is greater in the wet-bed case than in the dry-bed case.
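A minimal 1-D dam-break sketch can illustrate the finite-volume approach; for brevity it uses a first-order Lax-Friedrichs flux rather than the paper's Roe solver, on a frictionless wet bed with illustrative initial depths:

```python
# Minimal 1-D dam-break sketch for the shallow water equations with a
# first-order finite-volume scheme. A simple Lax-Friedrichs interface
# flux is used here INSTEAD of the paper's Roe solver. Frictionless wet
# bed; 2 m upstream and 1 m downstream depths are illustrative.

g = 9.81

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    u = hu / h
    return hu, hu * u + 0.5 * g * h * h

def dam_break(n=100, steps=50, dx=0.1, dt=0.005):
    h = [2.0 if i < n // 2 else 1.0 for i in range(n)]   # water depth
    hu = [0.0] * n                                       # discharge
    for _ in range(steps):
        fh = [0.0] * (n + 1)
        fq = [0.0] * (n + 1)
        a = dx / dt                      # Lax-Friedrichs dissipation speed
        for i in range(1, n):            # interface fluxes
            fL = flux(h[i - 1], hu[i - 1])
            fR = flux(h[i], hu[i])
            fh[i] = 0.5 * (fL[0] + fR[0]) - 0.5 * a * (h[i] - h[i - 1])
            fq[i] = 0.5 * (fL[1] + fR[1]) - 0.5 * a * (hu[i] - hu[i - 1])
        for i in range(1, n - 1):        # conservative update, fixed ends
            h[i] -= dt / dx * (fh[i + 1] - fh[i])
            hu[i] -= dt / dx * (fq[i + 1] - fq[i])
    return h, hu

h, hu = dam_break()
```

After a short time the depth at the former dam location settles between the two initial levels, and the flow is directed downstream, as expected for a wet-bed dam break.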
Abstract: In this paper we propose a method for vision systems
to consistently represent functional dependencies between different
visual routines along with relational short- and long-term knowledge
about the world. Here the visual routines are bound to visual properties
of objects stored in the memory of the system. Furthermore,
the functional dependencies between the visual routines are seen
as a graph also belonging to the object's structure. This graph is
parsed in the course of acquiring a visual property of an object to
automatically resolve the dependencies of the bound visual routines.
Using this representation, the system is able to dynamically rearrange
the processing order while keeping its functionality. Additionally, the
system is able to estimate the overall computational costs of a certain
action. We will also show that the system can efficiently use that
structure to incorporate already acquired knowledge and thus reduce
the computational demand.
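Resolving the functional dependencies of the bound visual routines amounts to ordering a dependency graph. A minimal sketch with a hypothetical routine graph, using a depth-first topological sort (the system's actual graph parsing may differ):

```python
# Sketch: resolving visual-routine dependencies by parsing the
# dependency graph. A depth-first topological sort yields a processing
# order in which every routine runs after its dependencies.
# The routine names and edges below are hypothetical examples.

def processing_order(graph, target):
    """Return routines in an order that satisfies all dependencies of
    `target`, with dependencies listed before their dependents."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph.get(node, []):
            visit(dep)
        order.append(node)
    visit(target)
    return order

# hypothetical routines: color needs segmentation, which needs saliency
graph = {"color": ["segmentation"], "segmentation": ["saliency"]}
order = processing_order(graph, "color")
```

Because already-visited nodes are skipped, previously acquired properties are not recomputed, which mirrors the reduction in computational demand the abstract describes.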
Abstract: Breastfeeding is an important concept in the maternal life of a woman. In this paper, we focus on exclusive breastfeeding, i.e., feeding a baby on no milk other than breast milk. This type of breastfeeding is very important during the first six months because it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Moreover, in Mauritius, exclusive breastfeeding has decreased the incidence and/or severity of diarrhea, lower respiratory infection, and urinary tract infection. In this paper, we give an overview of exclusive breastfeeding in Mauritius and the factors influencing it. We further analyze the local practices of exclusive breastfeeding using the generalized Poisson regression model and the negative binomial model, since the data are over-dispersed.
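Over-dispersion, the condition motivating the negative binomial and generalized Poisson models above, means the sample variance exceeds the mean (a Poisson model forces them to be equal). A minimal check on illustrative counts, not the Mauritian survey data:

```python
# Sketch: checking count data for over-dispersion. Under a Poisson
# model the variance equals the mean; a sample variance well above the
# mean motivates a negative-binomial or generalized Poisson model.
# The counts below are illustrative, not the paper's survey data.

def mean_variance(counts):
    n = len(counts)
    m = sum(counts) / n
    var = sum((c - m) ** 2 for c in counts) / (n - 1)   # sample variance
    return m, var

counts = [0, 0, 1, 1, 2, 2, 3, 5, 9, 12]   # hypothetical counts
m, var = mean_variance(counts)
overdispersed = var > m
```

Here the sample variance (about 16.3) is far above the mean (3.5), so a Poisson fit would understate the spread, which is the situation the abstract's model choice addresses.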
Abstract: This paper discusses applications of a revolutionary
information technology, Geographic Information Systems (GIS), in
the field of the history of cartography by examples, including
assessing accuracy of early maps, establishing a database of places
and historical administrative units in history, integrating early maps
in GIS or digital images, and analyzing social, political, and
economic information related to the production of early maps. GIS
provides a new means to evaluate the accuracy of early maps. Four
basic steps using GIS for this type of study are discussed. In addition,
several historical geographical information systems are introduced.
These include China Historical Geographic Information Systems
(CHGIS), the United States National Historical Geographic
Information System (NHGIS), and the Great Britain Historical
Geographical Information System. GIS also provides digital means to
display and analyze the spatial information on the early maps or to
layer them with modern spatial data. How the GIS relational data structure may be used to analyze social, political, and economic information related to the production of early maps is also discussed. Through discussion of these examples, this paper reveals the value of GIS applications in this field.
Abstract: In this paper, we have combined some spatial derivatives with the optimised time derivative proposed by Tam and Webb in order to approximate the linear advection equation, which is given by ∂u/∂t + ∂f/∂x = 0. These spatial derivatives are as follows: a standard 7-point 6th-order central difference scheme (ST7), a standard 9-point 8th-order central difference scheme (ST9), and optimised schemes designed by Tam and Webb, Lockard et al., Zingg et al., Zhuang and Chen, and Bogey and Bailly. Thus, these seven different spatial derivatives have been coupled with the optimised time derivative to obtain seven different finite-difference schemes to approximate the linear advection equation. We have analysed the variation of the modified wavenumber and the group velocity, both with respect to the exact wavenumber, for each spatial derivative. The problems considered are the 1-D propagation of a boxcar function, the propagation of an initial disturbance consisting of a sine and Gaussian function, and the propagation of a Gaussian profile. It is known that the choice of the CFL number affects the quality of results in terms of dissipation and dispersion characteristics. Based on the numerical experiments solved and the numerical methods used to approximate the linear advection equation, it is observed in this work that the quality of results depends on the choice of the CFL number, even for optimised numerical methods. The errors in the numerical results have been quantified into dispersion and dissipation using a technique devised by Takacs. Also, the quantity Exponential Error for Low Dispersion and Low Dissipation (eeldld) has been computed from the numerical results. Moreover, based on this work, it has been found that the quantity eeldld can be used as a measure of the total error; in particular, the total error is a minimum when eeldld is a minimum.
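For one of the spatial derivatives above, the dispersion behaviour can be sketched directly. Assuming the standard 6th-order central coefficients for ST7, the modified wavenumber k*Δx = 2 Σ a_j sin(j kΔx) can be computed and compared with the exact value kΔx:

```python
# Sketch: the modified wavenumber of the standard 7-point 6th-order
# central difference (ST7). For f'_i ≈ (1/dx) Σ a_j f_{i+j}, the
# modified wavenumber is k*dx = 2 Σ_{j>0} a_j sin(j k dx); the closer
# it is to k dx, the lower the dispersion at that wavenumber.

import math

A_ST7 = (3 / 4, -3 / 20, 1 / 60)   # standard 6th-order a_1, a_2, a_3

def modified_wavenumber(kdx, coeffs=A_ST7):
    return 2.0 * sum(a * math.sin((j + 1) * kdx)
                     for j, a in enumerate(coeffs))

low = modified_wavenumber(0.1)    # well-resolved wave: near-exact
high = modified_wavenumber(2.0)   # poorly resolved wave: lags kdx
```

Well-resolved waves propagate almost exactly, while high wavenumbers see a modified wavenumber below the exact one; the optimised schemes named above trade formal order for a wider accurate wavenumber range.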
Abstract: The purpose of this research is to determine the
knowledge and skills possessed by instructional design (ID)
practitioners in Malaysia. As ID is a relatively new field in the
country and there seems to be an absence of any studies on its
community of practice, the main objective of this research is to
discover the tasks and activities performed by ID practitioners in
educational and corporate organizations as suggested by the
International Board of Standards for Training, Performance and
Instruction. This includes finding out the ID models applied in the
course of their work. This research also attempts to identify the
barriers and issues as to why some ID tasks and activities are rarely
or never conducted. The methodology employed in this descriptive
study was a survey questionnaire sent to 30 instructional designers
nationwide. The results showed that the majority of the tasks and activities are carried out frequently enough, but omissions do occur for reasons such as the task being outside the job scope, the decision having already been made at a higher level, or a lack of knowledge and skills. Further investigations of a qualitative nature should be conducted to achieve a more in-depth understanding of ID practices in Malaysia.
Abstract: Due to the recovering global economy, enterprises are
increasingly focusing on logistics. Investing in logistic measures for
a production generates a large potential for achieving a good starting
point within a competitive field. Unlike during the global economic
crisis, enterprises are now challenged with investing available capital
to maximize profits. In order to be able to create an informed and
quantifiably comprehensible basis for a decision, enterprises need an
adequate model for logistically and monetarily evaluating measures
in production. The Collaborative Research Centre 489 (SFB 489) at the
Institute for Production Systems (IFA) developed a Logistic
Information System which provides support in making decisions and
is designed specifically for the forging industry. The aim of a project
that has been applied for is to now transfer this process in order to
develop a universal approach to logistically and monetarily evaluate
measures in production.
Abstract: Information is power. Geographical information is an
emerging science that is advancing the development of knowledge to
further help in the understanding of the relationship of “place" with
other disciplines such as crime. The researchers used crime data for
the years 2004 to 2007 from the Baguio City Police Office to
determine the incidence and actual locations of crime hotspots.
Combined qualitative and quantitative research methodology was
employed through extensive fieldwork and observation, geographic
visualization with Geographic Information Systems (GIS) and Global
Positioning Systems (GPS), and data mining. The paper discusses
emerging geographic visualization and data mining tools and
methodologies that can be used to generate baseline data for
environmental initiatives such as urban renewal and rejuvenation.
The study was able to demonstrate that crime hotspots can be computed and were found to occur in select places in the Central Business District (CBD) of Baguio City. It was observed that some characteristics of the hotspot places' physical design and milieu may play an important role in creating opportunities for crime. A list of these environmental attributes was generated. This derived information may be used to guide the design or redesign of the City's urban environment so as to reduce crime and at the same time improve it physically.
Abstract: As networking has become popular, Web-learning
tends to be a trend while designing a tool. Moreover, five-axis
machining has been widely used in industry recently; however, it has
potential axial table colliding problems. Thus this paper aims at
proposing an efficient web-learning collision detection tool on
five-axis machining. However, collision detection consumes heavy resources that few devices can support; thus, this research uses a
systematic approach based on web knowledge to detect collision. The
methodologies include the kinematics analyses for five-axis motions,
separating axis method for collision detection, and computer
simulation for verification. The machine structure is modeled as STL
format in CAD software. The input to the detection system is the
g-code part program, which describes the tool motions to produce the
part surface. This research produced a simulation program with C
programming language and demonstrated a five-axis machining
example with collision detection on a web site. The system simulates the five-axis CNC motion for the tool trajectory, detects any collisions according to the input g-codes, and also supports a high-performance web service benefiting from C. The results show that our method improves computational efficiency by a factor of 4.5 compared with the conventional detection method.
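The separating-axis method used for collision detection can be sketched in 2-D; the paper applies the same idea to 3-D machine components in STL form, and the box-shaped polygons below are hypothetical:

```python
# Sketch of the separating-axis method: two convex shapes are disjoint
# iff their projections onto some edge normal do not overlap.
# 2-D version with hypothetical polygons (vertex lists in order).

def project(poly, axis):
    """Project a polygon onto an axis; return the min/max interval."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def collide(p1, p2):
    """Return True if convex polygons p1 and p2 overlap."""
    for poly in (p1, p2):
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            axis = (y1 - y2, x2 - x1)        # edge normal
            a_min, a_max = project(p1, axis)
            b_min, b_max = project(p2, axis)
            if a_max < b_min or b_max < a_min:
                return False                 # separating axis found
    return True

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
near = [(1, 1), (3, 1), (3, 3), (1, 3)]      # overlaps square
far = [(5, 5), (6, 5), (6, 6), (5, 6)]       # disjoint from square
```

The test checks only edge normals of both shapes, which is sufficient for convex polygons; this early-exit structure is what makes the method light enough for the web-based setting described above.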
Abstract: The paper presents an overview of environmental
issues that may be expected with nuclear desalination. The analysis
of coupling nuclear power with desalination plants indicates that
adverse marine impacts can be mitigated with alternative intake
designs or cooling systems. The atmospheric impact of desalination
may be greatly reduced through the coupling with nuclear power,
while maximizing the socio-economic benefit for both processes. The
potential for tritium contamination of the desalinated water was
reviewed. Experience with the systems and practices related to the
radiological quality of the product water shows no examples of
cross-contamination. Furthermore, the indicators for the public
acceptance of nuclear desalination, as one of the most important
sustainability aspects of any such large project, show a positive trend.
From the data collected, a conclusion is made that nuclear
desalination should be supported by decision-makers.
Abstract: This paper investigates a possible optimization of some linear algebra problems which can be solved by parallel processing using special arrays called systolic arrays. Special types of transformations are used for designing these arrays, and their characteristics are shown. The main focus is on the advantages of these arrays in the parallel computation of matrix products, with a special approach to the design of a systolic array for matrix multiplication. Multiplication of large matrices requires a lot of computational time, and its complexity is O(n³). Many algorithms (both sequential and parallel) have been developed with the purpose of minimizing the computation time, and systolic arrays are well suited for this purpose. In this paper we show that using an appropriate transformation leads to more optimal arrays for carrying out calculations of this type.
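The data movement in a systolic array for matrix multiplication can be emulated in software: at time t, processing element (i, j) consumes A[i][k] and B[k][j] with k = t - i - j, as rows of A stream in from the left and columns of B from the top, skewed by one step per row/column. A minimal sketch:

```python
# Sketch: software emulation of an n x n systolic array for matrix
# multiplication. Each processing element (PE) accumulates one entry of
# C = A·B; the skewed timing k = t - i - j models the data streaming.

def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for t in range(3 * n - 2):           # total pipeline time
        for i in range(n):
            for j in range(n):
                k = t - i - j            # operand pair arriving at PE (i, j)
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = systolic_matmul(A, B)
```

The emulation finishes in 3n - 2 time steps, versus the O(n³) operation count of the sequential algorithm, which is the parallel advantage the abstract refers to.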
Abstract: The emergence of smartphones brings to life the concept of converged devices with the availability of web amenities. This trend also challenges mobile device manufacturers and service providers in many aspects, such as security on mobile phones, complex and lengthy design flows, as well as higher development cost. Among these aspects, security on mobile phones is getting more
and more attention. Microkernel based virtualization technology will
play a critical role in addressing these challenges and meeting mobile
market needs and preferences, since virtualization provides essential
isolation for security reasons and it allows multiple operating systems
to run on one processor accelerating development and cutting development
cost. However, virtualization benefits do not come for free.
As an additional software layer, it adds some inevitable virtualization
overhead to the system, which may decrease the system performance.
In this paper we evaluate and analyze the virtualization performance
cost of L4 microkernel based virtualization on a competitive mobile
phone by comparing the L4Linux, a para-virtualized Linux on top of
L4 microkernel, with the native Linux performance using lmbench
and a set of typical mobile phone applications.
Abstract: Complex engineering design problems consist of
numerous factors of varying criticalities. Considering fundamental features of design and inferior details alike will result in an extensive
waste of time and effort. Design parameters should be introduced gradually as appropriate based on their significance relevant to the
problem context. This motivates the representation of design parameters at multiple levels of an abstraction hierarchy. However, developing abstraction hierarchies is an area that is not well
understood. Our research proposes a novel hierarchical abstraction methodology to plan effective engineering designs and processes. It
provides a theoretically sound foundation to represent, abstract and stratify engineering design parameters and tasks according to causality and criticality. The methodology creates abstraction
hierarchies in a recursive and bottom-up approach that guarantees no
backtracking across any of the abstraction levels. The methodology consists of three main phases, representation, abstraction, and layering to multiple hierarchical levels. The effectiveness of the
developed methodology is demonstrated by a design problem.
Abstract: The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful in the modeling and analysis of problems in which the dependent variable is influenced by several independent variables, the goal being to determine the conditions under which these variables should operate in order to optimize a production process. The RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope up/down (MMS U/D). However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of this methodology. The numerical examples are carried out through Monte Carlo experiments, evaluating three response variables: the efficiency gain function, the distance to the optimum, and the number of iterations. The simulation experiments showed that the efficiency gain function and the distance to the optimum were not affected by the step size, while the number of iterations was affected by both the step size and the type of test function used.
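One RSM iteration of the kind discussed can be sketched as follows: fit a first-order model on a coded 2² factorial design (where the least-squares coefficients reduce to simple averages) and move along the path of steepest ascent with a chosen step size. The data and step size below are illustrative:

```python
# Sketch of one RSM step: fit y ≈ b0 + b1*x1 + b2*x2 on a coded 2^2
# factorial design, then step along the gradient (steepest ascent)
# scaled to an assumed step size. Data and step size are illustrative.

def first_order_fit(design, y):
    """Least-squares coefficients for coded +/-1 factor levels,
    where the fit reduces to simple averages."""
    n = len(y)
    b0 = sum(y) / n
    b = [sum(design[r][i] * y[r] for r in range(n)) / n
         for i in range(len(design[0]))]
    return b0, b

def steepest_ascent_step(b, step):
    """Move proportional to the gradient, scaled to length `step` --
    the step size whose choice the abstract investigates."""
    norm = sum(c * c for c in b) ** 0.5
    return [step * c / norm for c in b]

design = [(-1, -1), (1, -1), (-1, 1), (1, 1)]   # coded factor levels
y = [10.0, 14.0, 12.0, 16.0]                     # observed responses
b0, b = first_order_fit(design, y)
move = steepest_ascent_step(b, step=1.0)
```

The direction is fixed by the fitted coefficients; only the step length is a free choice, which is exactly the tuning knob whose effect the Monte Carlo experiments evaluate.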
Abstract: Speedups from mapping four real-life DSP applications on an embedded system-on-chip that couples coarse-grained reconfigurable logic with an instruction-set processor are presented. The reconfigurable logic is realized by a 2-dimensional array of processing elements. A design flow for improving application performance is proposed. Critical software parts, called kernels, are accelerated on the coarse-grained reconfigurable array. The kernels are detected by profiling the source code. For mapping the detected kernels on the reconfigurable logic, a priority-based mapping algorithm has been developed. Two 4x4 array architectures, which differ in their interconnection structure among the processing elements, are considered. The experiments for eight different instances of a generic system show important overall application speedups for the four applications. The performance improvements range from 1.86 to 3.67, with an average value of 2.53, compared with an all-software execution. These speedups are quite close to the maximum theoretical speedups imposed by Amdahl's law.
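The Amdahl's-law ceiling mentioned above follows from the fraction f of runtime spent in the accelerated kernels and the kernel speedup s. A small illustration with assumed numbers, not the paper's actual profiles:

```python
# Sketch: Amdahl's law, the theoretical ceiling that the reported
# overall speedups (1.86-3.67) approach. With a fraction f of runtime
# in accelerated kernels and kernel speedup s, the overall speedup is
# 1 / ((1 - f) + f / s). The f and s values below are illustrative.

def amdahl(f, s):
    return 1.0 / ((1.0 - f) + f / s)

# e.g. if kernels take 75% of runtime, even arbitrarily fast hardware
# cannot exceed 1 / (1 - 0.75) = 4x overall
bounded = amdahl(0.75, 10.0)     # finite kernel speedup
ceiling = amdahl(0.75, 1e12)     # effectively infinite kernel speedup
```

This is why profiling to find the kernels matters: the software fraction left unaccelerated, not the array itself, bounds the achievable overall speedup.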