Abstract: This study assessed fish marketing as a panacea for
sustainable agriculture in Ogun State, Nigeria. A multi-stage
sampling technique was used to select 150 fish marketers for the
study. Descriptive statistics were used to address the objectives,
while the Pearson Product Moment Correlation was used to test the
hypothesis. The findings revealed that the mean age of the
respondents was 38.60 years. The majority (93.33%) of the
respondents had acceptable levels of formal education, and many
(44.00%) had spent 1-5 years in fish marketing. The average
quantity of fish sold in a day was 94.10 kg. However, efficient fish
marketing was hindered by inadequate processing equipment,
storage rooms and ice-holding facilities (86.67%). There was a
significant relationship between socio-economic characteristics and
profit realized from fish marketing (p < 0.05). It is recommended
that storage and warehousing facilities be provided for the fish
marketers in the study area.
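A minimal sketch of the Pearson product moment correlation used for the hypothesis test, computed from scratch on illustrative data (the numbers below are hypothetical, not the study's):

```python
import math

def pearson_r(xs, ys):
    """Pearson product moment correlation coefficient of two paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: years spent in fish marketing vs. a profit index
years  = [1, 2, 3, 4, 5, 6]
profit = [10, 14, 15, 19, 22, 24]
r = pearson_r(years, profit)  # close to +1 for this increasing trend
```

A value of r near ±1 with a small p-value is what the abstract summarizes as a significant relationship at p < 0.05.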
Abstract: The objective of this study is to determine the thermal comfort of workers at a Malaysian automotive plant. One critical manual assembly workstation was chosen as the subject of the study. The human subjects were operators at the Body Assembly Station of the factory. The environmental parameters examined were the Relative Humidity (%), Airflow (m/s), Air Temperature (°C) and Radiant Temperature (°C) of the surrounding workstation area. These factors were measured using a Babuc apparatus, which is capable of measuring all of them simultaneously. Time-series data of the fluctuating factor levels were plotted to identify significant changes. The workers' thermal comfort was then assessed on the ISO 7730 thermal sensation scale using the Predicted Mean Vote (PMV), and the Predicted Percentage Dissatisfied (PPD) was used to estimate the occupants' thermal comfort satisfaction. Finally, PPD was plotted against PMV to present the thermal comfort scenario of the workers at the workstation. The PMV at the plant lies between 1.8 and 2.3, and the corresponding PPD lies between 60% and 84%. The survey results indicated that air temperature had the strongest influence on occupant comfort.
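The PPD estimate follows from PMV via the ISO 7730 relation PPD = 100 − 95·exp(−(0.03353·PMV⁴ + 0.2179·PMV²)); a minimal sketch:

```python
import math

def ppd(pmv):
    """Predicted Percentage Dissatisfied (ISO 7730) as a function of PMV."""
    return 100.0 - 95.0 * math.exp(-(0.03353 * pmv**4 + 0.2179 * pmv**2))

# Even a thermally neutral environment (PMV = 0) leaves about 5% dissatisfied.
print(round(ppd(0.0), 1))   # 5.0
# Warm sensations comparable to the reported PMV range:
print(round(ppd(1.8), 1))
print(round(ppd(2.3), 1))
```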
Abstract: The overall objective of this paper is to retrieve soil
surface parameters, namely roughness and soil moisture (related to
the dielectric constant), by inverting the radar signal
backscattered from natural soil surfaces.
Because the classical description of roughness using statistical
parameters such as the correlation length does not lead to
satisfactory predictions of radar backscattering, we used a
multi-scale roughness description based on the wavelet transform
and the Mallat algorithm. In this description, the surface is
considered as a superposition of a finite number of one-dimensional
Gaussian processes, each having a spatial scale. The second step of
this study consisted in adapting a direct model simulating radar
backscattering, namely the small perturbation model, to this
multi-scale surface description. We investigated the impact of this
description on radar backscattering through a sensitivity analysis
of the backscattering coefficient with respect to the multi-scale
roughness parameters.
To perform the inversion of the small perturbation multi-scale
scattering model (MLS SPM), we used a multi-layer neural network
architecture trained with the backpropagation learning rule. The
inversion leads to satisfactory results, with a relative uncertainty
of 8%.
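The Mallat algorithm splits a profile into approximation and detail coefficients at successive scales; a minimal sketch using the Haar wavelet (the choice of Haar is an assumption, since the abstract does not name the wavelet):

```python
import math

def haar_step(signal):
    """One level of the Mallat pyramid with the Haar wavelet: returns
    (approximation, detail) at half the resolution. Assumes even length."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def mallat(signal, levels):
    """Multi-scale description: final approximation plus per-scale details."""
    details = []
    for _ in range(levels):
        signal, d = haar_step(signal)
        details.append(d)
    return signal, details

profile = [2.0, 4.0, 6.0, 8.0]        # toy 1-D surface height profile
approx, details = mallat(profile, 1)
# approx carries the coarse roughness, details the fine-scale roughness
```

The decomposition is energy-preserving, which is what lets each scale's detail coefficients stand in for a roughness component of the surface.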
Abstract: Different techniques for estimating seasonal water
use from soil profile water depletion frequently do not account for
flux below the root zone. The contribution of a shallow water table
to crop water use may be important in arid and semi-arid regions.
The development of predictive root-uptake models under the
influence of a shallow water table makes it possible for planners to
incorporate the interaction between the water table and the root
zone into the design of irrigation projects. A model for obtaining
soil moisture depletion from the root zone and water movement below
it is discussed, with the objective of determining the impact of a
shallow water table on seasonal moisture depletion patterns as the
water table depth varies up to the bottom of the root zone. The role
of different boundary conditions has also been considered. Three
crops common in arid and semi-arid regions, wheat (Triticum
aestivum), corn (Zea mays) and potato (Solanum tuberosum), are
chosen for the study. Using experimentally obtained soil moisture
depletion values for potential soil moisture conditions, moisture
depletion patterns have been obtained for different water table
depths using a nonlinear root-uptake model. Comparative analysis
of the moisture depletion patterns under these conditions shows a
wide difference in percent depletion from different layers of the
root zone, particularly the top and bottom layers, with the middle
layers showing insignificant variation in moisture depletion values.
When the water table rises to the root zone, moisture depletion in
the top layer increases by 19.7%, 22.9% and 28.2%, whereas the
decrease in the bottom layer is 68.8%, 61.6% and 64.9% for wheat,
corn and potato, respectively. The paper also discusses the causes
and consequences of the increase in moisture depletion from the top
layers and the exceptionally high reduction in the bottom layer, and
possible remedies. The numerical model developed for the study can
be used to help formulate irrigation strategies for areas where
shallow groundwater of questionable quality is an option for crop
production.
Abstract: Most integrated inertial navigation system (INS) and
global positioning system (GPS) implementations use the Kalman
filtering technique, with its drawbacks of requiring a predefined
INS error model and the observability of at least four satellites.
Recently, a method using a hybrid adaptive network-based fuzzy
inference system (ANFIS) has been proposed; it is trained while the
GPS signal is available to map the error between the GPS and the
INS, and is then used to predict the error of the INS position
components during GPS signal blockage. This paper introduces a
genetic optimization algorithm that updates the ANFIS parameters
with the INS/GPS error function used as the objective function to be
minimized. The results demonstrate the advantages of the genetically
optimized ANFIS for INS/GPS integration over the conventional
ANFIS, especially in the case of satellite outages. Coping with this
problem plays an important role in assessing the fusion approach in
land navigation.
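The genetic optimization step can be sketched as a small real-coded GA; the objective below is a stand-in quadratic, not the paper's INS/GPS error function, and the operator choices (truncation selection, blend crossover, Gaussian mutation) are illustrative assumptions:

```python
import random

def genetic_minimize(objective, dim, pop_size=30, generations=80,
                     mutation_rate=0.2, bounds=(-5.0, 5.0), seed=1):
    """Tiny real-coded GA: keep the best half, breed children by blending
    pairs of elites, and occasionally mutate one gene."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        elite = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # blend crossover
            if rng.random() < mutation_rate:
                i = rng.randrange(dim)
                child[i] += rng.gauss(0.0, 0.3)             # Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Stand-in objective: squared norm of the parameter vector (minimum at 0)
best = genetic_minimize(lambda p: sum(x * x for x in p), dim=3)
```

In the paper's setting, the chromosome would encode ANFIS parameters and the objective would be the INS/GPS error function.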
Abstract: A PI-controller-based design for wind turbine modeling in a Wind Energy Conversion System (WECS) using a Doubly-Fed Induction Generator (DFIG) is presented, with the aim of studying variable wind speed. The PI controller shapes the dynamic response of the system. The objective is to study the characteristics of the wind turbine and to find the optimum wind speed for wind turbine performance. The system allows a rating specification of 2.5 MW. The output active power corresponds to the given input, and the reactive power produced by the wind turbine is regulated at 0 Mvar. Simulink simulation of the DFIG shows that a variable wind speed of 12.5 m/s (at the maximum power coefficient point) is optimal for drive-train performance.
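The mechanical power a turbine extracts from the wind follows P = ½·ρ·A·Cp·v³; a sketch at the quoted 12.5 m/s optimum (the rotor diameter, Cp and air density below are illustrative assumptions, chosen so the result lands near the 2.5 MW rating):

```python
import math

def turbine_power(v, rotor_diameter, cp, air_density=1.225):
    """Mechanical power (W) extracted from the wind: P = 0.5 * rho * A * Cp * v^3."""
    area = math.pi * (rotor_diameter / 2.0) ** 2   # swept rotor area
    return 0.5 * air_density * area * cp * v ** 3

# Hypothetical 80 m rotor at an assumed Cp of 0.42
p = turbine_power(v=12.5, rotor_diameter=80.0, cp=0.42)
print(round(p / 1e6, 2), "MW")   # about 2.5 MW with these assumed parameters
```

The cubic dependence on wind speed is why finding the speed at the maximum power coefficient point matters so much for drive-train performance.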
Abstract: A signature represents an individual characteristic of a
person that can be used for his/her validation. For such an
application, proper modeling is essential. Here we propose an offline signature
recognition and verification scheme which is based on extraction of
several features including one hybrid set from the input signature
and compare them with the already trained forms. Feature points
are classified using statistical parameters like mean and variance.
The scanned signature is slant-normalized using a very simple
algorithm intended to make the system robust, which is found to be
very helpful. The slant correction is further aided by the use of an
Artificial Neural Network (ANN). The suggested scheme discriminates
originals from forged signatures for both simple and random
forgeries. The primary objective is to reduce the two crucial
parameters, the False Acceptance Rate (FAR) and the False Rejection
Rate (FRR), with less training time, with the intention of making
the system dynamic by using a cluster of ANNs forming a
multiple-classifier system.
Abstract: Although current competitive challenges induced by today's digital economy place their main emphasis on organizational knowledge, customer knowledge has been overlooked. On the other hand, the business community has finally begun to realize the important role customer knowledge can play in the organizational boundaries of the corporate arena. As a result, there is an emerging market for the tools and utilities whose objective is to provide the intelligence for knowledge sharing between the businesses and their customers. In this paper, we present a conceptual model of customer knowledge management by identifying and analyzing the existing tools in the market. The focus will be upon the emerging British dotcom industry, whose customer-based B2C behavior has been an influential part of the knowledge-based intelligence tools in existence today.
Abstract: In this paper, based on steady-state models of Flexible
AC Transmission System (FACTS) devices, the sizing of static
synchronous series compensator (SSSC) controllers in transmission
network is formulated as an optimization problem. The objective of this
problem is to reduce the transmission losses in the network. The
optimization problem is solved using particle swarm optimization
(PSO) technique. The Newton-Raphson load flow algorithm is
modified to consider the insertion of the SSSC devices in the
network. A numerical example, illustrating the effectiveness of the
proposed algorithm, is introduced. In addition, a novel model of a
3-phase voltage source converter (VSC) suitable for series-connected
FACTS controllers is presented. The model is verified
by simulation using Power System Blockset (PSB) and Simulink
software.
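Particle swarm optimization as used for the sizing problem can be sketched generically; the objective below is a stand-in surface, not a Newton-Raphson load-flow loss calculation, and the swarm parameters are illustrative:

```python
import random

def pso_minimize(objective, dim, swarm=25, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=7):
    """Basic PSO: each particle tracks its personal best position,
    and the swarm tracks a global best; velocities blend both pulls."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in "transmission loss" surface with its minimum at the origin
best = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```

In the paper's setting, each particle position would encode SSSC sizing parameters and the objective would run the modified load flow to evaluate losses.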
Abstract: Near-infrared (NIR) spectroscopy is a widely used
method for material identification for laboratory and industrial applications.
While standard spectrometers only allow measurements at
one sampling point at a time, NIR Spectral Imaging techniques can
measure, in real-time, both the size and shape of an object as well as
identify the material the object is made of. The online classification
and sorting of recovered paper with NIR Spectral Imaging (SI)
is used with success in the paper recycling industry throughout
Europe. Recently, the globalisation of the recycling material streams
has caused water-based flexographic-printed newspapers, mainly from
the UK and Italy, to appear in central Europe as well. These flexo-printed
newspapers are not sufficiently de-inkable with the standard de-inking
process originally developed for offset-printed paper. This de-inking
process removes the ink from recovered paper and is the fundamental
processing step to produce high-quality paper from recovered paper.
Thus, the flexo-printed newspapers are a growing problem for the
recycling industry as they reduce the quality of the produced paper
if their amount exceeds a certain limit within the recovered paper
material.
This paper presents the results of a research project for the
development of an automated entry inspection system for recovered
paper that was jointly conducted by CTR AG (Austria) and PTS
Papiertechnische Stiftung (Germany). Within the project an NIR
SI prototype for the identification of flexo-printed newspaper has
been developed. The prototype can identify and sort out flexo-printed
newspapers in real-time and achieves a detection accuracy
of over 95% for flexo-printed newspaper. NIR SI, the technology the
prototype is based on, allows the development of inspection systems
for incoming goods in a paper production facility as well as industrial
sorting systems for recovered paper in the recycling industry in the
near future.
Abstract: Process measurement is the task of empirically and objectively assigning numbers to the properties of business processes in such a way as to describe them. Desirable attributes to study and measure include complexity, cost, maintainability, and reliability. In our work we will focus on investigating process complexity. We define process complexity as the degree to which a business process is difficult to analyze, understand or explain. One way to analyze a process's complexity is to use a process control-flow complexity measure. In this paper, an attempt has been made to evaluate the control-flow complexity measure in terms of Weyuker's properties. Weyuker's properties must be satisfied by any complexity measure to qualify as a good and comprehensive one.
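A sketch of the control-flow complexity measure, assuming Cardoso's definition in which an XOR-split contributes its fan-out, an OR-split contributes 2^n − 1 (its possible activation subsets), and an AND-split contributes 1:

```python
def control_flow_complexity(splits):
    """splits: list of (kind, fan_out) pairs for a process's split connectors.
    XOR-split -> fan_out, OR-split -> 2**fan_out - 1, AND-split -> 1."""
    total = 0
    for kind, fan_out in splits:
        if kind == "XOR":
            total += fan_out            # one of n branches is taken
        elif kind == "OR":
            total += 2 ** fan_out - 1   # any non-empty subset of branches
        elif kind == "AND":
            total += 1                  # all branches always taken
        else:
            raise ValueError("unknown split kind: " + kind)
    return total

# A process with a 3-way XOR split, a 2-way OR split and a 2-way AND split
print(control_flow_complexity([("XOR", 3), ("OR", 2), ("AND", 2)]))  # 3 + 3 + 1 = 7
```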
Abstract: Optimal control of a Reverse Osmosis (RO) plant is
studied in this paper, utilizing the auto-tuning concept in
conjunction with a PID controller. A control scheme comprising an
auto-tuning stochastic technique based on an improved Genetic
Algorithm (GA) is proposed. For better evaluation of the process in
the GA, a newly defined objective function based on the root mean
square error has been used. Also, to achieve better GA performance,
greater purity and a longer period of random number generation are
sought. The main improvement is made by replacing the
uniform-distribution random number generator of the conventional GA
technique with a newly designed hybrid random generator composed of
a Cauchy distribution and a linear congruential generator, which
provides independent and different random numbers at each individual
step of the genetic operation. The performance of the newly proposed
GA-tuned controller is compared with that of conventional ones via
simulation.
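The hybrid generator described above can be sketched by feeding a linear congruential generator's uniform output through the inverse Cauchy CDF; the LCG constants below are common textbook values, an assumption since the abstract does not list the paper's parameters:

```python
import math

class HybridCauchyLCG:
    """Uniform numbers from an LCG, mapped to Cauchy deviates by
    inverse-transform sampling."""
    def __init__(self, seed=12345, a=1664525, c=1013904223, m=2 ** 32):
        self.state, self.a, self.c, self.m = seed, a, c, m

    def uniform(self):
        """Next LCG value scaled into [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

    def cauchy(self, location=0.0, scale=1.0):
        """Inverse CDF of the Cauchy distribution applied to a uniform draw."""
        u = self.uniform()
        return location + scale * math.tan(math.pi * (u - 0.5))

gen = HybridCauchyLCG()
samples = [gen.cauchy() for _ in range(5)]  # heavy-tailed deviates for the GA
```

The heavy tails of the Cauchy distribution occasionally produce large steps, which helps a GA escape local optima compared with uniform perturbations.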
Abstract: Modern spatial database management systems require a unique Spatial Access Method (SAM) in order to solve complex spatial queries efficiently. In this case the spatial data structure takes a prominent place in the SAM. An inadequate data structure leads to poor algorithmic choices and a deficient understanding of algorithm behavior on the spatial database. A key step in developing a better semantic spatial object data structure is to quantify the performance effects of semantic and outlier detections that are not reflected in previous tree structures (the R-Tree and its variants). This paper explores a novel SSRO-Tree as a SAM for the Topo-Semantic approach. The paper shows how to identify and handle semantic spatial objects together with outlier objects during page overflow/underflow, using gain/loss metrics. We introduce a new SSRO-Tree algorithm which achieves better performance in practice, for selection queries, than algorithms based on the R*-Tree and RO-Tree.
Abstract: One of the aims of the paper is to make a comparison
of experimental results with numerical simulation for a side cooler.
Specifically, the compared quantity was the amount of air delivered
by the side cooler with its fans running at 100%. This integral
value was measured and evaluated in the plane parallel to the front
side of the side cooler at a distance of 20 mm from the front side.
The flow field
extending from the side cooler to the space was also evaluated.
Another objective was to address the contribution of evaluated values
to the increase of data center energy consumption.
Abstract: Team efficacy beliefs show promise in enhancing
team performance. Using a model-based quantitative research design,
we investigated the antecedents and performance consequences of
generalized team efficacy (potency) in a sample of 56 capital projects
executed by 15 Fortune 500 companies in the process industries.
Empirical analysis of our field survey identified that generalized
team efficacy beliefs were positively associated with an objective
measure of project cost performance. Regression analysis revealed
that team competence, empowering leadership, and performance
feedback all predicted generalized team efficacy beliefs. Tests of
mediation revealed that generalized team efficacy fully mediated
between these three inputs and project cost performance.
Abstract: Collaborative networked learning (hereafter CNL)
was first proposed by Charles Findley in his work "Collaborative
networked learning: online facilitation and software support" as part
of instructional learning for the future of the knowledge worker. His
premise was that through electronic dialogue learners and experts
could interactively communicate within a contextual framework to
resolve problems, and/or to improve product or process knowledge.
Collaborative learning has always been at the forefront of educational
technology and pedagogical research, but not in the mainstream of
operations management. As a result, there is a large disparity in the
study of CNL, and little is known about the antecedents of network
collaboration and sharing of information among diverse employees in
the manufacturing environment. This paper presents a model to
bridge the gap between theory and practice. The objective is that
manufacturing organizations will be able to accelerate organizational
learning and sharing of information through various collaborative
Abstract: The objective of this paper is to develop a neural
network-based residual generator to detect the fault in the actuators
for a specific communication satellite in its attitude control system
(ACS). First, a dynamic multilayer perceptron network with dynamic
neurons is used; these neurons correspond to a second-order linear
Infinite Impulse Response (IIR) filter combined with a nonlinear
activation function with adjustable parameters. Second, the
parameters of the network are adjusted to minimize a performance
index specified by the output estimation error, using the
input-output data collected from the specific ACS. The proposed
dynamic neural network is then trained and applied to detect the
faults injected into the wheel, which is the main actuator in the
normal mode for the communication satellite. The performance and
capabilities of the proposed network were then tested and compared
with a conventional model-based observer residual, showing the
differences between these two
methods, and indicating the benefit of the proposed algorithm to
know the real status of the momentum wheel. Finally, the application
of the methods in a satellite ground station is discussed.
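A dynamic neuron of the kind described (second-order IIR filter followed by a nonlinear activation with an adjustable parameter) can be sketched as follows; tanh is an assumed activation and the filter coefficients are illustrative, since the abstract does not specify them:

```python
import math

class DynamicNeuron:
    """Second-order IIR filter followed by a tanh activation with
    an adjustable slope:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    def __init__(self, b=(0.5, 0.3, 0.1), a=(0.2, 0.05), slope=1.0):
        self.b, self.a, self.slope = b, a, slope
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0   # filter memory

    def step(self, x):
        y = (self.b[0] * x + self.b[1] * self.x1 + self.b[2] * self.x2
             - self.a[0] * self.y1 - self.a[1] * self.y2)
        self.x2, self.x1 = self.x1, x          # shift input history
        self.y2, self.y1 = self.y1, y          # shift output history
        return math.tanh(self.slope * y)       # nonlinear activation

neuron = DynamicNeuron()
response = [neuron.step(u) for u in [1.0, 0.0, 0.0, 0.0]]  # impulse response
```

The internal filter memory is what makes the neuron "dynamic": its output depends on past inputs and outputs, not just the current input.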
Abstract: Using maximal consistent blocks of the tolerance relation
on the universe of an incomplete decision table, the concepts of join
block and meet block are introduced and studied. Along with the
tolerance class, other blocks such as the tolerant kernel and
compatible kernel of an object are also discussed. Upper and lower
approximations based on these blocks are also defined. Default
definite decision rules acquired from an incomplete decision table
are proposed in the paper. An incremental algorithm to update
default definite decision rules is suggested for effective mining
tasks on an incomplete decision table to which data is appended.
Through an example, we demonstrate how default definite decision
rules based on maximal consistent blocks, join blocks and meet
blocks are acquired and how optimization is done with the support of
the discernibility matrix and discernibility function of the
incomplete decision table.
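The tolerance relation underlying these blocks treats a missing value ("*") as compatible with any value; a sketch of computing tolerance classes on a toy incomplete table (illustrative data, not the paper's example):

```python
MISSING = "*"

def tolerant(x, y):
    """Two objects are tolerant if they agree on every attribute
    where neither value is missing."""
    return all(a == b or a == MISSING or b == MISSING for a, b in zip(x, y))

def tolerance_class(table, i):
    """Indices of all objects tolerant with object i (always contains i)."""
    return [j for j, row in enumerate(table) if tolerant(table[i], row)]

# Toy incomplete decision table: rows are objects, columns are attributes
table = [
    ("high", "*",    "yes"),
    ("high", "low",  "yes"),
    ("low",  "low",  "*"),
    ("*",    "high", "no"),
]
print(tolerance_class(table, 0))  # → [0, 1]
```

Maximal consistent blocks are then the largest object sets that are pairwise tolerant; the join and meet blocks of the paper are built on top of this relation.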
Abstract: Duplicated region detection is a technique for exposing
copy-paste forgeries in digital images. Copy-paste is one of the
common types of forgery, cloning a portion of an image in order to
conceal or duplicate a specific object. In this type of forgery
detection, extracting robust block features and the high time
complexity of the matching step are the two main open problems.
This paper concentrates on computational time and proposes a local
block matching algorithm based on block clustering to reduce time
complexity. The time complexity of the proposed algorithm is
formulated, and the effects of two parameters, block size and number
of clusters, on the efficiency of the algorithm are considered. The
experimental results and mathematical analysis demonstrate that this
algorithm is more cost-effective than lexicographical-sorting
algorithms in terms of time complexity when the image is complex.
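Clustering blocks by a cheap feature before matching shrinks each comparison pool, which is the source of the time saving; a minimal grayscale sketch (quantized block mean as the cluster key is an illustrative choice, not necessarily the paper's feature):

```python
from collections import defaultdict

def find_duplicate_blocks(image, size=2):
    """Group non-overlapping size x size blocks by a quantized mean-intensity
    feature, then compare blocks only within the same cluster."""
    h, w = len(image), len(image[0])
    clusters = defaultdict(list)
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            block = tuple(image[r + i][c + j]
                          for i in range(size) for j in range(size))
            key = sum(block) // len(block)    # quantized mean as cluster key
            clusters[key].append(((r, c), block))
    matches = []
    for members in clusters.values():         # match only inside each cluster
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                if members[i][1] == members[j][1]:
                    matches.append((members[i][0], members[j][0]))
    return matches

image = [
    [10, 10, 90, 90],
    [10, 10, 90, 90],
    [10, 10, 50, 60],
    [10, 10, 70, 80],
]
print(find_duplicate_blocks(image))  # → [((0, 0), (2, 0))]
```

Pairwise comparison then costs roughly the sum of squared cluster sizes instead of the square of the total block count, which is where the time-complexity gain over exhaustive or lexicographical matching comes from.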
Abstract: This paper aims at improving web server performance
by establishing a middleware layer between the web and database
servers that minimizes the overload on the database server. A
middleware system has been developed as a service mainly to
improve the performance. This system manages connection accesses
in a way that would result in reducing the overload on the database
server. In addition to connection management, this system acts as
an object-oriented model for the best utilization of operating-system
resources. A web developer can use this Service Broker to improve
web server performance.
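The connection-management idea can be sketched as a bounded pool that the middleware hands out and reclaims, so the database server never sees more than a fixed number of connections; `open_connection` below is a hypothetical stand-in for a real database driver:

```python
import queue

class ConnectionPool:
    """Bounded pool: at most max_size connections ever reach the database."""
    def __init__(self, open_connection, max_size=5):
        self._pool = queue.Queue(maxsize=max_size)
        for _ in range(max_size):
            self._pool.put(open_connection())   # pre-open all connections

    def acquire(self, timeout=None):
        return self._pool.get(timeout=timeout)  # blocks when pool is exhausted

    def release(self, conn):
        self._pool.put(conn)                    # return connection for reuse

# Hypothetical connection factory standing in for a real database driver
counter = {"opened": 0}
def open_connection():
    counter["opened"] += 1
    return "conn-%d" % counter["opened"]

pool = ConnectionPool(open_connection, max_size=2)
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()            # reuses a pooled connection, opens nothing new
print(counter["opened"])       # 2: only the pool's pre-opened connections exist
```

Because `acquire` blocks when all connections are in use, bursts of web requests are queued in the middleware instead of overloading the database server.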