Abstract: In the framework of adaptive parametric modelling of images, we propose in this paper a new technique based on the Chandrasekhar fast adaptive filter for texture characterization. An Auto-Regressive (AR) linear model of the texture is obtained by scanning the image row by row and modelling the data with an adaptive Chandrasekhar linear filter. The characterization efficiency of the resulting model is compared with that of the model adapted with the 2-D Least Mean Square (LMS) adaptive algorithm and with co-occurrence features. The comparison criterion is a characterization degree computed as the ratio of the "between-class" variance to the "within-class" variance of the estimated coefficients. Extensive experiments show that the coefficients estimated with the Chandrasekhar adaptive filter discriminate textures better than those estimated by the other algorithms, even in a noisy context.
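The characterization degree named above, a Fisher-style ratio of between-class to within-class variance, can be sketched as follows. The texture classes and coefficient values below are hypothetical illustrations, not the paper's actual AR coefficients.

```python
def characterization_degree(classes):
    """classes: list of lists, each inner list holding one coefficient's
    estimated values over samples of a single texture class."""
    all_vals = [v for cls in classes for v in cls]
    grand_mean = sum(all_vals) / len(all_vals)

    # Between-class variance: spread of the class means around the grand mean.
    between = sum(len(cls) * ((sum(cls) / len(cls)) - grand_mean) ** 2
                  for cls in classes) / len(all_vals)

    # Within-class variance: spread of samples around their own class mean.
    within = sum(sum((v - sum(cls) / len(cls)) ** 2 for v in cls)
                 for cls in classes) / len(all_vals)

    return between / within  # larger => better class separation

texture_a = [0.51, 0.49, 0.50, 0.52]   # coefficient values, texture class A
texture_b = [0.80, 0.78, 0.81, 0.79]   # coefficient values, texture class B
print(characterization_degree([texture_a, texture_b]))
```

A large ratio means the coefficient separates the two textures well relative to its sample-to-sample scatter.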
Abstract: In the current work, a numerical parametric study was performed to model the fluid mechanics in the riser of a bubbling fluidized bed (BFB). The gas-solid flow was simulated by means of a multi-fluid Eulerian model incorporating the kinetic theory for solid particles. The bubbling fluidized bed was simulated two-dimensionally using the commercial Computational Fluid Dynamics (CFD) software package Fluent. The effects of different inter-phase drag functions (the drag models of Gidaspow and of Syamlal and O'Brien, and the EMMS drag model) on the model predictions were evaluated and compared. The results showed that the Gidaspow and Syamlal-O'Brien drag models overestimated the drag force for the FCC particles and predicted a greater bed expansion than the EMMS drag model.
Abstract: Optimization and control of reactive power distribution in power systems leads to better operation of the reactive power resources. Reactive power control considerably reduces power losses and effective loads and improves the power factor of the power system. Another important reason for reactive power control is improving the voltage profile of the power system. In this paper, voltage and reactive power control using Neural Network techniques has been applied to the 33-bus network of the Tehran Electric Company. In the suggested ANN, the voltages of the PQ buses are taken as the network inputs, while the generator voltages, transformer tap settings and shunt compensators are taken as its outputs. The results of this technique have been compared with Linear Programming (LP), in which minimization of the transmission-line power losses is taken as the objective function. The comparison of the ANN results with those of LP shows that the ANN technique improves precision and reduces computation time. The ANN technique also has a simple structure, which makes it possible to incorporate operator experience.
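The ANN mapping described above, PQ-bus voltages in and control settings out, can be sketched as a single-hidden-layer forward pass. The layer sizes, weights, and input values here are hypothetical; in the paper the network would be trained on LP-optimized operating points.

```python
import math

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    # One hidden layer with tanh activation, followed by a linear output layer.
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w_hidden, b_hidden)]
    return [sum(wi * hi for wi, hi in zip(row, h)) + b
            for row, b in zip(w_out, b_out)]

pq_voltages = [0.98, 1.01, 0.97]             # per-unit PQ-bus voltages (inputs)
w_h = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.6]]   # hidden-layer weights (2 neurons)
b_h = [0.0, 0.1]
w_o = [[1.0, -0.5], [0.2, 0.8], [0.4, 0.3]]  # outputs: Vgen, tap, shunt step
b_o = [1.0, 1.0, 0.0]
controls = mlp_forward(pq_voltages, w_h, b_h, w_o, b_o)
print(controls)   # [generator voltage, tap setting, shunt step], per unit
```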
Abstract: In this paper, we propose an easily computable proximity index for predicting voltage collapse of a load bus using only measured values of the bus voltage and power. From these measurements a fourth-order polynomial is obtained using LES estimation algorithms. The sum of the absolute values of the polynomial coefficients gives an indication of how critical the bus is. We demonstrate the applicability of the proposed method on a 6-bus test system. The results verify its applicability, as well as its accuracy and simplicity. This indicator makes it possible to predict voltage instability or the proximity of a collapse. Results obtained by the PV curve are compared with corresponding values from QV curves and are observed to be in close agreement.
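The index can be sketched as a least-squares polynomial fit followed by a sum of absolute coefficients. The plain normal-equations fit below stands in for the paper's LES estimator, and the voltage/power measurements are hypothetical, not those of the 6-bus test system.

```python
def polyfit_ls(xs, ys, degree=4):
    """Ordinary least-squares polynomial fit via the normal equations
    and Gaussian elimination (a stand-in for the paper's LES estimator)."""
    n = degree + 1
    # Normal equations A c = b built from the Vandermonde system.
    a = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(a[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (b[r] - s) / a[r][r]
    return coeffs  # [c0, c1, c2, c3, c4]

def proximity_index(power, voltage):
    # Sum of absolute polynomial coefficients: larger values flag the
    # bus as closer to voltage collapse.
    return sum(abs(c) for c in polyfit_ls(power, voltage))

p = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]        # hypothetical bus loading (p.u.)
v = [1.00, 0.99, 0.97, 0.94, 0.89, 0.80]  # corresponding bus voltages (p.u.)
print(proximity_index(p, v))
```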
Abstract: Purpose: This paper aims to gain insights into the influential factors of ERM adoption by public listed firms in Malaysia. Findings: Two factors, financial leverage and auditor type, were found to be significant influences on ERM adoption. In other words, the findings indicate that firms with higher financial leverage and with a Big Four auditor are more likely to have a form of ERM framework in place. Originality/Value: Since relatively few studies have been conducted in this area, especially in developing economies like Malaysia, this study broadens the literature by providing novel empirical evidence.
Abstract: Virtualization-based server consolidation has been
proven to be an ideal technique to solve the server sprawl problem by
consolidating multiple virtualized servers onto a few physical servers
leading to improved resource utilization and return on investment. In
this paper, we solve this problem by using existing servers, which are
heterogeneous and diversely preferred by IT managers. Five practical
consolidation rules are introduced, and a decision model is proposed to
optimally allocate source services to physical target servers while
maximizing the average resource utilization and preference value. Our
model can be regarded as a multi-objective multi-dimensional bin-packing (MOMDBP) problem with constraints, which is strongly NP-hard. An improved grouping genetic algorithm (GGA) is introduced for the problem. Extensive simulations were performed and the results are reported.
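The allocation problem above is a multi-dimensional bin-packing instance. The sketch below is not the paper's grouping genetic algorithm but a simple first-fit-decreasing baseline: place each service, described by its (CPU, RAM) demand, on the first server with enough remaining capacity. The service and capacity figures are hypothetical.

```python
def first_fit_decreasing(services, capacity):
    """services: {name: (cpu, ram)}; capacity: (cpu, ram) per server.
    Returns a list of servers, each a list of placed service names."""
    servers = []      # used (cpu, ram) per opened server
    placement = []    # service names per opened server
    # Pack larger services first (decreasing total demand).
    order = sorted(services, key=lambda s: sum(services[s]), reverse=True)
    for name in order:
        cpu, ram = services[name]
        for used, names in zip(servers, placement):
            if used[0] + cpu <= capacity[0] and used[1] + ram <= capacity[1]:
                used[0] += cpu
                used[1] += ram
                names.append(name)
                break
        else:  # no existing server fits: open a new one
            servers.append([cpu, ram])
            placement.append([name])
    return placement

demo = {"web": (2, 4), "db": (4, 8), "cache": (1, 6), "batch": (3, 2)}
print(first_fit_decreasing(demo, capacity=(6, 12)))
# → [['db', 'web'], ['cache', 'batch']]
```

A GGA improves on such a heuristic by evolving whole groupings, which lets it also trade off the preference values mentioned above.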
Abstract: In this study, a framework for verification of well-known seismic design codes is utilized. To verify the performance of the seismic codes, the damage quantity of RC frames is compared with the target performance. Because of the randomness inherent in seismic design and earthquake load excitation, fragility curves are developed in this paper. These curves are used to evaluate the performance level of structures designed according to the seismic codes, and they further illustrate the effect of the codes' load combinations and reduction factors on the probability of damage exceedance. Two types of structures are designed by different seismic codes: very important structures with high ductility and moderately important structures with intermediate ductility. The results reveal that a lower damage ratio usually produces a lower probability of exceedance. In addition, the findings indicate that some buildings with a larger quantity of reinforcement bars nevertheless have a higher probability of damage exceedance. Life-cycle cost analysis is utilized for the comparison and the final decision-making process.
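Fragility curves of the kind developed above are commonly expressed as a lognormal CDF: the probability that a damage state is exceeded as a function of ground-motion intensity. The median and dispersion values below are hypothetical, not those of the paper's RC frames.

```python
import math

def fragility(im, median, beta):
    """P(damage state exceeded | intensity measure im), lognormal form:
    Phi((ln(im) - ln(median)) / beta), with Phi written via erf."""
    return 0.5 * (1.0 + math.erf(math.log(im / median) / (beta * math.sqrt(2.0))))

for pga in (0.1, 0.3, 0.6):   # peak ground acceleration, in g (illustrative)
    print(pga, fragility(pga, median=0.3, beta=0.5))
```

At the median intensity the exceedance probability is exactly 0.5, and the dispersion beta controls how steeply the curve rises.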
Abstract: Over the last two decades, the container transportation system has undergone rapid development. This fact underlines the importance of the container transportation system and the key role of container terminals as the link between sea and land. Therefore, there is a continuous need for the optimal use of equipment and facilities in ports. Given the complex structure of container ports, this paper presents a simulation model that compares two storage strategies for storing containers in the yard. For this purpose, we considered the loading and unloading norm as an important criterion for evaluating the performance of the Shahid Rajaee container port. Analysis of the model results shows that using a marshalling-yard policy instead of the current storage system has a significant effect on the performance level of the port and can increase the loading and unloading norm by up to 14%.
Abstract: In this paper we consider a nonlinear control design for nonlinear systems using two-stage formal linearization and two types of LQ controls. The ordinary LQ control is designed on the nearly linear region around the steady-state point. On the remaining region, another control is derived as follows. The derivation is based on applying a coordinate transformation twice, with respect to linearization functions defined by polynomials. The linearized systems are constructed using a Taylor expansion carried up to higher-order terms. To the resulting formal linear system, LQ control theory is applied to obtain another LQ control. Finally these two LQ controls are smoothly united to form a single nonlinear control. Numerical experiments indicate that this control shows remarkable performance for a nonlinear system.
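The LQ design step above can be sketched for a scalar discrete-time linearized system x' = a x + b u with cost sum(q x^2 + r u^2). The paper's formal linearization by polynomial coordinate transformations is not reproduced here, and the values of a, b, q, r are hypothetical.

```python
def lqr_gain(a, b, q, r, iters=200):
    """Optimal feedback gain k (u = -k x) for the scalar discrete-time
    LQ problem, found by iterating the Riccati recursion to a fixed point."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * p * b) ** 2 / (r + b * b * p)
    return a * p * b / (r + b * b * p)

k = lqr_gain(a=1.1, b=1.0, q=1.0, r=1.0)   # a > 1: open loop is unstable
x = 1.0
for _ in range(20):          # closed loop x' = (a - b k) x decays to 0
    x = (1.1 - k) * x
print(k, x)
```

Even though the open-loop system is unstable (a = 1.1), the LQ gain places the closed-loop pole a - b k inside the unit circle.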
Abstract: Similar-document search and document management are important topics in text mining. One of the most important parts of similar-document research is the process of classifying or clustering the documents. In this study, a similar-document search approach has been developed that addresses the case of documents belonging to multiple categories (the multiple-categories problem). The proposed method, based on Fuzzy Similarity Classification (FSC), has been compared with the Rocchio algorithm and the naive Bayes method, both widely used in text mining. Empirical results show that the proposed method is quite successful and can be applied effectively. For the second stage, a multiple-categories vector method based on how frequently categories are seen together has been used. Empirical results show that performance nearly doubles when the proposed method is compared with the classical approach.
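A minimal similarity-based classifier in the spirit of the comparison above scores a document against each category by the cosine similarity of term-frequency vectors. This is a baseline sketch, not the paper's FSC method, and the toy corpus is made up.

```python
import math
from collections import Counter

def tf(text):
    # Term-frequency vector of a whitespace-tokenized, lowercased text.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(doc, categories):
    # Compare the document against each category's pooled training texts.
    scores = {c: cosine(tf(doc), tf(" ".join(texts)))
              for c, texts in categories.items()}
    return max(scores, key=scores.get), scores

cats = {"sports": ["match goal team score", "team win league"],
        "finance": ["stock market price fall", "bank interest rate"]}
label, scores = classify("the team won the match", cats)
print(label, scores)
```

A soft variant of this, returning the full score vector instead of a single label, is what makes multiple-category membership expressible.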
Abstract: Traditional optical networks are gradually evolving towards intelligent optical networks because of the need for faster bandwidth provisioning, protection and restoration of the network, which can be accomplished with devices such as optical switches, add-drop multiplexers and cross-connects. Since dense wavelength-division multiplexing forms the physical layer of intelligent optical networking, the role of high-speed all-optical switches is important. This paper analyzes such an ultra-high-speed polymer electro-optic switch. The performance of the 2x2 optical waveguide switch with rectangular, triangular and trapezoidal grating profiles is analyzed over various device parameters. The simulation results show that the trapezoidal grating is the optimal structure, with a coupling length of 81 μm and a switching voltage of 11 V at an operating wavelength of 1550 nm. The switching time of the proposed switch is 0.47 ps, which makes it an important element in the intelligent optical network.
Abstract: Numerous rainfall disaggregation approaches have been developed and applied in climate change impact studies such as flood risk assessment and urban storm-water management. In this study, five rainfall models capable of disaggregating daily rainfall data into hourly data were investigated for the rainfall record at Changi Airport, Singapore. The objectives of this study were (i) to study the temporal characteristics of hourly rainfall in Singapore, and (ii) to evaluate the performance of various disaggregation models. The models used were: (i) the Rectangular Pulse Poisson Model (RPPM), (ii) the Bartlett-Lewis Rectangular Pulse Model (BLRPM), (iii) the Bartlett-Lewis model with 2 cell types (BL2C), (iv) the Bartlett-Lewis Rectangular model with cell depth distribution dependent on duration (BLRD), and (v) the Neyman-Scott Rectangular Pulse Model (NSRPM). All of these models were fitted using hourly rainfall data from 1980 to 2005, obtained from the Changi meteorological station. The results indicated that a weighting scheme inversely proportional to the variance delivers more accurate outputs for fitting rainfall patterns in tropical areas, and that BLRPM performed relatively better than the other disaggregation models.
Abstract: The study examined the influence of pay differentials on employee retention in the State Colleges of Education in the South-South Region of Nigeria. 275 subjects drawn from members of the wage-negotiating teams in the Colleges were administered questionnaires constructed for the study. Analysis of Variance revealed that the observed pay differentials significantly influenced retention, F(5, 269) = 6.223, p < 0.05. However, the Multiple Classification Analysis and post-hoc tests indicated that employees in two of the Colleges with slightly lower and higher pay levels would probably remain with their employers, while employees in the other Colleges with the lowest and highest pay levels suggested quitting. Based on these observations, the influence of pay on employee retention seems inconclusive. Generally, employees in the colleges studied are dissatisfied with current pay levels. Management should confront these challenges by improving pay packages to encourage employees to remain and be dedicated to duty.
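The F statistic reported above, of the form F(5, 269) = 6.223, is the between-group mean square divided by the within-group mean square. The sketch below computes it from scratch; the pay-level groups are small hypothetical samples, not the study's data.

```python
def anova_f(groups):
    """One-way ANOVA: returns (F, df_between, df_within)."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    # Sum of squares between groups (group means vs. grand mean).
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    # Sum of squares within groups (observations vs. their group mean).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within), df_between, df_within

pay = [[52, 55, 53], [61, 64, 62], [49, 50, 51]]  # pay by college (hypothetical)
f, df1, df2 = anova_f(pay)
print(f"F({df1}, {df2}) = {f:.3f}")
```

A large F relative to the critical value for (df1, df2) is what lets the study reject equality of mean retention across pay groups.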
Abstract: Learning programming is difficult for many learners. Some studies have found that the main difficulty relates to cognitive load. Cognitive overload happens in programming because the subject is intrinsically demanding on working memory; it arises from the complexity of the subject itself. The problem is made worse by poor instructional design in the teaching and learning process. Various efforts have been proposed to reduce the cognitive load, e.g. visualization software and the part-program method, and many computer-based systems have also been tried. However, little success has been achieved in alleviating the problem, and more has to be done to overcome this hurdle. This research attempts to understand how cognitive load can be managed so as to reduce the problem of overloading. We propose a mechanism to measure the cognitive load during the pre-instruction, in-instruction and post-instruction stages of learning. This mechanism is used to guide the instruction: as the load changes, the instruction adapts itself to ensure cognitive viability. The mechanism could be incorporated as a sub-domain in the student model of various computer-based instructional systems to facilitate the learning of programming.
Abstract: A synchronous network-on-chip using wormhole packet switching and supporting guaranteed-completion best-effort delivery with low-priority (LP) and high-priority (HP) wormhole packet services is presented in this paper. Both the proposed LP and HP message services deliver a good quality of service in terms of lossless packet completion and in-order message data delivery; however, the LP message service does not guarantee a minimal completion bound. HP packets will use the full bandwidth of their reserved links if they are injected from the source node at the maximum rate. Hence, the HP service is suitable for small messages (less than a hundred bytes); otherwise, other HP and LP messages that also require those links will experience relatively high latency, depending on the size of the HP message. LP packets are routed using a minimal adaptive routing algorithm, while HP packets are routed using a non-minimal adaptive routing algorithm. An additional 3-bit field identifying the packet type is therefore introduced in the packet headers to classify and determine the type of service committed to the packet. Our NoC prototypes have also been synthesized using a 180-nm CMOS standard-cell technology to evaluate the cost of implementing the combination of both services.
Abstract: Software-as-a-Service (SaaS) is a form of cloud
computing that relieves the user of the burden of hardware and
software installation and management. SaaS can be used at the course
level to enhance curricula and student experience. When cloud
computing and SaaS are included in educational literature, the focus
is typically on implementing administrative functions. Yet, SaaS can
make more immediate and substantial contributions to the technical
course content in educational offerings. This paper explores cloud
computing and SaaS, provides examples, reports on experiences
using SaaS to offer specialized software in courses, and analyzes the
advantages and disadvantages of using SaaS at the course level. The
paper contributes to the literature in higher education by analyzing
the major technical concepts, potential, and constraints for using
SaaS to deliver specialized software at the course level. Further, it may enable more educators and students to benefit from this emerging technology.
Abstract: In this paper, a two-dimensional (2D) numerical model for simulating tidal currents in the Persian Gulf is presented. The model is based on the depth-averaged shallow-water equations, which assume a hydrostatic pressure distribution. The continuity equation and the two momentum equations, including the effects of bed friction, the Coriolis force and wind stress, are solved. The 2D equations are integrated using the Alternating Direction Implicit (ADI) technique, with the discretization based on the finite volume method applied on a rectangular mesh. To validate the model, a dam-break case with a known analytical solution is selected and the comparison is made. The capability of the model to simulate tidal currents in a real field is then demonstrated by modelling the current behavior in the Persian Gulf. Tidal fluctuations in the Hormuz Strait drive the tidal currents in the study area; therefore, water surface oscillation data at Hengam Island in the Hormuz Strait are used as the model input, while measured water surface elevations at Assaluye port serve as the check point. The acceptable agreement between the computed and measured results demonstrates the model's ability to simulate marine hydrodynamics.
Abstract: Data mining is the extraction of knowledge from the large sets of data generated by various data processing activities. Frequent pattern mining is a very important task in data mining. Previous approaches for generating frequent sets generally adopt candidate generation and pruning techniques to achieve the desired objective. This paper shows how the different approaches achieve the objective of frequent-pattern mining, along with the complexities involved in performing the job. The paper also examines a hardware approach, based on cache coherence, to improve the efficiency of this process. Data mining helps in building support systems for Management, Bioinformatics, Biotechnology, Medical Science, Statistics, Mathematics, Banking, Networking and other computer-related applications. This paper proposes the use of both the upward and the downward closure property for the extraction of frequent itemsets, which reduces the total number of scans required for the generation of candidate sets.
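The downward-closure property referred to above is the core of Apriori-style candidate generation: every subset of a frequent itemset must itself be frequent, so k-item candidates with an infrequent subset are pruned before any counting scan. The tiny transaction set below is illustrative.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Apriori-style levelwise mining over a list of transaction sets."""
    items = {frozenset([i]) for t in transactions for i in t}
    level = {s for s in items
             if sum(s <= t for t in transactions) >= min_support}
    freq = []
    while level:
        freq.extend(sorted(sorted(s) for s in level))
        # Join step: merge frequent k-sets into (k+1)-set candidates.
        cands = {a | b for a in level for b in level
                 if len(a | b) == len(a) + 1}
        # Prune step (downward closure): drop any candidate having an
        # infrequent k-subset, before scanning the transactions again.
        cands = {c for c in cands
                 if all(frozenset(sub) in level
                        for sub in combinations(c, len(c) - 1))}
        level = {c for c in cands
                 if sum(c <= t for t in transactions) >= min_support}
    return freq

tx = [{"bread", "milk"}, {"bread", "butter"},
      {"bread", "milk", "butter"}, {"milk", "butter"}]
print(frequent_itemsets(tx, min_support=2))
```

Each pruned candidate saves one membership count per transaction, which is where the reduction in scan work comes from.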
Abstract: One of the major problems in programming a cruise circuit is deciding which destinations to include and which to leave out. Thus a decision problem emerges that may be solved using a linear and goal programming approach. The problem becomes more complex when several boats in the fleet must be programmed within a limited schedule, trying to best match their capacity to a seasonal demand while also attempting to minimize operating costs. Moreover, the company's programmer should treat passengers' time as a limited asset and seek to maximize its usage. The aim of this work is to design a method in which, using linear and goal programming techniques, a circuit-design model lets the cruise company's decision maker achieve an optimal solution within the fleet schedule.
Abstract: We study the problem of decision making with a Dempster-Shafer belief structure. We analyze previous work by Yager on using the ordered weighted averaging (OWA) operator in the aggregation step of the Dempster-Shafer decision process. We discuss the possibility of aggregating with an ascending order in the OWA operator for cases where the smallest value is the best result. We suggest introducing the ordered weighted geometric (OWG) operator into the Dempster-Shafer framework. In this case, we also discuss the possibility of aggregating with an ascending order, and we find that it is strictly necessary because the OWG operator cannot aggregate negative numbers. Finally, we give an illustrative example showing the different results obtained by using the OWA, the Ascending OWA (AOWA), the OWG and the Ascending OWG (AOWG) operators.
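The operators discussed above can be sketched directly: OWA reorders the arguments (descending by default, ascending for the AOWA variant) before taking a weighted sum, while OWG takes a weighted geometric mean and therefore requires positive arguments. The payoff values and weights are illustrative.

```python
import math

def owa(values, weights, ascending=False):
    # AOWA when ascending=True: weights attach to ordered positions.
    ordered = sorted(values, reverse=not ascending)
    return sum(w * v for w, v in zip(weights, ordered))

def owg(values, weights, ascending=False):
    # AOWG when ascending=True; values must be positive.
    ordered = sorted(values, reverse=not ascending)
    return math.prod(v ** w for w, v in zip(weights, ordered))

payoffs = [60.0, 30.0, 50.0]
w = [0.5, 0.3, 0.2]                       # position weights, summing to 1
print(owa(payoffs, w))                    # descending OWA
print(owa(payoffs, w, ascending=True))    # AOWA
print(owg(payoffs, w))                    # descending OWG
```

With weights summing to 1, both operators stay between the minimum and maximum payoff; the ascending variants simply shift weight toward the smaller arguments.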