Abstract: In this paper, an automated algorithm to estimate and remove the continuous baseline from measured spectra containing both continuous and discontinuous bands is proposed. The algorithm uses prior information contained in a Continuous Database Spectra (CDBS) to obtain a linear basis, with a minimum number of sampled vectors, capable of representing a continuous baseline. The proposed algorithm was tested using a CDBS of flame spectra, where Principal Component Analysis and Non-negative Matrix Factorization were used to obtain the linear bases. In this way, the radical emissions of natural gas, oil and bio-oil flame spectra at different combustion conditions were obtained. To validate the performance of the baseline estimation process, the Goodness-of-Fit Coefficient and Root Mean-Squared Error quality metrics were evaluated between the estimated and the real spectra in the absence of discontinuous emission. The achieved results make the proposed method a key element in the development of automatic process-monitoring strategies involving discontinuous spectral bands.
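The basis-projection idea can be sketched with PCA via the SVD; the synthetic spectra, component count, and peak shape below are illustrative assumptions, not the paper's flame data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)

# Hypothetical database of continuous-only spectra (smooth curves)
db = np.array([a * np.exp(-((x - m) ** 2) / 0.2) + b * x
               for a, m, b in rng.uniform(0.5, 1.5, size=(50, 3))])

# Linear basis via PCA (SVD of the mean-centred database);
# k is kept small, in the spirit of a minimum number of basis vectors
mean = db.mean(axis=0)
_, _, vt = np.linalg.svd(db - mean, full_matrices=False)
basis = vt[:5]

# Measured spectrum = continuous baseline + one discontinuous band
baseline = 1.2 * np.exp(-((x - 0.9) ** 2) / 0.2) + 0.7 * x
peak = np.where(np.abs(x - 0.5) < 0.01, 2.0, 0.0)
measured = baseline + peak

# Least-squares projection onto the basis estimates the baseline;
# subtracting it isolates the discontinuous emission
coeffs, *_ = np.linalg.lstsq(basis.T, measured - mean, rcond=None)
estimated = mean + basis.T @ coeffs
radical_emission = measured - estimated
```

Because the basis vectors are smooth, the narrow band survives the subtraction while the continuous part is removed.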
Abstract: Sensor relocation repairs coverage holes caused by node failures. One way to repair coverage holes is to find redundant nodes to replace faulty nodes. Most studies took a long time to find redundant nodes because they randomly scattered redundant nodes around the sensing field. To record the precise positions of sensor nodes, most studies assumed that GPS was installed in the sensor nodes. However, the high cost and power consumption of GPS are heavy burdens for sensor nodes. Thus, we propose a fast sensor relocation algorithm that arranges redundant nodes to form redundant walls without GPS. Redundant walls are constructed at the position where the average distance to each sensor node is shortest, and they can guide sensor nodes to find redundant nodes in the minimum time. Simulation results show that our algorithm can find the proper redundant node in the minimum time and reduce the relocation time with low message complexity.
Abstract: In a complex project environment, project teams face multi-dimensional communication problems that can ultimately lead to project breakdown. Team performance varies between a face-to-face (FTF) environment and groups working remotely in a computer-mediated communication (CMC) environment. A brief review of the Input-Process-Output model suggested by James E. Driskell, Paul H. Radtke and Eduardo Salas in "Virtual Teams: Effects of Technological Mediation on Team Performance" (2003) has been done to develop the basis of this research. This model theoretically analyzes the effects of technological mediation on team processes such as cohesiveness, status and authority relations, counternormative behavior, and communication. An empirical study described in this paper was undertaken to test the cohesiveness of diverse project teams in a multi-national organization. The study uses both quantitative and qualitative techniques for data gathering and analysis, including interviews and questionnaires for data collection and graphical data representation for analyzing the collected data. Computer-mediated technology may impact team performance because of differences in cohesiveness among teams, and these differences may be moderated by factors such as the type of communication environment, the type of task, and the temporal context of the team. Based on the reviewed model, sets of hypotheses are devised and tested. This research reports on a study that compared team cohesiveness among virtual teams using CMC and non-CMC communication mediums. The findings suggest that CMC can help virtual teams increase cohesiveness among their members, making CMC an effective medium for increasing productivity and team performance.
Abstract: Many studies have focused on the nonlinear analysis of electroencephalography (EEG), mainly for the characterization of epileptic brain states. It is assumed that at least two states of the epileptic brain are possible: the interictal state, characterized by a normal, apparently random, steady-state ongoing EEG activity; and the ictal state, characterized by the paroxysmal occurrence of synchronous oscillations, generally called a seizure in neurology.
The spatial and temporal dynamics of the epileptogenic process are still not completely clear, especially the most challenging aspect of epileptology: the anticipation of the seizure. Despite all the efforts, we still do not know how, when, and why a seizure occurs. However, current studies bring strong evidence that the interictal-ictal state transition is not an abrupt phenomenon. Findings also indicate that it is possible to detect a preseizure phase.
Our approach is to use neural networks to detect interictal states and to predict from those states the upcoming seizure (ictal state). Analysis of the EEG signal based on neural networks is used to classify the EEG as either seizure or non-seizure. By applying prediction methods, it will be possible to predict the upcoming seizure from non-seizure EEG.
We will study patients admitted to the epilepsy monitoring unit for the purpose of recording their seizures. Preictal, ictal, and postictal EEG recordings are available for such patients for analysis. The system will be trained on one body of samples and validated on another; a third body of samples, distinct from the first two, is used to test the network for optimum prediction. Several methods will be tried, including backpropagation ANNs and RBF networks.
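The classification stage can be illustrated with a minimal backpropagation network trained on synthetic stand-in epochs; the features, network size, and data below are assumptions for the sketch, not the clinical recordings described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def features(epoch):
    # Two simple epoch descriptors: amplitude spread and mean "line length"
    return np.array([epoch.std(), np.abs(np.diff(epoch)).mean()])

# Synthetic epochs: interictal = low-amplitude noise,
# ictal = large synchronous oscillation plus noise
t = np.linspace(0, 1, 256)
interictal = [rng.normal(0, 1, 256) for _ in range(40)]
ictal = [5 * np.sin(2 * np.pi * 8 * t) + rng.normal(0, 1, 256)
         for _ in range(40)]

X = np.array([features(e) for e in interictal + ictal])
y = np.array([0] * 40 + [1] * 40)
X = (X - X.mean(0)) / X.std(0)              # normalise the features

# One hidden layer, trained with plain backpropagation
sigmoid = lambda z: 1 / (1 + np.exp(-z))
W1, b1 = rng.normal(0, 0.5, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 0.5, 4), 0.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                # forward pass
    p = sigmoid(h @ W2 + b2)
    d2 = p - y                              # output delta (cross-entropy loss)
    d1 = np.outer(d2, W2) * h * (1 - h)     # hidden delta
    W2 -= 0.1 * h.T @ d2 / len(y); b2 -= 0.1 * d2.mean()
    W1 -= 0.1 * X.T @ d1 / len(y); b1 -= 0.1 * d1.mean(0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

For brevity this measures accuracy on the training samples; a real study would use the separate train/validate/test bodies of samples described above.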
Abstract: In a travelling wave thermoacoustic device, the
regenerator sandwiched between a pair of (hot and cold) heat
exchangers constitutes the so-called thermoacoustic core, where the
thermoacoustic energy conversion from heat to acoustic power takes
place. The temperature gradient along the regenerator caused by the
two heat exchangers excites and maintains the acoustic wave in the
resonator. The devices are called travelling wave thermoacoustic
systems because the phase angle difference between the pressure and
velocity oscillation is close to zero in the regenerator. This paper
presents the construction and testing of a thermoacoustic engine
equipped with a ceramic regenerator, made from a ceramic material
that is usually used as a catalyst substrate in vehicles' exhaust
systems, with fine square channels (900 cells per square inch). The testing
includes the onset temperature difference (minimum temperature
difference required to start the acoustic oscillation in an engine), the
acoustic power output, thermal efficiency and the temperature profile
along the regenerator.
Abstract: This article describes data from pre-clinical studies of the preparation Ramon. The antitumor activity of Ramon was studied on 19 strains of transplanted tumors of different histogenesis.
Abstract: This paper studies mixed-mode fracture mechanics in
rock based on experimental and numerical analyses. Experiments
were performed on sharp-cracked specimens using the modified Arcan specimen test loading device. The modified Arcan specimen test, in association with a special loading device, is an appropriate apparatus for experimental mixed-mode fracture analysis. By varying the loading angle from 0° to 90°, pure mode-I, pure mode-II and a wide range of mixed-mode data were obtained experimentally. Using the finite element results, correction factors were applied to the rectangular fracture specimen. By employing the experimentally measured critical loads with the aid of the finite element method, the mixed-mode fracture toughness of the limestone under consideration was determined.
Abstract: The validity of Herzberg's Two-Factor Theory of
Motivation was tested empirically by surveying 2372 chemical fiber
employees in 2012. In the valid sample of 1875 respondents, the
degree of overall job satisfaction was more than moderate. The most
highly valued components of job satisfaction were: “corporate image,"
“collaborative working atmosphere," and “supervisor's expertise";
whereas the lowest mean score was 34.65 for “job rotation and
promotion." The top three job retention options rated by the
participants were “good image of the enterprise," “good
compensation," and “workplace is close to my residence." The overall evaluation of the extent to which the workplace facilitates thriving reached almost “mostly agree." Participants who chose at least one motivator among their job retention options had significantly greater job satisfaction than those who chose only hygiene factors. Therefore, Herzberg's Two-Factor Theory of Motivation was proven valid in this study.
Abstract: In this work, a new offline signature recognition system based on the Radon transform, fractal dimension (FD) and Support Vector Machine (SVM) is presented. In the first step, projections of the original signatures along four specified directions are computed using the Radon transform. Then, the FDs of the four obtained vectors are calculated to construct a feature vector for each signature. These vectors are then fed into an SVM classifier for signature recognition. Several experiments were carried out to evaluate the effectiveness of the system. The offline signature database from the Signature Verification Competition (SVC) 2004 was used in all of the tests. Experimental results indicate that the proposed method achieves a high accuracy rate in signature recognition.
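The feature-extraction pipeline can be sketched as follows; the four-angle discrete projection and the Katz FD estimator are stand-ins chosen for brevity (the abstract does not specify the exact FD estimator):

```python
import numpy as np

def projections(img):
    # Discrete projections of a binary signature image along
    # 0, 45, 90 and 135 degrees (a stand-in for the Radon transform
    # sampled at four directions)
    offsets = range(-img.shape[0] + 1, img.shape[1])
    d45 = np.array([np.trace(img, k) for k in offsets])
    d135 = np.array([np.trace(np.fliplr(img), k) for k in offsets])
    return [img.sum(axis=0), d45, img.sum(axis=1), d135]

def katz_fd(profile):
    # Katz fractal dimension of a 1-D projection profile
    pts = np.column_stack([np.arange(len(profile)), profile])
    steps = np.diff(pts, axis=0)
    L = np.hypot(steps[:, 0], steps[:, 1]).sum()                   # curve length
    d = np.hypot(pts[:, 0] - pts[0, 0], pts[:, 1] - pts[0, 1]).max()
    n = len(profile) - 1
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def feature_vector(img):
    # One FD per projection direction -> 4-dimensional feature vector
    return np.array([katz_fd(p) for p in projections(img)])
```

The resulting four-dimensional vectors would then be fed to an SVM classifier, as the abstract describes.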
Abstract: In this paper, a new learning approach for network intrusion detection using a naïve Bayesian classifier and the ID3 algorithm is presented. It identifies effective attributes from the training dataset, calculates the conditional probabilities for the best attribute values, and then correctly classifies the examples of the training and testing datasets. Most current intrusion detection datasets are dynamic, complex, and contain a large number of attributes, some of which may be redundant or contribute little to detection. It has been shown that careful attribute selection is important when designing real-world intrusion detection systems (IDS). The purpose of this study is to identify effective attributes from the training dataset to build a classifier for network intrusion detection using data mining algorithms. Experimental results on the KDD99 benchmark intrusion detection dataset demonstrate that this new approach achieves high classification rates and reduces false positives using limited computational resources.
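The attribute-selection-plus-classification idea can be sketched on a toy connection log; the attribute names, values, and smoothing scheme below are illustrative assumptions, not the KDD99 schema:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    p = np.array(list(Counter(labels).values())) / len(labels)
    return -(p * np.log2(p)).sum()

def info_gain(col, labels):
    # ID3 criterion: class-entropy reduction from splitting on the attribute
    return entropy(labels) - sum(
        (col == v).mean() * entropy(labels[col == v]) for v in set(col))

def fit(X, y, k):
    # Keep the k most informative attributes, then store per-class
    # value counts for a naive Bayesian classifier
    gains = [info_gain(X[:, j], y) for j in range(X.shape[1])]
    keep = np.argsort(gains)[-k:]
    model = {c: ((y == c).mean(),
                 [Counter(X[y == c][:, j]) for j in keep],
                 int((y == c).sum()))
             for c in set(y)}
    return keep, model

def predict(x, keep, model):
    scores = {}
    for c, (prior, counters, n) in model.items():
        s = np.log(prior)
        for j, cnt in zip(keep, counters):
            # Laplace smoothing guards against unseen attribute values
            s += np.log((cnt[x[j]] + 1) / (n + len(cnt) + 1))
        scores[c] = s
    return max(scores, key=scores.get)

# Toy "connections": protocol, service, flag; S0 flags mark attacks here
X = np.array([["tcp", "http", "SF"], ["tcp", "http", "SF"],
              ["udp", "dns",  "SF"], ["tcp", "http", "S0"],
              ["tcp", "smtp", "S0"], ["udp", "dns",  "S0"]])
y = np.array(["normal", "normal", "normal", "attack", "attack", "attack"])
keep, model = fit(X, y, k=1)
```

With k=1 the information gain correctly singles out the flag attribute, which is the only one that separates the two classes in this toy data.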
Abstract: One of the popular methods for recognition of facial
expressions such as happiness, sadness and surprise is based on
deformation of facial features. Motion vectors that show these deformations can be specified by optical flow. In this method, for detecting emotions, the resulting set of motion vectors is compared with standard deformation templates caused by facial expressions. In this paper, a new method is introduced to compute the degree of likeness and to make decisions based on the importance of the vectors obtained from an optical flow approach. To find the vectors, the efficient optical flow method developed by Gautama and Van Hulle [17] is used. The suggested method has been examined on the Cohn-Kanade AU-Coded Facial Expression Database,
one of the most comprehensive collections of test images available.
The experimental results show that our method could correctly
recognize the facial expressions in 94% of case studies. The results
also show that only a small number of image frames (three) is sufficient to detect facial expressions, with a success rate of about 83.3%. This is a significant improvement over the available methods.
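The motion-vector step can be illustrated with a single-window Lucas-Kanade solve, a common optical flow formulation used here as a simple stand-in (Gautama and Van Hulle's phase-based method itself is more involved):

```python
import numpy as np

def lucas_kanade_global(f1, f2):
    # Least-squares motion vector (vx, vy) for the whole window,
    # from the brightness-constancy equation Ix*vx + Iy*vy + It = 0
    Iy, Ix = np.gradient(f1.astype(float))   # axis 0 = rows (y), axis 1 = cols (x)
    It = f2.astype(float) - f1
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)

# Synthetic "face patch": a smooth blob translated one pixel to the right
yy, xx = np.mgrid[0:64, 0:64]
f1 = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
f2 = np.exp(-((xx - 33.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
vx, vy = lucas_kanade_global(f1, f2)
```

In an expression recognizer, such vectors computed per facial region would then be compared against the deformation templates, weighted by their importance, as the abstract describes.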
Abstract: Segmentation is an important step in medical image
analysis and classification for radiological evaluation or computer
aided diagnosis. This paper presents the problem of inaccurate lung
segmentation as observed in algorithms presented by researchers
working in the area of medical image analysis. The different lung
segmentation techniques have been tested using the dataset of 19
patients consisting of a total of 917 images. We obtained datasets of
11 patients from Akron University, USA, and of 8 patients from
Aga Khan Medical University, Pakistan. After testing the algorithms
against datasets, the deficiencies of each algorithm have been
highlighted.
Abstract: Supply Chain Management (SCM) is the integration
between manufacturer, transporter and customer in order to form one
seamless chain that allows smooth flow of raw materials, information
and products throughout the entire network, helping to minimize
all related efforts and costs. The main objective of this paper is to
develop a model that can accept a specified number of spare-parts
within the supply chain, simulating its inventory operations
throughout all stages in order to minimize the inventory holding
costs, base-stock and safety-stock, to find the optimal inventory levels, and thereby to suggest a way to adapt some Just-In-Time factors to minimize inventory costs throughout the entire supply chain. The model was developed using Microsoft Excel and Visual Basic in order to study inventory allocations in
any network of the supply chain. The application and reproducibility
of this model were tested by comparing the actual system that was
implemented in the case study with the results of the developed
model. The findings showed that the total inventory costs of the
developed model are about 50% less than the actual costs of the
inventory items within the case study.
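The kind of inventory search such a model performs can be sketched as a base-stock simulation; the demand distribution, cost parameters, and single-echelon setting are simplifying assumptions, not the case-study data:

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.normal(100, 10, 10_000)       # assumed per-period demand
h, b = 1.0, 9.0                            # holding / backorder cost per unit

def average_cost(S):
    # Base-stock policy: raise the inventory position to S each period;
    # leftover stock incurs holding cost, shortfall incurs backorder cost
    on_hand = S - demand
    return (h * np.maximum(on_hand, 0) + b * np.maximum(-on_hand, 0)).mean()

levels = np.arange(80, 140)
costs = np.array([average_cost(S) for S in levels])
best_S = levels[costs.argmin()]            # simulated optimal base-stock level
```

For these costs the critical-fractile solution b/(b+h) = 0.9 puts the optimum near the 90th demand percentile (about 113 for this distribution), so the simulated search should land close to that value.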
Abstract: The identification and elimination of bad
measurements is one of the basic functions of a robust state estimator
as bad data have the effect of corrupting the results of state
estimation according to the popular weighted least squares method.
However, this is a difficult problem to handle, especially when dealing with multiple errors of the interacting conforming type. In this paper, a self-adaptive genetic-based algorithm is proposed. The algorithm uses the results of the classical linearized normal-residuals approach to tune the genetic operators; instead of making a randomized search throughout the whole search space, the search is directed, so the optimum solution is obtained at very early stages (a maximum of five generations). The algorithm also uses accumulating databases of already computed cases to reduce the computational burden to a minimum. Tests are
conducted with reference to the standard IEEE test systems. Test
results are very promising.
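A minimal genetic search for bad measurements can be sketched as follows; the linear measurement model, penalty weight, and GA settings are illustrative assumptions (the proposed algorithm additionally tunes its operators from normal-residual results, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linear measurement model z = H x + noise, with one gross error
H = rng.normal(size=(8, 2))
x_true = np.array([1.0, -2.0])
z = H @ x_true + rng.normal(0, 0.01, 8)
z[3] += 5.0                                  # bad datum to be identified

def fitness(mask):
    # Least-squares residual on the kept measurements,
    # plus a small penalty for each measurement removed
    keep = mask == 0
    if keep.sum() < 2:
        return 1e9
    x_hat, *_ = np.linalg.lstsq(H[keep], z[keep], rcond=None)
    r = z[keep] - H[keep] @ x_hat
    return (r ** 2).sum() + 0.01 * mask.sum()

# Each chromosome is a bitmask of suspected bad measurements
pop = rng.integers(0, 2, (30, 8))
for _ in range(25):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[:10]]         # truncation selection
    children = parents[rng.integers(0, 10, size=29)].copy()
    children ^= rng.random(children.shape) < 0.1   # bit-flip mutation
    pop = np.vstack([parents[:1], children])       # elitism keeps the best mask

best = pop[np.argmin([fitness(m) for m in pop])]
```

Because removing the gross error collapses the residual while each extra removal only adds penalty, the fittest masks flag exactly the contaminated measurement.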
Abstract: Policies that support entrepreneurship are keys to the
generation of new business. In Brazil, seed capital, the installation of technology parks, zero-interest financing programs, and economic subsidies such as the First Innovative Company Program (PRIME) are examples of incentive policies. To implement PRIME, the Brazilian Innovation Agency (FINEP) decentralized its operation so that business incubators could select innovative projects. This paper analyzes the PRIME program at the Business Incubator Center of the State of Sergipe (CISE) by calculating the mean and standard deviation of the grades obtained by companies in the factors of innovation, market potential, economic and financial return, market strategy and staff, and by applying the Mann-Whitney test.
Abstract: The shortest path routing problem is a multiobjective
nonlinear optimization problem with constraints. This problem has
been addressed by considering Quality-of-Service parameters, the delay and cost objectives, either separately or as a weighted sum of both objectives. Multiobjective evolutionary algorithms can find multiple Pareto-optimal solutions in a single run, and this ability makes them
attractive for solving problems with multiple and conflicting
objectives. This paper uses an elitist multiobjective evolutionary
algorithm based on the Non-dominated Sorting Genetic Algorithm
(NSGA), for solving the dynamic shortest path routing problem in
computer networks. A priority-based encoding scheme is proposed
for population initialization. Elitism ensures that the best solution
does not deteriorate in subsequent generations. Results for a sample test network are presented to demonstrate the capability of the proposed approach to generate well-distributed Pareto-optimal solutions of the dynamic routing problem in a single run. The results obtained by NSGA are compared with the single-objective weighting-factor method, for which a Genetic Algorithm (GA) was applied.
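The non-dominated sorting at the heart of NSGA can be illustrated for the two objectives (delay, cost); the candidate paths below are made-up examples:

```python
def dominates(a, b):
    # a dominates b: no worse in either objective, strictly better in one
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(points):
    # Non-dominated (delay, cost) pairs among the candidate paths
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Hypothetical (delay, cost) values for five candidate routes
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(candidates)
```

NSGA assigns ranks by peeling fronts repeatedly: remove the first front, recompute on the remainder, and so on, so elite non-dominated routes survive into later generations.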
Abstract: The growth of the aquaculture industry has been
associated with negative environmental impacts through the
discharge of raw effluents into the adjacent receiving water bodies.
Macrophytes from natural saline lakes, which have adaptability to the
high salinity, can be suitable for saline effluent treatment. Eight
emergent species from natural saline area were planted in an
experimental gravel bed hydroponic mesocosm (GBH) which was
treated with effluent water from an intensive fish farm using
geothermal water. In order to examine the applicability of the
halophytes in treatment processes, we tested the relative efficacy of
total nitrogen (TN), total phosphorus (TP), potassium (K), sodium
(Na), magnesium (Mg) and calcium (Ca) removal for the saline
wastewater treatment. Four of the eight species, namely Phragmites australis, Typha angustifolia, Glyceria maxima, and Scirpus lacustris ssp. tabernaemontani, could survive and contribute to the experimental treatment.
Abstract: This paper proposes a Particle Swarm Optimization
(PSO) based technique for the optimal allocation of Distributed
Generation (DG) units in the power systems. In this paper our aim is
to decide optimal number, type, size and location of DG units for
voltage profile improvement and power loss reduction in the distribution network. Two types of DG are considered, and a distribution load flow is used to calculate the exact losses. The load flow algorithm is combined with PSO and iterated until acceptable results are obtained. The suggested method is implemented in MATLAB. Test results indicate that the PSO method can obtain better
results than the simple heuristic search method on the 30-bus and 33-
bus radial distribution systems. It can obtain maximum loss reduction
for each of two types of optimally placed multi-DGs. Moreover,
voltage profile improvement is achieved.
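The PSO search itself can be sketched on a stand-in objective; the two decision variables (DG size and a location coordinate) and the quadratic loss surface are illustrative assumptions, not a real load-flow calculation:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Standard particle swarm: inertia w, cognitive c1, social c2
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, len(lo)))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)           # keep particles inside the bounds
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

# Stand-in "loss" minimized at DG size 2.0 placed at normalized position 0.5
loss = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
best, best_loss = pso(loss, lo=np.array([0.0, 0.0]), hi=np.array([5.0, 1.0]))
```

In the setting of the abstract, the objective evaluated per particle would instead be the power loss returned by the distribution load flow.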
Abstract: Most real queuing systems include special properties and constraints that cannot be analyzed directly using the results of solved classical queuing models. Examples of such conditions are the lack of Markov-chain properties, non-exponential patterns, and service constraints. This paper presents an applied general algorithm for analyzing and optimizing queuing systems. The algorithm stages are described through a real case study: an almost completely non-Markov system with a limited number of customers and limited capacities, as well as many common exceptions of real queuing networks. Simulation is used to optimize this system. The stages introduced in this article include primary modeling, determining the kinds of queuing systems, index definition, statistical analysis and goodness-of-fit testing, model validation, and optimization of the system with simulation.
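A minimal version of the simulation stage can be sketched with the Lindley recurrence for a single-server queue; the arrival and service distributions are assumptions (exponential here, so the analytical M/M/1 answer is available as a check, while a non-Markov variant would simply swap in other distributions):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_waits(lam, mu, n):
    # Lindley recurrence: W[k+1] = max(0, W[k] + S[k] - A[k+1])
    inter = rng.exponential(1 / lam, n)   # inter-arrival times
    serve = rng.exponential(1 / mu, n)    # service times
    w = np.zeros(n)
    for k in range(n - 1):
        w[k + 1] = max(0.0, w[k] + serve[k] - inter[k + 1])
    return w

waits = simulate_waits(lam=0.5, mu=1.0, n=200_000)
mean_wait = waits.mean()
# For M/M/1 the theoretical mean wait in queue is rho/(mu - lam) = 1.0
```

Comparing the simulated mean against such a known result is one way to carry out the validation stage mentioned above before optimizing the real, non-Markov system.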
Abstract: The increasing globalization of capital markets brings higher requirements on the financial information provided to investors, who look for highly comparable information. The paper deals with the advantages and limitations of applying International Financial Reporting Standards (IFRS) in the Czech Republic and Ukraine. The greatest limits to full adoption of IFRS are the strong connection of continental accounting to the tax system and the enormously high administrative burden for IFRS appliers.