Abstract: Interactive CAD systems have to allocate and
deallocate memory frequently. Frequent memory allocation and
deallocation can play a significant role in degrading application
performance. An application may use memory in a very specific way
and pay a performance penalty for functionality it does not need. We
could counter that by developing specialized memory managers.
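The benefit of a specialized memory manager can be sketched with a minimal free-list object pool. The `ObjectPool` class below is a hypothetical Python illustration, not taken from the paper; it recycles released objects instead of handing them back to the general-purpose allocator:

```python
class ObjectPool:
    """A minimal free-list pool: released objects are recycled
    instead of being returned to the general-purpose allocator."""

    def __init__(self, factory):
        self._factory = factory   # creates a fresh object on a pool miss
        self._free = []           # free list of recycled objects

    def acquire(self):
        # Reuse a recycled object when one is available.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        self._free.append(obj)


pool = ObjectPool(dict)
a = pool.acquire()
pool.release(a)
b = pool.acquire()        # the recycled object, no new allocation
reused = a is b
```

Because the pool serves one object type with a known lifetime pattern, it avoids the bookkeeping a general allocator performs for functionality this workload does not need.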
Abstract: The Minimal Residual (MR) method is modified for adaptive
filtering applications. Three MR-based algorithms are presented: i) a
low-complexity SPCG, ii) MREDSI, and iii) MREDSII. The first is a
reduced-complexity version of a previously proposed SPCG algorithm;
the approximations introduced reduce it to an LMS-type algorithm
while maintaining the superior convergence of the SPCG algorithm.
Both MREDSI and MREDSII are MR-based methods with a Euclidean
direction of search. The choice of Euclidean directions is shown via
simulation to give better misadjustment than their gradient-search
counterparts.
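The SPCG and MRED algorithms themselves are not reproduced here, but the LMS-type baseline to which the low-complexity variant reduces can be sketched as follows. This is a minimal, noiseless system-identification example; the filter length, step size, and plant coefficients are illustrative assumptions:

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """Least-mean-squares adaptive filter: w <- w + mu * e * x_n."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ xn                    # instantaneous error
        w += mu * e * xn                     # stochastic-gradient update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                # white training input
h = np.array([0.8, -0.4, 0.2, 0.1])          # unknown FIR plant to identify
d = np.convolve(x, h)[:len(x)]               # desired (reference) signal
w = lms(x, d)                                # converges toward h
```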
Abstract: In this paper we present an approach to 3D face
recognition based on extracting the principal components of range
images using two modified PCA methods, namely 2DPCA and
bidirectional 2DPCA, also known as (2D)²PCA. A preprocessing
stage smooths the images using median and Gaussian filtering. In the
normalization stage we locate the nose tip, place it at the center of
each image, and crop the image to a standard size of 100×100. In the
face recognition stage we extract the principal components of each
image using both 2DPCA and (2D)²PCA. Finally, we use the
Euclidean distance to find the minimum distance between a given test
image and the training images in the database, and we compare the
results of the two methods. The best result achieved in experiments
on a public face database is a face recognition rate of 83.3 percent
for a random facial expression.
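The 2DPCA step described above can be sketched as follows. This is the standard 2DPCA procedure (image covariance matrix, top eigenvectors, nearest neighbour by Euclidean distance between feature matrices); the 100×100 range images are replaced with small random matrices purely for illustration:

```python
import numpy as np

def fit_2dpca(images, k=2):
    """2DPCA: project image rows onto the top-k eigenvectors of the
    image covariance matrix G = E[(A - mean)^T (A - mean)]."""
    A = np.stack(images).astype(float)
    mean = A.mean(axis=0)
    G = sum((a - mean).T @ (a - mean) for a in A) / len(A)
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, np.argsort(vals)[::-1][:k]]   # columns = projection axes

rng = np.random.default_rng(1)
train = [rng.random((10, 10)) for _ in range(5)]   # stand-ins for range images
X = fit_2dpca(train, k=2)
feats = [a @ X for a in train]                     # feature matrices, shape (10, 2)

# match a probe by minimum Euclidean (Frobenius) distance to the gallery
probe = train[3] @ X
match = int(np.argmin([np.linalg.norm(probe - f) for f in feats]))
```

Bidirectional (2D)²PCA additionally projects from the left with eigenvectors of the row-wise covariance, shrinking the feature matrix in both directions.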
Abstract: This contribution analyzes identity styles in adolescents
(N=463) aged 16 to 19 (mean age 17.7 years). We used Berzonsky's
Identity Style Inventory, which distinguishes three basic measured
identity styles (informational, normative, and diffuse-avoidant) as
well as commitment. The informational identity style, which
influences personal adaptability, coping strategies, and quality of
life, and the normative identity style, in which an individual adopts
the models of authorities in self-definition, were found to have the
highest representation in the studied group of adolescents, with
higher scores in girls than in boys. The normative identity style
correlates positively with the informational identity style. The
diffuse-avoidant identity style, in which the individual puts off
defining his or her personality, was found to be positively associated
with maladaptive decisional strategies, neuroticism, and depressive
reactions. In our research sample it had the lowest score and
correlated negatively with commitment, that is, with coping strategies
and trust in oneself and the surrounding world. The age of the
adolescents did not significantly differentiate the representation of
identity styles. We also examined a model in which the informational
and normative identity styles had a positive relationship, and the
informational and diffuse-avoidant styles a negative relationship,
with commitment; at the same time, commitment is influenced by
other outside factors.
Abstract: In this paper, the transverse vibration of buried pipelines
under loading induced by underground explosions is analyzed. The
pipeline is modeled as an infinite beam on an elastic foundation, so
that soil-structure interaction is represented by transverse linear
springs along the pipeline. The pipeline behavior is assumed to be
ideal elasto-plastic, with an ultimate strain value limiting the plastic
behavior. The blast loading is treated as a point load acting over the
affected length at some point of the pipeline, with a magnitude that
decreases exponentially with time. A closed-form solution of the
quasi-static problem is derived for both elastic and elastic-perfectly
plastic pipe materials. Finally, a comparative study on steel and
polyethylene pipes of different sizes, buried in various soil conditions
and subjected to a predefined underground explosion, is conducted,
and the effect of each parameter is discussed.
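For the elastic case, the quasi-static response of an infinite beam on an elastic (Winkler) foundation under a point load has the classical closed form w(x) = (P·beta/(2k))·exp(-beta|x|)·(cos(beta|x|) + sin(beta|x|)) with beta = (k/(4EI))^(1/4). A sketch with illustrative parameter values (not the paper's pipe or soil data):

```python
import numpy as np

def deflection(x, P, EI, k):
    """Quasi-static deflection of an infinite beam on a Winkler foundation
    under a point load P at x = 0 (classical closed-form solution)."""
    beta = (k / (4.0 * EI)) ** 0.25          # characteristic wavenumber
    bx = beta * np.abs(x)
    return (P * beta / (2.0 * k)) * np.exp(-bx) * (np.cos(bx) + np.sin(bx))

# illustrative values only: load [N], bending stiffness [N m^2], soil modulus [N/m^2]
P, EI, k = 1.0e4, 2.0e6, 5.0e3
x = np.linspace(-10, 10, 401)
w = deflection(x, P, EI, k)
w0 = deflection(0.0, P, EI, k)               # peak deflection, P*beta/(2k)
```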
Abstract: This study performs a comparative analysis of the 21 Greek universities in terms of the public funding awarded to cover their operating expenditure. First, it introduces a DEA/MCDM model that allocates the funds among four expenditure factors in the most favorable way for each university. Then, it presents a common, consensual assessment model to reallocate the amounts while remaining at the same level of total public budget. The analysis shows that a number of universities cannot justify their public funding in terms of their size and operational workload; for them, a sufficient reduction of the public funding amount is estimated as a future target. Due to the lack of precise data for a number of expenditure criteria, the analysis is based on a mixed crisp-ordinal data set.
Abstract: Text categorization is the problem of classifying text
documents into a set of predefined classes. In this paper, we
investigate three approaches to building a meta-classifier in order to
increase classification accuracy. The basic idea is to learn a
meta-classifier that optimally selects the best component classifier
for each data point. The experimental results show that combining
classifiers can significantly improve classification accuracy and that
our meta-classification strategy gives better results than each
individual classifier. For 7083 Reuters text documents we obtained
classification accuracies of up to 92.04%.
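The idea of a meta-classifier that routes each data point to the component classifier most likely to be correct can be illustrated with a toy example. This is not one of the paper's three approaches; the component classifiers (single-feature nearest-centroid) and the nearest-neighbour routing rule below are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy data: class 0 around (-2, 0), class 1 around (+2, 0)
X0 = rng.normal([-2, 0], 0.5, size=(50, 2))
X1 = rng.normal([+2, 0], 0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def make_centroid_clf(feature):
    """Component classifier: nearest class centroid on a single feature."""
    c0 = X[y == 0, feature].mean()
    c1 = X[y == 1, feature].mean()
    return lambda p: int(abs(p[feature] - c1) < abs(p[feature] - c0))

components = [make_centroid_clf(0), make_centroid_clf(1)]

# meta-training: for each point remember a component that classifies it
# correctly (falling back to component 0 when none does)
pref = np.array([next((i for i, c in enumerate(components) if c(p) == t), 0)
                 for p, t in zip(X, y)])

def meta_predict(p):
    """Route p to the component preferred by its nearest training neighbour."""
    nn = int(np.argmin(np.linalg.norm(X - p, axis=1)))
    return components[pref[nn]](p)

acc = np.mean([meta_predict(p) == t for p, t in zip(X, y)])
```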
Abstract: This paper presents a simplified version of Data Envelopment Analysis (DEA) - a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. DEA with a Perfect Object (DEA PO) augments the group of actual objects with a virtual Perfect Object - the one having the greatest outputs and smallest inputs. This allows for obtaining an explicit analytical solution and taking a step toward absolute efficiency. This paper develops the approach further and introduces a DEA model with Partially Perfect Objects. DEA PPO consecutively eliminates the smallest relative inputs or greatest relative outputs and applies DEA PO to the reduced collections of indicators. The partial efficiency scores are then combined into a weighted efficiency score. The computational scheme remains as simple as that of DEA PO, but the advantage of DEA PPO is that it takes into account all of the inputs and outputs of each actual object. Firm evaluation is considered as an example.
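The role of the virtual Perfect Object can be illustrated with a deliberately simplified score that fixes equal weights on normalized indicators (DEA PO itself optimizes the weights; the firm data below are made up):

```python
import numpy as np

def po_efficiency(X, Y):
    """Illustrative efficiency against a virtual 'perfect object' that has
    the smallest inputs and greatest outputs. Equal fixed weights on
    normalized indicators are a simplification: DEA PO optimizes them."""
    out_score = (Y / Y.max(axis=0)).mean(axis=1)   # <= 1; = 1 for the perfect object
    in_score = (X / X.min(axis=0)).mean(axis=1)    # >= 1; = 1 for the perfect object
    return out_score / in_score

# three firms, two inputs and two outputs (made-up numbers)
X = np.array([[2.0, 4.0], [3.0, 3.0], [4.0, 6.0]])
Y = np.array([[10.0, 6.0], [8.0, 8.0], [6.0, 4.0]])
eff = po_efficiency(X, Y)
```

By construction every actual object scores at most 1, and the perfect object itself scores exactly 1.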
Abstract: Supplier selection is a multi-criteria decision-making process that comprises tangible and intangible factors. The majority of previous supplier selection techniques do not consider the strategic perspective; moreover, uncertainty is one of the most important obstacles in supplier selection. For the first time, in this paper the idea of the knapsack algorithm is used to select suppliers. In addition, an attempt is made to take advantage of a simple numerical method for solving the model. This is an innovative way to resolve ambiguity in choosing suppliers. The model selects suppliers in a competitive environment according to all desired quality and quantity standards; to show the efficiency of the model, an industry sample has been used.
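The knapsack formulation of supplier selection can be sketched with the classic 0/1 dynamic program: each supplier has a score (value) and a cost (weight), and the buyer maximizes total score within a budget. The supplier scores and costs below are hypothetical, not the paper's data:

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack by dynamic programming: best[c] holds the best
    total value achievable with total weight <= c."""
    best = [0] * (capacity + 1)
    keep = [[False] * (capacity + 1) for _ in values]
    for i, (v, w) in enumerate(zip(values, weights)):
        for c in range(capacity, w - 1, -1):   # descending: each item used once
            if best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
                keep[i][c] = True
    # backtrack to recover the chosen items
    chosen, c = [], capacity
    for i in range(len(values) - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= weights[i]
    return best[capacity], sorted(chosen)

# hypothetical supplier scores and budget costs
scores = [60, 100, 120]
costs = [10, 20, 30]
value, picked = knapsack(scores, costs, capacity=50)
```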
Abstract: The bypass exhaust system of a 160 MW combined cycle has been modeled and analyzed using two-dimensional numerical simulation. The analysis was carried out using the commercial numerical simulation software FLUENT 6.2. All inputs were based on technical data gathered from the working conditions of a Siemens V94.2 gas turbine installed in the Yazd power plant. This paper deals with the reduction of the pressure drop in the bypass exhaust system using turning vanes mounted in the diverter box, in order to alleviate the turbulent energy dissipation rate above the diverter box. The geometry of the turning vanes has been optimized based on the flow pattern at the diverter box inlet. The results show that the use of optimized turning vanes in the diverter box can improve the flow pattern and eliminate vortices around sharp edges just before the silencer. Furthermore, this optimization decreases the pressure drop in the bypass exhaust system and leads to higher plant efficiency.
Abstract: In order to assess optical fiber reliability under different environmental and stress conditions, series of tests were performed simulating the overlap of controlled, varying chemical and mechanical factors. Each test series may be compared using statistical processing, e.g. Weibull plots. Owing to the large amount of data to treat, a software application has proved useful for interpreting selected series of experiments as a function of the envisaged factors. This paper presents a software application used for the storage, modelling, and interpretation of experimental data gathered from optical fibre testing, and deals strictly with the software part of the project (the modelling, storage, and processing of user-supplied data).
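The Weibull processing mentioned above can be sketched as a median-rank probability plot: ln(-ln(1-F)) plotted against ln(x) is linear for Weibull data, with slope equal to the shape parameter. The sketch below is a generic illustration with synthetic data, not the project's software:

```python
import numpy as np

def weibull_plot_fit(samples):
    """Median-rank Weibull plot: ln(-ln(1-F)) vs ln(x) is linear with
    slope = shape and intercept = -shape * ln(scale)."""
    x = np.sort(samples)
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Benard's median ranks
    slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    return slope, np.exp(-intercept / slope)       # (shape, scale)

rng = np.random.default_rng(3)
data = rng.weibull(2.0, size=500) * 1.5            # true shape 2.0, scale 1.5
shape_est, scale_est = weibull_plot_fit(data)
```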
Abstract: Noting that an effective treatment of infertility depends
on finding its cause, a great deal of study has been done in this field,
and it remains a hot research subject today. If we can analyze a
man's semen, determine fertility or infertility from it, and from this
derive a true treatment, the procedure will be non-invasive and
low-risk and will therefore be greatly welcomed. In this research, the
procedure is based on several image enhancement and segmentation
algorithms applied to microscope images taken at different fertility
institutions. Suitable results were obtained from the computer-processed
images, which in turn help us to distinguish the sperm from
the seminal fluid and its surroundings.
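A typical segmentation step for such microscope images is global thresholding; Otsu's method is a common choice, sketched below on a synthetic frame (the abstract does not specify the actual algorithms used, so this is only an assumed representative step):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the grey level that maximizes the
    between-class variance of the two resulting pixel groups."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

# synthetic "microscope" frame: dark background with one bright blob
rng = np.random.default_rng(4)
img = rng.normal(40, 5, size=(64, 64))
img[20:30, 20:30] = rng.normal(200, 5, size=(10, 10))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
mask = img > t                                 # segmented foreground
```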
Abstract: Acid rain occurs when sulphur dioxide (SO2) and
nitrogen oxides (NOx) react in the atmosphere with water, oxygen,
and other chemicals to form various acidic compounds. The result is
a mild solution of sulphuric acid and nitric acid. Soil has a greater
buffering capacity than aquatic systems; however, the excessive
amounts of acid introduced by acid rain may disturb the entire soil
chemistry. Acidity and the harmful action of toxic elements damage
vegetation, while susceptible microbial species are eliminated. In the
present study, the effects of simulated sulphuric acid and nitric acid
rain on the crop Glycine max were investigated. A change in soil
fertility due to acid rain was detected: the pH of the control sample
was 6.5, while the pH of the 1% H2SO4 and 1% HNO3 treatments
was 3.5. Nitrate nitrogen in the soil was high in the 1% HNO3-treated
soil and the control sample. Ammonium nitrogen was low in the 1%
HNO3- and H2SO4-treated soils and medium in the control and other
samples. Regarding seed germination, on the 3rd day the control
sample had grown to 7 cm, the 0.1% HNO3 sample to 8 cm, and the
0.001% HNO3 and 0.001% H2SO4 samples to 6 cm each. On the
10th day, fungal growth was observed at the 1% and 0.1% H2SO4
concentrations, where all plants were dead. Regarding crop
productivity, roots had developed in the plants by the 3rd day. On the
12th day, Glycine max showed more growth under 0.1% HNO3,
while the 0.001% HNO3- and 0.001% H2SO4-treated plants grew the
same as the control plants. On the 20th day, discoloration of plant
pigments was observed on the leaves of the acid-treated plants. On
the 38th day, the 0.1% and 0.001% HNO3-treated, 0.1% and 0.001%
H2SO4-treated, and control plants were showing flower growth. On
the 42nd day, the acid-treated Glycine max plants and the control
plants showed seeds. The 0.1% and 0.001% H2SO4- and 0.1% and
0.001% HNO3-treated plants were dead by the 46th day, and fungal
growth was observed. A toxicological study showed that the cells of
Glycine max plants exposed to 1% HNO3 were damaged more than
those exposed to 1% H2SO4. Leaf sections exposed to 0.001% HNO3
and H2SO4 showed less cell damage, and pigmentation was observed
across the entire slide when compared with the control plant. Soil
analysis was performed to find microorganisms in the HNO3- and
H2SO4-treated Glycine max and control plants: no microbial growth
was observed at 1% HNO3 and H2SO4, but the control plant showed
microbial growth.
Abstract: Land degradation is of concern in many countries, and people must increasingly address the problems associated with man-made degradation of soil properties. Organic soil amendments such as compost are increasingly being examined for their potential use in soil restoration and for preventing soil erosion. In the Czech Republic, compost is most commonly used to improve soil structure and increase the content of soil organic matter. Land reclamation and restoration is one of the ways to make use of industrially produced compost, because Czech farmers are not willing to use compost as an organic fertilizer. The most common use of reclamation substrates in the Czech Republic is for the rehabilitation of landfills and contaminated sites.
This paper deals with the influence of reclamation substrates (RS) with different proportions of compost and sand on selected soil properties: chemical characteristics, nitrogen bioavailability, leaching of mineral nitrogen, respiration activity, and plant biomass production. The chemical properties vary in proportion to the addition of compost and sand relative to the control variant (topsoil). The highest differences between the variants were recorded in the leaching of mineral nitrogen (varying from 1.36 mg dm-3 in variant C to 9.09 mg dm-3). The addition of compost to soil improves the conditions for plant growth in comparison with soil alone; however, too high an addition of compost may have adverse effects on plant growth, and a high proportion of compost increases the leaching of mineral N. Therefore, a mixture of 70% soil with 10% compost and 20% sand may be recommended as the optimal composition of RS.
Abstract: Transmission network expansion planning (TNEP) is
a basic part of power system planning that determines where, when,
and how many new transmission lines should be added to the
network. Up to now, various methods have been presented to solve
the static transmission network expansion planning (STNEP)
problem, but in none of them has transmission expansion planning
under a network adequacy restriction been investigated. Thus, in this
paper, the STNEP problem is studied under a network adequacy
restriction using a discrete particle swarm optimization (DPSO)
algorithm. The goal of this paper is to obtain a network expansion
configuration with the lowest expansion cost and a specified
adequacy. The proposed idea has been tested on Garver's network
and compared with a decimal codification genetic algorithm
(DCGA). The results show that the resulting network possesses
maximum economic efficiency. It is also shown that the precision
and convergence speed of the proposed DPSO-based method for the
solution of the STNEP problem are greater than those of the DCGA
approach.
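The discrete PSO machinery can be sketched in its standard binary form, where real-valued velocities are mapped through a sigmoid into bit probabilities. The toy fitness function below merely maximizes the number of selected bits and stands in for the paper's expansion-cost-plus-adequacy objective; the swarm parameters are illustrative assumptions:

```python
import numpy as np

def dpso_maximize(fitness, n_bits, n_particles=30, iters=60, seed=5):
    """Minimal discrete (binary) PSO: real-valued velocities are squashed
    through a sigmoid into bit probabilities (Kennedy-Eberhart style)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits))
    v = np.zeros((n_particles, n_bits))
    pbest, pfit = x.copy(), np.array([fitness(p) for p in x])
    gfit = pfit.max()
    g = pbest[np.argmax(pfit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        # resample each bit with probability sigmoid(velocity)
        x = (rng.random(v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        f = np.array([fitness(p) for p in x])
        better = f > pfit
        pbest[better], pfit[better] = x[better], f[better]
        if pfit.max() > gfit:
            gfit = pfit.max()
            g = pbest[np.argmax(pfit)].copy()
    return g, gfit

# toy stand-in for a line-selection objective
best, best_fit = dpso_maximize(lambda bits: int(bits.sum()), n_bits=12)
```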
Abstract: There are several existing Java benchmarks, including application benchmarks, micro-benchmarks, and mixtures of both, such as Java Grande, SPECjvm98, CaffeineMark, and HBench. However, none of them deal with the behavior of multitasking operating systems; as a result, the outputs achieved are not satisfying for performance evaluation engineers. The behavior of a multitasking operating system is based on the schedule management the system employs: different processes can have different priorities while sharing the same resources. Time measured from when an application starts until it finishes does not reflect the real time the system needs to run the program, so a new approach to this problem is needed. In this paper we therefore present a new Java benchmark, named the FHOJ benchmark, which directly deals with the multitasking behavior of a system. Our study shows that in some cases, results from the FHOJ benchmark are far more reliable than those from some existing Java benchmarks.
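The distinction the benchmark targets, between elapsed time and the CPU time a process is actually charged under a multitasking scheduler, can be illustrated in a few lines (shown in Python rather than Java for brevity; the workload is an arbitrary CPU-bound loop):

```python
import time

def burn(n):
    """A purely CPU-bound task."""
    s = 0
    for i in range(n):
        s += i * i
    return s

wall_start = time.perf_counter()      # elapsed "stopwatch" time
cpu_start = time.process_time()       # CPU time charged to this process
burn(200_000)
time.sleep(0.05)                      # waiting costs wall time, not CPU time
wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start
```

A benchmark that reports only `wall` conflates scheduler delays and other processes' activity with the program's own cost, which is exactly the problem a multitasking-aware benchmark must address.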
Abstract: Problems concerning algebraic polynomials appear in many fields of mathematics and computer science, and the task of determining the roots of polynomials in particular has been investigated frequently. Nonetheless, the task of locating the zeros of complex polynomials is still challenging. In this paper we deal with the location of the zeros of univariate complex polynomials. We prove some novel upper bounds for the moduli of the zeros of complex polynomials; that is, we provide disks in the complex plane in which all zeros of a complex polynomial are situated. Such bounds are extremely useful for obtaining a priori assertions regarding the location of the zeros of polynomials. Based on the proven bounds and a test set of polynomials, we present an experimental study to examine which bound is optimal.
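One classical bound of this kind is Cauchy's: every zero z of a_n z^n + ... + a_0 satisfies |z| <= 1 + max_i |a_i / a_n|. A quick numerical check (the paper's novel bounds are not reproduced here):

```python
import numpy as np

def cauchy_bound(coeffs):
    """Cauchy's classical bound: every zero z of
    a_n z^n + ... + a_0 satisfies |z| <= 1 + max_i |a_i / a_n|."""
    a = np.asarray(coeffs, dtype=complex)      # highest degree first
    return 1.0 + np.max(np.abs(a[1:] / a[0]))

p = [1.0, -3.0, 2.0, 5.0]      # z^3 - 3z^2 + 2z + 5
R = cauchy_bound(p)            # disk radius containing all zeros
zeros = np.roots(p)
inside = bool(np.all(np.abs(zeros) <= R))
```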
Abstract: Although OCD is one of the most commonly occurring
psychiatric conditions experienced by older adults, there is a paucity
of research into the treatment of older adults with OCD. This case
study represents the first published investigation of a cognitive
treatment for geriatric OCD. It describes the successful treatment of
an 86-year-old man with a 63-year history of OCD using Danger
Ideation Reduction Therapy (DIRT). The client received 14
individual 50-minute treatment sessions of DIRT over 13 weeks.
Clinician-rated Y-BOCS scores decreased by 84%, from 25 (severe)
at pre-treatment to 4 (subclinical) at the 6-month post-treatment
follow-up interview, demonstrating the efficacy of DIRT for this
client. DIRT may have particular advantages over ERP and
pharmacological approaches; however, further research is required in
older adults with OCD.
Abstract: Distant-talking voice-based HCI systems suffer from
performance degradation due to the mismatch between the acoustic
speech at runtime and the acoustic model obtained at training. The
mismatch is caused by the change in the power of the speech signal
as observed at the microphones. This change is greatly influenced by
the change in distance, which affects the speech dynamics inside the
room before the signal reaches the microphones. Moreover, as the
speech signal is reflected, its acoustic characteristics are also altered
by the room properties. In general, power mismatch due to distance
is a complex problem. This paper presents a novel approach to
dealing with distance-induced mismatch by intelligently sensing the
instantaneous variation in voice power and compensating the model
parameters. First, the distant-talking speech signal is processed
through microphone-array processing, and the corresponding
distance information is extracted. Distance-sensitive Gaussian
Mixture Models (GMMs), pre-trained to capture both speech power
and room properties, are used to predict the optimal distance of the
speech source. Consequently, pre-computed statistical priors
corresponding to the optimal distance are selected to correct the
statistics of the generic model, which was frozen during training.
Thus, the model parameters are post-conditioned to match the power
of the instantaneous speech acoustics at runtime, which results in an
improved likelihood of predicting the correct speech command at
farther distances. We experiment using real data recorded inside two
rooms. The experimental evaluation shows that voice recognition
using our method is more robust to changes in distance than the
conventional approach. Under the most acoustically challenging
environment in our experiment (Room 2, at 2.5 meters), our method
achieved a 24.2% improvement in recognition performance over the
best-performing conventional method.
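The distance-selection step can be illustrated in a reduced one-dimensional form: one Gaussian per candidate distance models the received speech power, and the distance whose model best explains the observed frames is selected. The per-distance means and variances below are invented for illustration (the paper uses full GMMs over power and room properties):

```python
import numpy as np

# hypothetical per-distance models of received speech power (dB);
# mean power falls with distance, spread grows: {distance_m: (mu, sigma)}
models = {0.5: (60.0, 2.0), 1.5: (52.0, 2.5), 2.5: (46.0, 3.0)}

def log_likelihood(x, mu, sigma):
    """Gaussian log-density, summed over the observed frames."""
    x = np.asarray(x)
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

def predict_distance(power_frames):
    """Pick the distance whose Gaussian best explains the observed frames."""
    scores = {d: log_likelihood(power_frames, mu, s)
              for d, (mu, s) in models.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(6)
frames = rng.normal(46.5, 3.0, size=40)      # power observed near 2.5 m
d_hat = predict_distance(frames)
```

The selected distance then indexes the pre-computed priors used to post-condition the generic model.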
Abstract: Several works on facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several regions of it. In this paper a new technique is introduced that approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to low-frequency values, gradient, entropy, and several other pixel characteristics of the image, generating a set of "p" variables. This multivariate data set is approximated with polynomials that minimize the data fitting error in the minimax sense (the L∞ norm). The use of a Genetic Algorithm (GA) makes it possible to circumvent the problem of dimensionality inherent in higher-degree polynomial approximations; the GA yields the degree and the values of a set of coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face Fi in the database. A face F is recognized by finding its characteristic polynomials and applying an AdaBoost classifier from F's polynomials to each of the Fi's polynomials; the winner is the polynomial family closest to F's, corresponding to the target face in the database.
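The GA-driven minimax fitting step can be sketched in one dimension: a small genetic algorithm searches polynomial coefficients to minimize the L∞ error against a target signal. The GA operators, parameters, and the target below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(7)
xs = np.linspace(-1, 1, 50)
target = np.abs(xs)                       # stand-in "pixel attribute" signal

def linf_error(coeffs):
    """Minimax (L-infinity) fitting error on the sample points."""
    return np.max(np.abs(np.polyval(coeffs, xs) - target))

def ga_minimax(degree=4, pop_size=40, gens=200, sigma=0.3):
    """Tiny GA over polynomial coefficients minimizing the L-infinity error:
    truncation selection, uniform crossover, Gaussian mutation, elitism."""
    pop = rng.normal(0, 1, size=(pop_size, degree + 1))
    for _ in range(gens):
        fit = np.array([linf_error(c) for c in pop])
        elite = pop[np.argsort(fit)[: pop_size // 4]]   # best quarter survives
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        mix = rng.random((pop_size, degree + 1))        # uniform crossover
        pop = np.where(mix < 0.5, parents[:, 0], parents[:, 1])
        # sparse Gaussian mutation on ~20% of the genes
        pop += rng.normal(0, sigma, pop.shape) * (rng.random(pop.shape) < 0.2)
        pop[0] = elite[0]                               # elitism
    fit = np.array([linf_error(c) for c in pop])
    return pop[np.argmin(fit)], fit.min()

coeffs, err = ga_minimax()
```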