Abstract: An electrolyte-stirring method for the anodization etching
process used to manufacture porous silicon (PS) is reported in this work.
Two experimental setups, natural air stirring (PS-ASM) and
electrolyte stirring (PS-ESM), are employed to clarify the influence of
stirring mechanisms on the electrochemical etching process. Compared to
traditional fabrication without any stirring apparatus (PS-TM), a large
plateau region of PS surface structure is obtained from samples with
both stirring methods by the 3D-profiler measurement. Moreover, the
light emission response is also improved by both proposed stirring
methods, because the circulating force in the electrolyte effectively
enhances the etch-carrier distribution during the electrochemical
etching process. According to a statistical analysis of the
photoluminescence (PL) intensity, lower standard deviations are
obtained from PS samples prepared with the studied stirring methods,
i.e. the uniformity of the PL intensity is effectively improved. The
calculated deviations of PL-intensity are 93.2, 74.5 and 64,
respectively, for PS-TM, PS-ASM and PS-ESM.
Abstract: The aim of this study is to discuss the relationship between tourists' awareness of environmental issues and their own recreational behaviors in the Taipei Guandu Wetland. A total of 392 questionnaires were gathered for data analysis using descriptive statistics, t-testing, one-way analysis of variance (ANOVA) and least significant difference (LSD) post hoc comparisons. The results showed that most of the visitors enjoying the beautiful scenery there are 21 to 30 years old with a college education. The means and standard deviations indicate that tourists express a positive degree of cognition of environmental issues and recreational behaviors. They suggest that polluting the environment is harmful to the natural ecosystem and that the natural resources of ecotourism are fragile, as well as expressing a high degree of recognition of the need to protect wetlands. Most respondents are cognizant of the regulations proposed by the Guandu Wetland administration, which asks that users exercise self-control and follow recommended guidelines when traveling in the wetland. There were significant differences in the degree of cognition related to the variables of age, number of visits and reasons for visiting. We found that most respondents with relatively high levels of education would like to learn more about the wetland and are supportive of its conservation.
Abstract: There are two common types of operational research techniques: optimisation and metaheuristic methods. The latter may be defined as sequential processes that intelligently perform exploration and exploitation, drawing strong inspiration from natural intelligence to form iterative searches. The aim is to determine near-optimal solutions in a solution space effectively. In this work, a metaheuristic called Ant Colony Optimisation (ACO), inspired by the foraging behaviour of ants, was adapted to find optimal solutions of eight non-linear continuous mathematical models. Within the solution space considered for each model, in a specified region, there may be a global optimum or multiple local optima. Moreover, the algorithm has several common parameters (the numbers of ants, moves, and iterations) which act as the algorithm's drivers. A series of computational experiments for initialising these parameters was conducted using the Rigid Simplex (RS) and Modified Simplex (MSM) methods. Experimental results were analysed in terms of the best-so-far solutions, the mean, and the standard deviation. Finally, recommended settings of the ACO parameters are stated for all eight functions. These parameter settings can be applied as a guideline for future uses of ACO, to promote its ease of use in real industrial processes. It was found that the results obtained from MSM were quite similar to those gained from RS. However, when results with noise standard deviations of 1 and 3 are compared, MSM reaches optimal solutions more efficiently than RS in terms of speed of convergence.
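A minimal sketch of ACO adapted to continuous function minimisation is given below. This is an illustrative reconstruction, not the authors' RS/MSM-tuned implementation: the Gaussian sampling around an archive of good solutions, the decay schedule, and all parameter values are assumptions made for the example.

```python
import random

def aco_continuous(f, bounds, n_ants=20, n_iter=200, seed=0):
    """Minimise f over a box: ants sample Gaussian moves around an
    archive of the best solutions found so far, with shrinking spread."""
    rng = random.Random(seed)
    archive = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_ants)]
    archive.sort(key=f)
    for it in range(n_iter):
        spread = [(hi - lo) * 0.5 ** (it / 20) for lo, hi in bounds]  # decaying step size
        elite = archive[:n_ants // 2]          # the better half guides the search
        for _ in range(n_ants):
            guide = rng.choice(elite)
            cand = [min(max(g + rng.gauss(0, s), lo), hi)
                    for g, s, (lo, hi) in zip(guide, spread, bounds)]
            archive.append(cand)
        archive.sort(key=f)                     # keep only the best n_ants
        del archive[n_ants:]
    return archive[0], f(archive[0])

# Sphere function as a stand-in for one of the eight test models.
best, val = aco_continuous(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

On a unimodal function the archive collapses onto the optimum; on multimodal models the decay rate trades exploration against convergence speed, which is exactly the kind of parameter-setting question the abstract studies.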
Abstract: One of the purposes of the robust method of
estimation is to reduce the influence of outliers in the data, on the
estimates. The outliers arise from gross errors or contamination from
distributions with long tails. The trimmed mean is a robust estimate.
This means that it is not sensitive to violation of distributional
assumptions of the data. It is called an adaptive estimate when the
trimming proportion is determined from the data rather than being
fixed a priori.
The main objective of this study is to find out the robustness
properties of the adaptive trimmed means in terms of efficiency, high
breakdown point and influence function. Specifically, it seeks to find
out the magnitude of the trimming proportion of the adaptive
trimmed mean which will yield efficient and robust estimates of the
parameter for data which follow a modified Weibull distribution with
parameter λ = 1/2 , where the trimming proportion is determined by a
ratio of two trimmed means defined as the tail length. Secondly, the
asymptotic properties of the tail length and the trimmed means are
also investigated. Finally, a comparison is made of the efficiency of
the adaptive trimmed means, in terms of the standard deviation,
against trimmed means whose trimming proportions were fixed a priori.
The asymptotic tail lengths defined as the ratio of two trimmed
means and the asymptotic variances were computed by using the
formulas derived. The values of the standard deviations of the
derived tail lengths for samples of size 40 simulated from a Weibull
distribution were computed over 100 iterations using a computer
program written in Pascal.
The findings of the study revealed that the tail lengths of the
Weibull distribution increase in magnitude as the trimming
proportions increase; that the measure of the tail length and the
adaptive trimmed mean are asymptotically independent as the number
of observations n approaches infinity; that the tail length is
asymptotically distributed as the ratio of two independent normal
random variables; and that the asymptotic variances decrease as the
trimming proportions increase. The simulation study revealed
empirically that the standard error of the adaptive trimmed mean
using the ratio of tail lengths is smaller, for different values of the
trimming proportion, than that of its counterpart with the trimming
proportion fixed a priori.
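The adaptive rule can be sketched as follows. The tail-length statistic is taken literally as a ratio of two trimmed means, as in the abstract, but the cut-off value mapping the tail length to a trimming proportion is a made-up illustration, not the study's rule.

```python
import numpy as np

def trimmed_mean(x, alpha):
    """Mean of x after removing a fraction alpha from each tail."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(alpha * x.size)
    return x[k:x.size - k].mean()

def adaptive_trimmed_mean(x, light=0.05, heavy=0.25, cut=1.15):
    """Tail length = ratio of a lightly trimmed to a heavily trimmed
    mean; long-tailed samples then get the heavier trim.  The cut-off
    value is an illustrative assumption."""
    tail = trimmed_mean(x, light) / trimmed_mean(x, heavy)
    alpha = heavy if tail > cut else light
    return trimmed_mean(x, alpha), alpha

rng = np.random.default_rng(0)
sample = rng.weibull(0.5, size=40)   # shape parameter 1/2, n = 40 as in the study
est, alpha = adaptive_trimmed_mean(sample)
```

For a heavy-tailed Weibull sample the lightly trimmed mean retains more of the tail, the ratio is large, and the rule selects the heavier trimming proportion.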
Abstract: Molar excess volumes, V^E_ijk, and speeds of sound, u_ijk, of the 2-pyrrolidinone (i) + benzene or toluene (j) + ethanol (k) ternary mixtures have been measured as a function of composition at 308.15 K. The observed speed-of-sound data have been utilized to determine the excess isentropic compressibilities, (κ^E_S)_ijk, of the ternary (i + j + k) mixtures. The molar excess volume, V^E_ijk, and excess isentropic compressibility, (κ^E_S)_ijk, data have been fitted to the Redlich-Kister equation to calculate ternary adjustable parameters and standard deviations. The Moelwyn-Huggins concept (Huggins in Polymer 12: 389-399, 1971) of connectivity between the surfaces of the constituents of binary mixtures has been extended to ternary mixtures (using the concept of a connectivity parameter of third degree, ³ξ, which in turn depends on molecular topology) to obtain an expression that describes the measured V^E_ijk and (κ^E_S)_ijk data well.
Abstract: The Institute of Product Development is dealing
with the development, design and dimensioning of micro components
and systems as a member of the Collaborative Research
Centre 499 “Design, Production and Quality Assurance of
Molded micro components made of Metallic and Ceramic Materials".
Because of technological restrictions in the miniaturization
of conventional manufacturing techniques, shape and
material deviations cannot be scaled down in the same proportion
as the micro parts, rendering components with relatively
wide tolerance fields. Systems that include such components
should be designed with this particularity in mind, often requiring
large clearances. In the end, the output of such systems
is variable and prone to dynamic instability. To save
production time and resources, every study of these effects
should happen early in the product development process and
be based on computer simulation to avoid costly prototypes. A
suitable method is proposed here and applied, as an example, to a
micro technology demonstrator developed by the CRC 499. It
consists of a one stage planetary gear train in a sun-planet-ring
configuration, with input through the sun gear and output
through the carrier. The simulation procedure relies on ordinary
Multi Body Simulation methods and subsequently adds
other techniques to further investigate details of the system's
behavior and to predict its response. The selection of the relevant
parameters and output functions followed the engineering
standards for regular sized gear trains. The first step is to
quantify the variability and to reveal the most critical points of
the system, performed through a whole-mechanism Sensitivity
Analysis. Due to the lack of prior knowledge about the system's
behavior, different DOE methods involving small and
large numbers of experiments were selected to perform the SA.
In this particular case the parameter space can be divided into
two well defined groups, one of them containing the gear's profile
information and the other the components' spatial locations.
This has been exploited to explore the different DOE techniques
more promptly. A reduced set of parameters is derived for
further investigation and to feed the final optimization process,
whether as optimization parameters or as external perturbation
collective. The 10 most relevant perturbation factors and 4 to 6
prospective variable parameters are considered in a new, simplified
model. All of the parameters are affected by the mentioned
production variability. The objective functions of interest
are based on variability measures of scalar outputs, so the
problem becomes an optimization under robustness and reliability
constraints. The study represents an initial step on the development
path of a method to design and optimize complex micro
mechanisms composed of widely toleranced elements, accounting
for the robustness and reliability of the systems' output.
Abstract: In this work, simulation algorithms for contact drying
of agitated particulate materials under vacuum and at atmospheric
pressure were developed. The implementation of algorithms gives a
predictive estimation of drying rate curves and bulk bed temperature
during contact drying. The calculations are based on the penetration
model to describe the drying process, where all process parameters
such as heat and mass transfer coefficients, effective bed properties,
gas and liquid phase properties are estimated with proper
correlations. Simulation results were compared with experimental
data from the literature. In both cases, simulation results were in good
agreement with the experimental data. A few deviations were
identified, and the limitations of the predictive capabilities of the
models are discussed. The programs give good insight into the drying
behaviour of the analysed powders.
Abstract: The objective of this research was to study the foot
anthropometry of children aged 7-12 years in the south of Thailand.
Thirty-three dimensions were measured on 305 male and 295 female
subjects in three age ranges (7-12 years old). The instrumentation
consisted of four instruments: an anthropometer, a digital vernier
caliper, a digital height gauge and a measuring tape. The means and
standard deviations of the age, height, and weight of the male
subjects were 9.52 (±1.70) years, 137.80 (±11.55) cm, and 37.57
(±11.65) kg; those of the female subjects were 9.53 (±1.70) years,
137.88 (±11.55) cm, and 34.90 (±11.57) kg, respectively. A
comparison of the 33 measured anthropometric dimensions between
male and female subjects showed statistically significant sex
differences in size in almost all areas (p
Abstract: Co-integration models the long-term, equilibrium relationship of two or more related financial variables. Even if co-integration is found, in the short run there may be deviations from the long-run equilibrium relationship. The aim of this work is to forecast these deviations using neural networks and to create a trading strategy based on them. A case study is used: co-integration residuals from Australian Bank Bill futures are forecast and traded using various exogenous input variables combined with neural networks. The choice of optimal exogenous input variables for each neural network, made in previous work [1], is validated by comparing the forecasts and the corresponding profitability of each, using a trading strategy.
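The residual construction and a deviation-trading rule can be sketched as below. The neural-network forecasting stage is omitted; a simple z-score threshold stands in for it, and the synthetic data, hedge-ratio regression and threshold value are all illustrative assumptions.

```python
import numpy as np

def cointegration_residuals(y, x):
    """OLS hedge ratio and residuals: resid = y - (a + b*x).
    (A real study would first test the pair for co-integration.)"""
    b, a = np.polyfit(x, y, 1)
    return y - (a + b * x), b

def threshold_signals(resid, k=1.0):
    """Short the spread when the residual is k std devs above its mean,
    long when k below, flat otherwise (a stand-in for the NN forecast)."""
    z = (resid - resid.mean()) / resid.std()
    return np.where(z > k, -1, np.where(z < -k, 1, 0))

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500))             # random-walk driver
y = 2.0 * x + rng.normal(scale=0.5, size=500)   # co-integrated with x
resid, beta = cointegration_residuals(y, x)
signals = threshold_signals(resid)
```

The residual series is stationary while the two price series themselves wander, which is what makes trading on its deviations plausible in the first place.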
Abstract: With increasing complexity in electronic systems
there is a need for system level anomaly detection and fault isolation.
Anomaly detection based on vector similarity to a training set is used
in this paper through two approaches: one that preserves the original
information, Mahalanobis Distance (MD), and one that
compresses the data into its principal components, Projection Pursuit
Analysis. These methods have been used to detect deviations in
system performance from normal operation and for critical parameter
isolation in multivariate environments. The study evaluates the
detection capability of each approach on a set of test data with known
faults against a baseline set of data representative of such “healthy"
systems.
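A minimal Mahalanobis-distance detector of the kind described can be sketched as follows; the 3-sigma cut on the training distances and the synthetic "healthy" data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mahalanobis_detector(train, k_sigma=3.0):
    """Fit mean and covariance on healthy training vectors; flag test
    vectors whose distance exceeds mean + k_sigma*std of the training
    distances."""
    mu = train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

    def distance(X):
        d = X - mu
        return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

    train_d = distance(train)
    cut = train_d.mean() + k_sigma * train_d.std()
    return lambda X: distance(X) > cut

rng = np.random.default_rng(2)
healthy = rng.normal(size=(500, 4))     # four monitored parameters
faulty = healthy[:10] + 8.0             # a large shift on every parameter
is_anomaly = mahalanobis_detector(healthy)
```

Because the covariance is inverted, the distance accounts for correlations between parameters, which is the advantage MD has over per-parameter limits in multivariate environments.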
Abstract: The response of growth and yield of rainfed-chickpea
to population density should be evaluated based on long-term
experiments to include climate variability. This is achievable only
through simulation. In this study, the evaluation was done by
running the CYRUS model for long-term daily weather data of five
locations in Iran. The tested population densities were 7 to 59 (with
interval of 2) stands per square meter. Various functions, including
quadratic, segmented, beta, broken linear, and dent-like functions,
were tested. Considering root mean square of deviations and linear
regression statistics [intercept (a), slope (b), and correlation
coefficient (r)] for predicted versus observed variables, the quadratic
and broken linear functions appeared to be appropriate for describing
the changes in biomass and grain yield, and in harvest index,
respectively. Results indicated that in all locations, grain yield tends
to show increasing trend with crowding the population, but
subsequently decreases. This was also true for biomass in five
locations. The harvest index appeared to have plateau state across
low population densities, but decreasing trend with more increasing
density. The turning point (optimum population density) for grain
yield was 30.68 stands per square meter in Isfahan, 30.54 in Shiraz,
31.47 in Kermanshah, 34.85 in Tabriz, and 32.00 in Mashhad. The
optimum population density for biomass ranged from 24.6 (in
Tabriz) to 35.3 stands per square meter (Mashhad). For harvest index
it varied between 35.87 and 40.12 stands per square meter.
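The turning point reported above is simply the vertex of the fitted quadratic, as the following sketch with synthetic data shows (the coefficients and noise level are invented for illustration, not taken from the CYRUS runs).

```python
import numpy as np

# Fit grain yield vs. population density with a quadratic and locate the
# turning point (optimum density).  Data are synthetic, for illustration only.
rng = np.random.default_rng(3)
density = np.arange(7, 60, 2, dtype=float)                  # 7 to 59 stands/m^2
yield_ = (-0.9 * (density - 31.0) ** 2 + 2500.0
          + rng.normal(scale=20.0, size=density.size))      # noisy parabola

c2, c1, c0 = np.polyfit(density, yield_, 2)
optimum_density = -c1 / (2.0 * c2)                          # vertex of the parabola
```

A negative leading coefficient confirms the rise-then-fall response, and the vertex formula gives the optimum density directly from the fit.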
Abstract: This paper proposes a new parameter identification
method based on Linear Fractional Transformation (LFT). It is
assumed that the target linear system includes unknown parameters.
The parameter deviations are separated from a nominal system via
LFT, and identified by organizing I/O signals around the separated
deviations of the real system. The purpose of this paper is to apply LFT
to simultaneously identify the parameter deviations in systems with
fewer outputs than unknown parameters. As a fundamental example,
the method is applied to a one-degree-of-freedom vibratory system.
Via LFT, all physical parameters were simultaneously identified in this
system. Then, numerical simulations were conducted for this system to
verify the results. This study shows that all the physical parameters of a
system with fewer outputs than unknown parameters can be effectively
identified simultaneously using LFT.
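As a plain least-squares baseline (a swapped-in technique, not the paper's LFT formulation), the three physical parameters of a simulated one-degree-of-freedom system can be recovered from its recorded signals; the simulation settings below are invented for the example.

```python
import numpy as np

# Simulate m*x'' + c*x' + k*x = u, then recover (m, c, k) by solving
# [acc v x] @ [m, c, k] = u in the least-squares sense.
m, c, k = 2.0, 0.4, 50.0                    # "true" physical parameters
dt, n = 1e-3, 20000
rng = np.random.default_rng(4)
u = rng.normal(size=n)                      # broadband excitation
x = np.zeros(n)
v = np.zeros(n)
for i in range(n - 1):                      # explicit Euler integration
    a = (u[i] - c * v[i] - k * x[i]) / m
    v[i + 1] = v[i] + dt * a
    x[i + 1] = x[i] + dt * v[i]
acc = np.diff(v) / dt                       # same difference the integrator used
A = np.column_stack([acc, v[:-1], x[:-1]])
m_hat, c_hat, k_hat = np.linalg.lstsq(A, u[:-1], rcond=None)[0]
```

Because the regression uses the same finite difference as the integrator, identification is essentially exact here; with measured data, noise and having fewer outputs than unknown parameters are what motivate the LFT approach.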
Abstract: In the automotive industry test drives are being conducted
during the development of new vehicle models or as a part of
quality assurance of series-production vehicles. The communication
on the in-vehicle network, data from external sensors, or internal
data from the electronic control units is recorded by automotive
data loggers during the test drives. The recordings are used for fault
analysis. Since the resulting data volume is tremendous, manually
analysing each recording in great detail is not feasible.
This paper proposes to use machine learning to support domain
experts by preventing them from contemplating irrelevant data and
instead pointing them to the relevant parts of the recordings. The
underlying idea is to learn the normal behaviour from available
recordings, i.e. a training set, and then to autonomously detect
unexpected deviations and report them as anomalies.
The one-class support vector machine “support vector data description”
is utilised to calculate distances of feature vectors. SVDDSUBSEQ
is proposed as a novel approach that allows subsequences
in multivariate time series data to be classified. The approach can
detect unexpected faults without modelling effort, as is shown with
experimental results on recordings from test drives.
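A nearest-neighbour distance over subsequences can stand in for the SVDD distance to sketch the idea (this is a simplification, not SVDDSUBSEQ itself; the window length and the synthetic signals are assumptions made for the example).

```python
import numpy as np

def subsequence_scores(train, test, w=20):
    """Slide a length-w window over multivariate series; score each test
    subsequence by its distance to the nearest training subsequence."""
    def windows(ts):
        return np.array([ts[i:i + w].ravel() for i in range(len(ts) - w + 1)])
    tr, te = windows(train), windows(test)
    d2 = ((te[:, None, :] - tr[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))          # one anomaly score per test window

t = np.arange(300)
normal = np.column_stack([np.sin(0.1 * t), np.cos(0.1 * t)])  # "healthy" recording
test = normal.copy()
test[150:155, 0] += 5.0                     # injected fault
scores = subsequence_scores(normal, test)
```

Windows matching the training behaviour score near zero, while the windows overlapping the injected fault stand out, which is exactly the "point the expert at the relevant part" effect the abstract describes.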
Abstract: Rule discovery is an important technique for mining knowledge from large databases. The use of objective measures for discovering interesting rules leads to another data mining problem, although one of reduced complexity. Data mining researchers have studied subjective measures of interestingness to reduce the volume of discovered rules and ultimately improve the overall efficiency of the KDD process. In this paper we study the novelty of the discovered rules as a subjective measure of interestingness. We propose a hybrid approach that uses objective and subjective measures to quantify the novelty of the discovered rules in terms of their deviations from known rules. We analyze the types of deviation that can arise between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework and experiment with some public datasets. The experimental results are quite promising.
Abstract: One of the main research directions in the CAD/CAM
machining area is the reduction of machining time.
Feedrate scheduling is one of the advanced techniques that
allows keeping the uncut chip area constant and, as a consequence,
keeping the main cutting force constant. There are two main ways to
optimize the feedrate. The first consists of cutting force monitoring,
which requires complex equipment for force measurement, and then
setting the feedrate according to the cutting force variation. The
second way is to optimize the feedrate by keeping the
material removal rate constant under the given cutting conditions.
This paper proposes a new approach using an extended
database that replaces the system model.
The feedrate schedule is determined based on the identification
of the reconfigurable machine tool, with the feed value determined
from the uncut chip section area, the contact length between tool
and blank, and the geometrical roughness.
The first stage consists of monitoring the blank and tool to
determine their actual profiles. The next stage is the determination
of the programmed tool path that allows the piece target profile to
be obtained.
The graphic representation environment models the tool and blank
regions and, after this, the tool model is positioned regarding the
blank model according to the programmed tool path. For each of
these positions the geometrical roughness value, the uncut chip area
and the contact length between tool and blank are calculated. Each of
these parameters is compared with its admissible value, and
the feed value is established according to the result.
We consider that this approach has the following advantages:
in complex cutting processes the prediction of the cutting force
becomes possible; the real cutting profile, which deviates from the
theoretical profile, is taken into account; the blank-tool contact
length can be limited; and the programmed tool path can be corrected
so that the target profile is obtained.
Applying this method yields data sets that allow the feedrate to be
scheduled so that the uncut chip area is constant and, as a result,
the cutting force is constant, which allows the machine tool to be
used more efficiently and the machining time to be reduced.
Abstract: A transient heat transfer mathematical model for the
prediction of temperature distribution in the car body during primer
baking has been developed by considering the thermal radiation and
convection in the furnace chamber and transient heat conduction
governing equations in the car framework. The car cockpit is
modeled as a structure of six flat plates: four vertical plates
representing the car doors and the rear and front panels; the other
two flat plates are the car roof and floor. The transient heat
conduction in each flat plate is modeled by the lumped capacitance
method. Comparison with the experimental data shows that the heat
transfer model works well for the prediction of thermal behavior of
the car body in the curing furnace, with deviations below 5%.
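For a single plate, the lumped-capacitance balance with furnace convection and radiation can be integrated explicitly, as sketched below; all property values are round illustrative numbers, not the paper's.

```python
# Lumped-capacitance sketch for one flat plate of the car body:
# m*cp*dT/dt = h*A*(T_f - T) + eps*sigma*A*(T_f**4 - T**4)
sigma = 5.67e-8                       # Stefan-Boltzmann constant, W/(m^2 K^4)
m_cp = 7.0e3 * 0.003 * 1.0 * 460.0    # steel plate: rho*thickness*area*cp, J/K
h, A, eps = 25.0, 1.0, 0.8            # convection coeff., area, emissivity
T_f = 180.0 + 273.15                  # furnace temperature, K
T = 20.0 + 273.15                     # initial car-body temperature, K
dt = 1.0                              # time step, s
history = []
for _ in range(1800):                 # 30 minutes of baking
    q = h * A * (T_f - T) + eps * sigma * A * (T_f ** 4 - T ** 4)
    T += dt * q / m_cp                # explicit Euler update
    history.append(T)
```

The lumped approach is valid when the plate is thin enough that internal conduction gradients are negligible (small Biot number), which is the modeling assumption behind treating the cockpit as six such plates.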
Abstract: This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called "standard deviation of mean vectors of colour distribution of rows and columns of images for CBIR". In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. This paper describes an approach that uses image contents as the feature vector for the retrieval of similar images. Several classes of features are used to specify queries: colour, texture, shape, and spatial layout. Colour features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is performed for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row means and of the column means is calculated for the R, G, and B planes; these six values form one feature vector for the image. Secondly, we calculate the variances of the rows and columns of the R, G and B planes of an image; the six standard deviations of these variance sequences form a second feature vector of dimension six. We applied our approach to a database of 300 BMP images. We determined the capability of automatic indexing by analyzing image content, using colour and texture as features and the Euclidean distance as the similarity measure.
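The two six-value descriptors in the abstract can be sketched as follows (combined here into one 12-value vector for convenience; this is an illustrative reading of the abstract, not the authors' code).

```python
import numpy as np

def cbir_features(img):
    """12 feature values for an RGB image (H x W x 3): per colour plane,
    the std of the row means and of the column means, then the std of
    the row variances and of the column variances."""
    img = img.astype(float)
    feats = []
    for ch in range(3):
        plane = img[:, :, ch]
        feats += [plane.mean(axis=1).std(), plane.mean(axis=0).std()]
    for ch in range(3):
        plane = img[:, :, ch]
        feats += [plane.var(axis=1).std(), plane.var(axis=0).std()]
    return np.array(feats)

def retrieve(query, database):
    """Rank database images by Euclidean distance in feature space."""
    q = cbir_features(query)
    dists = [np.linalg.norm(q - cbir_features(im)) for im in database]
    return np.argsort(dists)

rng = np.random.default_rng(5)
query = rng.integers(0, 256, size=(32, 32, 3))
database = [rng.integers(0, 256, size=(32, 32, 3)), query.copy(),
            rng.integers(0, 256, size=(32, 32, 3))]
ranking = retrieve(query, database)
```

Reducing each image to a dozen numbers makes indexing cheap; the trade-off is that very different images can share similar row/column statistics, which is why such descriptors are usually combined with other features.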
Abstract: Existing methods that store and reproduce the animation data of all frames, as in vertex animation, cannot be used in mobile device environments because they consume large amounts of memory. 3D animation data reduction methods aimed at solving this problem have therefore been extensively studied, and we propose a new method as follows. First, we find and remove the frames in which motion changes are small and store only the animation data of the remaining frames (those involving large motion changes). When playing the animation, the removed frame areas are reconstructed by interpolating the remaining frames. Our key contribution is to calculate the accelerations of the joints of individual frames, and the standard deviations of these accelerations, from the joint locations in the 3D model in order to find and delete the frames in which motion changes are small. Our method can reduce data sizes by approximately 50% or more while providing quality that is not much lower than that of the original animations. Therefore, our method is expected to be useful in mobile device environments and other environments in which memory is limited.
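The acceleration-based frame selection and interpolation described above can be sketched as follows; the threshold rule (mean plus half a standard deviation of the per-frame scores) is an invented illustration of how the standard deviations might be used, and the motion data are synthetic.

```python
import numpy as np

def reduce_frames(positions, k_sigma=0.5, fps=30.0):
    """positions: (frames, joints, 3) joint locations.  Keep only frames
    whose summed joint-acceleration magnitude is large; the cut is
    mean + k_sigma * std of the per-frame scores (illustrative rule)."""
    vel = np.gradient(positions, 1.0 / fps, axis=0)
    acc = np.gradient(vel, 1.0 / fps, axis=0)
    score = np.linalg.norm(acc, axis=2).sum(axis=1)   # one value per frame
    keep = score > score.mean() + k_sigma * score.std()
    keep[0] = keep[-1] = True       # endpoints are needed for interpolation
    return np.flatnonzero(keep)

def reconstruct(positions, kept):
    """Rebuild the dropped frames by linear interpolation between kept ones."""
    frames = np.arange(positions.shape[0])
    out = np.empty_like(positions)
    for j in range(positions.shape[1]):
        for d in range(3):
            out[:, j, d] = np.interp(frames, kept, positions[kept, j, d])
    return out

t = np.linspace(0.0, 1.0, 100)
pos = np.zeros((100, 2, 3))
pos[:, 0, 0] = t                    # mostly uniform motion...
pos[:, 1, 1] = t
pos[50:55, 0, 0] += 1.0             # ...with one burst of fast motion
kept = reduce_frames(pos)
recon = reconstruct(pos, kept)
```

Frames of near-uniform motion (low acceleration) are dropped and recovered almost perfectly by interpolation, while the burst of fast motion is retained as keyframes, which is where the memory saving comes from.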
Abstract: Statistical process control (SPC) is one of the most powerful tools developed to assist in the effective control of quality; it involves collecting, organizing and interpreting data during production. This article aims to show how industries can use SPC to control and continuously improve product quality through the monitoring of production, which can detect deviations in the parameters representing the process, reducing the amount of off-specification product and thus the costs of production. This study conducted a technological forecast in order to characterize the research being done related to SPC. The survey was conducted in the Spacenet and WIPO databases and in that of the National Institute of Industrial Property (INPI). The United States is among the largest filers, many filings are made via the PCT, and the classification section presented in greatest abundance is F.
Abstract: In this study, solid phase micro-extraction (SPME)
was optimized to improve the sensitivity and accuracy in
formaldehyde determination for plywood panels. Further work has
been carried out to compare the newly developed technique with
existing method which reacts formaldehyde collected in desiccators
with acetyl acetone reagent (DC-AA). In SPME, formaldehyde was
first derivatized with O-(2,3,4,5,6 pentafluorobenzyl)-hydroxylamine
hydrochloride (PFBHA) and analysis was then performed by gas
chromatography in combination with mass spectrometry (GC-MS).
SPME data for various wood species gave satisfactory
results, with relative standard deviations (RSDs) in the
range of 3.1-10.3%. The SPME results also correlated well with the
DC values, giving a correlation coefficient, RSQ, of 0.959. The
quantitative analysis of formaldehyde by SPME is an alternative of
great potential for the wood industry.