Abstract: In this paper, we propose a fast and efficient method for drawing very large-scale graph data. The conventional force-directed method proposed by Fruchterman and Reingold (FR method) is well known. It defines repulsive forces between every pair of nodes and attractive forces between nodes connected by an edge, and calculates the corresponding potential energy. An optimal layout is obtained by iteratively updating node positions to minimize the potential energy. Here, the positions of all nodes are updated simultaneously at every global time step. In the proposed method, each node has its own individual time and time step, and nodes are updated at different frequencies depending on the local situation. The proposed method is inspired by the hierarchical individual time step method used for high-accuracy calculations of dense particle fields, such as star clusters, in astrophysical dynamics. Experiments show that the proposed method outperforms the original FR method in both speed and accuracy. We implement the proposed method on the MDGRAPE-3 PCI-X special-purpose parallel computer and realize a speed enhancement of several hundred times.
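The global-time-step baseline that this paper improves on can be sketched as follows. This is a minimal illustrative implementation of the FR force model only; the layout area, cooling schedule, and iteration count are our assumptions, and the paper's individual-time-step scheme is not shown.

```python
import numpy as np

def fruchterman_reingold(adj, iters=50, seed=0):
    """Minimal sketch of the Fruchterman-Reingold layout (global time step).

    adj : (n, n) symmetric 0/1 adjacency matrix.
    Every node pair repels with force k^2/d; pairs connected by an edge
    attract with force d^2/k; all positions are updated simultaneously
    each iteration while a 'temperature' caps the displacement (cooling).
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    pos = rng.uniform(-1, 1, size=(n, 2))
    k = np.sqrt(4.0 / n)                      # ideal edge length for a 2x2 area
    for step in range(iters):
        temp = 0.1 * (1 - step / iters)       # linear cooling schedule
        delta = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(delta, axis=-1)
        np.fill_diagonal(dist, 1.0)           # avoid division by zero
        unit = delta / dist[..., None]
        rep = (k * k / dist)[..., None] * unit           # repulsion: all pairs
        att = (adj * dist * dist / k)[..., None] * unit  # attraction: edges only
        disp = rep.sum(axis=1) - att.sum(axis=1)
        length = np.maximum(np.linalg.norm(disp, axis=1, keepdims=True), 1e-12)
        pos += disp / length * np.minimum(length, temp)  # cap step by temperature
    return pos
```

The proposed method replaces the single `temp`-driven global update with a per-node time step, so rapidly moving nodes are updated more often than settled ones.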
Abstract: To enhance contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing pixels that are too bright or too dark, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within limited extents of equalization, considering its mean and variance. The final image is determined as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also retains feature information in low-density histogram regions, since these regions are equalized separately. This paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images of various contrasts, and the results are compared with conventional approaches to show its superiority.
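The core idea of equalizing each sub-histogram only within its own intensity range can be sketched as below. This is a simplified illustration assuming a single segmentation point at the mean brightness; the paper's variance-limited extents and weighted sum of equalized images are not reproduced here.

```python
import numpy as np

def sub_histogram_equalize(img):
    """Sketch of brightness-segmented equalization.

    The histogram is split at the mean intensity and each sub-histogram
    is equalized only within its own range [lo, hi], so dark pixels stay
    dark and bright pixels stay bright, avoiding global over-equalization.
    """
    img = np.asarray(img, dtype=np.uint8)
    m = int(img.mean())                       # segmentation point (here: mean)
    out = np.empty_like(img)
    for lo, hi, mask in ((0, m, img <= m), (m + 1, 255, img > m)):
        vals = img[mask]
        if vals.size == 0:
            continue
        hist = np.bincount(vals, minlength=256)
        cdf = np.cumsum(hist) / vals.size
        # map each sub-histogram onto its own [lo, hi] range only
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)
        out[mask] = lut[vals]
    return out
```

Because each lookup table is confined to its sub-range, the mapping can never push a dark-region pixel above the segmentation point, which is exactly the over-equalization the abstract targets.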
Abstract: For a given specific problem, the design of an efficient algorithm has long been a matter of study. An alternative approach, orthogonal to this one, is reduction: in general, a reduction converts an original problem into subproblems. This paper proposes a formal modeling language that supports this reduction approach in order to build solvers quickly. We show three examples from the wide area of learning problems. The benefit is fast prototyping of algorithms for a given new problem. Note that our formal modeling language is not intended to provide an efficient notation for data mining applications, but to assist a designer who develops solvers in machine learning.
Abstract: Malay Folk Literature in early childhood education
served as an important agent in child development that involved
emotional, thinking and language aspects. To date, not much research has been carried out in Malaysia, particularly on the teaching and learning aspects, nor has there been an effort to publish “big books.” Hence, this article discusses the stance taken by
university undergraduate students, teachers and parents in evaluating
Malay Folk Literature in early childhood education to be used as big
books. The data collated and analyzed were taken from 646
respondents comprising 347 undergraduates and 299 teachers. Results
of the study indicated that Malay Folk Literature can be absorbed into teaching and learning for early childhood with a mean of 4.25, while it can be used in big books with a mean of 4.14. Meanwhile, the highest mean
value required for placing Malay Folk Literature genre as big books in
early childhood education rests on exemplary stories for
undergraduates with mean of 4.47; animal fables for teachers with a
mean of 4.38. The lowest mean value of 3.57 is given to lipurlara
stories. The most popular Malay Folk Literature found suitable for young children is Sang Kancil and the Crocodile, followed by Bawang
Putih Bawang Merah. Pak Padir, Legends of Mahsuri, Origin of
Malacca, and Origin of Rainbow are among the popular stories as
well. Overall, the undergraduates show a more positive attitude toward all the items than the teachers. The t-test analysis revealed no significant difference between the undergraduate students and the teachers on any of the items for the teaching and learning of Malay Folk Literature.
Abstract: Several studies, using various techniques including neural networks, have attempted to discriminate vigilance states in humans from electroencephalographic (EEG) signals, but the results are still far from satisfactorily usable. The work presented in this paper aims to improve this status in two respects. First, we introduce an original procedure combining two neural networks, a self-organizing map (SOM) and a learning vector quantization (LVQ) network, that automatically detects artefacted states and separates the different levels of vigilance, a major breakthrough in the field. Second, and more importantly, our study has been oriented toward real-world situations, and the resulting model can easily be implemented as a wearable device: it has restricted computational and memory requirements, and data access is very limited in time. Furthermore, ongoing work indicates that this study should shortly result in the design of a non-invasive electronic wearable device.
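The supervised half of the SOM/LVQ pair can be illustrated with the classic LVQ1 update rule, which is one plausible form of the step the pipeline uses after the SOM stage; the paper's exact variant and parameters are not given, so this is only a generic sketch.

```python
import numpy as np

def lvq1_update(prototypes, labels, x, y, lr=0.05):
    """Generic LVQ1 update: the prototype nearest to sample x moves
    toward x if its label matches y, and away from x otherwise.

    prototypes : (k, d) array of codebook vectors (modified in place)
    labels     : length-k class labels of the prototypes
    """
    i = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] += sign * lr * (x - prototypes[i])
    return prototypes
```

Its small memory footprint (a fixed codebook, one distance pass per sample) is consistent with the wearable-device constraints the abstract emphasizes.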
Abstract: Neural networks are well known for their ability to model non-linear functions, but, as is usual with statistical methods, they take a non-parametric approach; thus, a priori knowledge is not easily taken into account, any more than a posteriori knowledge. To deal with these problems, an original way to encode knowledge inside the architecture is proposed. This method is applied to the problem of evapotranspiration inside a karstic aquifer, a problem of great utility for managing water resources.
Abstract: As a method for expanding higher-order tensor data into tensor products of vectors, we previously proposed the Third-order Orthogonal Tensor Product Expansion (3OTPE), which performs an expansion similar to Higher-Order Singular Value Decomposition (HOSVD). In this paper we provide a computation algorithm that improves our previous method, in which SVD is applied to the matrix formed by contracting the original tensor data with one of the expansion vectors already obtained. The residual of the improved method is smaller than that of the previous method when the expansion is truncated to the same number of terms. Moreover, the residual is smaller than that of HOSVD when applied to color image data. The computing time of the improved method is confirmed to be the same as that of the previous method and considerably better than that of HOSVD.
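The HOSVD baseline that the expansion is compared against can be sketched in a few lines. This is a standard minimal implementation, not the paper's 3OTPE algorithm: each factor matrix is the left singular basis of the corresponding mode unfolding, and the core tensor follows by mode products.

```python
import numpy as np

def mode_product(T, M, mode):
    """n-mode product T x_mode M, where M has shape (rows, T.shape[mode])."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T):
    """Minimal HOSVD sketch: SVD of each mode unfolding gives the factor
    matrices; projecting T onto them gives the core tensor."""
    factors = []
    for mode in range(T.ndim):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U)
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors
```

Truncating the columns of each factor matrix (and the core accordingly) gives the rank-reduced approximation whose residual the abstract compares against.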
Abstract: In this paper we present an efficient approach for the prediction of two sunspot-related time series, namely the Yearly Sunspot Number and the IR5 Index, which are commonly used for monitoring solar activity. The method is based on partially recurrent Elman networks and can be divided into three main steps: the first is a “de-rectification” of the time series under study, in order to obtain a new time series whose appearance, similar to a sum of sinusoids, can be modelled by our neural networks much better than the original dataset. After that, we normalize the de-rectified data so that they have zero mean and unit standard deviation and, finally, train an Elman network with only one input, a recurrent hidden layer and one output, using a back-propagation algorithm with variable learning rate and momentum. The achieved results show the efficiency of this approach which, although very simple, can perform better than most existing solar activity forecasting methods.
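The network in the final step can be sketched as a plain Elman forward pass: one input, a recurrent (context) hidden layer, one output. The weight shapes and activation are our illustrative assumptions, and the training loop (back-propagation with variable learning rate and momentum) is omitted.

```python
import numpy as np

def elman_forward(x, W_in, W_rec, W_out, b_h, b_out):
    """Forward pass of a one-input, one-output Elman network.

    x     : sequence of scalar inputs (the normalized, de-rectified series)
    W_in  : (H,)   input-to-hidden weights
    W_rec : (H, H) context (previous hidden state) weights
    W_out : (H,)   hidden-to-output weights
    """
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for x_t in x:
        # hidden state sees the current input plus the previous hidden
        # state fed back through the context units
        h = np.tanh(W_in * x_t + W_rec @ h + b_h)
        outputs.append(W_out @ h + b_out)
    return np.array(outputs)
```

The context feedback `W_rec @ h` is what lets the network track the quasi-periodic, sum-of-sinusoids shape that de-rectification produces.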
Abstract: Group contribution methods such as the UNIFAC are
very useful to researchers and engineers involved in synthesis,
feasibility studies, design and optimization of separation processes.
They can be applied successfully to predict phase equilibrium and
excess properties in the development of chemical and separation
processes. The main focus of this work was to investigate the
possibility of absorbing selected volatile organic compounds (VOCs)
into polydimethylsiloxane (PDMS) using three selected UNIFAC
group contribution methods. Absorption followed by subsequent stripping is the predominant available abatement technology for removing VOCs from flue gases prior to their release into the atmosphere. The
original, modified and effective UNIFAC models were used in this work. The thirteen VOCs considered in this research are pentane, hexane, heptane, trimethylamine, toluene, xylene, cyclohexane, butyl acetate, diethyl acetate, chloroform, acetone, ethyl methyl ketone and isobutyl methyl ketone. The computation was done for a solute VOC concentration of 8.55×10⁻⁸, which is well within the infinite dilution region. The results obtained in
this study compare very well with those published in literature
obtained through both measurements and predictions. The phase equilibria obtained in this study show that PDMS is a good absorbent for the removal of VOCs from contaminated air streams through physical absorption.
Abstract: The angular distribution of Compton scattering of two
quanta originating in the annihilation of a positron with an electron
is investigated as a quantum key distribution (QKD) mechanism in
the gamma spectral range. The geometry of coincident Compton
scattering is observed on the two sides as a way to obtain partially
correlated readings on the quantum channel. We derive the noise
probability density function of a conceptually equivalent prepare
and measure quantum channel in order to evaluate the limits of the
concept in terms of the device secrecy capacity and estimate it at
roughly 1.9 bits per 1 000 annihilation events. The high error rate
is well above the tolerable error rates of the common reconciliation
protocols; therefore, the proposed key agreement protocol by public
discussion requires key reconciliation using classical error-correcting
codes. We constructed a prototype device based on readily available monolithic detectors in the least complex setup.
Abstract: Face and facial expressions play essential roles in
interpersonal communication. Most current work on facial expression recognition attempts to recognize a small set of prototypic expressions such as happiness, surprise, anger, sadness, disgust and fear. However, most human emotions are communicated by changes in only one or two discrete features. In this
paper, we develop a facial expression synthesis system based on tracking facial characteristic points (FCPs) in frontal image sequences. Selected FCPs are automatically tracked using cross-correlation-based optical flow. The proposed synthesis system uses a simple deformable facial feature model with a small set of control points that can be tracked in the original facial image sequences.
Abstract: In this work, a special case of the image super-resolution problem, where the only type of motion is global translational motion and the blurs are shift-invariant, is investigated.
The necessary conditions for exact reconstruction of the original
image by using finite impulse-response reconstruction filters are
developed. Given that the conditions are satisfied, a method for exact
super-resolution is presented and some simulation results are shown.
Abstract: Springback is a significant problem in the sheet metal
forming process. When the tools are released after the forming stage, the product springs back because of the action of internal stresses. In many cases the deviation of form is too large and compensation of the springback is necessary. The precise prediction
of the springback of product is increasingly significant for the design
of the tools and for compensation because of the higher ratio of the
yield stress to the elastic modulus.
The main objective of this paper was to study the effect of anisotropy on springback for three rolling directions: 0°, 45° and 90°. At the same time, we highlighted the influence of three different metallic materials: aluminum, steel and galvanized steel. The originality of our approach lies in tests performed by adapting a U-type stretching-bending device to a tensile testing machine, with which we studied and quantified the variation of springback according to the rolling direction. We also showed the role of lubrication in reducing springback.
Moreover, in this work we have studied springback as an important characteristic of the deep drawing process, presenting the defects that appear in this process and the many parameters that influence springback.
Finally, our results lead us to understand the influence of grain orientation in different metallic materials on springback, and to draw conclusions on how to design deep drawing tools. In addition, this work represents a fundamental contribution to the discussion of industrial applications.
Abstract: Discretization of spatial derivatives is an important
issue in meshfree methods especially when the derivative terms
contain non-linear coefficients. In this paper, various methods used
for discretization of second-order spatial derivatives are investigated
in the context of Smoothed Particle Hydrodynamics. Three popular
forms (i.e. "double summation", "second-order kernel derivation",
and "difference scheme") are studied using one-dimensional unsteady
heat conduction equation. To assess these schemes, transient response
to a step-function initial condition is considered. Due to the parabolic nature of the heat equation, one can expect smooth and monotone solutions. It is shown in this paper, however, that regardless of
the type of kernel function used and the size of smoothing radius,
the double summation discretization form leads to non-physical
oscillations which persist in the solution. Also, results show that when
a second-order kernel derivative is used, a high-order kernel function must be employed such that the distance of the inflection point from the origin of the kernel function is less than the nearest particle distance. Otherwise, solutions may exhibit oscillations near
discontinuities unlike the "difference scheme" which unconditionally
produces monotone results.
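The "difference scheme" singled out above as unconditionally monotone can be sketched for the 1D heat equation as a Brookshaw-type form, (d²T/dx²)_i ≈ Σ_j 2 (m_j/ρ_j) (T_i − T_j) W'(x_ij) / x_ij, which combines an SPH first kernel derivative with a finite difference of the field. The Gaussian kernel and the particle setup here are our illustrative assumptions.

```python
import numpy as np

def sph_laplacian_difference(x, T, m, rho, h):
    """Difference-scheme estimate of d2T/dx2 at every particle (1D SPH).

    x, T     : particle positions and temperatures
    m, rho   : particle masses and densities (m/rho is particle volume)
    h        : smoothing length of the 1D Gaussian kernel
    """
    n = len(x)
    lap = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            xij = x[i] - x[j]
            q = xij / h
            # derivative of the 1D Gaussian kernel W = exp(-q^2)/(h*sqrt(pi))
            W_grad = (-2.0 * xij / h**2) * np.exp(-q * q) / (h * np.sqrt(np.pi))
            lap[i] += 2.0 * (m[j] / rho[j]) * (T[i] - T[j]) * W_grad / xij
    return lap
```

On a uniform interior particle distribution with T = x², this estimate recovers the exact second derivative of 2, because only a first kernel derivative appears; that is the property behind the scheme's monotone behavior near discontinuities.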
Abstract: The Czech Republic is a country whose economy has
undergone a transformation since 1989. Since joining the EU it has
been striving to reduce the differences in its economic standard and
the quality of its institutional environment in comparison with
developed countries. According to an assessment carried out by the
World Bank, the Czech Republic was long classed as a country
whose institutional development was seen as problematic. For many
years one of the things it was rated most poorly on was its bankruptcy
law. The new Insolvency Act, which is a modern law in terms of its
treatment of bankruptcy, was first adopted in the Czech Republic in
2006. This law, together with other regulatory measures, offers debt-ridden Czech economic subjects legal instruments which are well established and in common use in developed market economies.
Since then, analyses performed by the World Bank and the London
EBRD have shown that there have been significant steps forward in
the quality of Czech bankruptcy law. The Czech Republic still lacks
an analytical apparatus which can offer a structured characterisation
of the general and specific conditions of Czech company and
household debt which is subject to current changes in the global
economy. This area has so far not been given the attention it
deserves. The lack of research is particularly clear as regards analysis
of household debt and householders' ability to settle their debts in a
reasonable manner using legal and other state means of regulation.
We assume that Czech households have recourse to a modern
insolvency law, yet the effective application of this law is hampered
by the inconsistencies in the formal and informal institutions
involved in resolving debt. This in turn is based on the assumption
that this lack of consistency is more marked in cases of personal
bankruptcy. Our aim is to identify the symptoms which indicate that
for some time the effective application of bankruptcy law in the
Czech Republic will be hindered by factors originating in
householders' relative inability to identify the risks of falling into
debt.
Abstract: This paper provides a scheme to improve the read efficiency of the anti-collision algorithm in the EPCglobal UHF Class-1 Generation-2 RFID standard. In this standard, dynamic frame slotted ALOHA is specified to solve the anti-collision problem, and the Q-algorithm, with a key parameter C, is adopted to dynamically adjust the frame sizes. In this paper, we split the C parameter into two parameters to increase the read speed and derive the optimal values of the two parameters through simulations. The results indicate that our method outperforms the original Q-algorithm.
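The split described above can be sketched as a small modification of the standard Q-algorithm update. The parameter names `c_coll` and `c_idle` are our labels for the two values the C parameter is split into (the Gen-2 standard itself uses a single C), and the weights in the test are illustrative.

```python
def q_algorithm_step(q_fp, slot, c_coll, c_idle):
    """One Q-algorithm update with separate collision/idle weights.

    q_fp : current floating-point Q value (frame size is 2**round(q_fp))
    slot : outcome of the last slot: 'collision', 'idle', or 'single'
           (exactly one tag replied).
    """
    if slot == 'collision':          # too many tags per slot: grow the frame
        q_fp = min(15.0, q_fp + c_coll)
    elif slot == 'idle':             # wasted slot: shrink the frame
        q_fp = max(0.0, q_fp - c_idle)
    # a 'single' (successful read) leaves Q unchanged
    return q_fp
```

Decoupling the increment from the decrement lets the reader react differently to collisions and idle slots, which is the degree of freedom the simulations optimize over.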
Abstract: The goal of speech parameterization is to extract from the audio signal the information relevant to what is being spoken. In speech recognition systems, Mel-Frequency Cepstral Coefficients (MFCC) and Relative Spectral Mel-Frequency Cepstral Coefficients (RASTA-MFCC) are the two main techniques used. This paper presents some modifications to the original MFCC method. In our work, the effectiveness of the proposed changes to MFCC, called Modified Function Cepstral Coefficients (MODFCC), was tested and compared against the original MFCC and RASTA-MFCC features. Prosodic features such as jitter and shimmer are added to the baseline spectral features. The above-mentioned techniques were tested with impulsive signals under various noisy conditions within the AURORA databases.
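The baseline MFCC pipeline that the proposed MODFCC modifies can be sketched for a single frame as: window, power spectrum, triangular mel filterbank, log, DCT. The filter count, cepstral order, and window choice are our assumptions; the paper's modifications themselves are not reproduced.

```python
import numpy as np

def mfcc_frame(frame, sr, n_mels=20, n_ceps=13):
    """Baseline MFCC coefficients for one speech frame (sketch)."""
    n = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n))) ** 2   # power spectrum
    # triangular filters equally spaced on the mel scale
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz2mel(0.0), hz2mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n + 1) * mel2hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, len(spec)))
    for i in range(n_mels):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, mid):
            fb[i, k] = (k - lo) / max(mid - lo, 1)   # rising edge
        for k in range(mid, hi):
            fb[i, k] = (hi - k) / max(hi - mid, 1)   # falling edge
    logmel = np.log(fb @ spec + 1e-10)               # log filterbank energies
    # DCT-II decorrelates the log energies into cepstral coefficients
    k = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * k + 1)) / (2 * n_mels))
    return dct @ logmel
```

RASTA-MFCC adds a band-pass filtering of the log filterbank trajectories across frames before the DCT, which is the stage where such modifications typically differ.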
Abstract: The construction of an original functional sample of a portable device for the fast analysis of energetic materials is described in this paper. The portable device, consisting of two parts – an original miniaturized microcolumn liquid chromatograph and a unique chemiluminescence detector – has been proposed and realized. In a very short time, this portable device is capable of selectively identifying most military nitramine- and nitroester-based explosives, as well as inorganic nitrates occurring in trace concentrations in water or soil. The total time required for the identification of extracts is shorter than 8 minutes.
Abstract: A plausible architecture of an ancient genetic code is derived from an extended base-triplet vector space over the Galois field of the extended base alphabet {D, G, A, U, C}, where the letter D represents one or more hypothetical bases with unspecific pairing. We hypothesize that the high degeneracy of a primeval genetic code with five bases, together with the gradual origin and improvement of a primitive DNA repair system, could have made possible the transition from the ancient to the modern genetic code. Our results suggest that the Watson-Crick base pairing and the non-specific base pairing of the hypothetical ancestral base D, used to define the sum and product operations, are sufficient to determine the coding constraints of the primeval and the modern genetic code, as well as the transition from the former to the latter. Geometrical and algebraic properties of this vector space reveal that the present codon assignment of the standard genetic code could be induced from a primeval codon assignment. Besides, the Fourier spectrum of the extended DNA genome sequences derived from multiple sequence alignment suggests that the so-called period-3 property of present coding DNA sequences could also have existed in ancient coding DNA sequences.
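One possible GF(5) structure on the extended alphabet can be illustrated as below. The mapping of letters to field elements (D=0, G=1, A=2, U=3, C=4) is purely our illustrative assumption; the paper's actual assignment, chosen to respect the base-pairing constraints, may differ.

```python
# A toy GF(5) = Z/5Z structure on the extended base alphabet {D, G, A, U, C}.
# The element ordering below is an assumption for illustration only.
ALPHABET = "DGAUC"

def base_add(a, b):
    """Sum of two extended bases under the assumed GF(5) assignment."""
    return ALPHABET[(ALPHABET.index(a) + ALPHABET.index(b)) % 5]

def base_mul(a, b):
    """Product of two extended bases under the assumed GF(5) assignment."""
    return ALPHABET[(ALPHABET.index(a) * ALPHABET.index(b)) % 5]
```

With D as the additive identity, the hypothetical non-pairing base behaves as a neutral element, which is consistent with the role the abstract assigns to unspecific pairing in defining the operations.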
Abstract: The past decade has seen enormous growth in the amount of software produced. However, given the ever-increasing complexity of the software being developed and the concomitant rise in typical project size, managers are becoming increasingly aware of the importance of issues that influence the productivity levels of the project teams involved. By analyzing the latest release of the ISBSG data repository, we report on the factors found to significantly influence productivity, among which average team size and language type are the two most essential. Building on this, we present an original model for evaluating potential productivity during the project planning stage.