Abstract: Discretization of spatial derivatives is an important
issue in meshfree methods, especially when the derivative terms
contain non-linear coefficients. In this paper, various methods used
for the discretization of second-order spatial derivatives are investigated
in the context of Smoothed Particle Hydrodynamics. Three popular
forms (i.e. "double summation", "second-order kernel derivation",
and "difference scheme") are studied using the one-dimensional unsteady
heat conduction equation. To assess these schemes, the transient response
to a step-function initial condition is considered. Due to the parabolic
nature of the heat equation, one can expect smooth and monotone
solutions. It is shown in this paper, however, that regardless of
the type of kernel function used and the size of the smoothing radius,
the double-summation discretization form leads to non-physical
oscillations which persist in the solution. The results also show that when
a second-order kernel derivative is used, a high-order kernel function
must be employed such that the distance of the kernel's inflection
point from the origin is less than the nearest
particle distance. Otherwise, solutions may exhibit oscillations near
discontinuities, unlike the "difference scheme", which unconditionally
produces monotone results.
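To illustrate why the "difference scheme" behaves well, the sketch below solves the 1D heat equation with a step initial condition using an SPH Laplacian of the difference (Brookshaw-type) form. It is a minimal sketch under assumptions not taken from the paper (a Gaussian kernel, uniform particles, explicit Euler time stepping, fixed endpoints): with a small enough time step, every pairwise coefficient is positive, so the solution stays monotone.

```python
import numpy as np

def dW(r, h):
    # spatial derivative of a 1D Gaussian kernel W(r, h) = exp(-(r/h)^2) / (h*sqrt(pi))
    return -2.0 * r / h**2 * np.exp(-(r / h)**2) / (h * np.sqrt(np.pi))

n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
h = 1.2 * dx                          # smoothing length
u = np.where(x < 0.5, 1.0, 0.0)       # step initial condition
alpha, dt = 1.0, 0.2 * dx**2          # diffusivity and a stable time step

for _ in range(200):
    lap = np.zeros(n)
    for i in range(1, n - 1):         # endpoints held fixed (Dirichlet)
        rij = x[i] - x
        m = (np.abs(rij) > 1e-12) & (np.abs(rij) < 4.0 * h)
        # "difference scheme": sum_j 2 V_j (u_i - u_j) (x_ij . grad W) / |x_ij|^2,
        # which in 1D with V_j = dx reduces to 2 dx (u_i - u_j) dW(rij) / rij
        lap[i] = np.sum(2.0 * dx * (u[i] - u[m]) * dW(rij[m], h) / rij[m])
    u = u + dt * alpha * lap
```

After stepping, `u` remains a monotone, bounded profile: each update is a positive-weight average of neighbouring values, so no spurious oscillations can appear near the discontinuity.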
Abstract: Conventional WBL is effective for meaningful learners, but rote learners study by repeating without thinking or trying to understand, so they cannot obtain the full benefit from conventional WBL. Understanding rote students' intention to use WBL, and what influences it, therefore becomes important. A poorly designed user interface will discourage rote students' cultivation and their intention to use WBL. Thus, user interface design is an important factor, especially when WBL is used as a comprehensive replacement for conventional teaching. This research proposes factors that can enhance students' intention to use the system, and an enhanced TAM is used to evaluate the proposed factors. The results point out that the factors influencing rote students' intention are Perceived Usefulness of Homepage Content Structure, Perceived User-Friendly Interface, Perceived Hedonic Component, and Perceived (Homepage) Visual Attractiveness.
Abstract: The main problem in recognizing handwritten Persian digits with a Neural Network is extracting an appropriate feature vector from the image matrix. In this research, an asymmetrical segmentation pattern is proposed to obtain the feature vector. This pattern can be adjusted as an optimum model thanks to its one degree of freedom, which acts as a control point. Since the chosen algorithm depends on digit identity, a Neural Network is used to overcome this dependence. The inputs of this network are the moment of inertia and the center of gravity, which do not depend on digit identity. Recognizing the digit is then carried out using another Neural Network. Simulation results indicate a high recognition rate of 97.6% for the newly introduced pattern, compared with previous models for digit recognition.
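The two identity-independent inputs mentioned above can be computed directly from the binary image matrix. A minimal sketch, with definitions assumed since the abstract gives no exact formulas: the center of gravity is the mean position of the ink pixels, and the moment of inertia is the mean squared distance of those pixels from it.

```python
import numpy as np

def digit_features(img):
    """Center of gravity and moment of inertia of a binary digit image."""
    ys, xs = np.nonzero(img)                 # coordinates of ink pixels
    cx, cy = xs.mean(), ys.mean()            # center of gravity
    # moment of inertia about the center of gravity, normalized by pixel count
    inertia = ((xs - cx)**2 + (ys - cy)**2).mean()
    return cx, cy, inertia

img = np.zeros((8, 8), dtype=int)
img[2:6, 3] = 1                              # a simple vertical stroke
cx, cy, inertia = digit_features(img)
```

Both quantities depend only on the distribution of ink, not on which digit it represents, which is what makes them suitable inputs for the first network.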
Abstract: Computing the distance between two objects is an important
problem in CAGD, CAD, CG, etc. This paper presents a simple and
quick method to estimate the distance between a point and a Bezier
curve on a Bezier surface.
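The abstract does not give the method's details; as a point of comparison, a straightforward baseline estimates the point-to-curve distance by sampling the curve (evaluated with de Casteljau's algorithm) and refining around the best sample. This is only an illustrative sketch, not the paper's method.

```python
import numpy as np

def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm."""
    pts = np.array(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def point_curve_distance(p, ctrl, samples=64, refine=20):
    """Estimate the minimum distance from point p to the curve."""
    p = np.asarray(p, dtype=float)
    ts = np.linspace(0.0, 1.0, samples)
    d = [np.linalg.norm(bezier_point(ctrl, t) - p) for t in ts]
    i = int(np.argmin(d))
    lo, hi = ts[max(i - 1, 0)], ts[min(i + 1, samples - 1)]
    for _ in range(refine):              # ternary search around the best sample
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if np.linalg.norm(bezier_point(ctrl, m1) - p) < np.linalg.norm(bezier_point(ctrl, m2) - p):
            hi = m2
        else:
            lo = m1
    return np.linalg.norm(bezier_point(ctrl, 0.5 * (lo + hi)) - p)

# a degenerate cubic whose image is the segment from (0,0) to (3,0)
ctrl = [(0, 0), (1, 0), (2, 0), (3, 0)]
dist = point_curve_distance((1.5, 1.0), ctrl)
```

For the collinear control polygon above, the true distance from (1.5, 1) to the curve is exactly 1, which the sampling-plus-refinement estimate recovers closely.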
Abstract: In this paper, a simple terrain evaluation method for
a hexapod robot is introduced. The method is based on evaluating
the feet coordinates when all feet are on the ground. From the
differences between the feet coordinates, a local terrain evaluation
becomes possible. Terrain evaluation is necessary for correct gait
selection and/or body position correction. For terrain roughness
evaluation, three planes are plotted: two of them use the coordinates
of opposite feet as definition points, and the third coincides with the
robot body plane. The leaning angle of the body plane is evaluated by
measuring the gravity force with a three-axis accelerometer. The
terrain roughness evaluation method is based on estimating the angles
between the normal vectors of these planes. The aim of this work is to
present a simple method for an embedded robot controller, allowing it
to find the best settings for further movement.
Abstract: Prior research has shown that unimodal biometric
systems suffer from several drawbacks, such as noisy data, intra-class
variations, restricted degrees of freedom, non-universality, spoof
attacks, and unacceptable error rates. For a biometric system to be more
secure and to provide high accuracy, more than one
form of biometrics is required. Hence the need arises for multimodal
biometrics using combinations of different biometric modalities. This
paper introduces a multimodal biometric system (MMBS) based on
fusion of whole dorsal hand geometry and fingerprints that acquires
right and left (Rt/Lt) near-infra-red (NIR) dorsal hand geometry (HG)
shape and (Rt/Lt) index and ring fingerprints (FP). A database of 100
volunteers was acquired using the designed prototype. The acquired
images were found to have good quality for all features and patterns
extraction to all modalities. HG features based on the hand shape
anatomical landmarks were extracted. Robust and fast algorithms for
FP minutia points feature extraction and matching were used. Feature
vectors that belong to similar biometric traits were fused using
feature fusion methodologies. Scores obtained from different
biometric trait matchers were fused using the Min-Max
transformation-based score fusion technique. Final normalized scores
were merged using the sum of scores method to obtain a single
decision about the personal identity based on multiple independent
sources. High individuality of the fused traits and user acceptability
of the designed system along with its experimental high performance
biometric measures showed that this MMBS can be considered for
medium-to-high-security biometric identification purposes.
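The score-level fusion step described above can be sketched generically: Min-Max normalization maps each matcher's raw scores into [0, 1], and the normalized scores are then summed into a single decision. The scores below are made up for illustration, not from the paper's database.

```python
import numpy as np

def min_max(scores):
    """Min-Max transformation: map raw matcher scores into [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# hypothetical raw scores from two independent matchers for three candidate identities
hg_scores = np.array([12.0, 30.0, 21.0])   # hand-geometry matcher
fp_scores = np.array([0.2, 0.9, 0.4])      # fingerprint matcher

fused = min_max(hg_scores) + min_max(fp_scores)   # sum-of-scores fusion
identity = int(np.argmax(fused))                  # single decision from multiple sources
```

Normalization is what makes the summation meaningful: without it, the matcher with the larger raw score range would dominate the fused decision.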
Abstract: This study develops a framework to explore the factors influencing management and technology capabilities in strategic alliances. Alliances between firms are increasingly recognized as a popular vehicle to create and extract greater value from the market. A firm's alliance can be described as a collaborative process for solving problems jointly. This study starts from the research question of which characteristics of a firm's management and technology affect the performance of firms that have formed alliances. In this study, we investigated the effect of strategic alliances on company performance; that is, we tried to identify whether firms that formed alliances with other organizations differ in their management and technology characteristics. We also tested whether alliance type and alliance experience moderate the relationship between a firm's capabilities and its performance. We employ the problem-solving perspective and the resource-based view to shed light on these research questions. The empirical work is based on the Survey of Business Activities conducted from 2006 to 2008 by Statistics Korea. We verify the relevant correlations and point out that these results contribute new empirical evidence on the effect of strategic alliances on company performance.
Abstract: Nonlinear and unbalanced loads in three-phase
networks create harmonics and losses. Active and passive filters are
used for the elimination or reduction of these effects. Passive filters
have some limitations: for example, they are designed only for a specific
frequency, and they may cause resonance in the network at the
point of common coupling. Another drawback of a passive filter is
that the sizes of the required elements are normally large. The active
filter overcomes some limitations of the passive filter; for example,
it can eliminate more than one harmonic and does not cause resonance
in the network. In this paper, the inverter analysis is carried out
simultaneously for the three phases, and the RL impedance of the line
is taken into account. A sliding mode control based on the energy
feedback of the capacitors is employed in the design. With this method,
the dynamic response of the filter is improved effectively, and harmonics
and load unbalance are compensated quickly.
Abstract: This paper introduces and supports a new concept of salt
dissolved in water as very tiny solid sodium chloride particles of
nanoscale volume. From this point of view, salt water can be desalinated
by collision with a special surface characterized by nano-level
smoothness, high rigidity, and high hardness, under appropriate
conditions of launching the water as a thin laminar flow at a suitable
speed and angle of incidence, to obtain desalinated water.
Abstract: A new tool path planning method for 5-axis flank
milling of a globoidal indexing cam is developed in this paper. The
globoidal indexing cam is a practical transmission mechanism due
to its high transmission speed, accuracy and dynamic performance.
Machining the cam profile is a complex and precise task. The profile
surface of the globoidal cam is generated by the conjugate contact
motion of the roller. The generated complex profile surface is usually
machined by the 5-axis point-milling method, which is time-consuming
compared with flank milling. A tool path for 5-axis flank milling of
the globoidal cam is therefore developed to improve the
cutting efficiency. The flank milling tool path is globally optimized
according to the minimum zone criterion, and high accuracy is
guaranteed. The computational example and cutting simulation finally
validate the developed method.
Abstract: This study has two aims: first, to compare the expertise
levels in data analysis, communication, and information technologies
of undergraduate psychology students; second, to verify the factor
structure of E-ETICA (Escala de Experticia en Tecnologias de la Informacion, la Comunicacion y el Análisis, or Data Analysis,
Communication and Information Expertise Scale), which had shown
excellent internal consistency (α = 0.92) as well as a simple factor
structure. Three factors (Complex Information and Communications
Technologies, Basic Information and Communications Technologies,
and E-Searching and Download Abilities) explain 63% of the variance.
In the present study, 260 students (119 juniors and 141 seniors) were
asked to respond to the E-ETICA (a 16-item, five-point Likert scale,
from 1: no mastery to 5: total mastery). The results show that junior
and senior students report very similar expertise levels; however, the
E-ETICA presents a different factor structure for juniors, with four
factors also explaining 63% of the variance: Information E-Searching,
Download and Processing; Data Analysis; Organization; and
Communication Technologies.
Abstract: In this work, we consider the rational points on elliptic curves over finite fields F_p where p ≡ 5 (mod 6). We obtain results on the number of points on an elliptic curve y^2 ≡ x^3 + a^3 (mod p), where p ≡ 5 (mod 6) is prime, and we give some results concerning the sum of the abscissae of these points. The similar case where p ≡ 1 (mod 6) is considered in [5]. The main difference between the two cases is that when p ≡ 5 (mod 6), all elements of F_p are cubic residues.
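The cubic-residue observation can be checked directly: when p ≡ 5 (mod 6), gcd(3, p − 1) = 1, so cubing is a bijection on F_p and every element is a cube; consequently y^2 = x^3 + a^3 has exactly p affine points. A brute-force sketch for one small prime (the choice p = 11, a = 2 is arbitrary):

```python
def affine_points(a, p):
    """All affine points on y^2 = x^3 + a^3 over F_p (brute force)."""
    rhs = {x: (x**3 + a**3) % p for x in range(p)}
    return [(x, y) for x in range(p) for y in range(p) if (y * y) % p == rhs[x]]

p, a = 11, 2                                    # 11 ≡ 5 (mod 6)
cubes = sorted({(x**3) % p for x in range(p)})  # every element of F_p is a cube
pts = affine_points(a, p)                       # exactly p affine points
x_sum = sum(x for x, _ in pts) % p              # sum of the abscissae, mod p
```

Since the map x ↦ x^3 + a^3 is a bijection, each y-value pairs with exactly one x, which is why the affine count equals p regardless of a.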
Abstract: Protein-protein interactions (PPI) play a crucial role in many biological processes such as cell signalling, transcription, translation, replication, signal transduction, and drug targeting. Structural information about protein-protein interactions is essential for understanding the molecular mechanisms of these processes. Structures of protein-protein complexes are still difficult to obtain by biophysical methods such as NMR and X-ray crystallography, and therefore protein-protein docking computation is considered an important approach for understanding protein-protein interactions. However, reliable prediction of protein-protein complexes is still under way. In the past decades, several grid-based docking algorithms based on the Katchalski-Katzir scoring scheme were developed, e.g., FTDock, ZDOCK, HADDOCK, RosettaDock, and HEX. However, the success rate of protein-protein docking prediction is still far from ideal. In this work, we first propose a more practical measure for evaluating the success of protein-protein docking predictions, the rate of first success (RFS), which is similar to the concept of mean first passage time (MFPT). Accordingly, we have assessed the ZDOCK bound and unbound benchmarks 2.0 and 3.0. We also created a new benchmark set for protein-protein docking predictions, in which the complexes have experimentally determined binding affinity data. We performed free energy calculations based on the solution of the non-linear Poisson-Boltzmann equation (nlPBE) to improve the binding mode prediction, using the well-studied barnase-barstar system to validate the parameters for the free energy calculations. In addition, the nlPBE-based free energy calculations were conducted for the cases badly predicted by ZDOCK and ZRANK.
We found that direct molecular mechanics energetics cannot be used to discriminate the native binding pose from the decoys. Our results indicate that nlPBE-based calculations appear to be one of the promising approaches for improving the success rate of binding pose predictions.
Abstract: In contrast to existing methods, which do not take into account multiconnectivity in the broad sense of the term, we develop mathematical models and highly effective combined (BIEM and FDM) numerical methods for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling, from the point of view of realization on a PC. The theoretical substantiation of these methods is provided by appropriate theorems. To this end, convergent quadrature processes have been developed, and error estimates have been obtained in terms of A. Zygmund continuity moduli. For visualization of the profiles, the following are used: the method of least squares with automatic conjecture, spline devices, smooth replenishment, and neural nets. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is confirmed by computational and experimental investigations of the heat and hydraulic characteristics of the first-stage nozzle blade of a gas turbine.
Abstract: This paper presents a solution to a robotic
manipulation problem. We formulate the problem as a combination of
target identification, tracking, and interception. The task in our
solution is to sense a target on a conveyor belt and then intercept it
with the robot's end-effector at a convenient rendezvous point. We use
an object recognition method which identifies the target and finds
its position from the visualized scene picture; the robot system
then generates a solution to the rendezvous problem using the target's
initial position and the belt velocity. The interception of the target
by the end-effector is executed at a convenient rendezvous point along
the target's calculated trajectory. Experimental results are obtained on
a real platform with an industrial robot and a vision system mounted over it.
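The rendezvous computation can be illustrated with a simple kinematic sketch: given the target's initial position and the (constant) belt velocity, find the earliest time at which a point moving no faster than an assumed end-effector speed can meet the target. The time-scan approach and all parameter values here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def rendezvous(target0, belt_v, ee0, ee_speed, t_max=10.0, dt=0.01):
    """Earliest time (and point) at which the end-effector can meet the target."""
    target0, belt_v, ee0 = map(np.asarray, (target0, belt_v, ee0))
    for k in range(int(round(t_max / dt)) + 1):
        t = k * dt
        target = target0 + t * belt_v            # target position on the belt at time t
        if np.linalg.norm(target - ee0) <= ee_speed * t:
            return t, target                     # feasible rendezvous point
    return None                                  # target unreachable within t_max

t, point = rendezvous(target0=(1.0, 0.0), belt_v=(1.0, 0.0),
                      ee0=(0.0, 0.0), ee_speed=3.0)
```

In this toy case the target starts 1 m away and recedes at 1 m/s while the end-effector can move at 3 m/s, so the earliest feasible interception is at t = 0.5 s, 1.5 m down the belt.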
Abstract: A novel feature selection strategy to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions, and varying expressions is proposed in this paper. This technique is especially applicable in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration. This property allows us to achieve lighting-invariant face recognition. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in the merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a large improvement in classification performance compared to baseline algorithms.
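The criterion function used for the ranking (between-class over within-class variance, i.e. a Fisher-style ratio) can be sketched generically as follows. The data here are made up, and the sketch deliberately ignores the PCA projection and region-merging steps.

```python
import numpy as np

def fisher_ratio(feature, labels):
    """Between-class variance over within-class variance for a single feature."""
    overall = feature.mean()
    between = within = 0.0
    for c in np.unique(labels):
        vals = feature[labels == c]
        between += len(vals) * (vals.mean() - overall)**2
        within += ((vals - vals.mean())**2).sum()
    return between / within

def rank_features(X, labels):
    """Feature-column indices in decreasing order of discriminating capability."""
    scores = [fisher_ratio(X[:, j], labels) for j in range(X.shape[1])]
    return list(np.argsort(scores)[::-1])

# toy data: column 0 separates the two classes cleanly, column 1 is noise
X = np.array([[0.0, 5.0], [0.1, 1.0], [10.0, 5.1], [10.1, 1.2]])
labels = np.array([0, 0, 1, 1])
order = rank_features(X, labels)
```

Features whose class means are far apart relative to their within-class spread score highest and are kept first.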
Abstract: In Blind Source Separation (BSS) processing, taking
advantage of the scaling-factor indeterminacy and based on the
floating-point representation, we propose a scaling technique applied
to the separation matrix, to avoid saturation or weakness in the
recovered source signals. This technique performs an Automatic Gain
Control (AGC) in an on-line BSS environment. We demonstrate
the effectiveness of this technique using the implementation of
a division-free BSS algorithm with two inputs and two outputs. The
technique is computationally cheap and efficient for a hardware
implementation.
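The scaling idea can be sketched as follows: because a BSS solution is only defined up to a per-row scaling of the separation matrix, each row can be renormalized so that the corresponding recovered source sits in a usable amplitude range. This is a block-wise illustration (the target level and block length are assumptions), not the paper's on-line division-free formulation.

```python
import numpy as np

def agc_rescale(B, x_block, target_rms=0.25):
    """Rescale each row of the separation matrix B so every output has the target RMS.

    Scaling a row of B leaves the separation solution valid (the scaling
    indeterminacy of BSS) while preventing the recovered source from
    saturating or vanishing in a finite-precision pipeline.
    """
    y = B @ x_block                               # currently recovered sources
    rms = np.sqrt((y**2).mean(axis=1))            # per-output signal level
    gains = target_rms / np.maximum(rms, 1e-12)   # guard against silent outputs
    return B * gains[:, None]

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 1000))                    # two input channels (toy data)
B = np.array([[50.0, 0.0], [0.0, 0.002]])         # badly scaled separation matrix
B2 = agc_rescale(B, x)
levels = np.sqrt(((B2 @ x)**2).mean(axis=1))      # both outputs now at the target RMS
```

One output of the original matrix would saturate and the other would be lost in quantization noise; after rescaling, both sit at the same usable level without changing the separation itself.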
Abstract: Although the field of parametric Pattern Recognition (PR) has been thoroughly studied for over five decades, the use of the Order Statistics (OS) of the distributions to achieve this has not been reported. The pioneering work on using OS for classification was presented in [1] for the Uniform distribution, where it was shown that optimal PR can be achieved in a counter-intuitive manner, diametrically opposed to the Bayesian paradigm, i.e., by comparing the testing sample to a few samples distant from the mean. This must be contrasted with the Bayesian paradigm in which, if we are allowed to compare the testing sample with only a single point in the feature space from each class, the optimal strategy would be to achieve this based on the (Mahalanobis) distance from the corresponding central points, for example, the means. In [2], we showed that the results could be extended for a few symmetric distributions within the exponential family. In this paper, we attempt to extend these results significantly by considering asymmetric distributions within the exponential family, for some of which even the closed form expressions of the cumulative distribution functions are not available. These distributions include the Rayleigh, Gamma and certain Beta distributions. As in [1] and [2], the new scheme, referred to as Classification by Moments of Order Statistics (CMOS), attains an accuracy very close to the optimal Bayes’ bound, as has been shown both theoretically and by rigorous experimental testing.
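The counter-intuitive rule can be reproduced for the simplest case treated in [1]: two overlapping uniform classes, each represented by the expected value of an order statistic that lies away from its mean, toward the other class. In this toy sketch (the offset d and sample counts are arbitrary choices), for two samples from U(a, b) we have E[min] = a + (b−a)/3 and E[max] = a + 2(b−a)/3, and classifying by the nearer of the two distant OS points places the decision boundary at the midpoint of the overlap, which is the optimal Bayes boundary.

```python
import numpy as np

d = 0.5                     # class 1 is U(0, 1); class 2 is U(d, 1 + d)
o1 = 2.0 / 3.0              # class-1 OS point: E[max of two samples], far from its mean
o2 = d + 1.0 / 3.0          # class-2 OS point: E[min of two samples], far from its mean

def cmos_classify(x):
    """CMOS rule: assign x to the class whose chosen OS point is nearer."""
    return 1 if abs(x - o1) < abs(x - o2) else 2

rng = np.random.default_rng(1)
x1 = rng.uniform(0.0, 1.0, 10000)          # class-1 test samples
x2 = rng.uniform(d, 1.0 + d, 10000)        # class-2 test samples
acc = 0.5 * (np.mean([cmos_classify(v) == 1 for v in x1]) +
             np.mean([cmos_classify(v) == 2 for v in x2]))
# the implied decision boundary (o1 + o2)/2 equals the Bayes boundary (d + 1)/2
```

For d = 0.5 the Bayes-optimal accuracy is 0.75, and the CMOS rule attains it even though it compares against points far from the class means rather than the means themselves.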
Abstract: The success of an electronic system in a System-on-Chip is highly dependent on the efficiency of its interconnection network, which is constructed from routers and channels (the routers move data across the channels between nodes). Since neither classical bus-based nor point-to-point architectures can provide scalable solutions and satisfy the tight power and performance requirements of future applications, the Network-on-Chip (NoC) approach has recently been proposed as a promising solution. Indeed, in contrast to the traditional solutions, the NoC approach can provide large bandwidth with moderate area overhead. The selected topology of the component interconnects plays a prime role in the performance of a NoC architecture, as do the routing and switching techniques that can be used. In this paper, we present two generic NoC architectures that can be customized to the specific communication needs of an application in order to reduce the area with minimal degradation of the system latency. An experimental study is performed to compare these structures with basic NoC topologies represented by the 2D mesh, the Butterfly-Fat Tree (BFT), and SPIN. It is shown that the Cluster Mesh (CMesh) and MinRoot schemes achieve significant improvements in network latency and energy consumption with only negligible area overhead and complexity over existing architectures. In fact, compared with the basic NoC topologies, the CMesh and MinRoot schemes provide substantial savings in area as well, because they require fewer routers. The simulation results show that the CMesh and MinRoot networks outperform MESH, BFT, and SPIN in the main performance metrics.
Abstract: A simple network model is developed in OPNET to
study the performance of the Wi-Fi protocol. The model is simulated
in OPNET and performance factors such as load, throughput and delay
are analysed from the model. Four applications, namely Oracle, HTTP,
FTP, and voice, are applied over the Wireless LAN network to determine
the throughput. The voice application utilises a considerable amount of
bandwidth, up to 5 Mbps; as a result, the 802.11g standard of the
Wi-Fi protocol, which can support a data rate of up to 54 Mbps, was
chosen. Results indicate that when the load in the Wi-Fi network is
increased, the queuing delay on the point-to-point links in the Wi-Fi
network reduces significantly until it is comparable to that of WiMAX.
In conclusion, the queuing delay of the Wi-Fi protocol for the simulated
network model was about 0.00001 s, comparable to WiMAX
network values.