Abstract: Recent advances in computational fluid dynamics (CFD) make it possible to observe detailed hemodynamics in cerebral aneurysms, not only for understanding their formation and rupture but also for clinical evaluation and treatment. However, important hemodynamic quantities are difficult to measure in vivo. In the present study, an approximate model of a normal middle cerebral artery (MCA) is analyzed along with two cases containing broad and narrow saccular aneurysms. The models are generated in ANSYS WORKBENCH and transient analysis is performed in ANSYS-CFX. The results obtained for the three cases are compared and agree well with the available literature.
Abstract: Many scientific and engineering problems require solving large systems of linear equations of the form Ax = b in an efficient manner. LU decomposition is a good choice for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. We use the so-called Omega calculus as a computational method for solving problems via their corresponding Diophantine relations. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside a polyhedron. The Mathematica program DiophantineGF.m is then run. This program calculates the generating function from which the number of solutions to the system of Diophantine equalities can be found, which in turn gives the lower bound on the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well. Keywords—generating function, lattice points in polyhedron, lower bound of processor elements, system of Diophantine equations, Omega calculus.
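For concreteness, a minimal sequential LU-decomposition solver for Ax = b (Doolittle scheme, no pivoting) can be sketched as follows; this only illustrates the computation whose processor lower bound the paper derives, not the parallel algorithm itself:

```python
def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L U.

    L is unit lower triangular, U is upper triangular. Assumes A is
    square with nonzero leading principal minors.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):                      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve_lu(A, b):
    """Solve A x = b by forward substitution (L y = b) then back substitution (U x = y)."""
    L, U = lu_decompose(A)
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```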
Abstract: This paper presents a numerical investigation of the
unsteady flow around an American 19th century vertical-axis
windmill: the Stevens & Jolly rotor, patented on April 16, 1895. The
computational approach used is based on solving the complete
transient Reynolds-Averaged Navier-Stokes (t-RANS) equations: a
full campaign of numerical simulation has been performed using the
k-ω SST turbulence model. Flow field characteristics have been
investigated for several values of tip speed ratio and for a constant
unperturbed free-stream wind velocity of 6 m/s, enabling the study of
some unsteady flow phenomena in the rotor wake. Finally, the global power generated by the windmill has been determined for each simulated angular velocity, allowing the calculation of the rotor power curve.
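The quantities behind such a power curve can be sketched as below; the rotor radius R and height H are hypothetical placeholders, since the abstract only fixes the 6 m/s free-stream velocity:

```python
RHO = 1.225    # air density [kg/m^3] (standard sea-level value, assumed)
V_INF = 6.0    # unperturbed free-stream wind speed [m/s], as in the study
R = 1.0        # hypothetical rotor radius [m]
H = 2.0        # hypothetical rotor height [m]

def tip_speed_ratio(omega):
    """Tip speed ratio lambda = omega * R / V_inf for angular velocity omega [rad/s]."""
    return omega * R / V_INF

def power_coefficient(power):
    """C_p = P / (0.5 * rho * A * V^3), with swept area A = 2*R*H for a vertical-axis rotor."""
    area = 2.0 * R * H
    return power / (0.5 * RHO * area * V_INF ** 3)
```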
Abstract: This paper suggests ranking alternatives under fuzzy MCDM (multiple criteria decision making) via a centroid-based ranking approach, where criteria are classified into benefit qualitative, benefit quantitative and cost quantitative ones. The ratings of
alternatives versus qualitative criteria and the importance weights of
all criteria are assessed in linguistic values represented by fuzzy
numbers. The membership function for the final fuzzy evaluation
value of each alternative can be developed through α-cuts and
interval arithmetic of fuzzy numbers. The distance between the
original point and the relative centroid is applied to defuzzify the
final fuzzy evaluation values in order to rank the alternatives. Finally, a numerical example demonstrates the computation procedure of the proposed model.
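A minimal sketch of the final defuzzification step, assuming triangular fuzzy numbers (the paper builds the actual membership functions via α-cuts and interval arithmetic, which is not reproduced here):

```python
import math

def triangular_centroid(a, b, c):
    """Centroid (x, y) of a triangular fuzzy number (a, b, c).

    For a triangular membership function the centroid is
    x = (a + b + c) / 3 and y = 1/3.
    """
    return (a + b + c) / 3.0, 1.0 / 3.0

def ranking_value(a, b, c):
    """Euclidean distance from the origin to the centroid; a larger
    distance ranks the alternative higher."""
    x, y = triangular_centroid(a, b, c)
    return math.hypot(x, y)
```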
Abstract: Parallel prefix addition is a technique for improving the speed of binary addition. With ever-increasing integration density and the growing needs of portable devices, low-power and high-performance designs are of prime importance. The classical parallel prefix adder structures presented in the literature over the years optimize for logic depth, area, fan-out and interconnect count of logic circuits. In this paper, a new architecture for performing 8-bit, 16-bit and 32-bit parallel prefix addition is proposed. The proposed prefix adder structures are compared with several classical adders of the same bit width in terms of power, delay and number of computational nodes. The results reveal that the proposed structures have the lowest power-delay product among their peer prefix adder structures. The Tanner EDA tool was used for simulating the adder designs in the TSMC 180 nm and TSMC 130 nm technologies.
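A behavioral sketch of parallel prefix addition, using the classical Kogge-Stone network as an example (the paper's proposed architecture differs; this only illustrates the generate/propagate prefix idea):

```python
def kogge_stone_add(a, b, width=8):
    """Kogge-Stone parallel-prefix addition of two unsigned integers.

    Generate/propagate pairs are combined with the prefix operator
    (g, p) o (g', p') = (g | (p & g'), p & p') in log2(width) levels,
    so all carries are available after O(log width) logic depth.
    """
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]   # generate
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]   # propagate
    d = 1
    while d < width:
        new_g, new_p = g[:], p[:]
        for i in range(d, width):
            new_g[i] = g[i] | (p[i] & g[i - d])
            new_p[i] = p[i] & p[i - d]
        g, p = new_g, new_p
        d *= 2
    carry = [0] + g          # carry[i] = carry into bit i (group generate of bits i-1..0)
    s = 0
    for i in range(width):
        s |= (((a >> i) & 1) ^ ((b >> i) & 1) ^ carry[i]) << i
    return s                 # sum modulo 2**width
```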
Abstract: Dealing with hundreds of features in character
recognition systems is not unusual. This large number of features
leads to an increase in the computational workload of the recognition process. Many methods have been proposed to remove
unnecessary or redundant features and reduce feature dimensionality.
Moreover, because of the characteristics of Farsi script, it is not possible to apply algorithms developed for other languages directly to Farsi. In this
paper some methods for feature subset selection using genetic
algorithms are applied on a Farsi optical character recognition (OCR)
system. Experimental results show that application of genetic
algorithms (GA) to feature subset selection in a Farsi OCR results in
lower computational complexity and enhanced recognition rate.
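A minimal sketch of GA-based feature subset selection; the fitness function here is a hypothetical toy stand-in for the recognition rate of the Farsi OCR system:

```python
import random

def ga_feature_select(n_features, fitness, pop_size=30, generations=40,
                      crossover_rate=0.8, mutation_rate=0.02, seed=0):
    """Simple generational GA over feature-subset bitstrings.

    A chromosome is a list of 0/1 flags, one per feature; `fitness`
    scores a subset (higher is better). Tournament selection,
    one-point crossover, bit-flip mutation.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_features)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_features):
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                new_pop.append(child)
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

# Toy fitness (hypothetical): features 0-4 are "useful", the rest are noise.
def toy_fitness(mask):
    return sum(mask[:5]) - 0.3 * sum(mask[5:])
```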
Abstract: We present an operator for a propositional linear temporal logic over infinite schedules of iterated transactions which, when applied to a formula, asserts that any schedule satisfying the formula is serializable. The resulting logic is suitable for specifying and verifying consistency properties of concurrent transaction management systems that can be defined in terms of serializability, as well as other general safety and liveness properties. A strict form of serializability is used, requiring that whenever the read and write steps of one transaction occurrence precede the read and write steps of another transaction occurrence in a schedule, the first transaction must precede the second in an equivalent serial schedule. This work improves on previous work by providing a propositional temporal logic with a serializability operator that is of the same PSPACE-complete computational complexity as standard propositional linear temporal logic without a serializability operator.
Abstract: Despite the extensive use of eLearning systems, there is no consensus on a standard framework for evaluating the quality of such systems. Hence, only a minimal set of tools exists to support this judgment and give information about the value of the course content. This paper presents two kinds of quality evaluation indicators for eLearning courses based on the computation of three known metrics: the Euclidean, Hamming and Levenshtein distances. The “distance” calculus is applied to standard evaluation templates (i.e. the European Commission Programme procedures vs. the AFNOR Z 76-001 standard), determining a reference point in the evaluation of e-learning course quality vs. the optimal concept(s). The case study, based on the results of projects developed in the framework of the European Programme “Leonardo da Vinci” with Romanian contractors, tries to highlight the benefits of such a method.
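The three metrics can be sketched as follows (illustrative implementations only; the paper applies them to standard evaluation templates):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length numeric vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def hamming(u, v):
    """Number of positions at which two equal-length sequences differ."""
    return sum(a != b for a, b in zip(u, v))

def levenshtein(s, t):
    """Minimum number of insertions, deletions and substitutions
    turning string s into string t (dynamic programming, two rows)."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (cs != ct)))   # substitution
        prev = cur
    return prev[-1]
```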
Abstract: Existing work in temporal logic on representing the
execution of infinitely many transactions, uses linear-time temporal
logic (LTL) and only models two-step transactions. In this paper,
we use the comparatively efficient branching-time computational tree
logic CTL and extend the transaction model to a class of multi-step
transactions, by introducing distinguished propositional variables
to represent the read and write steps of n multi-step transactions
accessing m data items infinitely many times. We prove that the
well-known correspondence between acyclicity of conflict graphs and serializability for finite schedules extends to infinite schedules.
Furthermore, in the case of transactions accessing the same set of
data items in (possibly) different orders, serializability corresponds
to the absence of cycles of length two. This result is used to give an
efficient encoding of the serializability condition into CTL.
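For finite schedules, the conflict-graph test mentioned above can be sketched as follows (the paper's contribution concerns infinite schedules and the CTL encoding; this only illustrates the finite case):

```python
def conflict_graph(schedule):
    """Conflict graph of a finite schedule.

    `schedule` is a list of (transaction, op, item) triples with op in
    {'r', 'w'}. There is an edge Ti -> Tj when an operation of Ti
    precedes a conflicting operation of Tj (same item, at least one
    write, different transactions).
    """
    edges = set()
    for k, (ti, op_i, x) in enumerate(schedule):
        for tj, op_j, y in schedule[k + 1:]:
            if ti != tj and x == y and 'w' in (op_i, op_j):
                edges.add((ti, tj))
    return edges

def is_serializable(schedule):
    """Finite-schedule (conflict) serializability = acyclic conflict graph."""
    edges = conflict_graph(schedule)
    nodes = {t for t, _, _ in schedule}
    # Kahn's algorithm: the graph is acyclic iff every node can be removed.
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)
```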
Abstract: The batch nature of standard kernel principal component analysis (KPCA) methods limits them in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
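For reference, the batch KPCA baseline that the recursive formulation improves on can be sketched as follows (this is the standard algorithm, not the proposed adaptive update; the RBF kernel and its parameter are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def batch_kpca(X, n_components, gamma=1.0):
    """Standard (batch) KPCA: center the kernel matrix in feature space,
    eigendecompose it, and project X onto the leading components."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one      # double centering
    vals, vecs = np.linalg.eigh(Kc)                  # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    # normalize coefficients so each feature-space component has unit norm
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                               # projections of X
```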
Abstract: In the present paper, an improved initial value numerical technique is presented to analyze the free vibration of symmetrically laminated rectangular plates. A combination of the initial value (IV) method and the finite difference (FD) method is utilized to develop the present (IVFD) technique. The resulting technique is applied to the equation of motion of a vibrating laminated rectangular plate under various types of boundary conditions. Three common types of symmetrically laminated plates (cross-ply, orthotropic and isotropic) are analyzed here. The convergence and accuracy of the presented Initial Value-Finite Difference (IVFD) technique have been examined. The merits and validity of the improved technique are also verified by comparing the obtained results with those available in the literature, indicating good agreement.
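As a one-dimensional illustration of the finite-difference half of the idea, the natural wavenumbers of a simply supported uniform beam (w'''' = λw, with w = w'' = 0 at the ends) can be recovered from a pentadiagonal FD matrix; the plate formulation of the paper is more involved and is not reproduced here:

```python
import numpy as np

def beam_wavenumbers(n=200, n_modes=3):
    """Finite-difference wavenumbers of a simply supported beam on [0, 1].

    Discretizing w'''' = lam * w with the [1, -4, 6, -4, 1] stencil on n
    interior nodes (corner entries become 5 under simply supported ends)
    and taking lam**(1/4) recovers k*pi for mode k.
    """
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 6.0
        if i >= 1:
            A[i, i - 1] = -4.0
        if i <= n - 2:
            A[i, i + 1] = -4.0
        if i >= 2:
            A[i, i - 2] = 1.0
        if i <= n - 3:
            A[i, i + 2] = 1.0
    A[0, 0] = A[-1, -1] = 5.0       # w'' = 0 boundary modification
    A /= h ** 4
    lam = np.sort(np.linalg.eigvalsh(A))[:n_modes]
    return lam ** 0.25
```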
Abstract: This paper proposes, implements and evaluates an original discretization method for continuous random variables, in order to estimate the reliability of systems for which stress and strength are defined as complex functions, and whose reliability is not derivable through analytic techniques. This method is compared with two other discretization approaches from the literature, also through a comparative study involving four engineering applications. The results show that the proposal is very efficient in terms of closeness of the estimates to the true (simulated) reliability. In the study we analyzed both a normal and a non-normal distribution for the random variables: the method is theoretically suitable for any parametric family.
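A minimal sketch of discretization-based stress-strength reliability estimation; the equal-probability quantile scheme below is a simple illustrative discretization, not the method proposed in the paper:

```python
import statistics

def discretize_normal(mu, sigma, n):
    """Equal-probability (quantile midpoint) discretization of N(mu, sigma):
    n support points, each carrying probability mass 1/n."""
    nd = statistics.NormalDist(mu, sigma)
    return [nd.inv_cdf((i + 0.5) / n) for i in range(n)]

def reliability(stress_pts, strength_pts):
    """R = P(strength > stress) for two independent discretized variables,
    computed by direct enumeration of the support points."""
    wins = sum(1 for s in stress_pts for t in strength_pts if t > s)
    return wins / (len(stress_pts) * len(strength_pts))
```

For two normal variables the estimate can be checked against the exact value R = Φ((μ_strength − μ_stress) / √(σ_stress² + σ_strength²)).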
Abstract: The process for predicting the ballistic properties of a liquid rocket engine is based on the quantitative estimation of deviations from idealized performance. To this end, an equilibrium chemistry procedure is first developed and implemented in a Fortran routine. The thermodynamic formulation allows for the calculation of the theoretical performance of a rocket thrust chamber. In a second step, a computational fluid dynamics analysis of the turbulent reactive flow within the chamber is performed using a finite volume approach. The obtained values for the “quasi-real” performance account for both turbulent mixing and chemistry-turbulence coupling. In the present work, emphasis is placed on the combustion efficiency, whose deviation is mainly due to radial gradients of static temperature and mixture ratio. Numerical values of the characteristic velocity are successfully compared with results from an industry-used code. The results are also confronted with experimental data from a laboratory-scale rocket engine.
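The characteristic velocity and the associated combustion efficiency mentioned above follow the standard definitions c* = p_c · A_t / ṁ and η_c* = c*_delivered / c*_ideal; the numbers in the test values are illustrative:

```python
def c_star(p_c, a_throat, mdot):
    """Characteristic velocity c* = p_c * A_t / mdot [m/s], from chamber
    pressure p_c [Pa], throat area A_t [m^2] and mass flow rate mdot [kg/s]."""
    return p_c * a_throat / mdot

def combustion_efficiency(c_star_delivered, c_star_ideal):
    """eta_c*: ratio of delivered to ideal characteristic velocity."""
    return c_star_delivered / c_star_ideal
```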
Abstract: We estimate snow velocity and snow drift density on hilly terrain under the assumption that the drifting snow mass can be represented using a micro-continuum approach (i.e. a non-classical mechanics approach assuming a class of fluids for which basic equations of mass, momentum and energy have been derived). In our model, the theory of couple stress fluids proposed by Stokes [1] has been employed for the computation of flow parameters. Analyses of bulk drift velocity, drift density, drift transport and mass transport of snow particles have been carried out, and computations have been made considering various parametric effects. Results are compared with those of classical mechanics (logarithmic wind profile). The results indicate that particle size affects the flow characteristics significantly.
Abstract: Fully customized hardware-based technology provides high performance and low power consumption by specializing tasks in hardware, but it lacks design flexibility, since any kind of change requires re-design and re-fabrication. Software-based solutions operate with software instructions, from which great flexibility is achieved through the easy development and maintenance of the software code. However, the execution of instructions introduces a high overhead in performance and area consumption. In the past few decades, the reconfigurable computing domain has emerged, which overcomes the traditional trade-off between flexibility and performance and is able to achieve high performance while maintaining good flexibility. The dramatic gains in chip performance and design flexibility achieved through reconfigurable computing systems depend greatly on the design of their computational units and their integration with reconfigurable logic resources. The computational unit of any reconfigurable system plays a vital role in defining its strength. In this paper, an RFU-based computational unit design is presented using tightly coupled, multi-threaded reconfigurable cores. The proposed design has been simulated for VLIW-based architectures, and a high gain in performance has been observed compared to conventional computing systems.
Abstract: The Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in applications to detection problems in various engineering areas, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by On-Off keying (OOK) modulation. We found that the performance of SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
Abstract: The presence of harmonics in power systems has been a major concern to power engineers for many years. With the increasing usage of nonlinear loads in power systems, harmonic pollution becomes more serious. One of the widely used computational algorithms for harmonic analysis is the fast Fourier transform (FFT). In this paper, a harmonic analyzer using the FFT was implemented on a TMS320C6713 DSK. The supply voltage of 240 V, 59 Hz is stepped down to 5 V using a voltage divider in order to match the power rating of the DSK input. The output from the DSK was displayed on an oscilloscope and in the Code Composer Studio™ software. This work has demonstrated the possibility of analyzing the harmonic content of the 240 V power supply using the DSK board.
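The FFT-based harmonic measurement can be sketched as follows (the sampling rate and signal are illustrative; the record is assumed to span an integer number of fundamental periods, so there is no spectral leakage):

```python
import numpy as np

def harmonic_content(signal, fs, f0, n_harmonics=5):
    """Amplitudes of the fundamental and its first harmonics via the FFT.

    Returns {harmonic_number: amplitude}. The factor 2/n converts the
    one-sided FFT magnitude at a bin into the sinusoid amplitude.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n
    out = {}
    for h in range(1, n_harmonics + 1):
        k = int(round(h * f0 * n / fs))   # FFT bin of the h-th harmonic
        out[h] = spectrum[k]
    return out
```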
Abstract: Training neural networks to capture an intrinsic
property of a large volume of high dimensional data is a difficult
task, as the training process is computationally expensive. Input
attributes should be carefully selected to keep the dimensionality of
input vectors relatively small.
Technical indexes commonly used for stock market prediction with neural networks are investigated to determine their effectiveness as inputs. A feed-forward neural network trained with the Levenberg-Marquardt algorithm is applied to perform one-step-ahead forecasting of NASDAQ and Dow stock prices.
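A minimal sketch of one-step-ahead forecasting with a feed-forward network; for brevity it is trained here with plain gradient descent on a toy sine series, rather than with Levenberg-Marquardt on stock data as in the paper:

```python
import numpy as np

def one_step_forecaster(series, window=5, hidden=8, epochs=500, lr=0.05, seed=0):
    """One-hidden-layer feed-forward net mapping the last `window` values
    of a series to the next value. Full-batch gradient descent on MSE
    (Levenberg-Marquardt would converge faster on such a small network)."""
    rng = np.random.default_rng(seed)
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    W1 = rng.normal(0.0, 0.5, (window, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # hidden activations
        err = (h @ W2 + b2) - y             # residuals
        gW2 = h.T @ err / len(y)            # backprop: output layer
        gb2 = err.mean()
        dh = np.outer(err, W2) * (1.0 - h ** 2)
        gW1 = X.T @ dh / len(y)             # backprop: hidden layer
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2
        b2 -= lr * gb2
        W1 -= lr * gW1
        b1 -= lr * gb1

    def predict(last_window):
        h = np.tanh(np.asarray(last_window) @ W1 + b1)
        return float(h @ W2 + b2)

    return predict
```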
Abstract: This work presents a matched field processing (MFP)
algorithm based on Dopplerlet transform for estimating the motion
parameters of a sound source moving along a straight line and with a
constant speed by using a piecewise strategy, which can significantly
reduce the computational burden. Monte Carlo simulation results and
an experimental result are presented to verify the effectiveness of the
proposed algorithm.
Abstract: The increasing complexity of software development based on peer-to-peer networks makes it necessary to create new frameworks in order to simplify the developer's task. Additionally, some applications, e.g. fire detection or security alarms, may require real-time constraints, and the high-level definition of these features eases application development. In this paper, a service model based on a component model with real-time features is proposed. The high-level model abstracts developers from implementation tasks, such as discovery, communication, security or real-time requirements. The model is oriented to deploying services on small mobile devices, such as sensors, mobile phones and PDAs, where computation is light-weight. Services can be composed by means of the port concept to form complex ad-hoc systems, and their implementation is carried out using a component language called UM-RTCOM. In order to illustrate our proposals, a fire detection application is described.