Abstract: In this paper, a new algorithm for solving the
three-dimensional Poisson equation is developed and implemented. This
equation is used in studies of turbulent mixing, computational fluid
dynamics, atmospheric fronts, ocean flows, and so on. Moreover, to
increase the performance of this demanding computation, modern and
effective parallel programming technologies were applied: MPI in
combination with OpenMP directives, which makes it possible to handle
problems with very large data volumes. The resulting software can be
used to solve important applied and fundamental problems in
mathematics and physics.
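The hybrid MPI+OpenMP implementation is the paper's contribution and is not reproduced here. As a hedged illustration only, the serial numerical core of such a solver can be sketched as a Jacobi iteration for the finite-difference 3D Poisson equation; all names, grid sizes, and parameters below are hypothetical:

```python
import numpy as np

def jacobi_poisson_3d(f, h, iters=200):
    """Serial Jacobi sweeps for -laplace(u) = f on a uniform grid with
    spacing h and zero Dirichlet boundary values (illustrative sketch)."""
    u = np.zeros_like(f)
    for _ in range(iters):
        # 7-point stencil: u = (sum of 6 neighbours + h^2 * f) / 6
        u[1:-1, 1:-1, 1:-1] = (
            u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
            u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
            u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] +
            h * h * f[1:-1, 1:-1, 1:-1]) / 6.0
    return u

n = 16
h = 1.0 / (n - 1)
f = np.ones((n, n, n))      # hypothetical constant source term
u = jacobi_poisson_3d(f, h)
```

In a real hybrid code, the grid would be decomposed into subdomains exchanged via MPI halo messages, with the sweep loop parallelized by OpenMP.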
Abstract: Optical 3D measurement of objects is meaningful in
numerous industrial applications. In many cases, shape acquisition of
weakly textured objects is essential. Examples are series-produced
parts made of plastic or ceramic, such as housing parts or ceramic
bottles, as well as agricultural products like tubers. These parts are
often conveyed in a wobbling way during automated optical inspection.
Thus, conventional 3D shape acquisition methods like laser scanning
might fail. In this paper, a novel approach for acquiring the 3D shape
of weakly textured and moving objects is presented. To facilitate such
measurements an active stereo vision system with structured light is
proposed. The system consists of multiple camera pairs and auxiliary
laser pattern generators. It performs the shape acquisition within one
shot and is beneficial for rapid inspection tasks. An experimental
setup including hardware and software has been developed and
implemented.
Abstract: The successful implementation of Service-Oriented Architecture (SOA) is not confined to Information Technology systems; it requires changes across the whole enterprise. In order to align IT and business, the enterprise requires adequate and measurable methods. The adoption of SOA creates new problems with regard to measuring and analysing performance. In fact, the enterprise should investigate to what extent the development of services will increase the value of the business. Every business needs to measure the extent to which SOA adoption aligns with the goals of the enterprise. Moreover, precise performance metrics, combined with advanced evaluation methodologies, should be defined as a solution. The aim of this paper is to present a systematic methodology for designing a measurement system at the technical and business levels, so that: (1) it will determine measurement metrics precisely, and (2) the results will be analysed by mapping the identified metrics to the measurement tools.
Abstract: Shape memory alloy (SMA) actuators have found a
wide range of applications due to their unique properties such as high
force, small size, light weight, and silent operation. This paper
presents the development of a compact SMA actuator and cooling system
in one unit. The actuator is developed for a multi-fingered hand. It
consists of nickel-titanium (Nitinol) SMA wires in a compact form.
The new arrangement insulates the SMA wires from the human body by
housing them in a heat sink and uses a thermoelectric device for
rejecting heat to improve the actuator's performance. The study uses
optimization methods for selecting the geometrical parameters of the
SMA wires and the material of the heat sink. The experimental work
implements the actuator prototype and measures its response.
Abstract: The ability of the brain to organize information and generate the functional structures we use to act, think, and communicate is a common and easily observable natural phenomenon. In object-oriented analysis, these structures are represented by objects. Objects have been extensively studied and documented, but the process that creates them is not understood. In this work, a new class of discrete, deterministic, dissipative, host-guest dynamical systems is introduced. The new systems have extraordinary self-organizing properties. They can host information representing other physical systems and generate the same functional structures as the brain does. A simple mathematical model is proposed. The new systems are easy to simulate by computer, and the measurements needed to confirm the assumptions are abundant and readily available. Experimental results presented here confirm the findings. Applications are many, but among the most immediate are object-oriented engineering, image and voice recognition, search engines, and neuroscience.
Abstract: Support vector regression (SVR) has been regarded
as a state-of-the-art method for approximation and regression. The
importance of the kernel function, the so-called admissible support
vector (SV) kernel in SVR, has motivated many studies on its
composition. The Gaussian kernel (RBF) is regarded as a "best" choice
of SV kernel by non-experts in SVR, whereas there is no evidence,
apart from its superior performance in some practical applications, to
support this view. It is well known that a reproducing kernel (R.K.)
is also an SV kernel and possesses many important properties, e.g.,
positive definiteness, the reproducing property, and the ability to
compose complex R.K.s from simpler ones. However, there are a limited
number of R.K.s with explicit forms and consequently few quantitative
comparison studies in practice. In this paper, two R.K.s, i.e., SV
kernels, composed by the sum and product of a translation-invariant
kernel in a Sobolev space are proposed. An exploratory study on the
performance of SVR based on a general R.K. is presented through a
systematic comparison with that of RBF using multiple criteria and
synthetic problems. The results show that the R.K. is an equivalent or
even better SV kernel than RBF for problems
with more input variables (more than 5, especially more than 10) and
higher nonlinearity.
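The abstract's premise that sums and products of admissible SV kernels are again admissible kernels can be checked numerically. A minimal sketch follows; the paper's specific Sobolev-space R.K. is not reproduced, and the plain Gaussian and linear kernels below stand in as assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear_kernel(X, Y):
    return X @ Y.T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))

K_rbf = rbf_kernel(X, X)
# Sums and (elementwise) products of admissible kernels are admissible:
K_sum = K_rbf + linear_kernel(X, X)
K_prod = K_rbf * linear_kernel(X, X)   # Schur product theorem
```

Positive semi-definiteness of each Gram matrix (up to floating-point round-off) confirms that the composed kernels remain valid SV kernels.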
Abstract: A concern that researchers usually face in different
applications of Artificial Neural Networks (ANNs) is the determination
of the size of the effective domain in time series. In this paper, a
trial-and-error method was used on a groundwater depth time series to
determine the size of the effective domain of the series in an
observation well in Union County, New Jersey, U.S. Different domains
of 20, 40, 60, 80, 100, and 120 preceding days were examined, and 80
days was considered the effective length of the domain. Data sets for
the different domains were fed to a feed-forward back-propagation ANN
with one hidden layer, and the groundwater depths were forecasted. The
Root Mean Square Error (RMSE) and correlation factor (R2) of estimated
and observed groundwater depths were determined for all domains. In
general, the groundwater depth forecast improved, as evidenced by
lower RMSEs and higher R2s, when the domain length increased from 20
to 120. However, 80 days was selected as the effective domain because
the improvement was less than 1% beyond that. Forecasted groundwater
depths utilizing measured daily data (set #1) and data averaged over
the effective domain (set #2) were compared. It was postulated that
the more detailed nature of the measured daily data was the reason for
the better forecast, with a lower RMSE (0.1027 m compared to 0.255 m),
in set #1. However, the size of the input data in this set was 80
times that in set #2, a factor that may increase the computational
effort unpredictably. It was concluded that data averaged over the
80-day effective domain may be successfully utilized to lower the size
of input data sets considerably, while maintaining the effective
information in the data set.
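The two input configurations the abstract compares (set #1: all 80 preceding daily values per sample; set #2: a single value averaged over the 80-day domain) can be sketched as window constructions over a series. The synthetic series below is a stand-in, not the paper's data:

```python
import numpy as np

def make_windows(series, domain):
    """Set #1: each sample uses all `domain` preceding daily values."""
    X = np.array([series[i:i + domain] for i in range(len(series) - domain)])
    y = series[domain:]
    return X, y

def make_averaged(series, domain):
    """Set #2: each sample uses one value averaged over the domain."""
    X = np.array([[series[i:i + domain].mean()]
                  for i in range(len(series) - domain)])
    y = series[domain:]
    return X, y

depths = np.linspace(10.0, 12.0, 200)   # synthetic daily groundwater depths
X1, y1 = make_windows(depths, 80)
X2, y2 = make_averaged(depths, 80)      # input size is 1/80 of set #1
```

Either matrix would then be fed to the feed-forward back-propagation network described in the abstract.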
Abstract: The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications which require high-speed computations. RNS is an integer, non-weighted number system; it can support parallel, carry-free, high-speed, and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple Valued Logic (MVL) and residue number arithmetic. If the number of levels used to represent MVL signals is chosen to be consistent with the moduli which create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. There are two concerns related to the application of this number system: reaching the highest possible speed and the largest dynamic range. These goals conflict: augmenting the dynamic range reduces the speed at the same time. To achieve the highest performance, a method named the "One-Hot Residue Number System" (OHRNS) is considered; in this implementation the propagation delay is only equal to one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows in the order of m^2. In real applications this is practically infeasible. In this paper, by combining Multiple Valued Logic and the One-Hot Residue Number System, we present a new method to resolve both of these problems, along with a novel design of an OHRNS-based adder circuit. This circuit is usable for Multiple Valued Logic moduli; in comparison to other RNS designs, it considerably improves the number of transistors and the power consumption.
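The carry-free, channel-independent arithmetic that makes RNS attractive can be illustrated with a small software sketch; the moduli below are illustrative, and the paper's OHRNS circuit design is hardware and is not reproduced here:

```python
from math import prod

MODULI = (3, 5, 7)   # pairwise coprime; dynamic range M = 3*5*7 = 105

def to_rns(x):
    """Represent an integer by its residues modulo each modulus."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Addition is carry-free: each residue channel is independent,
    # which is what enables the parallel, high-speed RNS arithmetic.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Chinese Remainder Theorem reconstruction back to an integer."""
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

a, b = 17, 38
result = from_rns(rns_add(to_rns(a), to_rns(b)))   # (17 + 38) mod 105
```

Enlarging the dynamic range means adding or widening moduli, which is exactly the speed/area trade-off the abstract discusses.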
Abstract: Although backpropagation ANNs generally predict
better than decision trees do for pattern classification problems, they
are often regarded as black boxes, i.e., their predictions cannot be
explained the way those of decision trees can. In many applications, it is
desirable to extract knowledge from trained ANNs for the users to
gain a better understanding of how the networks solve the problems.
A new rule extraction algorithm, called rule extraction from artificial
neural networks (REANN) is proposed and implemented to extract
symbolic rules from ANNs. A standard three-layer feedforward ANN
is the basis of the algorithm. A four-phase training algorithm is
proposed for backpropagation learning. Explicitness of the extracted
rules is supported by comparing them to the symbolic rules generated
by other methods. Extracted rules are comparable with other methods
in terms of number of rules, average number of conditions for a rule,
and predictive accuracy. Extensive experimental studies on several
benchmark classification problems, such as breast cancer, iris,
diabetes, and season classification, demonstrate the
effectiveness of the proposed approach with good generalization
ability.
Abstract: The feature extraction method(s) used to recognize
hand-printed characters play an important role in ICR applications.
In order to achieve a high recognition rate for a recognition system,
the choice of a feature that suits the given script is certainly an
important task. Even if a new feature needs to be designed for a given
script, it is essential to know the recognition ability of the
existing features for that script. The Devanagari script is used in
various Indian languages besides Hindi, the mother tongue of the
majority of Indians. This research examines a variety of feature
extraction approaches, which have been used in various ICR/OCR
applications, in the context of Devanagari hand-printed script. The
study is conducted theoretically and experimentally on more than 10
feature extraction methods. The various feature extraction methods
have been evaluated on a Devanagari hand-printed database comprising
more than 25000 characters belonging to 43 alphabets. The recognition
ability of the features has been evaluated using three classifiers,
i.e., k-NN, MLP, and SVM.
Abstract: During the last couple of years, the degree of dependence on IT systems has reached a dimension nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g., smartphones), wireless sensor networks, and embedded devices (the Internet of Things) are only some examples of the dependency of modern societies on cyberspace. At the same time, the complexity of IT applications, e.g., because of the increasing use of cloud computing, is rising continuously. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like STUXNET or the supposed cyber attack on an Illinois water system demonstrate impressively. Once-isolated control systems are nowadays often publicly reachable - a fact that was never intended by their developers. Threats to IT systems do not respect areas of responsibility. Especially with regard to cyber warfare, IT threats are no longer limited to company or industry boundaries, administrative jurisdictions, or state boundaries. One of the important countermeasures is increased cooperation among the participants, especially in the field of Cyber Defence. Besides political and legal challenges, there are technical ones as well. A better, at least partially automated exchange of information is essential to (i) enable sophisticated situational awareness and to (ii) counter the attacker in a coordinated way. Therefore, this publication evaluates state-of-the-art Intrusion Detection Message Exchange protocols in order to guarantee a secure information exchange between different entities.
Abstract: Since the majority of faults are found in a few of a
system's modules, there is a need to investigate the modules that are
severely affected compared to other modules, and proper maintenance
needs to be done in time, especially for critical applications.
Neural networks have already been applied in software engineering to
build reliability growth models and to predict gross change or
reusability metrics. Neural networks are non-linear, sophisticated
modeling techniques that are able to model complex functions, and they
are used when the exact nature of the inputs and outputs is not known.
A key feature is that they learn the relationship between input and
output through training. In the present work, various neural-network-based
techniques are explored, and a comparative analysis is performed for
predicting the level of maintenance needed by predicting the severity
level of faults present in NASA's public domain defect dataset. The
comparison of the different algorithms is made on the basis of Mean
Absolute Error, Root Mean Square Error, and accuracy values. It is
concluded that the Generalized Regression Network is the best
algorithm for classifying software components into different levels
of severity of fault impact. The algorithm can be used to develop a
model for identifying modules that are heavily affected by faults.
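The evaluation criteria named in the abstract (Mean Absolute Error, Root Mean Square Error, and accuracy) are standard and can be sketched as follows; the severity labels below are hypothetical stand-ins, not the NASA dataset:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def accuracy(y_true, y_pred):
    """Fraction of exactly matching class labels."""
    return np.mean(y_true == y_pred)

# hypothetical fault-severity levels (e.g., 1 = low ... 4 = critical)
y_true = np.array([1, 2, 3, 4, 2, 3])
y_pred = np.array([1, 2, 2, 4, 2, 4])
scores = (mae(y_true, y_pred), rmse(y_true, y_pred), accuracy(y_true, y_pred))
```

These three numbers are what the abstract's algorithm comparison would be ranked on.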
Abstract: In developing a text-to-speech system, it is well
known that the accuracy of information extracted from a text is
crucial to produce high quality synthesized speech. In this paper, a
new scheme for converting text into its equivalent phonetic spelling
is introduced and developed. This method is applicable to many
text-to-speech systems and has several advantages over other methods.
The proposed method can also complement other methods to improve
their performance. The proposed method is a probabilistic model based
on a Smooth Ergodic Hidden Markov Model, which can be considered an
extension of the HMM. The proposed method is applied to the Persian
language, and its accuracy in converting text to phonetics is
evaluated using simulations.
Abstract: In over-deployed sensor networks, one approach to conserve
energy is to keep only a small subset of sensors active at any
instant. For coverage problems, the monitored area is a set of points
that require sensing, called demand points, and the coverage area of a
node is considered to be a circle of range R, where R is the sensing
range. If the distance between a demand point and a sensor node is
less than R, the node is able to cover this point. We consider a
wireless sensor network consisting of a set of sensors deployed
randomly. A point in the monitored area is covered if it is within the
sensing range of a sensor. In some applications, when the network is
sufficiently dense, area coverage can be approximated by guaranteeing
point coverage. In this case, the locations of the wireless devices
can be used to represent the whole area, and the working sensors are
supposed to cover the locations of all the sensors. We also introduce
a hybrid algorithm and discuss challenges related to coverage in
sensor networks.
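The coverage criterion stated in the abstract (a demand point is covered if its distance to some sensor is less than the sensing range R) can be sketched directly; the deployment below is randomly generated for illustration only:

```python
import numpy as np

def covered(points, sensors, R):
    """Boolean mask: a demand point is covered if it lies within
    sensing range R of at least one sensor node."""
    d = np.linalg.norm(points[:, None, :] - sensors[None, :, :], axis=-1)
    return (d < R).any(axis=1)

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 10, size=(30, 2))   # randomly deployed nodes
points = rng.uniform(0, 10, size=(100, 2))   # demand points
mask = covered(points, sensors, R=2.0)       # per-point coverage status
```

A sleep-scheduling scheme would then search for the smallest active subset of `sensors` for which `mask` stays all-true.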
Abstract: This paper presents preliminary results regarding system-level power awareness for FPGA implementations in wireless sensor networks. The re-configurability of field programmable gate arrays (FPGAs) allows for significant flexibility in their application to embedded systems. However, high power consumption in FPGAs becomes a significant factor in design considerations. We present several ideas, and their experimental verification, on how to optimize power consumption at a high level of the design process while maintaining the same energy per operation (low-level methods can be applied additionally). This paper demonstrates that it is possible to estimate feasible power consumption savings even at a high level of the design process. It is envisaged that our results can also be applied to other embedded systems applications, not limited to FPGA-based ones.
Abstract: We succeeded in producing a high-performance and flexible graphene/manganese dioxide (G/MnO2) electrode coated on a flexible polyethylene terephthalate (PET) substrate. The graphene film is initially synthesized by drop-casting a graphene oxide (GO) solution on the PET substrate, followed by simultaneous reduction and patterning of the dried film using a carbon dioxide (CO2) laser beam with a power of 1.8 W. The Potentiostatic Anodic Deposition method was used to deposit thin films of MnO2 with different loading masses of 10 – 50 and 100 μg.cm-2 on the pre-prepared graphene film. The electrodes were fully characterized in terms of structure, morphology, and electrochemical performance. A maximum specific capacitance of 973 F.g-1 was attained when depositing 50 μg.cm-2 of MnO2 on the laser-reduced graphene oxide, rGO (or G/50MnO2), and over 92% of its initial capacitance was retained after 1000 cycles. The good electrochemical performance and long-term cycling stability make our proposed approach a promising candidate for supercapacitor applications.
Abstract: Risk management is an essential part of project management and plays a significant role in project success. Many failures associated with Web projects are the consequences of poor awareness of the risks involved and the lack of process models that can serve as a guideline for the development of Web-based applications. To circumvent this problem, contemporary process models have been devised for the development of conventional software. This paper introduces WPRiMA (Web Project Risk Management Assessment), a tool used to implement RIAP, the risk identification architecture pattern model, which focuses upon data from the proprietor's and vendor's perspectives. The paper also illustrates how the WPRiMA tool works and how it can be used to calculate the risk level for a given Web project, to generate recommendations that facilitate risk avoidance in a project, and to improve the prospects of early risk management.
Abstract: This paper proposes a new form of cloud computing that allows individual computer users to share applications in distributed communities, called community-based personal cloud computing (CPCC). The paper also presents a prototype design and implementation of CPCC. The users of CPCC are able to share their computing applications with other users of the community. Any member of the community is able to execute remote applications shared by other members. The remote applications behave in the same way as their local counterparts, allowing the user to enter input and receive output, as well as providing access to the local data of the user. CPCC provides a peer-to-peer (P2P) environment where each peer provides applications which can be used by the other peers connected to CPCC.
Abstract: In this article, we present a web server based solution
for implementing a system for intelligent navigation. In this solution
we use real-time collected data and traffic history to establish the
best route for navigation. This is a low-cost solution that is easy to
implement and extend. No infrastructure is needed at the road-network
level, except a device that collects traffic data at key road
crossings. The presented solution creates a strong base for traffic
monitoring and offers an infrastructure for navigation applications.
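The abstract does not specify the routing algorithm. As one plausible sketch, the best route on a road graph whose edge weights blend traffic history and real-time data can be found with Dijkstra's algorithm; the graph and travel times below are entirely hypothetical:

```python
import heapq

def best_route(graph, src, dst):
    """Dijkstra on a road graph; edge weights are estimated travel times."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # walk predecessors back from the destination to recover the path
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[dst]

# hypothetical travel times (minutes) for key road crossings
roads = {"A": [("B", 4), ("C", 2)], "C": [("B", 1), ("D", 7)], "B": [("D", 3)]}
route, minutes = best_route(roads, "A", "D")
```

In the described system, the edge weights would be refreshed from the crossing-mounted collection devices as new traffic data arrives.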
Abstract: The importance of manipulating an incorporated scaffold and
directing cell behaviors is well appreciated in tissue engineering.
Here, we developed novel nano-topographic oxidized silicon nanosponges
amenable to various chemical modifications, to provide insight into
the fundamental biology of how cells interact with their surrounding
environment in vitro. A wet etching technique allowed us to fabricate
the silicon nanosponges in a high-throughput manner. Furthermore,
various organo-silane chemicals were self-assembled on the surfaces by
vapor deposition. We found that Chinese hamster ovary (CHO) cells
displayed distinguishable morphogenesis, adherent responses, and
biochemical properties when cultured on these chemically modified
nano-topographic structures compared with planar oxidized silicon
counterparts, indicating that cell behaviors can be influenced by
certain physical characteristics derived from the nano-topography, in
addition to the hydrophobicity of the contact surfaces crucial for
cell adhesion and spreading. In particular, predominant nano-actin
punches and slender protrusions formed while cells were cultured on
the nano-topographic structures. This study suggests potential
applications of these nano-topographic biomaterials for controlling
cell development in tissue engineering or basic cell biology research.