Abstract: This study investigates the differences between electroencephalograms (EEGs) generated from a normal source and from Alzheimer's disease (AD) sources, as well as the effects on the EEG of brain tissue distortions due to AD. We develop a realistic head model from T1-weighted magnetic resonance imaging (MRI) using the finite element method (FEM) for a normal source (the somatosensory cortex (SC) in the parietal lobe) and AD sources (the right amygdala (RA) and left amygdala (LA) in the medial temporal lobe). We then compare the AD-sourced EEGs to the SC-sourced EEG to study how the scalp potentials change with the source and with 5% to 20% brain tissue distortion. We find that AD-sourced EEGs produce an average magnification error of 0.15, while the different brain tissue distortion models generate a maximum magnification of 0.07. EEGs obtained from AD sources and from different brain tissue distortion levels shift the scalp potentials away from those of the normal source, and the electrodes over the parietal and temporal lobes are more sensitive than the other electrodes to AD-sourced EEG.
Abstract: In the present work, the problem of reconstructing the ITER fusion plasma neutron source parameters using only the Vertical Neutron Camera data is solved. The feasibility of the parameter reconstruction was assessed by numerical simulations, and the adequacy of the mathematical model was analyzed. The neutron source was specified in a parametric form, and the stability of the solution with respect to data distortion was analyzed numerically. The influence of the data errors on the reconstructed parameters is shown:
• … is reconstructed with errors below 4% at all examined values of δ (up to 60%);
• … is determined with errors below 10% when δ does not exceed 5%;
• … is reconstructed with a relative error above 10%;
• the integral intensity of the neutron source is determined with a 10% error while the δ error is below 15%;
where δ is the error of the signal measurements, (R0,Z0) is the plasma center position, and … is the parameter of the neutron source profile.
Abstract: The composition, vapour pressure, and heat capacity
of nine biodiesel fuels from different sources were measured. The
vapour pressure of the biodiesel fuels is modeled assuming an ideal
liquid phase of the fatty acid methyl esters constituting the fuel. New
methodologies to calculate the vapour pressure and ideal gas and
liquid heat capacities of the biodiesel fuel constituents are proposed.
Two alternative optimization scenarios are evaluated: 1) vapour
pressure only; 2) vapour pressure constrained with liquid heat
capacity. Without physical constraints, significant errors in liquid
heat capacity predictions were found whereas the constrained
correlation accurately fit both vapour pressure and liquid heat
capacity.
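The core assumption in the abstract above, an ideal liquid phase of the fatty acid methyl esters, corresponds to Raoult's law for the mixture vapour pressure. The following is a minimal sketch of that assumption only; the Antoine coefficients and the two-ester blend below are hypothetical illustrative values, not data from the paper:

```python
import math

def antoine_psat(a, b, c, t_kelvin):
    """Antoine equation: log10(Psat) = A - B / (C + T). Returns Psat
    in whatever pressure unit the (hypothetical) coefficients imply."""
    return 10 ** (a - b / (c + t_kelvin))

def ideal_mixture_pressure(mole_fractions, psats):
    """Raoult's law for an ideal liquid phase: P = sum(x_i * Psat_i)."""
    assert abs(sum(mole_fractions) - 1.0) < 1e-9
    return sum(x * p for x, p in zip(mole_fractions, psats))

# Hypothetical two-ester blend with illustrative saturation pressures (kPa).
x = [0.6, 0.4]
psat = [0.010, 0.004]
p_total = ideal_mixture_pressure(x, psat)   # 0.6*0.010 + 0.4*0.004 = 0.0076 kPa
```

The paper's contribution lies in how the pure-component vapour pressures and heat capacities are correlated (with and without the liquid heat capacity constraint); the sketch shows only the mixing rule those correlations feed into.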
Abstract: In this paper, parametric analytical studies are carried out to examine the intrinsic flow physics pertaining to the liftoff time of solid propellant rockets. Idealized inert simulators of solid rockets are selected for numerical study of the preignition chamber dynamics. Detailed diagnostic investigations are carried out using an unsteady two-dimensional k-omega turbulence model. We conjecture from the numerical results that variations in the igniter jet impingement angle, the turbulence level, the time and location of first ignition, the flame spread characteristics, and the overall chamber dynamics, including the boundary layer growth history, all have a bearing on the time to nozzle flow choking and hence on establishing the thrust required for rocket liftoff. We conclude that an altered flow choking time of strap-on motors with a predetermined identical ignition time at the liftoff phase will lead to malfunctioning of the rocket. We also conclude that, in light of space debris, an error in predicting the liftoff time can lead to an unfavorable launch window, resulting in satellite injection errors and/or mission failures.
Abstract: Longitudinal data are typically characterized by changes over time, nonlinear growth patterns, between-subjects variability, and within-subject errors exhibiting heteroscedasticity and dependence. Exploring such data is more complicated than exploring cross-sectional data. The purpose of this paper is to organize and integrate various visual-graphical techniques for exploring longitudinal data. By applying the proposed methods, investigators can answer research questions that include characterizing or describing growth patterns at both the group and individual levels, identifying the time points where important changes occur, identifying unusual subjects, selecting suitable statistical models, and suggesting a possible structure for the within-error variance.
Abstract: In contrast to existing methods, which do not take multiconnectivity in the broad sense of the term into account, we develop mathematical models and a highly effective combined (BIEM and FDM) numerical method for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling, with a view to implementation on a PC. The theoretical substantiation of these methods is established by appropriate theorems. To this end, converging quadrature processes have been developed, and error estimates have been obtained in terms of A. Zygmund's moduli of continuity. For visualization of the profiles, the method of least squares with automatic conjecture, spline devices, smooth replenishment, and neural nets are used. The boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is confirmed by computational and experimental investigations of the heat and hydraulic characteristics of the first-stage nozzle blade of a gas turbine.
Abstract: This paper studies the analysis of parameters in an intersection collision avoidance (ICA) system based on radar sensors. The parameters include the positioning errors, the repeat period of the radar sensor, the conditions for potential collisions of two cross-path vehicles, etc. The analysis of these parameters can provide the requirements, limitations, and specifications of the ICA system. The analysis shows that the positioning errors increase as the measured vehicle approaches the intersection. In addition, it is not necessary to mount the radar sensor at a higher position, since the positioning sensitivity worsens as the height of the radar sensor increases. A concept of safety buffer distances for the front and rear of the measured vehicle is also proposed. The conditions for potential collisions of two cross-path vehicles are also presented to facilitate the computation algorithm.
Abstract: Internal controls of accounting are an essential
business function for a growth-oriented organization, and include the
elements of risk assessment, information communication, and even
employees' roles and responsibilities. Internal controls of accounting
systems are designed to protect a company from fraud, abuse and
inaccurate data recording and help organizations keep track of
essential financial activities. Internal controls of accounting provide a
streamlined solution for organizing all accounting procedures and
ensuring that the accounting cycle is completed consistently and
successfully. Implementing a formal Accounting Procedures Manual
for the organization allows the financial department to facilitate
several processes and maintain rigorous standards. Internal controls
also allow organizations to keep detailed records, manage and
organize important financial transactions and set a high standard for
the organization's financial management structure and protocols. A well-implemented controls system reduces the risk of accounting errors and abuse, and allows a company's financial managers to regulate and streamline all functions of the accounting department. Internal controls of accounting can be set up
for every area to track deposits, monitor check handling, keep track
of creditor accounts, and even assess budgets and financial statements
on an ongoing basis. Setting up an effective accounting system to
monitor accounting reports, analyze records and protect sensitive
financial information also can help a company set clear goals and
make accurate projections. Creating efficient accounting processes
allows an organization to set specific policies and protocols on
accounting procedures, and reach its financial objectives on a regular
basis. Internal accounting controls can help keep track of such areas
as cash-receipt recording, payroll management, appropriate recording
of grants and gifts, cash disbursements by authorized personnel, and
the recording of assets. These systems also can take into account any
government regulations and requirements for financial reporting.
Abstract: There are multiple reasons to expect that detecting word order errors in a text will be a difficult problem, and the detection rates reported in the literature are in fact low. Although grammatical rules constructed by computational linguists improve the performance of grammar checkers in word order diagnosis, the repair task remains very difficult. This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. The novelty of the method lies in the use of an efficient confusion-matrix technique for reordering the words. Its comparative advantage is that it works with a large set of words and avoids the laborious and costly process of collecting word order errors to create error patterns.
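The scoring criterion described above (maximize trigram hits under a language model) can be sketched with a brute-force baseline. The paper's efficient confusion-matrix reordering is its novelty and is not reproduced here; this sketch enumerates all permutations, which is only feasible for short sentences, and the toy "language model" is a hypothetical set of corpus trigrams:

```python
from itertools import permutations

def trigram_hits(words, trigram_set):
    """Count how many trigrams of the candidate sentence appear in the
    language model (here, a plain set of observed trigrams)."""
    return sum(1 for i in range(len(words) - 2)
               if tuple(words[i:i + 3]) in trigram_set)

def repair_word_order(words, trigram_set):
    """Brute-force baseline: return the ordering with the most trigram
    hits. Cost is O(n!), hence the paper's interest in an efficient
    reordering technique."""
    return max(permutations(words),
               key=lambda cand: trigram_hits(cand, trigram_set))

# Toy language model: trigrams assumed observed in a corpus.
lm = {("the", "cat", "sat"), ("cat", "sat", "down")}
best = repair_word_order(["sat", "the", "cat", "down"], lm)
# best == ("the", "cat", "sat", "down"), the only ordering with 2 hits
```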
Abstract: Realistic 3D face models are desired in various applications such as face recognition, games, avatars, and animations. Construction of a 3D face model consists of 1) building a face shape model and 2) rendering the face shape model. Building a realistic 3D face shape model is therefore an essential step toward a realistic 3D face model. Recently, the 3D morphable model was successfully introduced to deal with the variety of human face shapes. The 3D dense correspondence problem must first be resolved in order to construct a realistic 3D dense morphable face shape model. Several approaches to the 3D dense correspondence problem in 3D face modeling have been proposed, among which optical-flow-based algorithms and TPS (Thin Plate Spline) based algorithms are representative. Optical-flow-based algorithms require facial texture information, which is sensitive to variation in illumination. In the TPS-based algorithms proposed so far, the TPS process is performed on a 2D projection of the 3D face data in cylindrical coordinates rather than directly on the 3D data, so errors due to distortion of the data during the 2D TPS process may be inevitable.
In this paper, we propose a new 3D dense correspondence algorithm for 3D dense morphable face shape modeling. The proposed algorithm does not need texture information and applies TPS directly to the 3D face data. Through the construction procedure, it is observed that the proposed algorithm constructs a realistic 3D morphable face model reliably and quickly.
Abstract: Using DNA microarrays, a comparative analysis of gene expression profiles in the liver and kidneys of pigs was carried out. The hypothesis of cross-hybridization of one probe with different cDNA sites of the same gene or of different genes was tested, and it is shown that cross-hybridization can be a source of substantial errors in identifying key genes in an organ-specific transcriptome. It is also revealed that the differences in gene expression profiles correlate well with the function, morphology, biochemistry, and histology of these organs.
Abstract: The motivation for adaptive modulation and coding is
to adjust the method of transmission to ensure that the maximum
efficiency is achieved over the link at all times. The receiver
estimates the channel quality and reports it back to the transmitter.
The transmitter then maps the reported quality into a link mode. This mapping, however, is not one-to-one. In this paper, we
investigate a method for selecting the proper modulation scheme.
This method can dynamically adapt the mapping of the Signal-to-
Noise Ratio (SNR) into a link mode. It enables the use of the right
modulation scheme irrespective of changes in the channel conditions
by incorporating errors in the received data. We propose a Markov
model for this method, and use it to derive the average switching
thresholds and the average throughput. We show that the average
throughput of this method outperforms the conventional threshold
method.
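The conventional threshold method that the paper compares against can be sketched as a static SNR-to-mode lookup. The thresholds and mode names below are hypothetical illustrative values; the paper's contribution is precisely that its Markov-model method adapts such thresholds dynamically rather than fixing them:

```python
def select_link_mode(snr_db, thresholds, modes):
    """Static threshold mapping: pick the highest-rate link mode whose
    SNR switching threshold the reported SNR still meets. Lists are
    sorted from most robust (lowest rate) to least robust."""
    chosen = modes[0]                      # most robust fallback mode
    for th, mode in zip(thresholds, modes):
        if snr_db >= th:
            chosen = mode
    return chosen

# Hypothetical switching thresholds (dB) and modulation schemes.
thresholds = [0.0, 8.0, 15.0, 22.0]
modes = ["BPSK", "QPSK", "16-QAM", "64-QAM"]
assert select_link_mode(10.2, thresholds, modes) == "QPSK"
assert select_link_mode(25.0, thresholds, modes) == "64-QAM"
```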
Abstract: Traffic density, an indicator of traffic
conditions, is one of the most critical characteristics to
Intelligent Transport Systems (ITS). This paper investigates
recursive traffic density estimation using the information
provided from inductive loop detectors. On the basis of the
phenomenological relationship between speed and density, the
existing studies incorporate a state space model and update the
density estimate using vehicular speed observations via the
extended Kalman filter, where an approximation is made
because of the linearization of the nonlinear observation
equation. In practice, this may lead to substantial estimation
errors. This paper incorporates a suitable transformation to
deal with the nonlinear observation equation so that the
approximation is avoided when using Kalman filter to
estimate the traffic density. A numerical study is conducted. It
is shown that the developed method outperforms the existing
methods for traffic density estimation.
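The transformation idea described above, recasting the nonlinear speed observation so that a plain Kalman filter applies, can be illustrated with one concrete (assumed) speed-density relationship. The sketch below uses Underwood's model v = vf·exp(-k/kc), under which z = ln(v) is linear in the density k; this choice of model and all numeric values are illustrative assumptions, not necessarily the paper's:

```python
import math

def kalman_density(z_obs, x0, p0, q, r, vf, kc):
    """Scalar Kalman filter for traffic density k with random-walk
    dynamics. Under Underwood's model v = vf * exp(-k / kc), the
    transformed observation z = ln(v) = ln(vf) - k / kc is LINEAR
    in k, so no EKF linearization is needed."""
    x, p = x0, p0
    h = -1.0 / kc                          # linear observation coefficient
    for z in z_obs:
        p = p + q                          # predict (random-walk state)
        y = z - (math.log(vf) + h * x)     # innovation
        s = h * p * h + r                  # innovation variance
        k_gain = p * h / s                 # Kalman gain
        x = x + k_gain * y                 # state update
        p = (1.0 - k_gain * h) * p         # covariance update
    return x

# Hypothetical link: free-flow speed 100 km/h, critical density 40 veh/km.
# Speeds near 60.65 km/h then correspond to a density of about 20 veh/km.
speeds = [61.0, 60.5, 60.7, 60.6]
z = [math.log(v) for v in speeds]
k_est = kalman_density(z, x0=10.0, p0=25.0, q=0.01, r=0.0004,
                       vf=100.0, kc=40.0)   # converges toward ~20 veh/km
```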
Abstract: This paper reports on investigations into capacity of a
Multiple Input Multiple Output (MIMO) wireless communication
system employing a uniform linear array (ULA) at the transmitter and
either a uniform linear array (ULA) or a uniform circular array (UCA)
antenna at the receiver. The transmitter is assumed to be surrounded by
scattering objects while the receiver is postulated to be free from
scattering objects. The Laplacian distribution of angle of arrival
(AOA) of a signal reaching the receiver is postulated. The MIMO system capacity is calculated for two cases: without and with channel estimation errors. For estimating the MIMO channel,
the scaled least square (SLS) and minimum mean square error
(MMSE) methods are considered.
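The capacity being calculated is the standard equal-power MIMO expression C = log2 det(I + (SNR/Nt)·H·Hᴴ). The sketch below evaluates it for a 2x2 link with a real-valued channel matrix, a simplification (actual channels are complex) that keeps the determinant explicit; the identity channel and SNR value are illustrative only:

```python
import math

def mimo_capacity_2x2(h, snr_linear):
    """C = log2 det(I + (SNR/Nt) * H * H^T) for a 2x2 MIMO link with
    equal power allocation over Nt = 2 transmit antennas. h is a 2x2
    real matrix (real-valued simplification of the complex channel)."""
    c = snr_linear / 2.0
    # G = H * H^T (symmetric 2x2)
    g00 = h[0][0] ** 2 + h[0][1] ** 2
    g01 = h[0][0] * h[1][0] + h[0][1] * h[1][1]
    g11 = h[1][0] ** 2 + h[1][1] ** 2
    # det(I + c*G) expanded for the symmetric 2x2 case
    det = (1 + c * g00) * (1 + c * g11) - (c * g01) ** 2
    return math.log2(det)

# Identity channel at SNR = 10 (10 dB): C = 2 * log2(1 + 5) ≈ 5.17 bit/s/Hz
cap = mimo_capacity_2x2([[1.0, 0.0], [0.0, 1.0]], snr_linear=10.0)
```

Channel estimation errors enter this picture by replacing H with its SLS or MMSE estimate, which is the comparison the abstract describes.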
Abstract: In this study, a robust intelligent backstepping tracking control (RIBTC) system, combining adaptive output recurrent cerebellar model articulation control (AORCMAC) with an H∞ control technique, is proposed for the real-time control of wheeled inverted pendulums (WIPs) whose exact system dynamics are unknown. A robust H∞ controller is designed to attenuate the effect of the residual approximation errors and external disturbances with a desired attenuation level. The experimental results indicate that WIPs can stand upright stably under the proposed RIBTC.
Abstract: Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It appears as a red lesion and, at higher severity, the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold-standard method for measuring psoriasis severity. Scaliness is one of the PASI parameters that needs to be quantified in PASI scoring. The surface roughness of a lesion can be used as a scaliness feature, since scale on the lesion surface makes the lesion rougher. Dermatologists usually assess severity through their tactile sense, which requires direct contact between doctor and patient, and the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of a psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled as a rough surface, created by superimposing a triangular waveform on a smooth average (curved) surface. For roughness determination, a polynomial surface fit estimates the average surface, and subtracting it from the rough surface yields the elevation surface (the surface deviations). The roughness index is calculated by applying the average roughness equation to the height map matrix. The roughness algorithm has been tested on 444 lesion models. In the roughness validation, only 6 models could not be accepted (percentage error greater than 10%); these errors are due to the scanned image quality. The roughness algorithm was also validated by roughness measurement of abrasive papers on a flat surface. The Pearson's correlation coefficient between the grade value (G) of the abrasive paper and Ra is -0.9488, showing a strong relationship between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome problems with noisy data.
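The pipeline the abstract describes, fit an average surface, subtract it, then apply the average roughness equation, can be sketched in one dimension. The paper works on a 2D height map with a polynomial surface fit; the sketch below is the 1-D analogue with a degree-1 (straight-line) fit and a synthetic triangular profile, all of it illustrative rather than the paper's implementation:

```python
def average_roughness(profile):
    """Ra of a 1-D height profile: fit a least-squares straight baseline,
    subtract it to get the elevation (deviation) profile, then average
    the absolute deviations (the average roughness equation)."""
    n = len(profile)
    xs = list(range(n))
    mx = sum(xs) / n
    mz = sum(profile) / n
    # least-squares line z = a*x + b as the (1-D) "average surface"
    a = sum((x - mx) * (z - mz) for x, z in zip(xs, profile)) / \
        sum((x - mx) ** 2 for x in xs)
    b = mz - a * mx
    deviations = [z - (a * x + b) for x, z in zip(xs, profile)]
    return sum(abs(d) for d in deviations) / n

# Triangular waveform around a flat baseline, as in the lesion model:
# mean absolute deviation of the pattern 0, 1, 0, -1 is about 0.5.
profile = [0.0, 1.0, 0.0, -1.0] * 8
ra = average_roughness(profile)
```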
Abstract: To compute the dynamic characteristics of nonlinear viscoelastic springs attached to elastic structures with a huge number of degrees of freedom, Yamaguchi proposed a fast numerical method based on the finite element method [1]-[2]. In this method, the restoring forces of the springs are expressed as power series of their elongation, into which nonlinear hysteresis damping is introduced via nonlinear complex spring constants. A finite element for the nonlinear spring with complex coefficients is formulated and connected to the elastic structures, which are modeled by linear solid finite elements. Furthermore, to save computational time, the discrete equations in physical coordinates are transformed into nonlinear ordinary coupled equations in the normal coordinates corresponding to the linear natural modes. In this report, the proposed method is applied to simulating the impact response of a viscoelastic shock absorber with an elastic structure (an S-shaped structure) struck by a concentrated mass. The concentrated mass has an initial velocity and collides with the shock absorber. The accelerations of the elastic structure and the concentrated mass are measured using the Levitation Mass Method proposed by Fujii [3]. The accelerations calculated with the proposed FEM correspond to the experimental ones. Moreover, using this method, we also investigate dynamic errors of the S-shaped force transducer due to an elastic mode of the S-shaped structure.
Abstract: This paper studies a vital issue in wireless communications: the transmission of images over Wireless Personal Area Networks (WPANs) through the Bluetooth network. It presents a simple method to improve the efficiency of the error control code of old Bluetooth versions over mobile WPANs through an Interleaved Error Control Code (IECC) technique, in which the encoded packets are interleaved by a simple block interleaver. The paper also presents a chaotic interleaving scheme, based on the chaotic Baker map, as a tool against bursts of errors, and proposes using the chaotic interleaver instead of the traditional block interleaver with a Forward Error Control (FEC) scheme. A comparative study between the proposed and standard techniques for image transmission over a correlated fading channel is presented. Simulation results reveal the superiority of the proposed chaotic interleaving scheme over the other schemes, and of FEC with the proposed chaotic interleaver over the conventional interleavers, while the chaotic interleaving enhances the security level on a packet-by-packet basis.
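The traditional block interleaver that serves as the paper's baseline is a fixed row/column permutation, sketched below. The chaotic Baker map scheme replaces this regular permutation with a key-dependent chaotic one (hence the security benefit the abstract mentions); that map is not reproduced here, and the 3x4 dimensions are illustrative:

```python
def block_interleave(data, rows, cols):
    """Classic block interleaver: write the symbols row-by-row into a
    rows x cols matrix, read them out column-by-column. A burst of
    channel errors in the interleaved stream is thereby spread across
    the de-interleaved stream."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(data, rows, cols):
    """Inverse permutation: write column-by-column, read row-by-row,
    which is the same operation with the dimensions swapped."""
    return block_interleave(data, cols, rows)

bits = list(range(12))
tx = block_interleave(bits, rows=3, cols=4)
assert block_deinterleave(tx, rows=3, cols=4) == bits  # perfect round trip
```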
Abstract: Current indigenous maps based on GIS are mostly produced by professional GIS personnel. Given that such persons maintain control over data collection and authoring, errors due to misrepresentation or cognitive misunderstanding are conceivable, causing inconsistencies in map production. To avoid such issues, this research into a tribal GIS interface focuses not on customizing interfaces for individual tribes, but rather on generalizing the interface and its features based on indigenous tribal user needs. The methods employed differ from the traditional expert-driven top-down approach, instead aiming at a deeper understanding of indigenous mappings and user needs before applying mapping techniques and developing features.
Abstract: Many metrics have been proposed to evaluate the characteristics of the analysis and design model of a given product, which in turn help to assess the quality of the product. The function point metric is a measure of the 'functionality' delivered by the software. This paper presents an analysis, via the function point metric, of a set of programs from a project developed in Cµ. Function points are measured for a Data Flow Diagram (DFD) of the case developed at an initial stage. Lines of Code (LOC) and the possible errors are then calculated from the measured Function Points (FPs) using suitable established functions. The calculated LOC and errors are compared with the actual LOC and the errors found at the time of analysis and design review, implementation, and testing. It has been observed that the actual errors found exceed the calculated errors. On the basis of this analysis and these observations, the authors conclude that function points provide useful insight and help to analyze the drawbacks in the development process.
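Estimates of this kind are commonly derived from published empirical rules rather than from first principles. The sketch below uses two such rules, "backfiring" LOC-per-FP conversion factors and Capers Jones's defect-potential rule FP^1.25; both are widely quoted heuristics, not the specific "established functions" used in this paper, and the factor of 128 LOC/FP (a figure commonly cited for C) and the FP count of 40 are illustrative assumptions:

```python
def estimate_loc(function_points, loc_per_fp):
    """'Backfiring': convert function points to an LOC estimate via a
    language-dependent conversion factor (~128 LOC/FP is a commonly
    cited figure for C; treat it as a rough heuristic, not a law)."""
    return function_points * loc_per_fp

def estimate_defect_potential(function_points):
    """A widely quoted empirical rule (Capers Jones): the total defect
    potential of a project is roughly FP ** 1.25."""
    return function_points ** 1.25

loc = estimate_loc(40, loc_per_fp=128)       # 40 FP * 128 LOC/FP = 5120 LOC
defects = estimate_defect_potential(40)      # 40 ** 1.25, roughly 100 defects
```

The abstract's finding, that actual errors exceeded the calculated ones, is exactly the kind of gap such coarse heuristics are prone to.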