Abstract: In this paper, to optimize the "Characteristic Straight Line Method" used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of "height", is discussed in detail. In landslide dynamic analysis, the soil is considered a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the "Characteristic Straight Line Method", whose characteristic components have been defined and constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we have used the most modern leveling equipment available, and observational procedures have been designed to provide the most effective method of acquiring data. In addition, systematic errors that cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale conforming to the international length standard; and the concept of height systems is introduced, under which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) have been investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method": it makes it possible to evaluate a displacement of very small magnitude, even when the displacement is an infinitesimal quantity. The inclination of the landslide is given by the inverse of the distance from reference point O to the "Characteristic Straight Line", and its direction by the bearing of the normal directed from point O to that line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevations of carefully selected points before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test in an area where very interesting land-surface deformation occurs are reported, and monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
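The abstract's tilt rule (inclination = 1/d, direction = bearing of the normal from O to the line) can be sketched as follows; this is a minimal illustration assuming the Characteristic Straight Line is expressed as a*x + b*y + c = 0 in a local east/north frame, not the authors' implementation:

import numpy as np

def tilt_from_characteristic_line(a, b, c, O=(0.0, 0.0)):
    """Tilt magnitude and direction from the Characteristic Straight Line."""
    x0, y0 = O
    norm = np.hypot(a, b)
    signed = a * x0 + b * y0 + c
    d = abs(signed) / norm                     # distance from O to the line
    inclination = 1.0 / d                      # per the abstract: tilt = 1/d
    s = -np.sign(signed)                       # choose the normal pointing from O to the line
    nx, ny = s * a / norm, s * b / norm        # east and north components
    bearing = np.degrees(np.arctan2(nx, ny)) % 360.0   # bearing measured from north
    return inclination, bearing

inc, brg = tilt_from_characteristic_line(0.6, 0.8, -250.0)   # toy coefficients
print(f"inclination ~ {inc:.2e}, direction ~ {brg:.1f} deg")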
Abstract: The development of signal compression algorithms is making impressive progress. These algorithms are continuously improved by new tools and aim to reduce, on average, the number of bits necessary for signal representation while minimizing the reconstruction error. The following article proposes the compression of the Arabic speech signal by a hybrid method combining the wavelet transform and linear prediction. The adopted approach rests, on the one hand, on the decomposition of the original signal by means of analysis filters, followed by the compression stage, and on the other hand, on the application of linear prediction of order 5 to the compressed signal coefficients. The aim of this approach is the estimation of the prediction error, which is then coded and transmitted. The decoding operation is then used to reconstitute the original signal. Thus, an adequate choice of the filter bank used for the transform is necessary to increase the compression rate while inducing a distortion that is imperceptible from an auditory point of view.
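As a rough illustration of the hybrid scheme (wavelet analysis followed by order-5 linear prediction of the coefficients), the sketch below uses the PyWavelets 'db4' filter bank and the autocorrelation method; the wavelet choice, decomposition level, and framing are assumptions, not the authors' settings:

import numpy as np
import pywt                                    # PyWavelets
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc(x, order=5):
    # Autocorrelation-method linear prediction coefficients (solve R a = r).
    r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order]
    return solve_toeplitz(r[:order], r[1:order + 1])

frame = np.random.randn(1024)                  # stand-in for a speech frame
coeffs = pywt.wavedec(frame, 'db4', level=3)   # [cA3, cD3, cD2, cD1]
approx = coeffs[0]

a = lpc(approx, order=5)                       # order-5 predictor
residual = lfilter(np.r_[1.0, -a], [1.0], approx)   # prediction error e[n]
# The residual (plus predictor coefficients) is what would be quantized,
# coded and transmitted; the decoder inverts the prediction-error filter:
reconstructed = lfilter([1.0], np.r_[1.0, -a], residual)
print(np.allclose(reconstructed, approx))      # True before quantization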
Abstract: This paper presents a performance comparison of three estimation techniques used for peak load forecasting in power systems. The three optimum estimation techniques are genetic algorithms (GA), least error squares (LS), and least absolute value filtering (LAVF). The problem is formulated as an estimation problem, and different forecasting models are considered. Actual recorded data is used to perform the study, and the performance of the three optimal estimation techniques is examined. Advantages of each algorithm are reported and discussed.
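As a minimal sketch of the least-error-squares technique (one of the three compared), the code below fits a hypothetical peak-load model with a linear trend and a weekly cycle; the model form and data are placeholders, not those of the paper:

import numpy as np

t = np.arange(56, dtype=float)                 # 8 weeks of daily peak loads
y = 900 + 1.2 * t + 40 * np.sin(2 * np.pi * t / 7) + 5 * np.random.randn(56)

w = 2 * np.pi / 7                              # weekly angular frequency
A = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)  # LS parameter estimate
forecast = A @ theta                           # fitted/forecast peak load
print(np.round(theta, 2))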
Abstract: This paper examines several mathematical methods for modeling the hourly price forward curve (HPFC); the model is constructed by numerous regression methods, such as polynomial regression, radial basis function neural networks, and a Fourier series. The models' goodness of fit is examined by means of statistical and graphical tools. The criterion for choosing the model is minimization of the Root Mean Squared Error (RMSE); using a correlation analysis approach for the regression analysis, the optimal model, which is robust against model misspecification, is identified. A supervised learning technique is employed to determine the form of the optimal parameters corresponding to each measure of overall loss. Using all the numerical methods mentioned previously, explicit expressions for the optimal model are derived and the optimal designs are implemented.
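A minimal sketch of the RMSE-based selection among candidate regression models follows (here polynomial fits evaluated on a held-out subset; the data and candidate set are placeholders, not the paper's):

import numpy as np

hours = np.arange(24, dtype=float)
price = 30 + 5 * np.sin(2 * np.pi * hours / 24) + 0.5 * np.random.randn(24)

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

train = hours % 2 == 0                         # fit on even hours,
test = ~train                                  # validate on odd hours

def heldout_rmse(deg):
    p = np.polyfit(hours[train], price[train], deg)
    return rmse(price[test], np.polyval(p, hours[test]))

best = min(range(1, 8), key=heldout_rmse)      # degree minimizing RMSE
print("best degree:", best, "RMSE:", round(heldout_rmse(best), 3))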
Abstract: Falls are the primary cause of accidents in people over the age of 65, and frequently lead to serious injuries. Since the early detection of falls is an important step to alert and protect the aging population, a variety of research on detecting falls has been carried out, including the use of accelerometers, gyroscopes, and tilt sensors. In existing studies, falls were detected using a single accelerometer, with errors. In this study, the proposed method for detecting falls uses two accelerometers to reject false fall detections. As falls are accompanied by the acceleration of gravity and rotational motion, the falls in this study were detected by using the z-axial acceleration differences between two sites: the difference was calculated between the signals of accelerometers placed at two different positions on the chest of the subject. The parameters of the maximum difference of accelerations (diff_Z) and the integration of accelerations in a defined region (Sum_diff_Z) were used to form the fall detection algorithm. Falls and the activities of daily living (ADL) could be distinguished without errors by using the proposed parameters, in spite of the impact and the change in the positions of the accelerometers. By comparing each of the axial accelerations, the directions of falls and the condition of the subject afterwards could be determined. In this study, by using two accelerometers attached to two sites, falls could be detected without errors, and the usefulness of the proposed fall detection algorithm parameters, diff_Z and Sum_diff_Z, was confirmed.
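The two proposed parameters can be sketched as below; the thresholds, sampling rate, and window length are hypothetical values for illustration, not those determined in the study:

import numpy as np

FS = 100                                       # sampling rate (Hz), assumed

def detect_fall(z1, z2, thr_diff=1.5, thr_sum=0.5):
    """z1, z2: z-axis accelerations (g) from the two chest-mounted sensors."""
    d = np.abs(z1 - z2)                        # z-axial acceleration difference
    diff_z = d.max()                           # maximum difference (diff_Z)
    k = int(d.argmax())                        # center the defined region on the peak
    lo, hi = max(0, k - FS // 2), min(len(d), k + FS // 2)
    sum_diff_z = d[lo:hi].sum() / FS           # integrated difference (Sum_diff_Z)
    return (diff_z > thr_diff and sum_diff_z > thr_sum), diff_z, sum_diff_z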
Abstract: Wireless ad hoc nodes freely and dynamically self-organize to communicate with one another. Each node can act as a host or a router. However, this actually depends on the capability of the nodes in terms of current power level, signal strength, number of hops, routing protocol, interference, and other factors. In this research, a study was conducted to observe the effect of hop count over different network topologies that contributes to TCP congestion control performance degradation. To achieve this objective, simulations using NS-2 with different topologies were evaluated. The comparative analysis is discussed based on standard observation metrics: throughput, delay, and packet loss ratio. As a result, there is a relationship between topology type and hop count and the performance of the ad hoc network. In future work, an extended study will be carried out to investigate the effect of different error rates and background traffic over the same topologies.
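The three observation metrics can be computed from parsed trace events as sketched below ('sent' and 'recv' map packet id to timestamp; parsing of the NS-2 trace file itself is omitted, and the packet size is a placeholder):

def metrics(sent, recv, bits_per_pkt=8 * 1000):
    delivered = [pid for pid in sent if pid in recv]
    duration = max(recv.values()) - min(sent.values())
    throughput = len(delivered) * bits_per_pkt / duration            # bit/s
    delay = sum(recv[p] - sent[p] for p in delivered) / len(delivered)
    loss_ratio = 1.0 - len(delivered) / len(sent)
    return throughput, delay, loss_ratio

sent = {1: 0.00, 2: 0.01, 3: 0.02, 4: 0.03}
recv = {1: 0.05, 2: 0.07, 4: 0.09}             # packet 3 was lost
print(metrics(sent, recv))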
Abstract: In this work, we present a comparison between two techniques of image compression. In the first case, the image is divided into blocks, which are collected according to a zig-zag scan. In the second, we apply the Fast Cosine Transform to the image, and then the transformed image is divided into blocks, which are likewise collected according to a zig-zag scan. Later, in both cases, the Karhunen-Loève transform is applied to the mentioned blocks. On the other hand, we present three new metrics based on eigenvalues for a better comparative evaluation of the techniques. Simulations show that the combined version is the best, with the lowest Mean Absolute Error (MAE) and Mean Squared Error (MSE), the highest Peak Signal-to-Noise Ratio (PSNR), and better image quality. Finally, the new technique was far superior to JPEG and JPEG2000.
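The block/zig-zag/KLT pipeline can be sketched as follows (the block size, image, and use of NumPy's eigendecomposition are illustrative assumptions):

import numpy as np

def zigzag(n=8):
    # Zig-zag scan order for an n-by-n block (JPEG-style alternating diagonals).
    idx = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        idx.extend(diag if s % 2 else diag[::-1])
    return idx

img = np.random.rand(64, 64)                   # stand-in image
n = 8
order = zigzag(n)
blocks = np.array([[img[r + i, c + j] for (i, j) in order]
                   for r in range(0, 64, n) for c in range(0, 64, n)])

cov = np.cov(blocks, rowvar=False)             # covariance of block vectors
eigval, eigvec = np.linalg.eigh(cov)           # eigenvalues in ascending order
klt = (blocks - blocks.mean(0)) @ eigvec[:, ::-1]    # KLT coefficients
print(klt.shape, np.round(eigval[::-1][:4], 4))      # leading eigenvalues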
Abstract: The saturated hydraulic conductivity of soil is an important property in processes involving water and solute flow in soils. It is difficult to measure and can be highly variable, requiring a large number of replicate samples. In this study, 60 sets of soil samples were collected in the Saqhez region of Kurdistan province, Iran. Statistics such as the correlation coefficient (R), Root Mean Square Error (RMSE), Mean Bias Error (MBE), and Mean Absolute Error (MAE) were used to evaluate how the multiple linear regression models varied with the number of datasets. The multiple linear regression models were evaluated when only the percentages of sand, silt, and clay content (SSC) were used as inputs, and when SSC and bulk density, Bd, (SSC+Bd) were used as inputs. For the relationships obtained from multiple linear regression on the data, the R, RMSE, MBE, and MAE values of the 50-dataset case were 0.925, 15.29, -1.03, and 12.51 for method (SSC) and 0.927, 15.28, -1.11, and 12.92 for method (SSC+Bd), respectively. For the 10-dataset case, they were 0.725, 19.62, -9.87, and 18.91 for method (SSC) and 0.618, 24.69, -17.37, and 22.16 for method (SSC+Bd), respectively, which shows that as the number of datasets increases, the precision of the estimated saturated hydraulic conductivity increases.
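The four evaluation statistics can be computed as below (obs = measured conductivity, pred = regression estimate; the MBE sign convention, prediction minus observation, is an assumption):

import numpy as np

def evaluate(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]           # correlation coefficient R
    rmse = np.sqrt(np.mean((pred - obs) ** 2)) # Root Mean Square Error
    mbe = np.mean(pred - obs)                  # Mean Bias Error
    mae = np.mean(np.abs(pred - obs))          # Mean Absolute Error
    return r, rmse, mbe, mae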
Abstract: In this paper, a Selective Adaptive Parallel Interference Cancellation (SA-PIC) technique is presented for the Multicarrier Direct Sequence Code Division Multiple Access (MC DS-CDMA) scheme. The motivation for using SA-PIC is that it gives high performance and, at the same time, reduces the computational complexity required to perform interference cancellation. An upper bound expression of the bit error rate (BER) for SA-PIC under Rayleigh fading channel conditions is derived. Moreover, the implementation complexities of SA-PIC and Adaptive Parallel Interference Cancellation (APIC) are discussed and compared. The performance of SA-PIC is investigated analytically and validated via computer simulations.
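The paper's derived upper bound is not reproduced here; for context, the classical single-user baseline against which such bounds are typically compared is the BPSK bit error rate over a flat Rayleigh fading channel with average SNR $\bar{\gamma}$:

P_b = \frac{1}{2}\left(1 - \sqrt{\frac{\bar{\gamma}}{1 + \bar{\gamma}}}\right)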
Abstract: Compensating physiological motion in the context of minimally invasive cardiac surgery has become an attractive issue, since such surgery outperforms traditional cardiac procedures, offering remarkable benefits. Owing to space restrictions, computer vision techniques have proven to be the most practical and suitable solution. However, the lack of robustness and efficiency of existing methods makes physiological motion compensation an open and challenging problem. This work focuses on increasing robustness and efficiency via exploration of the classes of ℓ1- and ℓ2-regularized optimization, emphasizing the use of explicit regularization. Both approaches are based on natural features of the heart, using intensity information. Results point to the ℓ1-regularized optimization class as the best, since it offered the lowest computational cost and the smallest average error, and it proved to work even under complex deformations.
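A minimal sketch of one standard solver for the ℓ1-regularized least-squares problem, ISTA with soft thresholding, is given below; the paper's actual formulation over heart-surface features and intensity information is not reproduced:

import numpy as np

def ista(A, b, lam, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                  # gradient of the smooth term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x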
Abstract: De novo genome assembly is always fragmented. Fragmentation is more serious with the popular next-generation sequencing (NGS) data because NGS reads are shorter than traditional Sanger sequences. As the data throughput of NGS is high, the fragmentation of assemblies is usually not the result of missing data. On the contrary, the assembled sequences, called contigs, are often connected to more than one other contig in a complicated manner, leading to fragmentation. In such a network of connections between contigs, named a contig graph, false connections are inevitable because of repeats and sequencing/assembly errors. Simplifying a contig graph by removing false connections directly improves genome assembly. In this work, we have developed a tool, SIMGraph, to resolve ambiguous connections between contigs using NGS data. Applying SIMGraph to the assemblies of a fungus and a fish genome, we resolved 27.6% and 60.3% of ambiguous contig connections, respectively. These results can reduce the experimental effort in resolving contig connections.
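Contig-graph simplification of the kind described can be illustrated as below; this toy pruning by connection support is only illustrative and is not SIMGraph's actual criterion:

# Contigs are nodes; each connection carries a support count
# (e.g. the number of read pairs spanning it).
graph = {
    'contig1': {'contig2': 42, 'contig3': 2},
    'contig2': {'contig1': 42},
    'contig3': {'contig1': 2},
}

def prune(graph, min_support=5):
    # Remove connections whose support falls below the threshold.
    return {u: {v: s for v, s in nbrs.items() if s >= min_support}
            for u, nbrs in graph.items()}

print(prune(graph))   # the weak contig1-contig3 connection is dropped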
Abstract: In this paper we introduce three watermarking methods that can be used to count the number of times that a user has played some content. The proposed methods are tested with audio content in our experimental system using the most common signal processing attacks. The test results show that the watermarking methods used enable the watermark to be extracted under the most common attacks with a low bit error rate.
Abstract: This paper proposes an efficient lattice-reduction-aided detection (LRD) scheme to improve the detection performance of MIMO-OFDM systems. In the proposed scheme, V candidate symbols are considered at the first layer, and V probable streams are detected with the LRD scheme according to the V first detected candidate symbols. Then, the most probable stream is selected through an ML test. Since the proposed scheme can detect the initial symbol more accurately and can reduce error propagation to the remaining symbols, it shows better performance than conventional LRD with very low complexity.
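The final ML test among the V candidate streams amounts to picking the stream that minimizes the Euclidean distance to the received vector; the sketch below uses a toy 4x4 channel and placeholder candidates:

import numpy as np

def ml_select(y, H, candidates):
    errs = [np.linalg.norm(y - H @ x) for x in candidates]
    return candidates[int(np.argmin(errs))]   # most probable stream

H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
x_true = np.random.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), 4)
y = H @ x_true + 0.05 * (np.random.randn(4) + 1j * np.random.randn(4))
candidates = [x_true, np.conj(x_true), -x_true]   # stand-in for the V streams
print(np.allclose(ml_select(y, H, candidates), x_true))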
Abstract: The objective of positioning the fixture elements in a fixture is to make the workpiece stiff, so that geometric errors in the manufacturing process can be reduced. Most work on optimal fixture layout has used minimization of the sum of the nodal deflections normal to the surface as the objective function, and all deflections in other directions have been neglected. In this paper we propose a new method for fixture layout optimization that uses the element strain energy; in this way, the deformations in all directions are considered. The objective in this method is to minimize the sum of squares of the element strain energy (strain energy and stiffness are inversely proportional to each other). The optimization problem is solved by the sequential quadratic programming method. Three different case studies are presented, and the results are compared with those of the method using nodal deflections as the objective function to verify the proposed method.
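The optimization step can be sketched with SciPy's SQP implementation (SLSQP); the element strain-energy model below is a made-up placeholder, not a finite-element computation:

import numpy as np
from scipy.optimize import minimize

K = np.array([[2.0, 0.3], [0.3, 1.5], [0.1, 0.8]])   # toy coupling matrix

def element_strain_energy(p):
    # Hypothetical strain energy of three elements vs. fixture positions p.
    return (K @ p - 0.4) ** 2 + 0.05

def objective(p):
    return np.sum(element_strain_energy(p) ** 2)     # sum of squares of energies

bounds = [(0.0, 1.0), (0.0, 1.0)]                    # positions on a unit edge
res = minimize(objective, x0=np.array([0.5, 0.5]), method='SLSQP', bounds=bounds)
print(res.x, res.fun)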
Abstract: In this article, a clustering-based technique has been developed and implemented for short-term load forecasting. The formulation uses the Mean Absolute Percentage Error (MAPE) as the objective function, with the data matrix and the cluster size as optimization variables. The designed model uses two temperature variables. It is compared with a six-input Radial Basis Function Neural Network (RBFNN) and a Fuzzy Inference Neural Network (FINN) on data from the same system for the same time period. The fuzzy inference system has the network structure and training procedure of a neural network, and initially creates a rule base from existing historical load data. It is observed that the proposed clustering-based model gives better forecasting accuracy than the other two methods. Test results also indicate that the RBFNN can forecast future loads with accuracy comparable to that of the proposed method, whereas the training time required in the case of FINN is much less.
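The objective function of the formulation, the Mean Absolute Percentage Error, can be computed as in this short sketch:

import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

print(mape([100, 110, 95], [98, 112, 99]))     # ~2.7 (percent)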
Abstract: To satisfy the need for outfield tests of star sensors, a method is put forward to construct a reference attitude benchmark. First, its basic principle is introduced. Then, all the separate conversion matrices are deduced, which include: the conversion matrix responsible for the transformation from the Earth-Centered Inertial frame i to the Earth-Centered Earth-Fixed frame w, according to the time of an atomic clock; the conversion matrix from frame w to the geographic frame t; and the matrix from frame t to the platform frame p, so that the attitude matrix of the benchmark platform relative to frame i can be obtained using these three matrices as multiplicative factors. Next, the attitude matrix of the star sensor relative to frame i is obtained once the mounting matrix from frame p to the star sensor frame s is calibrated, and the reference attitude angles for star sensor outfield tests can be calculated from the transformation from frame i to frame s. Finally, a computer program is written to solve the reference attitudes, and error curves are drawn for the three-axis attitude angles, whose maximum absolute error is just 0.25″. The analysis of each loop and the final simulation results show that the method of acquiring the absolute reference attitude by precise timing is feasible for star sensor outfield tests.
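In the notation of the abstract (writing C^b_a for the conversion matrix from frame a to frame b, a convention assumed here), the described chain of transformations can be restated as:

C^{p}_{i} = C^{p}_{t}\,C^{t}_{w}\,C^{w}_{i}, \qquad
C^{s}_{i} = C^{s}_{p}\,C^{p}_{i} = C^{s}_{p}\,C^{p}_{t}\,C^{t}_{w}\,C^{w}_{i}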
Abstract: We study a new technique for optimal data compression
subject to conditions of causality and different types of memory. The
technique is based on the assumption that some information about
compressed data can be obtained from a solution of the associated
problem without constraints of causality and memory. This allows
us to consider two separate problems related to compression and decompression
subject to those constraints. Their solutions are given
and the analysis of the associated errors is provided.
Abstract: Erroneous computer entry problems (here: 'e-errors') in hospital labs threaten the patients–health carers relationship, undermining the health system's credibility. Are e-errors random, made by lab professionals accidentally, or may they be traced through meaningful determinants? Theories on the internal causality of mistakes compel us to seek specific causal ascriptions of hospital lab e-errors instead of accepting some inescapability. Undeniably, 'To Err is Human'. But in view of rapid global health organizational changes, e-errors are too expensive to lack in-depth consideration. Yet, whether that e-function might supposedly be entrenched in the health carers' job description remains under dispute, at least for Hellenic labs, where e-use falls behind generalized(able) appreciation and application. In this study: i) an empirical basis of a truly high annual cost of e-errors, at about €498,000.00 per rural Hellenic hospital, was established, hence interest in exploring the issue was sufficiently substantiated; ii) a sample of 270 lab-expert nurses, technicians, and doctors was assessed on several personality, burnout, and e-error measures; and iii) the hypothesis that the Hardiness vs Alienation personality construct disposition explains resistance vs proclivity to e-errors was tested and verified: Hardiness operates as a resilience source in the encounter with the high pressures experienced in the hospital lab, whereas its 'opposite', i.e., Alienation, functions as a predictor not only of making e-errors but also of leading to burnout. Implications for apt interventions are discussed.
Abstract: When choosing marketing strategies for international markets, one of the factors that should be considered is the cultural differences that exist among consumers in different countries. If the branding strategy has to be contextual and in tune with the culture, then the brand positioning variables have to interact, adapt, and respond to the cultural variables in which the brand is operating. This study provides an overview of the relevance of culture in the development of an effective branding strategy in the international business environment. Hence, the main objective of this study is to provide a managerial framework for developing strategies for cross-cultural brand management. The framework is useful because it incorporates the variables that are important to the competitiveness of fast food enterprises irrespective of their size. It provides a practical, proactive, and result-oriented analysis that will help fast food firms augment their strategies in international fast food markets. The proposed framework will enable managers to understand the intricacies involved in branding in the global fast food industry and decrease the use of 'trial and error' when entering unfamiliar markets.
Abstract: Software reliability is one of the key factors in the software development process. Software reliability is estimated using reliability models based on the Non-Homogeneous Poisson Process (NHPP). In most of the literature, software reliability is predicted only in the testing phase, which can lead to wrong decision-making. In this paper, two software reliability phases, the testing phase and the operational phase, are studied in detail. Using the S-Shaped Software Reliability Growth Model (SRGM) and the Exponential SRGM, the testing and operational reliability values are obtained. Finally, the two reliability values are compared and the optimal release time is investigated.
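For reference, the standard mean-value functions of the two named models are given below in their common parameterizations (which may differ from the paper's), where a is the expected total number of faults and b the fault detection rate:

m_{\mathrm{exp}}(t) = a\left(1 - e^{-bt}\right), \qquad
m_{\mathrm{S}}(t) = a\left(1 - (1 + bt)\,e^{-bt}\right)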