Abstract: In designing river intakes and diversion structures, it is paramount that the sediment entering the intake be minimized or, if possible, completely excluded. Carried at high water velocity, sediment can significantly damage hydraulic structures, especially where mechanical equipment such as pumps and turbines is used, which in turn wastes water and electricity and incurs further costs. It is therefore prudent to investigate and analyze the performance of lateral intakes equipped with sediment control structures. Laboratory experiments, despite their vast potential and benefits, face certain limitations and challenges, including limited equipment and facilities, space constraints, equipment errors such as inadequate precision or mal-operation, and human error. Research has shown that, to achieve the ultimate goal of intake structure design, namely long-lasting and proficient structures, the best combination of sediment control structures (such as sills and submerged vanes) should be determined together with the parameters that increase their performance (such as diversion angle and location). Cost, difficulty of execution, and environmental impacts should also enter the evaluation of the optimal design, and the resulting solution can then be applied to similar problems in the future. The model used to arrive at the optimal design therefore requires a high level of accuracy and precision to avoid improper design and execution of projects, and the process of creating and executing the design should be as comprehensive and applicable as possible. It is thus important that the influential parameters and vital criteria are fully understood and applied at all stages of choosing the optimal design. In this article, the parameters influencing optimal intake performance, the advantages and disadvantages of candidate designs, and their efficiency are studied. A multi-criterion decision matrix is then used to choose the optimal model for determining the proper parameters in constructing the intake.
Abstract: Noise contamination in a magnetic resonance (MR) image can occur during acquisition, storage, and transmission, and effective filtering is required to avoid repeating the MR procedure. In this paper, an iterative asymmetrical triangle fuzzy filter with moving average center (ATMAVi filter) is used to reduce different levels of salt and pepper noise in a brain MR image. Besides visual inspection of the filtered images, the mean squared error (MSE) is used as an objective measurement. Compared with the median filter, simulation results indicate that the ATMAVi filter is effective especially for filtering higher noise levels (such as noise density = 0.45) using a smaller window size (such as 3x3) when operated iteratively, or using a larger window size (such as 5x5) when operated non-iteratively.
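The objective measure and the baseline in this comparison can be sketched in a few lines. The following is a minimal illustration of the MSE measurement, a plain 3x3 median filter, and a salt-and-pepper noise model; it is not the ATMAVi filter itself, and the function names and edge handling are assumptions:

```python
import numpy as np

def mse(original, filtered):
    """Mean squared error between two equally sized images."""
    o = np.asarray(original, dtype=float)
    f = np.asarray(filtered, dtype=float)
    return np.mean((o - f) ** 2)

def median_filter_3x3(img):
    """Plain 3x3 median filter with edge replication."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

def add_salt_pepper(img, density, rng):
    """Corrupt a fraction `density` of pixels with 0 or 255."""
    noisy = np.asarray(img, dtype=float).copy()
    mask = rng.random(img.shape) < density
    noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return noisy
```

On a uniform test image, filtering a density-0.45 corruption with this baseline already reduces the MSE substantially, which is the kind of comparison the abstract reports against.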
Abstract: The advantage of solving complex nonlinear problems with fuzzy logic methodologies is that experience or expert knowledge, described as a fuzzy rule base, can be embedded directly into the system dealing with the problem. This paper focuses on the current limitations of appropriate, automated design of fuzzy controllers. The structure discovery and parameter adjustment of the branched T-S fuzzy model are addressed by a hybrid technique of type-constrained sparse tree algorithms. Simulation results for different system models are evaluated, and the identification error is observed to be minimal.
Abstract: Manipulator control is a highly complex problem of controlling a system that is multi-input, multi-output, nonlinear, and time-variant. In this paper, several adaptive fuzzy controllers and a new hybrid fuzzy control algorithm are comparatively evaluated through simulations of manipulator control. The adaptive fuzzy controllers comprise self-organizing, self-tuning, and coarse/fine adaptive fuzzy schemes. These controllers are tested in simulation for different trajectories and for varying manipulator parameters. Performance indices such as the RMS error, steady-state error, and maximum error are used for comparison. The self-organizing fuzzy controller is observed to give the best performance. The proposed hybrid fuzzy plus integral error controller also performs remarkably well, given its simple structure.
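The three comparison indices named above are straightforward to compute from a tracking-error record. A hedged sketch follows; the tail-window convention for the steady-state error is an assumption, since the abstract does not specify one:

```python
import numpy as np

def performance_indices(error, steady_state_fraction=0.1):
    """Return (rms, steady_state, maximum) error for a 1-D tracking-error array.

    The steady-state error is taken as the mean |error| over the final
    `steady_state_fraction` of the trajectory (an assumed convention)."""
    e = np.asarray(error, dtype=float)
    rms = np.sqrt(np.mean(e ** 2))
    tail = max(1, int(len(e) * steady_state_fraction))
    steady_state = np.mean(np.abs(e[-tail:]))
    maximum = np.max(np.abs(e))
    return rms, steady_state, maximum
```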
Abstract: Accurate and comprehensive thermodynamic properties of pure refrigerants and refrigerant mixtures are in demand by both producers and users of these materials. Information about thermodynamic properties is needed first to qualify potential candidates as working fluids in refrigeration machinery. From a practical point of view, refrigerants and refrigerant mixtures are widely used as working fluids in many industrial applications, such as refrigerators, heat pumps, and power plants. The present work is devoted to evaluating seven cubic equations of state (EOS) in predicting the gas- and liquid-phase volumetric properties of nine ozone-safe refrigerants in both the super- and sub-critical regions. The evaluations in the sub-critical region show that the TWU and PR EOS predict the PVT properties of R32 within 2% and of R22, R134a, R152a, and R143a within 1%. Their deviations from literature data are 0.5% for R22, R32, R152a, R143a, and R125; 1% for R123, R134a, and R141b; and 2% for R124. Moreover, the SRK EOS predicts the PVT properties of R22, R125, and R123 to within the aforementioned errors. The remaining EOS predict the volumetric properties of this class of fluids with higher errors, of at most 8%. In general, the results favor the TWU and PR EOS over the remaining EOS in predicting the densities of all the mentioned refrigerants in both the super- and sub-critical regions. Typically, this class of refrigerant offers advantages such as an ozone depletion potential of zero, a global warming potential of 140, and no toxicity.
Abstract: The study of the defects generated on manufactured parts shows the difficulty of maintaining parts in position during the machining process and of estimating the defects during pre-process planning. This work contributes to the development of 3D models for the optimization of manufacturing tolerances. An experimental study allows the measurement of part-positioning defects for the determination of ε and the choice of an optimal setup of the part. A 3D tolerancing approach based on the small displacements method permits the determination of the manufacturing errors upstream. A developed tool allows automatic generation of the tolerance intervals along the three axes.
Abstract: A new semi-experimental method for simulating the rotation of turbine flow meters in transitional flow has been developed. The method is based on the experimentally established exponential law of change of the dimensionless relative turbine gas meter rotation frequency and on the meter inertia time constant. A special facility has been developed for experimental evaluation of the meter time constant. The facility ensures instant switching of the turbine meter under test from one channel to another channel with a different flow rate, and measurement of the meter response. The developed method can be used for the evaluation and prediction of the turbine meter response and dynamic error in transitional flow with any arbitrary law of flow rate change. Examples of the method's application are presented.
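The exponential law described above can be sketched for the simplest case of a step change in flow rate: the indicated frequency relaxes toward the new value with the inertia time constant, and the dynamic error is the gap that remains. Symbol names and the step-change scenario are assumptions for illustration:

```python
import math

def meter_response(t, f_initial, f_final, tau):
    """Indicated rotation frequency at time t after a step flow change,
    following the exponential law with inertia time constant tau."""
    return f_final + (f_initial - f_final) * math.exp(-t / tau)

def dynamic_error(t, f_initial, f_final, tau):
    """Indicated minus true (final) frequency: the meter's dynamic error."""
    return meter_response(t, f_initial, f_final, tau) - f_final
```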
Abstract: In this paper, we suggest new product-type estimators for the population mean of the variable of interest, exploiting the first or third quartile of the auxiliary variable. We obtain the mean square error equations and the bias of the estimators. We study the properties of these estimators under simple random sampling (SRS) and ranked set sampling (RSS). It is found that both SRS and RSS produce approximately unbiased estimators of the population mean; however, the RSS estimators are more efficient than those obtained under SRS, based on the same number of measured units, for all values of the correlation coefficient.
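The SRS-versus-RSS efficiency claim can be illustrated with a small Monte Carlo comparison of the plain sample means under each design (assuming perfect ranking). This is only a sketch of the sampling schemes, not the paper's product-type quartile estimators:

```python
import numpy as np

def srs_mean(population, m, rng):
    """Mean of a simple random sample of m units."""
    return rng.choice(population, size=m, replace=False).mean()

def rss_mean(population, m, rng):
    """One ranked-set-sampling cycle of set size m with perfect ranking:
    draw m sets of m units, keep the i-th order statistic of set i."""
    picks = []
    for i in range(m):
        s = np.sort(rng.choice(population, size=m, replace=False))
        picks.append(s[i])
    return np.mean(picks)

rng = np.random.default_rng(42)
pop = rng.normal(50.0, 10.0, size=10_000)
srs = [srs_mean(pop, 5, rng) for _ in range(2_000)]
rss = [rss_mean(pop, 5, rng) for _ in range(2_000)]
```

Both estimators are (approximately) unbiased, but the empirical variance of the RSS means is smaller for the same number of measured units, matching the abstract's conclusion.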
Abstract: In this paper, the direct kinematic model of a multi-application, three-degrees-of-freedom industrial manipulator was developed using homogeneous transformation matrices and the Denavit-Hartenberg parameters. The inverse kinematic model was developed using the same method, and it was verified that at the border of the workspace the inverse kinematics presents considerable errors. A genetic algorithm was therefore implemented to optimize the model, greatly improving its efficiency.
Abstract: In this paper, an optical code-division multiple-access (O-CDMA) packet network, which offers inherent security in access networks, is considered. Two types of random access protocols are proposed for packet transmission: in protocol 1, all distinct codes are used; in protocol 2, distinct codes as well as shifted versions of all these codes are used. O-CDMA network performance is analyzed using one-dimensional (1-D) optical orthogonal codes (OOCs) and two-dimensional (2-D) wavelength/time single-pulse-per-row (W/T SPR) codes. The main advantage of using 2-D codes instead of 1-D codes is the reduction of errors due to multiple-access interference among different users. A correlation receiver is considered in the analysis. Using an analytical model, we compute and compare the packet-success probability for 1-D and 2-D codes in an O-CDMA network; the analysis shows improved performance with 2-D codes compared to 1-D codes.
Abstract: Time series analysis often requires data that represent the evolution of an observed variable in equidistant time steps, and sampling is applied to collect such data. While continuous signals may be sampled, analyzed, and reconstructed by applying Shannon's sampling theorem, time-discrete signals have to be dealt with differently. In this article, we consider the discrete-event simulation (DES) of job-shop systems and study the effects of different sampling rates on data quality regarding the completeness and accuracy of reconstructed inventory evolutions. We discuss both deterministic and non-deterministic behavior of the system variables. Error curves are deployed to illustrate and discuss the sampling rate's impact and to derive recommendations for its well-founded choice.
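The core issue above, that an inventory level in a discrete-event system is a step function and equidistant sampling can miss short-lived levels, can be sketched as follows. The event times and levels here are illustrative assumptions:

```python
import numpy as np

def inventory_at(t, event_times, levels):
    """Step-function inventory: levels[i] holds from event_times[i] onward."""
    idx = np.searchsorted(event_times, t, side="right") - 1
    return levels[idx]

def sample_inventory(event_times, levels, horizon, dt):
    """Equidistant sampling with step dt over [0, horizon]."""
    ts = np.arange(0.0, horizon + 1e-12, dt)
    return ts, np.array([inventory_at(t, event_times, levels) for t in ts])

events = np.array([0.0, 1.0, 1.4, 3.0])   # event instants (assumed)
levels = np.array([0, 2, 1, 3])           # inventory level after each event
```

With a fine sampling step every level is captured; with a coarse step the short-lived level 2 (held only on [1.0, 1.4)) disappears from the reconstruction, which is exactly the completeness loss the error curves quantify.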
Abstract: Classical Bose-Chaudhuri-Hocquenghem (BCH) codes C that contain their dual codes can be used to construct quantum stabilizer codes; this chapter studies the properties of such codes. It has been shown that a BCH code of length n that contains its dual code satisfies the bound on the weight of any non-zero codeword in C, and the converse is also true. One impressive difficulty in quantum communication and computation is to protect information-carrying quantum states against undesired interactions with the environment. To address this difficulty, many good quantum error-correcting codes have been derived as binary stabilizer codes. We are able to shed more light on the structure of dual-containing BCH codes. These results make it possible to determine the parameters of quantum BCH codes in terms of the weight of non-zero dual codewords.
Abstract: We identify clawback triggers from firms' proxy statements (Form DEF 14A) and use the likelihood of restatements to proxy for financial reporting quality. Based on a sample of 578 U.S. firms that voluntarily adopted clawback provisions during 2003-2009, we decompose restatement-based triggers into two types, fraud and unintentional error, and we observe evidence that using fraud triggers is associated with high financial reporting quality. The findings support the view that fraud triggers can enhance the deterrent effect of clawback provisions by establishing a viable disincentive against fraud, misconduct, and other harmful acts. These results are robust to controlling for compensation components, to different sample specifications, and to a number of sensitivity tests.
Abstract: Wheat yield prediction was carried out using different meteorological variables together with agrometeorological indices in the Ardebil district for the years 2004-2005 and 2005-2006. On the basis of correlation coefficients, the standard error of estimate, and the relative deviation of predicted yield from actual yield under different statistical models, the best subset of agrometeorological indices was selected, comprising daily minimum temperature (Tmin), accumulated difference of maximum and minimum temperatures (TD), growing degree days (GDD), accumulated water vapor pressure deficit (VPD), sunshine hours (SH), and potential evapotranspiration (PET). Yield prediction was done two months before harvesting time, which coincided with the commencement of the reproductive stage of wheat (5th of June). In the final statistical models, 83% of the wheat yield variability was accounted for by variation in the above agrometeorological indices.
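Of the indices listed, growing degree days (GDD) has a standard closed form that is worth stating: it accumulates the excess of each day's mean temperature over a crop-specific base temperature. A minimal sketch (the base temperature and the simple-average convention are the usual assumptions):

```python
def growing_degree_days(tmax, tmin, t_base):
    """Accumulated GDD = sum over days of max(0, (Tmax + Tmin)/2 - Tbase)."""
    return sum(max(0.0, (hi + lo) / 2.0 - t_base) for hi, lo in zip(tmax, tmin))
```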
Abstract: With high-speed vessels getting ever more sophisticated, travelling at higher and higher speeds and operating in areas of high maritime traffic density, training becomes of the highest priority to ensure that safety levels are maintained and risks are adequately mitigated. Training onboard the actual craft on the actual route still remains the most effective way for crews to gain experience. However, operational experience and incidents during the last 10 years demonstrate the need for supplementary training, whether in the area of simulation or of man-to-man and man/machine interaction. Training and familiarisation of the crew is the most important aspect of preventing incidents. The use of simulator, computer, and web-based training systems in conjunction with onboard training focusing on critical situations will improve the man-machine interaction and thereby reduce the risk of accidents. Today, both ship simulator and bridge teamwork courses are becoming the norm in order to further improve emergency response and crisis management skills. One of the main causes of accidents is the human factor, and an efficient way to reduce human errors is to provide high-quality training to the personnel and to select the navigators carefully. Keywords: CBT/WBT systems, human factors.
Abstract: This paper is concerned with the forgetting factor of the recursive least squares (RLS) algorithm. A new dynamic forgetting factor (DFF) for the RLS algorithm is presented. The proposed DFF-RLS is compared to other methods, and better performance at convergence and in tracking a noisy chirp sinusoid is achieved. The control of the forgetting factor in DFF-RLS is based on the gradient of the inverse correlation matrix. Compared with the gradient of the mean square error algorithm, the proposed approach provides faster tracking and a smaller mean square error. At low signal-to-noise ratios, the performance of the proposed method is superior to other approaches.
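For context, the algorithm whose forgetting factor the paper makes dynamic is standard exponentially weighted RLS. A minimal sketch with a fixed factor `lam` follows (the DFF update itself is not shown; initialization constants are assumptions):

```python
import numpy as np

def rls_identify(xs, ds, n_taps, lam=0.98, delta=100.0):
    """Exponentially weighted RLS; returns the final weight vector."""
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)          # inverse correlation matrix estimate
    for x, d in zip(xs, ds):
        x = np.asarray(x, dtype=float)
        k = P @ x / (lam + x @ P @ x)   # gain vector
        e = d - w @ x                   # a priori error
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
    return w
```

On noiseless data from a fixed linear system, the weights converge to the true coefficients within a few tens of iterations; lowering `lam` speeds tracking of time-varying systems at the cost of noisier estimates, which is the trade-off a dynamic factor tries to manage.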
Abstract: This paper presents a new adaptive impedance control strategy, based on the Function Approximation Technique (FAT), to compensate for an unknown non-flat environment shape or a time-varying environment location. The target impedance in the force-controllable direction is modified by incorporating adaptive compensators, and the uncertainties are represented by FAT, allowing the update law to be derived easily. Force error feedback is utilized in the estimation, and accurate knowledge of the environment parameters is not required by the algorithm. It is shown mathematically that the stability of the controller is guaranteed based on Lyapunov theory. Simulation results are presented to demonstrate the validity of the proposed controller.
Abstract: A new power regulator controller with a multiple-access PID compensator is proposed, which achieves a minimum memory requirement for full table look-up. The proposed regulator controller employs hysteresis comparators, an error process unit (EPU) for voltage regulation, a multiple-access PID compensator, and a low-power-consumption digital PWM (DPWM). Based on the multiple-access mechanism, the proposed controller can alleviate the penalty of the large amount of memory needed for a fully table-look-up-based PID compensator in power regulation applications. The proposed controller has been validated with simulation results.
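The quantity such a compensator tabulates is the digital PID law; in its incremental (velocity) form it needs only the last three error samples. A hedged sketch, with illustrative gains (the abstract does not give the controller's actual coefficients):

```python
def pid_increment(e, e1, e2, kp, ki, kd):
    """Control increment u[k] - u[k-1] from error samples e[k], e[k-1], e[k-2]:
    kp*(e[k]-e[k-1]) + ki*e[k] + kd*(e[k]-2*e[k-1]+e[k-2])."""
    return kp * (e - e1) + ki * e + kd * (e - 2.0 * e1 + e2)
```

Tabulating this function over quantized error values is what consumes memory in a full table look-up, and is what a multiple-access scheme tries to shrink.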
Abstract: This paper introduces a new approach for the performance analysis of an adaptive filter with error saturation nonlinearity in the presence of impulsive noise. The performance analysis of adaptive filters includes both transient analysis, which shows how fast a filter learns, and steady-state analysis, which shows how well a filter learns. Recursive expressions for the mean-square deviation (MSD) and excess mean-square error (EMSE) are derived based on weighted energy conservation arguments, which provide the transient behavior of the adaptive algorithm. The steady-state analysis is carried out for correlated input regressor data, so this approach leads to new performance results without restricting the input regression data to be white.
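The error-saturation idea itself is simple to illustrate: an LMS-type update uses a clipped error, so a single impulsive noise sample cannot move the weights arbitrarily far. This sketch is a generic illustration (the clip threshold, step size, and LMS form are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def saturated_lms_step(w, x, d, mu, sat):
    """One adaptive-filter update with a saturated (clipped) error.
    Returns the updated weights and the raw a priori error."""
    e = d - w @ x                    # raw error, possibly impulsive
    e_sat = np.clip(e, -sat, sat)    # saturation nonlinearity
    return w + mu * e_sat * x, e
```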
Abstract: In real-field applications, the correct determination of voice segments greatly improves overall system accuracy and minimises total computation time. This paper presents reliable measures of speech compression by detecting the end points of the speech signals prior to compressing them. The two different compression schemes used are the global threshold and the level-dependent threshold techniques. The performance of the proposed method is tested with the signal-to-noise ratio, peak signal-to-noise ratio, and normalized root mean square error measures.
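The three quality measures named above can be written out in their usual definitions (assumed here, since the abstract does not spell them out): SNR and PSNR in decibels, and NRMSE normalised by the signal's range.

```python
import numpy as np

def snr_db(signal, reconstructed):
    """Signal-to-noise ratio in dB: signal energy over error energy."""
    s = np.asarray(signal, dtype=float)
    err = s - np.asarray(reconstructed, dtype=float)
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum(err ** 2))

def psnr_db(signal, reconstructed, peak):
    """Peak signal-to-noise ratio in dB for a given peak amplitude."""
    err = np.asarray(signal, dtype=float) - np.asarray(reconstructed, dtype=float)
    return 10.0 * np.log10(peak ** 2 / np.mean(err ** 2))

def nrmse(signal, reconstructed):
    """Root mean square error normalised by the signal's range."""
    s = np.asarray(signal, dtype=float)
    err = s - np.asarray(reconstructed, dtype=float)
    return np.sqrt(np.mean(err ** 2)) / (s.max() - s.min())
```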