Development of a Real-Time Energy Model for a Photovoltaic Water Pumping System

The purpose of this paper is to develop and validate a model that accurately predicts the cell temperature of a PV module and adapts to various mounting configurations, mounting locations, and climates while requiring only data readily available from the module manufacturer. Results from this model are also compared with results from published cell temperature models. The models were used to predict the real-time performance of a PV water pumping system in the desert of Medenine, in southern Tunisia, using 60-min intervals of measured performance data over one complete year. Statistical analysis of the predicted results and measured data highlights possible sources of error and the limitations and/or adequacy of existing models in describing the temperature and efficiency of PV cells and, consequently, the accuracy of PV water pumping system performance prediction models.
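The abstract does not reproduce the model itself; as a point of reference, here is a minimal sketch of the widely used NOCT cell temperature estimate, which needs only datasheet data plus measured ambient temperature and irradiance (the 45 °C NOCT default is an assumed typical datasheet figure, not a value from the paper):

```python
# Reference point, not the paper's model: the standard NOCT estimate of
# PV cell temperature from ambient temperature and plane-of-array
# irradiance, using only a datasheet constant.

def cell_temperature_noct(t_ambient_c: float, irradiance_w_m2: float,
                          noct_c: float = 45.0) -> float:
    """NOCT model: T_cell = T_amb + (NOCT - 20) * G / 800.

    NOCT is rated at 800 W/m^2 and 20 degC ambient; the 45 degC default
    is a typical datasheet value (an assumption, not a constant).
    """
    return t_ambient_c + (noct_c - 20.0) * irradiance_w_m2 / 800.0

# Example: a clear desert hour at 35 degC ambient and 950 W/m^2.
print(cell_temperature_noct(35.0, 950.0))  # ~64.7 degC
```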

Nonlinear Fuzzy Tracking Real-Time-Based Control of Drying Parameters

The highly nonlinear characteristics of drying processes have prompted researchers to seek new nonlinear control solutions. However, the relationship between implementation complexity, on-line processing complexity, control structure reliability, and controller performance is not well established. The present paper proposes high-performance nonlinear fuzzy controllers for the real-time operation of a drying machine, developed under a consistent match between those issues. A PCI-6025E data acquisition device from National Instruments® was used, and the control system was fully designed in the MATLAB®/SIMULINK® language. The drying parameters, namely relative humidity and temperature, were controlled through MIMO hybrid bang-bang+PI (BPI) and four-dimensional fuzzy logic (FLC) real-time controllers to perform drying tests on biological materials. The performance of the drying strategies was compared through several criteria, which are reported without controller retuning. The performance analysis showed that the FLC performs much better than the BPI controller: the absolute errors were lower than 8.85% for the fuzzy logic controller, about three times lower than the experimental results with BPI control.
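The abstract does not give the BPI control law; purely as an illustration of the hybrid idea (bang-bang actuation far from the setpoint, PI regulation inside a switching band), a minimal single-loop sketch follows, with hypothetical gains, band width, and actuator limits:

```python
# Illustrative hybrid bang-bang + PI controller for one drying loop
# (e.g. temperature). Gains, switching band, and actuator limits are
# hypothetical, not the paper's tuning.

class BangBangPI:
    def __init__(self, kp=2.0, ki=0.1, band=5.0, u_min=0.0, u_max=100.0):
        self.kp, self.ki, self.band = kp, ki, band
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        if abs(error) > self.band:        # far from setpoint: bang-bang
            self.integral = 0.0           # reset to avoid windup at handover
            return self.u_max if error > 0 else self.u_min
        self.integral += error * dt       # near setpoint: PI regulation
        u = self.kp * error + self.ki * self.integral
        return min(max(u, self.u_min), self.u_max)

# One such loop per controlled variable (temperature, relative humidity)
# runs inside the real-time acquisition/control cycle.
controller = BangBangPI()
print(controller.update(setpoint=60.0, measurement=40.0, dt=0.1))  # 100.0
```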

Modeling Concave Globoidal Cam with Swinging Roller Follower: A Case Study

This paper describes a computer-aided approach to the design of a concave globoidal cam with cylindrical rollers and a swinging follower. Four models with different modeling methods are built from the same input data. The input data are the angular input and output displacements of the cam and the follower and some other geometrical parameters of the globoidal cam mechanism. The best cam model is the one that shows no interference with the rollers when their motions are simulated under assembly conditions. The angular output displacement of the follower for the best cam is also compared with that in the input data to check errors. In this study, Pro/ENGINEER® Wildfire 2.0 is used for modeling the cam, simulating motions, and checking interference and errors of the system.

An Evaluation Method for Two-Dimensional Position Errors and Assembly Errors of a Rotational Table on a 4-Axis Machine Tool

This paper describes a method to measure and compensate the errors of a 4-axis ultra-precision machine tool that generates micro patterns on large surfaces. Such a grooving machine is usually used for making micro molds for many electrical parts, such as light guide plates for LCDs and fuel cells. The ultra-precision machine tool has three linear axes and one rotational table, and shaping is usually used to generate the micro patterns. For machining a pyramid pattern of 50 μm pitch and 25 μm height with a 90° wedge-angle bite, one linear axis is used for the long-stroke motion at high cutting speed and the other linear axes are used for feeding. Triangular patterns are generated by many repetitions of the long stroke of one axis; a 90° rotation of the workpiece is then needed to produce pyramid patterns by superposing two machined triangular patterns. For two-dimensional positioning accuracy, the straightness of the two axes, both in and out of plane, and the squareness between the axes are important. Positioning errors, straightness, and squareness were measured with a laser interferometer system, then compensated and verified according to ISO 230-6. One error motion that is difficult to measure is the squareness, or parallelism, between the rotational table and a linear axis; it was investigated by moving the rotary table and the XY axes simultaneously. This compensation method is introduced in this paper.
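The squareness evaluation can be sketched in a simplified form: one common approach is to fit a least-squares reference line to each axis's interferometric straightness profile and compare the fitted tilts. The sketch below uses hypothetical data and is not the paper's exact ISO 230-6 procedure:

```python
import numpy as np

# Illustrative squareness evaluation from straightness data (not the
# paper's exact ISO 230-6 procedure): fit a least-squares reference line
# to each axis's lateral deviation profile and compare the fitted tilts.
rng = np.random.default_rng(0)

def fitted_slope_urad(position_mm, deviation_um):
    """Least-squares slope of a straightness profile, in microradians."""
    slope_um_per_mm = np.polyfit(position_mm, deviation_um, 1)[0]
    return slope_um_per_mm * 1000.0          # um/mm -> urad

# Hypothetical interferometer data: X-axis deviation toward +Y along X
# travel, and Y-axis deviation toward +X along Y travel.
x_pos = np.linspace(0, 300, 31)                            # mm
x_dev = 0.02 * x_pos + rng.normal(0, 0.1, 31)              # um
y_pos = np.linspace(0, 300, 31)
y_dev = -0.015 * y_pos + rng.normal(0, 0.1, 31)            # um

# Out-of-squareness relative to a nominal 90 deg is the sum of the two
# reference-line tilts (the sign convention depends on the setup).
print(fitted_slope_urad(x_pos, x_dev) + fitted_slope_urad(y_pos, y_dev))
```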

The Wavelet-Based DFT: A New Interpretation, Extensions and Applications

In 1990 [1] the subband DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into lowpass and highpass sequences. In the next step, either two DFTs are performed on both bands to compute the full-band DFT, or one DFT is performed on one of the two bands to compute an approximate DFT. A combination network with correction factors is then applied after the DFTs. Another approach was proposed in 1997 [2], using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters; the only difference is a constant factor in the combination network. This result is very important for completing the analysis of the W-DFT, since all the results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case, and an application in image transformation is given using two different types of wavelet filters.
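The split-and-combine structure described above can be made concrete. The following minimal sketch implements the SB-DFT with the (unnormalized) Hadamard sum/difference filters and recovers the exact full-band DFT; by the equivalence noted above, the Haar-filter W-DFT differs only by a constant scaling in the combination network:

```python
import numpy as np

# Minimal sketch of the subband DFT: split x into pairwise sum (lowpass)
# and difference (highpass) sequences with Hadamard filters, DFT each
# half-length band, then recombine with twiddle factors to recover the
# exact full-band DFT. Haar filters differ only by a 1/sqrt(2) scaling.

def sb_dft(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)                          # assume N even
    low  = x[0::2] + x[1::2]            # Hadamard lowpass subband
    high = x[0::2] - x[1::2]            # Hadamard highpass subband
    L, H = np.fft.fft(low), np.fft.fft(high)
    k = np.arange(N // 2)
    W = np.exp(-2j * np.pi * k / N)     # twiddle factors
    E = 0.5 * (L + H)                   # DFT of even samples
    O = 0.5 * (L - H)                   # DFT of odd samples
    # Using only L (dropping H and the correction) would give the fast
    # approximate DFT mentioned above.
    return np.concatenate([E + W * O, E - W * O])

x = np.random.randn(64)
assert np.allclose(sb_dft(x), np.fft.fft(x))  # exact full-band DFT
```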

Program Memory Error Detection and Correction On-Board Earth Observation Satellites

Memory error detection and correction (EDAC) aims to secure the transfer of data between the central processing unit of a satellite onboard computer and its local memory. In this paper, the application of a double-bit error detection and correction method is described and implemented in field-programmable gate array (FPGA) technology. The performance of the proposed EDAC method is measured and compared with that of two different EDAC devices using the same FPGA technology. A statistical analysis of single-event upset (SEU) and multiple-bit upset (MBU) activity in commercial memories onboard the first Algerian microsatellite, Alsat-1, is also given.
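The paper's double-bit-correcting code is not specified in the abstract; as a hedged illustration of the building block typically used for on-board memory EDAC, here is a single-error-correcting, double-error-detecting (SEC-DED) extended Hamming(8,4) codec. The method described above goes beyond this sketch by also correcting double-bit errors:

```python
# Hedged illustration: a classic SEC-DED extended Hamming(8,4) codec of
# the kind used for on-board memory EDAC. It corrects any single-bit
# error and detects (but cannot correct) double-bit errors.

def encode(nibble):
    """4 data bits -> 8-bit codeword [p0, p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]   # Hamming(7,4), positions 1..7
    p0 = 0
    for b in word:
        p0 ^= b                           # overall parity enables DED
    return [p0] + word

def decode(word):
    """Return (data, status); status is 'ok', 'corrected', or 'double'."""
    p0, code = word[0], list(word[1:])
    syndrome = 0
    for pos in range(1, 8):               # XOR of positions holding a 1
        if code[pos - 1]:
            syndrome ^= pos               # 0 for a valid Hamming codeword
    overall = p0
    for b in code:
        overall ^= b                      # 0 if total parity holds
    if syndrome and overall:              # single error: correct it
        code[syndrome - 1] ^= 1
        return [code[2], code[4], code[5], code[6]], "corrected"
    if syndrome and not overall:          # two errors: detect only
        return None, "double"
    # syndrome == 0: data bits intact (a lone flip in p0 lands here too)
    return [code[2], code[4], code[5], code[6]], "ok"

cw = encode([1, 0, 1, 0])
cw[5] ^= 1                                # inject a single-event upset
print(decode(cw))                         # ([1, 0, 1, 0], 'corrected')
```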

Influence of Noise on the Inference of Dynamic Bayesian Networks from Short Time Series

In this paper we investigate the influence of external noise on the inference of network structures. The purpose of our simulations is to gain insights into the experimental design of microarray experiments used to infer, e.g., transcription regulatory networks. Here, external noise means that the dynamics of the system under investigation, e.g., temporal changes of mRNA concentration, is affected by measurement errors. In addition to external noise, another problem arises in the context of microarray experiments: in practice, it is not possible to monitor the mRNA concentration over an arbitrarily long time period, as demanded by the statistical methods used to learn the underlying network structure. For this reason, we use only short time series to make our simulations more biologically plausible.
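A toy version of this simulation setup, with a hypothetical two-node linear network observed over a short run and external noise added as i.i.d. Gaussian measurement error:

```python
import numpy as np

# Toy version of the described setup: a hypothetical two-node linear
# dynamic network observed over a short run, with external noise added
# as i.i.d. Gaussian measurement error on top of the true dynamics.
rng = np.random.default_rng(0)
A = np.array([[0.9, -0.3],
              [0.4,  0.8]])              # hypothetical interaction matrix

T = 10                                   # deliberately short time series
x = np.zeros((T, 2))
x[0] = [1.0, 0.5]
for t in range(1, T):
    x[t] = A @ x[t - 1]                  # noise-free system dynamics

sigma = 0.1                              # external noise level
y = x + rng.normal(0.0, sigma, x.shape)  # what the experiment observes
# A network-inference method (e.g. a DBN learner) sees only y, not x.
```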

Turbo-Coded Mobile Terrestrial Communication Systems in Urban and Suburban Areas for Wireless Multimedia Applications

With the rapid popularization of internet services, it is apparent that the next generation of terrestrial communication systems must be capable of supporting various applications such as voice, video, and data. This paper presents the performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high-quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, generally expressed in terms of a maximum acceptable bit error rate (BER) and a maximum tolerable latency. The breakthrough discovery of turbo codes allows the probability of bit errors to be significantly reduced with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of these performance parameters to achieve different service qualities. The results therefore suggest an adaptive framework for turbo-coded wireless multimedia communications that incorporates sets of performance parameters achieving appropriate service qualities, depending on the application's requirements.
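One way to picture the proposed adaptive framework is as a mapping from service class to a turbo-code parameter set; the configuration values below are hypothetical placeholders, not the paper's optimized combinations:

```python
from dataclasses import dataclass

# Illustrative only: the adaptive idea expressed as a configuration
# table mapping service class to turbo-code settings. All values are
# hypothetical placeholders, not the paper's optimized parameter sets.

@dataclass
class TurboConfig:
    interleaver_size: int      # larger -> better BER, more latency
    decoding_iterations: int   # more -> better BER, more latency
    algorithm: str             # e.g. optimal log-MAP vs suboptimal SOVA

SERVICE_PROFILES = {
    # delay-sensitive (voice/video): short interleaver, few iterations
    "delay_sensitive": TurboConfig(interleaver_size=400,
                                   decoding_iterations=4,
                                   algorithm="SOVA"),
    # delay-tolerant (text/data): long interleaver, more iterations
    "delay_tolerant": TurboConfig(interleaver_size=4000,
                                  decoding_iterations=10,
                                  algorithm="log-MAP"),
}

print(SERVICE_PROFILES["delay_sensitive"])
```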

A Self-Adaptive Genetic-Based Algorithm for the Identification and Elimination of Bad Data

The identification and elimination of bad measurements is one of the basic functions of a robust state estimator, as bad data corrupt the results of state estimation under the popular weighted least squares method. However, this is a difficult problem to handle, especially when dealing with multiple errors of the interacting, conforming type. In this paper, a self-adaptive genetic-based algorithm is proposed. The algorithm utilizes the results of the classical linearized normal residuals approach to tune the genetic operators, so that instead of a randomized search throughout the whole search space, a directed search is performed and the optimum solution is obtained at a very early stage (a maximum of 5 generations). The algorithm also utilizes accumulated databases of already computed cases to reduce the computational burden to a minimum. Tests are conducted on the standard IEEE test systems, and the results are very promising.
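The classical linearized normal-residuals step that seeds the genetic operators can be sketched as follows, assuming for brevity a linear measurement model z = Hx + e and the conventional detection threshold of 3.0:

```python
import numpy as np

# Sketch of the classical linearized normal-residuals step that tunes
# the genetic operators, assuming a linear measurement model z = Hx + e.

def normalized_residuals(H, z, R):
    W = np.linalg.inv(R)                      # weights = inverse error covariance
    G = H.T @ W @ H                           # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)   # WLS state estimate
    r = z - H @ x_hat                         # measurement residuals
    Omega = R - H @ np.linalg.solve(G, H.T)   # residual covariance
    return r / np.sqrt(np.diag(Omega))

# Hypothetical 4-measurement, 2-state example with one gross error:
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
R = np.diag([1e-4] * 4)
z = H @ np.array([1.0, 2.0])
z[2] += 0.2                                   # bad measurement (index 2)

r_n = normalized_residuals(H, z, R)
print(np.abs(r_n).argmax())                   # largest normalized residual -> 2
suspects = np.flatnonzero(np.abs(r_n) > 3.0)  # suspect set that biases the
print(suspects)                               # GA toward a directed search
```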

Adaptive Neural Network Control of Autonomous Underwater Vehicles

An adaptive neural network controller for autonomous underwater vehicles (AUVs) is presented in this paper. The AUV model is highly nonlinear because of many factors, such as hydrodynamic drag, damping and lift forces, Coriolis and centripetal forces, gravity and buoyancy forces, as well as thruster forces. In this regard, a nonlinear neural network is used to approximate the nonlinear uncertainties of the AUV dynamics, thus overcoming some limitations of conventional controllers and ensuring good performance. The uniform ultimate boundedness of the AUV tracking errors and the stability of the proposed control system are guaranteed based on Lyapunov theory. Numerical simulation studies of AUV motion control are performed to demonstrate the effectiveness of the proposed controller.
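A reduced illustration of the control structure on a scalar surrogate plant (standing in for the multivariable AUV dynamics): an RBF network approximates the unknown nonlinearity, and its weights are adapted with a Lyapunov-derived law W' = gamma * phi(x) * e. The plant, gains, and basis below are hypothetical:

```python
import numpy as np

# Reduced illustration on a scalar surrogate plant x' = f(x) + u, where
# f stands in for the AUV's unknown nonlinear terms. An RBF network
# f_hat = W . phi(x) compensates f; its weights follow the
# Lyapunov-derived adaptation law W' = gamma * phi(x) * e.

centers = np.linspace(-2.0, 2.0, 9)
def phi(x):                                    # Gaussian RBF features
    return np.exp(-(x - centers) ** 2 / 0.5)

f_true = lambda x: -x * abs(x) + 0.5 * np.sin(2 * x)   # unknown dynamics
W = np.zeros_like(centers)                     # NN weights, adapted online
k, gamma, dt = 4.0, 20.0, 1e-3
x = 0.0
for step in range(20000):
    t = step * dt
    xd, xd_dot = np.sin(t), np.cos(t)          # reference trajectory
    e = x - xd                                 # tracking error
    u = xd_dot - W @ phi(x) - k * e            # NN-compensated control law
    W = W + gamma * phi(x) * e * dt            # adaptation law
    x = x + (f_true(x) + u) * dt               # plant, Euler integration
print(abs(e))  # error settles in a small bound: ultimate boundedness
```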

Topology Preservation in SOM

The SOM has several beneficial features that make it a useful method for data mining. One of the most important is its ability to preserve topology in the projection. Several measures can be used to quantify the goodness of the map in order to obtain the optimal projection, including the average quantization error and various topological errors. Much research has studied how topology preservation should be measured. One option is the topographic error, which considers the proportion of data vectors for which the first and second best matching units (BMUs) are not adjacent. In this work we present a study of the behaviour of the topographic error in different kinds of maps. We have found that this error penalizes rectangular maps, and we have studied the reasons why this happens. Finally, we suggest a new topological error that overcomes this deficiency of the topographic error.
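The topographic error described above is straightforward to compute; a minimal sketch (assuming a rectangular grid with 4-connectivity defining adjacency) is:

```python
import numpy as np

# Topographic error as described above: the fraction of data vectors
# whose best and second-best matching units (BMUs) are not neighbors on
# the map grid. `codebook` has one prototype per unit; `grid` holds each
# unit's (row, col) position; 4-connectivity is assumed here.

def topographic_error(data, codebook, grid):
    errors = 0
    for x in data:
        d = np.linalg.norm(codebook - x, axis=1)
        first, second = np.argsort(d)[:2]                 # BMU, second BMU
        if np.abs(grid[first] - grid[second]).sum() != 1:  # not adjacent
            errors += 1
    return errors / len(data)

# Example on a 6x4 rectangular map with random prototypes:
rows, cols = 6, 4
grid = np.array([(r, c) for r in range(rows) for c in range(cols)])
codebook = np.random.rand(rows * cols, 3)
print(topographic_error(np.random.rand(500, 3), codebook, grid))
```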

Stock Market Integration Measurement: Investigation of Malaysia and Singapore Stock Markets

This paper tests the level of integration of the Malaysian and Singaporean stock markets with the world market. The Kalman filter (KF) methodology is applied to the international capital asset pricing model (ICAPM), and the pricing errors estimated within the ICAPM framework are used as a measure of market integration or segmentation. The advantage of the KF technique is that it allows for time-varying coefficients in estimating the ICAPM and is hence able to capture the varying degree of market integration. Empirical results show clear evidence of a varying degree of market integration for both Malaysia and Singapore. Furthermore, the changes in the level of market integration are found to coincide with certain economic events that have taken place. The findings certainly provide evidence of the practicability of the KF technique for estimating stock market integration. Comparing the two markets, the trends of the market integration indices for Malaysia and Singapore look similar through time, but their magnitudes are notably different, with the Malaysian stock market showing a greater degree of market integration. Finally, the significant evidence of a varying degree of market integration shows that OLS is inappropriate for estimating the level of market integration.
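A minimal sketch of the time-varying-coefficient estimation: a random-walk beta in a one-factor ICAPM-style regression, filtered with a scalar Kalman filter. The noise variances are hypothetical, and the paper's specification may be richer:

```python
import numpy as np

# Minimal sketch of the time-varying-coefficient idea: a random-walk
# beta_t in r_t = beta_t * m_t + e_t, estimated with a scalar Kalman
# filter. Variances q and s2 are hypothetical choices.

def kalman_beta(r, m, q=1e-4, s2=1e-3):
    beta, P = 0.0, 1.0                    # diffuse-ish initial state
    betas, pricing_errors = [], []
    for rt, mt in zip(r, m):
        P = P + q                         # predict: beta random walk
        innov = rt - beta * mt            # pricing error (innovation)
        K = P * mt / (mt * mt * P + s2)   # Kalman gain
        beta = beta + K * innov           # update state
        P = (1.0 - K * mt) * P            # update variance
        betas.append(beta)
        pricing_errors.append(innov)
    return np.array(betas), np.array(pricing_errors)

# Synthetic check: integration (beta) rising over time is tracked.
rng = np.random.default_rng(0)
m = rng.normal(0.0, 0.05, 200)            # world-market returns
r = np.linspace(0.3, 1.0, 200) * m + rng.normal(0.0, 0.02, 200)
betas, errs = kalman_beta(r, m)
# Smaller |pricing error| over time is read as greater integration.
```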

Minimizing Target Localization Error Using a Multi-Robot System and Particle Filters

In recent years the number of applications of multi-robot systems (MRS) has been growing in various areas. In practice, however, their design is often difficult: algorithms are proposed for a theoretical setting and do not consider the errors and noise of real conditions, so they are not usable in a real environment. These errors are also clearly visible in the target localization task, where robots try to find and estimate the position of a target with their sensors. Target localization is possible with a single robot, but, as examined here, finding and localizing the target with a group of mobile robots estimates the target position more accurately and faster. The accuracy of the target position estimate is achieved by combining MRS cooperation with particle filtering. The advantage of using an MRS with particle filtering was tested on the task of fixed-target localization by a group of mobile robots.
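A minimal sketch of the cooperative estimation step: several robots at known poses contribute noisy range measurements to a fixed target, fused by a standard sampling-importance-resampling particle filter. The sensor model and noise levels are hypothetical stand-ins:

```python
import numpy as np

# Minimal sketch: three robots at known poses fuse noisy range
# measurements to a fixed target with a particle filter. The sensor
# model and noise levels are hypothetical stand-ins for the real MRS.

rng = np.random.default_rng(1)
target = np.array([4.0, 3.0])
robots = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0]])
sigma = 0.2                                        # range noise std dev

particles = rng.uniform(0.0, 10.0, size=(2000, 2))  # uniform prior
weights = np.full(len(particles), 1.0 / len(particles))

for robot in robots:                    # each robot contributes one range
    z = np.linalg.norm(target - robot) + rng.normal(0.0, sigma)
    d = np.linalg.norm(particles - robot, axis=1)
    weights *= np.exp(-0.5 * ((z - d) / sigma) ** 2)  # Gaussian likelihood
    weights /= weights.sum()
    idx = rng.choice(len(particles), len(particles), p=weights)  # resample
    particles = particles[idx] + rng.normal(0.0, 0.05, particles.shape)
    weights = np.full(len(particles), 1.0 / len(particles))

estimate = particles.mean(axis=0)       # fused target estimate
print(estimate, "vs true", target)      # error shrinks with each robot
```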

DRE - A Quality Metric for Component-Based Software Products

The overriding goal of software engineering is to provide a high-quality system, application, or product. To achieve this goal, software engineers must apply effective methods coupled with modern tools within the context of a mature software process [2]. In addition, it is essential to assure that high quality is actually realized. Although many quality measures can be collected at the project level, the most important measures are errors and defects. Deriving a quality measure for reusable components has proven to be a challenging task nowadays. The results obtained from this study are based on empirical evidence of reuse practices, as it emerged from the analysis of industrial projects. Both large and small companies, working in a variety of business domains and using object-oriented and procedural development approaches, contributed to this study. This paper proposes a quality metric that provides benefit at both the project and the process level, namely defect removal efficiency (DRE).
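The DRE metric itself is the standard project-level ratio from [2], computable directly from error and defect counts (the counts below are hypothetical):

```python
# Defect removal efficiency at the project level [2]:
# DRE = E / (E + D), where E = errors found before release and
# D = defects found after release. The counts below are hypothetical.

def dre(errors_before_release: int, defects_after_release: int) -> float:
    return errors_before_release / (errors_before_release + defects_after_release)

print(dre(90, 10))  # 0.9: 90% of problems were removed before delivery
```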

Evaluation of Optimum Performance of Lateral Intakes

In designing river intakes and diversion structures, it is paramount that the sediments entering the intake are minimized or, if possible, completely separated. Due to high water velocity, sediments can significantly damage hydraulic structures, especially when mechanical equipment such as pumps and turbines is used, which subsequently wastes water and electricity and incurs further costs. Therefore, it is prudent to investigate and analyze the performance of lateral intakes affected by sediment control structures. Laboratory experiments, despite their vast potential and benefits, can face certain limitations and challenges, including limitations in equipment and facilities, space constraints, equipment errors (such as inadequate precision or mal-operation), and human error. Research has shown that to achieve the ultimate goal of intake structure design, which is to design long-lasting and proficient structures, the best combination of sediment control structures (such as a sill and submerged vanes) should be determined along with the parameters that increase their performance (such as diversion angle and location). Cost, difficulty of execution, and environmental impacts should also be included in evaluating the optimal design, and the resulting solution can then be applied to similar problems in the future. Consequently, the model used to arrive at the optimal design requires a high level of accuracy and precision in order to avoid improper design and execution of projects, and the process of creating and executing the design should be as comprehensive and applicable as possible. It is therefore important that the influential parameters and vital criteria are fully understood and applied at all stages of choosing the optimal design. In this article, the parameters influencing the optimal performance of the intake, together with the advantages, disadvantages, and efficiency of a given design, are studied. A multi-criterion decision matrix is then utilized to choose the optimal model that can be used to determine the proper parameters in constructing the intake.
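The multi-criterion decision matrix step can be sketched as a weighted-sum ranking over normalized criteria; the alternatives, criteria, and weights below are hypothetical examples, not values from the study:

```python
import numpy as np

# Sketch of a multi-criterion decision matrix: score each design
# alternative on normalized criteria and rank by weighted sum. All
# alternatives, criteria, scores, and weights are hypothetical.

criteria = ["sediment control", "cost", "execution difficulty",
            "environmental impact"]
weights = np.array([0.4, 0.25, 0.2, 0.15])       # must sum to 1
benefit = np.array([True, False, False, False])  # rest are cost-type

# Rows: alternatives (e.g. sill only, vanes only, sill + vanes);
# cost-type columns hold relative levels where lower is better.
scores = np.array([[0.6, 0.8, 0.9, 0.7],
                   [0.7, 0.6, 0.7, 0.8],
                   [0.9, 0.4, 0.5, 0.6]])

norm = scores / scores.max(axis=0)               # benefit: higher is better
norm[:, ~benefit] = scores[:, ~benefit].min(axis=0) / scores[:, ~benefit]
ranking = (norm * weights).sum(axis=1)
print("best alternative:", ranking.argmax(), ranking)
```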

Calculation of Density for Refrigerant Mixtures in Subcritical Regions for Use in Buildings

Accurate and comprehensive thermodynamic properties of pure refrigerants and refrigerant mixtures are in demand by both producers and users of these materials. Information about thermodynamic properties is important initially to qualify potential candidate working fluids for refrigeration machinery; from a practical point of view, refrigerants and refrigerant mixtures are widely used as working fluids in many industrial applications, such as refrigerators, heat pumps, and power plants. The present work is devoted to evaluating seven cubic equations of state (EOS) in predicting the gas- and liquid-phase volumetric properties of nine ozone-safe refrigerants in both the supercritical and subcritical regions. The evaluations in the subcritical region show that the TWU and PR EOS are capable of predicting the PVT properties of R32 within 2% and of R22, R134a, R152a, and R143a within 1%, as well as those of R123, R124, and R125. The deviations of the TWU and PR EOS from literature data are 0.5% for R22, R32, R152a, R143a, and R125; 1% for R123, R134a, and R141b; and 2% for R124. Moreover, the SRK EOS predicts the PVT properties of R22, R125, and R123 to within the aforementioned errors. The remaining EOSs predict the volumetric properties of this class of fluids with higher errors, of at most 8%. In general, the results favor the TWU and PR EOS over the remaining EOSs in predicting the densities of all the mentioned refrigerants in both the supercritical and subcritical regions. Typically, such a refrigerant is known to offer advantages such as an ozone depletion potential of zero, a global warming potential of 140, and no toxicity.
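As an illustration of how one of the evaluated models is applied, the following sketch solves the Peng-Robinson (PR) cubic EOS for phase densities; the critical constants used are approximate literature values for R134a, and the other EOSs (TWU, SRK, etc.) differ mainly in their attraction terms:

```python
import numpy as np

# Sketch of one evaluated model: the Peng-Robinson (PR) cubic EOS solved
# for phase density. Constants below are approximate literature values
# for R134a (an assumption; verify before use).

R = 8.314462                     # J/(mol K)

def pr_density(T, P, Tc, Pc, omega, M):
    """(liquid, vapor) density in kg/m^3 from the PR EOS."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T)**2, b * P / (R * T)
    # PR EOS as a cubic in the compressibility factor Z
    roots = np.roots([1.0, B - 1.0, A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)])
    Z = np.sort(roots.real[np.abs(roots.imag) < 1e-9])
    rho = lambda z: P * M / (z * R * T)
    return rho(Z[0]), rho(Z[-1])  # smallest Z -> liquid, largest -> vapor

# R134a at 300 K and 0.5 MPa (subcritical):
print(pr_density(T=300.0, P=0.5e6, Tc=374.21, Pc=4.059e6,
                 omega=0.327, M=0.10203))
```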

A Contribution to 3D Modeling of Manufacturing Tolerance Optimization

The study of the defects generated on manufactured parts shows how difficult it is to keep parts in position during the machining process and to estimate these defects at the process planning stage. This work presents a contribution to the development of 3D models for the optimization of manufacturing tolerances. An experimental study allows the measurement of part-positioning defects, the determination of ε, and the choice of an optimal setup for the part. A 3D tolerancing approach based on the small displacements method permits the manufacturing errors to be determined upstream. A developed tool allows automatic generation of the tolerance intervals along the three axes.
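The core relation of the small displacements method can be stated compactly: a positioning defect is a torsor (small rotation vector omega and translation eps at a reference point O), and the displacement it induces at any part point P is d(P) = eps + omega x OP. A minimal sketch with hypothetical measured values:

```python
import numpy as np

# Core relation of the small displacements (torsor) method: a part's
# positioning defect is a small rotation vector omega plus a translation
# eps at the origin O, and the induced displacement at a point P is
# d(P) = eps + omega x OP. The defect values below are hypothetical.

omega = np.array([1e-4, -2e-4, 5e-5])   # small rotations (rad)
eps   = np.array([0.01, -0.02, 0.005])  # translation at O (mm)

def displacement(P):
    """Small displacement (mm) induced at point P (mm) by the torsor."""
    return eps + np.cross(omega, P)

P = np.array([50.0, 20.0, 10.0])        # a toleranced point on the part
print(displacement(P))                  # contribution to the 3D tolerance
```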

Optimization of Inverse Kinematics of a 3R Robotic Manipulator Using Genetic Algorithms

In this paper, the direct kinematic model of a multi-application, three-degrees-of-freedom industrial manipulator is developed using homogeneous transformation matrices and the Denavit-Hartenberg parameters. The inverse kinematic model is developed using the same method, and it is verified that near the workspace border the inverse kinematics presents considerable errors. A genetic algorithm is therefore implemented to optimize the model, greatly improving its efficiency.
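A sketch of the optimization step on a planar 3R stand-in: a simple genetic algorithm searches the joint angles that minimize the end-effector position error computed from the forward kinematics. Link lengths, operators, and GA settings are hypothetical, not the paper's:

```python
import numpy as np

# Sketch of GA-based inverse kinematics on a planar 3R stand-in: the
# fitness is the end-effector position error from the forward
# kinematics. Link lengths and GA settings are hypothetical.

rng = np.random.default_rng(0)
L = np.array([0.5, 0.4, 0.2])                  # link lengths (m)

def fk(q):                                     # planar 3R forward kinematics
    a = np.cumsum(q, axis=-1)                  # cumulative joint angles
    return np.stack([(L * np.cos(a)).sum(-1), (L * np.sin(a)).sum(-1)], -1)

def ga_ik(target, pop=200, gens=150, sigma=0.1):
    Q = rng.uniform(-np.pi, np.pi, (pop, 3))   # random initial population
    for _ in range(gens):
        err = np.linalg.norm(fk(Q) - target, axis=1)      # fitness
        elite = Q[np.argsort(err)[:pop // 4]]             # truncation selection
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        mask = rng.random((pop, 3)) < 0.5
        Q = np.where(mask, parents[:, 0], parents[:, 1])  # uniform crossover
        Q += rng.normal(0.0, sigma, Q.shape)              # Gaussian mutation
    err = np.linalg.norm(fk(Q) - target, axis=1)
    return Q[err.argmin()], err.min()

q, e = ga_ik(np.array([0.7, 0.4]))
print(q, e)   # residual error grows near the workspace border
```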

Analysis of MAC Protocols with Correlation Receiver for OCDMA Networks - Part II

In this paper an optical code-division multiple-access (O-CDMA) packet network, which offers inherent security in access networks, is considered. Two types of random access protocols are proposed for packet transmission: in protocol 1, all distinct codes are used, and in protocol 2, distinct codes as well as shifted versions of all these codes are used. O-CDMA network performance is analyzed for one-dimensional (1-D) optical orthogonal codes (OOCs) and two-dimensional (2-D) wavelength/time single-pulse-per-row (W/T SPR) codes. The main advantage of using 2-D codes instead of 1-D codes is the reduction of errors due to multiple-access interference among different users. A correlation receiver is considered in the analysis. Using an analytical model, we compute and compare the packet-success probability for 1-D and 2-D codes in an O-CDMA network, and the analysis shows improved performance with 2-D codes as compared to 1-D codes.
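For the 1-D OOC case, a classic chip-level multiple-access-interference analysis (in the style of Salehi's OOC results, assumed here rather than taken from this paper's model) gives bit and packet success probabilities as follows:

```python
from math import comb

# Hedged sketch of the classic 1-D OOC interference analysis (a standard
# result assumed here, not necessarily this paper's exact model): with
# code length F, weight w, and K simultaneous users, each interferer
# hits the desired code with probability q = w^2 / (2F), and a bit error
# occurs when interference reaches the threshold (set to w).

def ooc_bit_error_prob(F, w, K):
    q = w * w / (2.0 * F)
    return 0.5 * sum(comb(K - 1, i) * q**i * (1 - q)**(K - 1 - i)
                     for i in range(w, K))

def packet_success_prob(F, w, K, Lp):
    """Packet success assuming independent bit errors over Lp bits."""
    return (1.0 - ooc_bit_error_prob(F, w, K)) ** Lp

print(packet_success_prob(F=1000, w=5, K=10, Lp=1024))
```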

Computer-Based Systems for the Training of High-Speed Vessel Navigators and Engineers

With high-speed vessels getting ever more sophisticated, travelling at higher and higher speeds, and operating in areas of high maritime traffic density, training becomes of the highest priority to ensure that safety levels are maintained and risks are adequately mitigated. Training onboard the actual craft on the actual route still remains the most effective way for crews to gain experience. However, operational experience and incidents during the last 10 years demonstrate the need for supplementary training, whether in the area of simulation or of man-to-man and man/machine interaction. Training and familiarisation of the crew is the most important aspect of preventing incidents. The use of simulator-, computer-, and web-based training systems in conjunction with onboard training focusing on critical situations will improve man/machine interaction and thereby reduce the risk of accidents. Today, both ship simulator and bridge teamwork courses are becoming the norm in order to further improve emergency response and crisis management skills. One of the main causes of accidents is the human factor, and an efficient way to reduce human errors is to provide high-quality training to the personnel and to select the navigators carefully.

Keywords: CBT and WBT systems, human factors.