Abstract: This paper presents a new version of the SVM mixture algorithm initially proposed by Kwok for classification and regression problems. In both cases, a slight modification of the mixture model leads to a standard SVM training problem, guarantees the existence of an exact solution, and allows the direct use of well-known decomposition and working-set selection algorithms. Only the regression case is considered in this paper, but classification has been addressed in a very similar way. The method has been successfully applied to modeling engine pollutant emissions.
Abstract: The purpose of this study was to investigate the
effectiveness of an ICT training workshop for tutors of Allama Iqbal
Open University, Pakistan. The study was delimited to tutors of the
Multan region. The total sample comprised 100 tutors: all tutors
who participated in the ICT training workshop in the Multan region
were taken as the sample. A two-part questionnaire based on a
five-point rating scale was developed by the researcher. Part one
concerned the competency level of computer skills, distinguishing
five levels of competency, while part two contained items related to
training delivery, structure, and content. The questionnaire was
personally administered and collected by the researcher himself on
the last day of the workshop.
The collected data were analyzed using descriptive statistics.
The study found that the majority of the tutors strongly agreed
that the training enhanced their computer skills. The majority of
respondents considered themselves generally competent in the use of
computers. They also agreed that appropriate infrastructure and
technical support were available in the lab during the training
workshop. Moreover, it was found that the training imparted
knowledge of the pedagogy of using computers for distance education.
Abstract: This paper presents the implementation of an attitude controller for a small UAV using a field programmable gate array (FPGA). Due to the small size constraint, a miniature, compact, and computationally capable autopilot platform is needed for such systems. Moreover, a UAV autopilot has to deal with extremely adverse situations in the shortest possible time while accomplishing its mission. FPGAs have in the recent past established themselves as fast, parallel, real-time processing devices in a compact size. This work exploits this fact and implements different attitude controllers for a small UAV on an FPGA, using its parallel processing capabilities. The attitude controller is designed in the MATLAB/Simulink environment. The discrete version of this controller is implemented using pipelining followed by retiming, to reduce the critical path and thereby the clock period of the controller datapath. The pipelined, retimed, parallel PID controller is implemented using System Generator, an efficient rapid-prototyping and testing tool developed by Xilinx for FPGA implementation. The improved timing performance enables the controller to react promptly to any changes in the attitude of the UAV.
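The discrete PID law behind the controller described above can be sketched as follows; this is a plain Python sketch rather than the paper's FPGA datapath, and the gains and sample time are illustrative assumptions, not values from the paper:

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID controller: returns a step function mapping error -> command."""
    state = {"integral": 0.0, "prev_err": 0.0}

    def step(err):
        # Rectangular integration and backward-difference derivative
        state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv

    return step

# Hypothetical attitude-error step with illustrative gains
pid = make_pid(kp=1.0, ki=0.0, kd=0.0, dt=0.01)
u = pid(2.0)  # pure proportional action here: u == 2.0
```

In the FPGA implementation the three terms would be computed in parallel and the datapath pipelined; here they are simply summed sequentially.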
Abstract: The necessity of the ever-increasing use of distributed
data in computer networks is obvious to all. One technique performed
on distributed data to increase efficiency and reliability is data
replication. In this paper, after introducing this technique and its
advantages, we examine some dynamic data replication methods. We
examine their characteristics for several usage scenarios and then
propose some suggestions for their improvement.
Abstract: Existing image coding standards generally degrade at low bit-rates because of the underlying block-based Discrete Cosine Transform scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets cannot simultaneously possess all the properties that are essential for signal processing, such as orthogonality, short support, linear-phase symmetry, and a high order of approximation through vanishing moments. A new class of wavelets called multiwavelets, which possess more than one scaling function, overcomes this problem. This paper presents a new image coding scheme based on nonlinear approximation of multiwavelet coefficients along with multistage vector quantization. The performance of the proposed scheme is compared with the results obtained from scalar wavelets.
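Multistage vector quantization, mentioned above, encodes each block of coefficients in stages, with each stage quantizing the residual left by the previous one. A minimal two-stage sketch (the codebooks here are illustrative, not trained on any real coefficient data):

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Nearest-codeword index for each input vector (squared Euclidean distance)."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def msvq_encode(vectors, codebooks):
    """Multistage VQ: each stage quantizes the residual of the previous stage."""
    residual = vectors.astype(float)
    indices = []
    for cb in codebooks:
        idx = vq_encode(residual, cb)
        indices.append(idx)
        residual = residual - cb[idx]
    return indices, residual

cb1 = np.array([[0.0, 0.0], [8.0, 8.0]])   # coarse stage (illustrative)
cb2 = np.array([[0.0, 0.0], [1.0, 1.0]])   # refinement stage (illustrative)
idx, res = msvq_encode(np.array([[9.0, 9.0], [0.1, 0.0]]), [cb1, cb2])
```

The decoder reconstructs each vector as the sum of the selected codewords from all stages, which is what makes the multistage structure cheaper than a single large codebook.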
Abstract: The increasing demand for higher data rates in wireless communication systems has led to more effective and efficient use of all allocated frequency bands. In order to use the whole bandwidth at maximum efficiency, one needs RF power amplifiers with a high degree of linearity and memory-less performance. This is considered a major challenge for circuit designers. In this thesis, linearity and memory are studied and examined via the behavior of the intermodulation distortion (IMD). A major source of the in-band distortion can be shown to be influenced by the out-of-band impedances presented at either the input or the output of the device, especially those impedances terminating the low-frequency (IF) components. Thus, in order to regulate the in-band distortion, the out-of-band distortion must be controlled. These investigations are performed on a 12 W LDMOS device characterised at 2.1 GHz within a purpose-built, high-power measurement system.
Abstract: A new topology of unified power quality conditioner
(UPQC) is proposed for the improvement of different power quality
(PQ) problems in a three-phase four-wire (3P-4W) distribution
system. For neutral current mitigation, a star-hexagon transformer
is connected in shunt near the load along with a three-leg voltage
source inverter (VSI) based UPQC. For the mitigation of the source
neutral current, the use of passive elements is advantageous over
active compensation because of its ruggedness and less complex
control. In addition, connecting a star-hexagon transformer for
neutral current mitigation reduces the overall rating of the UPQC.
The performance of the proposed 3P-4W UPQC topology is evaluated for
power-factor correction, load balancing, neutral current mitigation,
and mitigation of voltage and current harmonics. A simple control
algorithm based on the Unit Vector Template (UVT) technique is used
as the UPQC control strategy for the mitigation of the different PQ
problems. In this control scheme, the current/voltage control is
applied to the fundamental supply currents/voltages instead of the
fast-changing APF currents/voltages, thereby reducing the
computational delay. Moreover, no extra control is required for
source neutral current compensation; hence the number of current
sensors is reduced. The performance of the proposed UPQC topology is
analyzed through simulation results using MATLAB software with its
Simulink and Power System Blockset toolboxes.
Abstract: Symbolic dynamics studies dynamical systems on the basis of the symbol sequences obtained for a suitable partition of the state space. This approach exploits the property that the system dynamics reduce to a shift operation in symbol space. This shift operator is a chaotic mapping. In this article we show that other chaotic mappings exist in the symbol space.
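As a concrete illustration of the shift property (our example, not one taken from the paper): for the doubling map x ↦ 2x mod 1 with the partition [0, 1/2) ↦ 0, [1/2, 1) ↦ 1, the symbol sequence of a point is its binary expansion, and applying the map corresponds exactly to the shift in symbol space:

```python
def symbol_sequence(x, n):
    """First n symbols of the doubling map x -> 2x mod 1 under the
    partition [0, 1/2) -> 0, [1/2, 1) -> 1 (the binary expansion of x)."""
    syms = []
    for _ in range(n):
        syms.append(0 if x < 0.5 else 1)
        x = (2.0 * x) % 1.0
    return syms

x = 0.3
full = symbol_sequence(x, 11)
shifted = symbol_sequence((2.0 * x) % 1.0, 10)
assert shifted == full[1:]  # the dynamics reduce to a shift in symbol space
```

The shift simply drops the first symbol; the article's observation is that further chaotic mappings besides this shift act on the same symbol space.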
Abstract: Interactions between input/output variables are a very common phenomenon encountered in the design of multi-loop controllers for interacting multivariable processes, and they can be a serious obstacle to achieving good overall performance of a multi-loop control system. To overcome this impediment, a decomposed dynamic interaction analysis is proposed: the multi-loop control system is decomposed into a set of n independent SISO systems with the corresponding effective open-loop transfer functions (EOTFs), in which the dynamic interactions are embedded explicitly. For each EOTF, a reduced model is independently formulated using the proposed reduction design strategy, and the paired multi-loop proportional-integral-derivative (PID) controller is then derived quite simply and straightforwardly using internal model control (IMC) theory. This design method can easily be implemented for various industrial processes because of its effectiveness. Several case studies are considered to demonstrate the superiority of the proposed method.
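The final IMC step is straightforward once each EOTF has been reduced to a simple model. As a sketch (using the standard IMC PI rule for a first-order-plus-dead-time model, which is an assumption on our part; the paper's exact reduced-model rules may differ):

```python
def imc_pi(K, tau, theta, lam):
    """Standard IMC-based PI tuning for the first-order-plus-dead-time model
    G(s) = K * exp(-theta*s) / (tau*s + 1), with IMC filter constant lam:
        Kc = tau / (K * (lam + theta)),  tau_I = tau.
    """
    kc = tau / (K * (lam + theta))
    ti = tau
    return kc, ti

# Illustrative reduced EOTF: K = 1, tau = 10, theta = 1, filter lam = 2
kc, ti = imc_pi(K=1.0, tau=10.0, theta=1.0, lam=2.0)
```

The single filter constant lam trades off speed against robustness, which is what makes the resulting multi-loop design "simple and straightforward" once the EOTF reduction has absorbed the loop interactions.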
Abstract: Large metal and concrete structures suffer from various kinds of deterioration, and accurate prediction of the remaining life is important. This paper describes two methods for its assessment. The first method, suitable for steel bridges and other constructions exposed to fatigue, monitors the loads and damage accumulation using information systems for the operation together with a finite element model of the construction. In addition to the operating load, the dead weight of the construction and thermal stresses can be included in the model. The second method is suitable for concrete bridges and other structures that suffer from carbonation and other degradation processes driven by diffusion. The diffusion constant, important for the prediction of future development, can be determined from the depth profile of pH, obtained by pH measurement at various depths. Comparison with measurements on real objects illustrates the suitability of both methods.
Abstract: Air conditioning systems of houses consume large
quantities of electricity. To reduce the energy consumed for air
conditioning, the use of evaporative cooling air conditioning, which
consumes less energy than air chillers, is becoming attractive.
However, the higher energy efficiency of evaporative cooling is
obviously not enough by itself to judge whether evaporative cooling
is economically competitive with other types of cooling systems. To
prove the higher energy efficiency and cost effectiveness of
evaporative cooling, a comparative analysis of various types of
cooling systems should be carried out. For this purpose, a
mathematical optimization model for each system should be composed
on the basis of a systems approach. In this paper, different types
of evaporative cooling-heating systems are discussed, and methods
for increasing their energy efficiency as well as for determining
their design parameters are developed. The mathematical optimization
models for each of them are composed, with the help of which the
least specific costs for each of them are revealed. The comparison
of specific costs showed that the most efficient and cost-effective
system is the "direct evaporating" system, if it is applicable for
the given climatic conditions. The next system, more universal and
applicable for many climatic conditions while providing the least
cost of heating and cooling, is the "direct evaporating" system.
Abstract: Data mining is a technique for extracting information
from data. It is the process of obtaining hidden information and
then turning it into qualified knowledge by statistical and
artificial intelligence techniques. One of its application areas is
medicine, where decision support systems for diagnosis are formed by
extracting meaningful information from given medical data. In this
study, a decision support system for the diagnosis of illness is
developed that makes use of data mining and three different
artificial intelligence classifier algorithms, namely the Multilayer
Perceptron, the Naive Bayes Classifier, and J48. The Pima Indians
dataset of the UCI Machine Learning Repository was used. This
dataset includes the urinary and blood test results of 768 patients;
these test results consist of 8 different feature vectors. The
classification results obtained were compared with those of previous
studies, and suggestions for future studies are presented.
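One of the three classifiers, the naive Bayes classifier, is simple enough to sketch from first principles. Everything below is an illustrative assumption: the synthetic two-feature data merely stands in for the eight Pima feature vectors, and no claim is made about the study's actual results:

```python
import numpy as np

def gnb_fit(X, y):
    """Per-class Gaussian parameters (mean, variance, prior) for naive Bayes."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def gnb_predict(params, X):
    """Pick the class maximizing Gaussian log-likelihood plus log-prior."""
    out = []
    for x in X:
        scores = {
            c: np.log(p) - 0.5 * np.sum(np.log(2 * np.pi * v) + (x - m) ** 2 / v)
            for c, (m, v, p) in params.items()
        }
        out.append(max(scores, key=scores.get))
    return np.array(out)

# Two well-separated synthetic classes standing in for the two diagnoses
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = gnb_fit(X, y)
acc = (gnb_predict(model, X) == y).mean()
```

The "naive" independence assumption is what lets each of the 8 features contribute a separate one-dimensional Gaussian term to the log-score.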
Abstract: Globalization, and the resulting ever-tighter competition among companies, has increased the importance of making well-timed decisions. Strategies that are flexible and adaptive to a changing market stand a greater chance of being effective in the long term. On the other hand, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Well-organized tools that bring past experience to bear on new cases therefore help in making proper managerial decisions. Case-based reasoning (CBR) is a means of solving a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbor (k-NN) is employed to provide suggestions for better decision making for a given product in the middle-of-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach with a genetic algorithm is employed to generate optimal feature subsets. A dataset from a department store, covering various products collected over two years, is used. A k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with a classical case-based reasoning algorithm that has no special process for feature selection, a CBR-PCA algorithm based on filter-approach feature selection, and an Artificial Neural Network. The results indicate that the predictive performance of the model is more effective in this specific case than that of the two CBR algorithms.
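The retrieval step of such a CBR system amounts to weighted k-NN over case feature vectors. A minimal sketch, in which the case base, the feature weights, and the choice of kept features are all hypothetical (in the full model the weights and feature subset would come from the genetic-algorithm wrapper):

```python
import numpy as np

def retrieve_cases(case_base, weights, query, k=3):
    """Indices of the k nearest past cases under a weighted Euclidean
    distance; the weights stand in for GA-selected feature importance."""
    d = np.sqrt((((case_base - query) ** 2) * weights).sum(axis=1))
    return np.argsort(d)[:k]

# Four hypothetical past cases with two features each
cases = np.array([[1.0, 0.0], [0.9, 0.1], [5.0, 5.0], [5.1, 4.9]])
weights = np.array([1.0, 1.0])  # illustrative: both features kept, equal weight
nearest = retrieve_cases(cases, weights, query=np.array([1.0, 0.1]), k=2)
# the two cases near (1, 0) are retrieved
```

The retrieved cases' solutions are then combined (here, by the group-decision weighting the abstract describes) to suggest a decision for the new product.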
Abstract: The success of IT projects concerning the implementation
of business application software depends strongly on the application
of efficient requirements management, in order to understand the
business requirements and to realize them in the IT. In fact,
however, the potential of requirements management is not fully
exploited by small and medium-sized enterprises (SMEs) of the IT
sector. To work out recommendations for action and, furthermore, a
possible solution allowing better exploitation of this potential, a
scientific research project examines which problems occur and from
which causes. At the same time, the storage of knowledge from
requirements management, and its later reuse, are important for
achieving sustainable improvements in the competitiveness of IT
SMEs. Requirements engineering is one of the most important topics
in product management for software, serving the goal of optimizing
the success of the software product.
Abstract: The gases generated in oil-filled transformers can be
used for the qualitative determination of incipient faults.
Dissolved Gas Analysis has been widely used by utilities throughout
the world as the primary diagnostic tool for transformer
maintenance. In this paper, various artificial intelligence
techniques that have been used by researchers in the past are
reviewed, some conclusions are drawn, and a sequential hybrid system
is proposed. The synergy of an ANN and a FIS can be a good solution
for reliably predicting faults, because one should not rely on a
single technology when dealing with real-life applications.
Abstract: The aim of this research is to determine how pre-service Turkish teachers perceive themselves in terms of problem-solving skills. Students attending the Department of Turkish Language Teaching of Gazi University Education Faculty in the 2005-2006 academic year constitute the study group (n = 270) of this research, in which a survey model was utilized. Data were obtained with the Problem Solving Inventory developed by Heppner & Peterson and a Personal Information Form. Within the settings of this research, the Cronbach alpha reliability coefficient of the scale was found to be .87. In addition, the reliability coefficient obtained by the split-half technique, which splits the odd- and even-numbered items of the scale, was found to be r = .81 (split-half reliability). The findings of the research revealed that pre-service Turkish teachers were sufficiently qualified in problem-solving skills, and statistical significance was found in favor of male candidates for the "gender" variable. For the "grade" variable, statistical significance was found in favor of 4th graders.
Abstract: Modeling of distributed systems allows us to represent
their whole functionality. A working system instance, however,
rarely fulfils the whole functionality represented by the model;
usually some parts of this functionality need to be accessible only
periodically. A reporting system based on the data warehouse concept
seems to be an intuitive example of a system in which some
functionality is required only from time to time. When analyzing the
enterprise risk associated with periodical changes of the system
functionality, we should consider not only the inaccessibility of
components (objects) but also of their functions (methods), and the
impact of such a situation on the system functionality from the
business point of view. In the paper we suggest that these risk
attributes should be estimated from risk attributes specified at the
requirements level (Use Cases in the UML model) on the basis of
information about the structure of the model (presented at other
levels of the UML model). We argue that it is desirable to consider
the influence of periodical changes in requirements on the
enterprise risk estimation. Finally, a proposed solution based on
the UML system model is presented.
Abstract: Trends in business intelligence, e-commerce and remote
access make it necessary and practical to store data in different
ways on multiple systems with different operating systems. As
businesses evolve and grow, they require efficient computerized
solutions to perform data updates and to access data from diverse
enterprise business applications. The objective of this paper is to
demonstrate the capability of DTS [1] as a database solution for
automatic data transfer and update in solving a business problem.
The DTS package described here was developed for the sale of a
variety of plants and eventually expanded into a commercial supply
and landscaping business. Dimensional data modeling is used in the
DTS package to extract, transform, and load data from heterogeneous
database systems such as MySQL, Microsoft Access, and Oracle,
consolidating it into a data mart residing in SQL Server. The data
transfer from the various databases is scheduled to run
automatically every quarter of the year to support efficient sales
analysis. DTS is therefore an attractive solution for automatic data
transfer and update that meets today's business needs.
Abstract: The fuzzy C-means clustering algorithm (FCM) is a
method frequently used in pattern recognition. It has the advantage
of giving good modeling results in many cases, although it is not
capable of specifying the number of clusters by itself. In the FCM
algorithm, most researchers fix the weighting exponent (m) at the
conventional value of 2, which might not be appropriate for all
applications. Consequently, the main objective of this paper is to
use the subtractive clustering algorithm to provide the optimal
number of clusters needed by the FCM algorithm, by optimizing the
parameters of the subtractive clustering algorithm with an iterative
search approach, and then to find an optimal weighting exponent (m)
for the FCM algorithm. To obtain an optimal number of clusters, the
iterative search approach is used to find the optimal single-output
Sugeno-type Fuzzy Inference System (FIS) model by optimizing the
parameters of the subtractive clustering algorithm that give the
minimum least-squares error between the actual data and the Sugeno
fuzzy model. Once the number of clusters is optimized, two
approaches are proposed to optimize the weighting exponent (m) in
the FCM algorithm, namely the iterative search approach and genetic
algorithms. The approach is tested on data generated from the
original function, and optimal fuzzy models are obtained with
minimum error between the real data and the obtained fuzzy models.
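The alternating FCM updates that the weighting exponent m enters into can be sketched as follows. The two-cluster toy data, the simple "spread" initialization, and m = 2 are our illustrative assumptions; the paper's point is precisely that the cluster count and m should instead be optimized:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100):
    """Fuzzy C-means: alternate the membership and centre updates
    u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)) and
    v_i  = sum_k u_ik^m x_k / sum_k u_ik^m."""
    # Simple deterministic init: spread initial centres over the data
    centers = X[:: max(1, len(X) // c)][:c].astype(float)
    for _ in range(iters):
        # Distances from each centre to each point, guarded against zero
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)          # fuzzy memberships, columns sum to 1
        Um = U ** m                        # exponent m controls fuzziness
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return centers, U

# Toy data: two well-separated groups of three points each
X = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2],
              [10.0, 10.0], [10.2, 10.0], [10.0, 10.2]])
centers, U = fcm(X, c=2)
# one centre lands near (0, 0), the other near (10, 10)
```

Larger m spreads the memberships toward 1/c (fuzzier partitions), while m close to 1 approaches hard k-means assignments, which is why the choice of m matters as the abstract argues.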
Abstract: Flow through micro- and mini-channels requires a
relatively high driving pressure due to the large fluid pressure
drop through these channels. Consequently, the forces acting on the
walls of the channel due to the fluid pressure are also large. These
forces set up displacement fields in the solid substrate containing
the channels. If the movement of the substrate is constrained at
some points, stress fields are established in the substrate. On the
other hand, if the deformation of the channel shape is sufficiently
large, its effect on the fluid flow becomes important and must be
calculated. Such coupled fluid-solid systems form a class of
problems known as fluid-structure interactions. In the present work
a co-located finite volume discretization procedure on unstructured
meshes is described for solving fluid-structure interaction
problems. A linear elastic solid is assumed, for which the effect of
the channel deformation on the flow is neglected. Thus the governing
equations for the fluid and the solid are decoupled and are solved
separately. The procedure is validated by solving two benchmark
problems, one from fluid mechanics and another from solid mechanics.
A fluid-structure interaction problem of flow through a U-shaped
channel embedded in a plate is then solved.