Mining Educational Data to Analyze Student Motivation Behavior

The purpose of this research is to discover knowledge for analyzing student motivation behavior in e-Learning using data mining techniques, in the case of the Information Technology for Communication and Learning course at Suan Sunandha Rajabhat University. The data mining techniques applied in this research include association rules and classification. The results show that data mining techniques can indicate the important variables that influence student motivation behavior in e-Learning.
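
As a rough, hypothetical sketch of the two techniques named above (the activity flags, data and rule are made up and are not the authors' actual pipeline), association rules and a classifier could be applied to e-Learning activity logs as follows:

```python
# Hypothetical sketch: association rules and classification on made-up
# e-Learning activity flags (one row per student).  Illustrative only.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

logs = pd.DataFrame({
    "frequent_login":  [1, 1, 0, 1, 0, 1, 0, 1],
    "forum_posts":     [1, 0, 0, 1, 0, 1, 0, 0],
    "quiz_attempts":   [1, 1, 0, 1, 1, 1, 0, 1],
    "high_motivation": [1, 1, 0, 1, 0, 1, 0, 1],
})

# Association rule "frequent_login & quiz_attempts -> high_motivation":
antecedent = (logs["frequent_login"] == 1) & (logs["quiz_attempts"] == 1)
rule = antecedent & (logs["high_motivation"] == 1)
support = rule.mean()                        # fraction of students matching the rule
confidence = rule.sum() / antecedent.sum()   # P(high_motivation | antecedent)
print(f"support={support:.2f}, confidence={confidence:.2f}")

# Classification: a decision tree ranks which activity variables influence motivation.
X, y = logs.drop(columns="high_motivation"), logs["high_motivation"]
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(dict(zip(X.columns, clf.feature_importances_)))
```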

The Performance Analysis of CSS-based Communication Systems in the Jamming Environment

Due to its capability to resist jamming signals, the chirp spread spectrum (CSS) technique has attracted much attention in the area of wireless communications. However, there has been little rigorous analysis of the performance of CSS communication systems in jamming environments. In this paper, we present analytic results on the performance of a CSS system by deriving symbol error rate (SER) expressions for a CSS M-ary phase shift keying (MPSK) system in the presence of broadband and tone jamming signals, respectively. The numerical results show that the empirical SER closely agrees with the analytic results.
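
For reference only, and not the paper's derivation (which accounts for the CSS despreading and the specific broadband and tone jammer models), the classical coherent MPSK symbol error rate over an additive white Gaussian noise channel is well approximated by

$$ P_s \approx 2\,Q\!\left(\sqrt{2\gamma_s}\,\sin\frac{\pi}{M}\right), \qquad Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^{\infty} e^{-t^2/2}\,\mathrm{d}t, $$

where $\gamma_s$ is the symbol signal-to-noise ratio. As a first-order, hedged approximation for broadband jamming, $\gamma_s$ may be replaced by an effective ratio $E_s/(N_0 + J_0/G_p)$, with the jammer's power spectral density $J_0$ reduced by the CSS processing gain $G_p$; tone jamming generally requires a separate treatment, as in the paper.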

Performance Analysis of Evolutionary ANN for Output Prediction of a Grid-Connected Photovoltaic System

This paper presents a performance analysis of the Evolutionary Programming-Artificial Neural Network (EPANN) based technique to optimize the architecture and training parameters of a one-hidden-layer feedforward ANN model for the prediction of energy output from a grid-connected photovoltaic system. The ANN uses solar radiation and ambient temperature as its inputs, while the output is the total watt-hour energy produced by the grid-connected PV system. EP is used to optimize the regression performance of the ANN model by determining the optimum number of nodes in the hidden layer as well as the optimal momentum rate and learning rate for training. The EPANN model is tested using two types of transfer function for the hidden layer, namely the tangent sigmoid and the logarithmic sigmoid. The best transfer function, neural topology and learning parameters were selected based on the highest regression performance obtained during ANN training and testing. It is observed that the best transfer function configuration for the prediction model is [logarithmic sigmoid, purely linear].
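
A minimal sketch of this kind of evolutionary search, using scikit-learn's MLPRegressor as a stand-in for the one-hidden-layer ANN (the synthetic data, parameter ranges and mutation scheme are illustrative assumptions, not the paper's EPANN implementation):

```python
# Sketch: evolutionary programming over (hidden nodes, learning rate, momentum,
# transfer function) for a one-hidden-layer regressor.  Illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform([0.0, 15.0], [1000.0, 40.0], size=(300, 2))   # radiation, temperature
y = 0.15 * X[:, 0] - 0.5 * (X[:, 1] - 25.0) ** 2              # synthetic energy output
X = (X - X.mean(axis=0)) / X.std(axis=0)                      # normalize inputs
y = (y - y.mean()) / y.std()                                  # normalize target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def fitness(ind):
    nodes, lr, mom, act = ind
    model = MLPRegressor(hidden_layer_sizes=(int(nodes),), activation=act,
                         solver="sgd", learning_rate_init=lr, momentum=mom,
                         max_iter=300, random_state=0)
    model.fit(X_tr, y_tr)
    return model.score(X_te, y_te)            # regression R^2 on held-out data

def mutate(ind):
    nodes, lr, mom, act = ind
    return (max(2, int(nodes + rng.integers(-2, 3))),
            float(np.clip(lr * np.exp(0.2 * rng.standard_normal()), 1e-4, 0.2)),
            float(np.clip(mom + 0.05 * rng.standard_normal(), 0.1, 0.99)),
            rng.choice(["tanh", "logistic"]))  # tangent / logarithmic sigmoid

population = [(rng.integers(2, 20), 0.01, 0.9, "tanh") for _ in range(6)]
for generation in range(5):                    # EP: mutate, then keep the fittest
    offspring = [mutate(ind) for ind in population]
    population = sorted(population + offspring, key=fitness, reverse=True)[:6]
print("best individual:", population[0])
```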

Electronic Markets Have Weakened the "Trade-off between Reach and Richness" on the Internet

This paper has two main ideas. First, it describes Evans and Wurster's concept of "the trade-off between reach and richness" and relates it to the impact of technology on virtual markets. Evans and Wurster see the transfer of information as a trade-off between richness and reach. Reach refers to the number of people who share particular information, while richness is a more complex concept combining bandwidth, customization, interactivity, reliability, security and currency. Traditional shopping limits the number of shops a shopper is able to visit due to time and other cost constraints; the time spent traveling consequently leaves the shopper with less time to evaluate the product. The paper concludes that although the Web provides reach, offering richness and the sense of community required for creating and sustaining relationships with potential clients could be difficult.

An Energy Efficient Cluster Formation Protocol with Low Latency in Wireless Sensor Networks

Data gathering is an essential operation in wireless sensor network applications, so it requires energy-efficient techniques to increase the lifetime of the network. Clustering is also an effective technique for improving the energy efficiency and network lifetime of wireless sensor networks. In this paper, an energy-efficient cluster formation protocol is proposed with the objective of achieving low energy dissipation and latency without sacrificing application-specific quality. The objective is achieved by applying randomized, adaptive, self-configuring cluster formation and localized control for data transfers. It involves application-specific data processing, such as data aggregation or compression. The cluster formation algorithm allows each node to make independent decisions, so as to generate good clusters in the end. Simulation results show that the proposed protocol requires minimal energy and latency for cluster formation, thereby reducing the overhead of the protocol.
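
The randomized, self-configuring election step can be illustrated with a LEACH-style threshold rule; the sketch below is a generic example of this class of protocols, not the exact protocol proposed in the paper.

```python
# Sketch of LEACH-style randomized cluster-head election: each node decides
# independently, with a probability tuned so that roughly a fraction p of the
# nodes become cluster heads each round.  Illustrative, not the paper's protocol.
import random

P_CH = 0.05              # desired fraction of cluster heads per round
EPOCH = round(1 / P_CH)  # a node may serve as head once per epoch of rounds

def elects_itself(node, rnd):
    """Return True if `node` elects itself cluster head in round `rnd`."""
    if node["last_head_round"] is not None and rnd - node["last_head_round"] < EPOCH:
        return False                         # served recently: sit this epoch out
    threshold = P_CH / (1 - P_CH * (rnd % EPOCH))
    if random.random() < threshold:
        node["last_head_round"] = rnd
        return True
    return False

nodes = [{"id": i, "last_head_round": None} for i in range(100)]
for rnd in range(3):
    heads = [n["id"] for n in nodes if elects_itself(n, rnd)]
    print(f"round {rnd}: cluster heads = {heads}")
    # Remaining nodes would join the nearest head; heads aggregate or compress
    # member data locally before forwarding it to the sink.
```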

Feature-Based Machining Using Macros

This paper presents on-going research work on the implementation of feature-based machining via macro programming. Repetitive machining features such as holes, slots and pockets can readily be encapsulated in macros. Each macro consists of methods for how to machine the shape defined by the feature. The macro programming technique comprises a main program and subprograms. The main program allows the user to select several subprograms that contain features and define their important parameters. With macros, complex machining routines can be implemented easily and no post-processor is required. A case study on the machining of a part comprising planar face, hole and pocket features using the macro programming technique was carried out. It is envisaged that the macro programming technique can be extended to other feature-based machining fields such as the newly developed STEP-NC domain.
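
As a simplified illustration of the idea, the sketch below encapsulates a hole feature as a parameterized routine that emits G-code. The parameters, part and cycle format (a simplified Fanuc-style G81 drilling cycle) are hypothetical; a real implementation would depend on the controller's macro dialect and is not taken from the paper.

```python
# Sketch: a repetitive "hole" machining feature wrapped as a parameterized macro
# that emits G-code.  Simplified, illustrative output only.
def hole_macro(x, y, depth, retract=2.0, feed=100.0):
    """Return G-code lines that drill one hole at (x, y) to the given depth."""
    return [
        f"G00 X{x:.3f} Y{y:.3f}",                              # rapid to hole position
        f"G81 Z{-abs(depth):.3f} R{retract:.3f} F{feed:.1f}",  # drilling canned cycle
        "G80",                                                 # cancel canned cycle
    ]

def main_program(features):
    """Main program: the user selects feature 'subprograms' and their parameters."""
    lines = ["%", "O1000 (FEATURE-BASED PART)", "G90 G21"]     # absolute, metric
    for feature, params in features:
        lines += feature(**params)
    lines += ["M30", "%"]
    return "\n".join(lines)

# Hypothetical part: three holes in a row.
part = [(hole_macro, {"x": 10.0, "y": 10.0, "depth": 5.0}),
        (hole_macro, {"x": 30.0, "y": 10.0, "depth": 5.0}),
        (hole_macro, {"x": 50.0, "y": 10.0, "depth": 5.0})]
print(main_program(part))
```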

K-Means for Spherical Clusters with Large Variance in Sizes

Data clustering is an important data exploration technique with many applications in data mining. The k-means algorithm is well known for its efficiency in clustering large data sets. However, this algorithm is suitable only for spherical clusters of similar sizes and densities. The quality of the resulting clusters decreases when the data set contains spherical clusters with large variance in sizes. In this paper, we introduce a competent procedure to overcome this problem. The proposed method is based on shifting the center of the large cluster toward the small cluster and recomputing the membership of the small cluster's points. The experimental results reveal that the proposed algorithm produces satisfactory results.
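
A minimal sketch of the center-shifting idea under illustrative assumptions (two clusters of very different sizes; the shift fraction and reassignment rule below are my reading of the description, not values from the paper):

```python
# Sketch: after a standard k-means run, shift the large cluster's center toward
# the small cluster and recompute the membership of the small cluster's points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
big   = rng.normal([0.0, 0.0], 1.0, size=(950, 2))   # large spherical cluster
small = rng.normal([4.0, 0.0], 1.0, size=(50, 2))    # small spherical cluster
X = np.vstack([big, small])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_.copy()
centers = km.cluster_centers_.copy()

sizes = np.bincount(labels, minlength=2)
large, small_c = int(np.argmax(sizes)), int(np.argmin(sizes))

# Shift the large cluster's center toward the small one (fraction is illustrative).
centers[large] += 0.3 * (centers[small_c] - centers[large])

# Recompute membership of the points currently assigned to the small cluster:
# those now closer to the shifted large center move back to the large cluster.
small_pts = np.where(labels == small_c)[0]
d_small = np.linalg.norm(X[small_pts] - centers[small_c], axis=1)
d_large = np.linalg.norm(X[small_pts] - centers[large], axis=1)
labels[small_pts[d_large < d_small]] = large

print("cluster sizes before:", sizes, "after:", np.bincount(labels, minlength=2))
```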

Bifurcation Analysis of Horizontal Platform System

The horizontal platform system (HPS) is widely applied in offshore and earthquake engineering, but it is difficult and time-consuming to regulate. In order to understand the nonlinear dynamic behavior of the HPS and reduce the cost of using it, this paper employs the differential transformation method to study the bifurcation behavior of the HPS. The numerical results reveal a complex dynamic behavior comprising periodic, sub-harmonic, and chaotic responses. Furthermore, the results reveal the changes which take place in the dynamic behavior of the HPS as the external torque is increased. Therefore, the proposed method provides an effective means of gaining insight into the nonlinear dynamics of horizontal platform systems.

Technology Readiness Index (TRI) among USM Distance Education Students According to Age

This paper reports the findings of research conducted to evaluate the ownership and usage of technology devices among Distance Education students according to their age. The research involved 45 Distance Education students of Universiti Sains Malaysia (DEUSM) as its respondents. Data were collected through a questionnaire developed by the researchers based on a literature review. The data were analyzed to determine the frequencies of respondents' agreement regarding the ownership and use of technology devices. The findings show that all respondents own a mobile phone and the majority report that they use it on a regular basis. Students in the 30-39 age group have the highest ownership of technology devices.

Lunar Rover Virtual Simulation System with Autonomous Navigation

This paper presents a virtual simulation system based on a fully digital lunar terrain, integrated with a kinematics and dynamics module as well as an autonomous navigation simulation module. The system simulation models are established. Enabling technologies, such as the digital lunar surface module, kinematics and dynamics simulation, and autonomous navigation, are investigated. A prototype system for lunar rover locomotion simulation is developed based on these technologies. Autonomous navigation is a key technology in lunar rover systems, but it is rarely included in virtual simulation systems. An autonomous navigation simulation module has been integrated into this prototype system; the simulation results show that the synthetic simulation and visual analysis system is established and can provide efficient support for research on the autonomous navigation of lunar rovers.

Soft Computing based Retrieval System for Medical Applications

With increasing data in medical databases, medical data retrieval is growing in popularity. Some of this analysis includes inducing propositional rules from databases using various soft computing techniques and then using these rules in an expert system. Diagnostic rules and information on features are extracted from clinical databases on diseases of congenital anomaly. This paper explains the latest soft computing techniques and some of the adaptive techniques that encompass an extensive group of methods applied in the medical domain and used for the discovery of data dependencies, the importance of features, patterns in sample data, and feature space dimensionality reduction. These approaches pave the way for new and interesting avenues of research in medical imaging and represent an important challenge for researchers.
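
A compact illustration of rule induction and feature ranking of the kind mentioned above; the clinical feature names, data and diagnostic label are entirely hypothetical.

```python
# Sketch: inducing propositional rules and ranking feature importance from a
# hypothetical clinical table, as one of the soft-computing analyses mentioned.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 200
features = ["maternal_age", "gestation_weeks", "marker_level"]
X = np.column_stack([rng.integers(18, 45, n),       # hypothetical clinical features
                     rng.integers(30, 42, n),
                     rng.normal(1.0, 0.3, n)])
y = ((X[:, 0] > 38) & (X[:, 2] > 1.2)).astype(int)  # hypothetical diagnostic label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Propositional "if-then" rules read off the tree, usable in an expert system.
print(export_text(tree, feature_names=features))

# Importance of features: a simple ranking usable for dimensionality reduction.
for name, score in sorted(zip(features, tree.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```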

Non-Invasive Technology on a Classroom Chair for Detection of Emotions Used for the Personalization of Learning Resources

Emotions are related to learning processes, and physiological signals can be used to detect them for the personalization of learning resources and to control the pace of instruction. A model of relevant emotions has been developed, in which specific combinations of emotions and cognitive processes are connected and integrated with the concept of 'flow', in order to improve learning. The cardiac pulse is a reliable signal that carries useful information about the subject's emotional condition; it is detected using a classroom chair adapted with a non-invasive EMFi sensor and an acquisition system that generates a ballistocardiogram (BCG). The signal is processed by an algorithm to obtain characteristics that match a specific emotional condition. The complete chair system is presented in this work, along with a framework for the personalization of learning resources.
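
A minimal sketch of the signal-processing step: band-pass filtering a BCG and extracting beat-to-beat features. The sampling rate, band edges, synthetic signal and feature choices are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: band-pass filter a ballistocardiogram (BCG) and extract heart-rate /
# variability features that a classifier could map to an emotional condition.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
bcg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(t.size)  # fake BCG, ~72 bpm

# Band-pass around the cardiac component (assumed 0.7-10 Hz band).
b, a = butter(4, [0.7, 10.0], btype="bandpass", fs=fs)
clean = filtfilt(b, a, bcg)

# Beat detection and simple features (mean heart rate, RMSSD-style variability).
peaks, _ = find_peaks(clean, distance=int(0.4 * fs), prominence=0.5)
ibi = np.diff(peaks) / fs                    # inter-beat intervals in seconds
features = {"hr_bpm": 60.0 / ibi.mean(),
            "rmssd_ms": 1000.0 * np.sqrt(np.mean(np.diff(ibi) ** 2))}
print(features)                              # inputs for emotion classification
```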

On Asymptotic Laws and Transfer Processes Enhancement in Complex Turbulent Flows

This lecture presents significant advances in understanding the mechanism of transfer processes in turbulent separated flows. Based upon experimental data suggesting the governing role of the local pressure gradient generated in the immediate vicinity of the wall in separated flow, as a result of intense instantaneous accelerations induced by large-scale vortex flow structures, similarity laws for mean velocity and temperature, spectral characteristics, and a heat and mass transfer law for turbulent separated flows have been developed. These laws are confirmed by available experimental data. The results obtained were employed for the analysis of heat and mass transfer in some very complex processes occurring in technological applications, such as impinging jets, heat transfer of cylinders in cross flow and in tube banks, and packed beds, where the processes manifest distinct properties that allow them to be classified as turbulent separated flows. Many observed facts are explained here for the first time.

A Multiagent System for Distributed Systems Management

The demand for autonomous resource management in distributed systems has increased in recent years. Distributed systems require an efficient and powerful communication mechanism between applications running on different hosts and networks. The use of mobile agent technology to distribute and delegate management tasks promises to overcome the scalability and flexibility limitations of the currently used centralized management approach. This work proposes a multiagent system that adopts mobile agents as a technology for task distribution, results collection, and management of resources in large-scale distributed systems. A new mobile agent-based approach for collecting results from distributed system elements is presented. Artificial intelligence techniques based on intelligent agents give the system proactive behavior. The presented results are based on a design example of an application operating in a mobile environment.
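
A purely conceptual sketch of the mobile-agent result-collection pattern: in-process objects stand in for agents migrating between hosts, and all names and structure are illustrative, not the system described in the paper.

```python
# Conceptual sketch: a "mobile" agent carries its itinerary and partial results
# from host to host, then returns the aggregated report to the manager.
from dataclasses import dataclass, field

@dataclass
class ManagedHost:
    name: str
    def report_resources(self):
        # In a real system this would query the local host's resources.
        return {"host": self.name, "cpu_load": 0.42, "free_mem_mb": 2048}

@dataclass
class CollectorAgent:
    itinerary: list
    results: list = field(default_factory=list)

    def migrate(self):
        for host in self.itinerary:          # "move" to each host in turn
            self.results.append(host.report_resources())
        return self.results                  # return to the manager with results

hosts = [ManagedHost(f"node-{i}") for i in range(3)]
agent = CollectorAgent(itinerary=hosts)
for record in agent.migrate():
    print(record)
```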

The Relevance of Data Warehousing and Data Mining in the Field of Evidence-based Medicine to Support Healthcare Decision Making

Evidence-based medicine is a new direction in modern healthcare. Its task is to prevent, diagnose and treat diseases using medical evidence. Medical data about a large patient population is analyzed to perform healthcare management and medical research. In order to obtain the best evidence for a given disease, external clinical expertise as well as internal clinical experience must be available to healthcare practitioners at the right time and in the right manner. External evidence-based knowledge cannot be applied directly to the patient without adjusting it to the patient's health condition. We propose a data warehouse-based approach as a suitable solution for the integration of external evidence-based data sources into the existing clinical information system, together with data mining techniques for finding an appropriate therapy for a given patient and a given disease. Through the integration of data warehousing, OLAP and data mining techniques in the healthcare area, an easy-to-use decision support platform, which supports the decision-making process of caregivers and clinical managers, is built. We present three case studies, which show that a clinical data warehouse that facilitates evidence-based medicine is a reliable, powerful and user-friendly platform for strategic decision making, which has great relevance for the practice and acceptance of evidence-based medicine.
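
As a small illustration of the OLAP-style roll-up such a platform could serve to decision makers (the fact table layout and figures are hypothetical, not from the case studies):

```python
# Sketch: an OLAP-style roll-up over a hypothetical clinical fact table,
# summarizing therapy outcomes and cost by disease and therapy.
import pandas as pd

facts = pd.DataFrame({
    "disease":  ["diabetes", "diabetes", "diabetes", "hypertension", "hypertension"],
    "therapy":  ["A", "A", "B", "A", "B"],
    "improved": [1, 0, 1, 1, 1],
    "cost":     [120, 110, 200, 90, 150],
})

summary = facts.groupby(["disease", "therapy"]).agg(
    improvement_rate=("improved", "mean"),
    avg_cost=("cost", "mean"),
)
print(summary.unstack("therapy"))   # disease x therapy "cube" of outcome and cost
```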

Decision Algorithm for Smart Airbag Deployment Safety Issues

Airbag deployment has been known to be responsible for deaths, incidental injuries and broken bones resulting from low crash severity and wrong deployment decisions. Therefore, authorities and industry have been looking for more innovative and intelligent products for future enhancements of vehicle safety systems (VSSs). Although VSS technologies have advanced considerably, they still face challenges such as how to avoid unnecessary and untimely airbag deployments that can be hazardous and fatal. Currently, most existing airbag systems deploy without regard to occupant size and position. As such, this paper focuses on occupant and crash sensing performance for frontal collisions in the new breed of so-called smart airbag systems. It provides a thorough discussion of occupancy detection, occupant size classification, occupant off-position detection to determine the safe distance zone for airbag deployment, crash-severity analysis and airbag decision algorithms via computer modeling. The proposed system model consists of three main modules, namely occupant sensing, crash severity analysis and decision fusion. The occupant sensing module utilizes a weight sensor to determine occupancy, classify the occupant size, and detect the occupant off-position condition in order to compute a safe distance for airbag deployment. The crash severity analysis module is used to generate relevant information pertinent to the airbag deployment decision. Outputs from these two modules are fused in the decision module for correct and efficient airbag deployment action. The computer modeling work is carried out using the Simulink, Stateflow, SimMechanics and Virtual Reality toolboxes.
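
A simplified sketch of the three-module fusion logic described above; the thresholds, size classes and actions are illustrative assumptions, not calibrated values from the paper's Simulink/Stateflow model.

```python
# Sketch of the fusion of occupant sensing (weight sensor), crash severity
# analysis and decision making.  All thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class OccupantState:
    weight_kg: float          # from the seat weight sensor
    distance_cm: float        # occupant-to-airbag distance (off-position check)

def classify_occupant(weight_kg):
    if weight_kg < 15:
        return "empty_or_child_seat"
    if weight_kg < 45:
        return "child"
    return "adult"

def airbag_decision(occupant, crash_severity_g, safe_distance_cm=20.0):
    """Fuse occupant sensing and crash severity into a deployment action."""
    size = classify_occupant(occupant.weight_kg)
    if crash_severity_g < 12.0:                  # low-severity crash: no deployment
        return "suppress"
    if size == "empty_or_child_seat":
        return "suppress"
    if occupant.distance_cm < safe_distance_cm:  # occupant out of position
        return "suppress" if size == "child" else "deploy_low_power"
    return "deploy_full_power"

print(airbag_decision(OccupantState(weight_kg=70, distance_cm=35), crash_severity_g=25))
print(airbag_decision(OccupantState(weight_kg=30, distance_cm=10), crash_severity_g=25))
```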

Piecewise Interpolation Filter for Effective Processing of Large Signal Sets

Suppose KY and KX are large sets of observed and reference signals, respectively, each containing N signals. Is it possible to construct a filter F : KY → KX that requires a priori information on only a few signals, p ≪ N, from KX but performs better than the known filters based on a priori information on every reference signal from KX? It is shown that a positive answer is achievable under quite unrestrictive assumptions. The device behind the proposed method is a special extension of the piecewise linear interpolation technique to the case of random signal sets. The proposed technique provides a single filter to process any signal from the arbitrarily large signal set. The filter is determined in terms of pseudo-inverse matrices, so that it always exists.
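
A small numerical sketch of the underlying idea that a filter built from pseudo-inverse matrices and only p ≪ N reference/observed pairs can process the whole set. The single-piece linear form and the toy signal model below are illustrative simplifications; the paper's construction is a piecewise extension of this.

```python
# Sketch: build one linear filter F from p training pairs via the Moore-Penrose
# pseudo-inverse, then apply it to every signal in a much larger set.
import numpy as np

rng = np.random.default_rng(3)
n, r, p, N = 64, 5, 8, 1000          # signal length, model rank, training pairs, set size

B = rng.standard_normal((n, r))      # toy random-signal model: x = B @ coefficients
H = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # unknown degradation (toy)

def draw(m):
    """Draw m reference signals and their noisy observed versions."""
    X = B @ rng.standard_normal((r, m))
    Y = H @ X + 0.01 * rng.standard_normal((n, m))
    return X, Y

X_tr, Y_tr = draw(p)                              # a priori info on only p << N signals
F = X_tr @ np.linalg.pinv(Y_tr)                   # filter via pseudo-inverse; always exists

X_all, Y_all = draw(N)                            # arbitrarily large signal set
X_hat = F @ Y_all                                 # the same single filter for every signal
print("relative error:", np.linalg.norm(X_hat - X_all) / np.linalg.norm(X_all))
```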

Service Failure and Recovery in Information Technology Services

It is important to retain customer satisfaction in information technology services. When a service failure occurs, companies need to take service recovery action to restore customer satisfaction. Although companies cannot avoid all problems and complaints, they should try to make up for them. Therefore, service failure and service recovery have become an important and challenging issue for companies. In this paper, the literature and the problems in information technology services were reviewed. A profit-driven integrated model of service failure and service recovery was established in view of the benefits to both the customer and the enterprise. Moreover, the interaction between service failure and the service recovery strategy was studied; the results verified the matching principles between the service recovery strategy and the type of service failure. In addition, the relationship between the cost of service recovery and the customer's cumulative value of service after recovery was analyzed with the model. The results assist managers in deciding on appropriate resource allocations for recovery strategies.

Solar Thermal Aquaculture System Controller Based on Artificial Neural Network

Temperature is one of the principal factors affecting aquaculture systems. It can cause stress and mortality, or provide a superior environment for growth and reproduction. This paper presents the control of pond water temperature using an artificial intelligence technique. Water temperature is a very important parameter for shrimp growth. The required temperature for optimal growth is 34°C; if the temperature rises to 38°C it causes death of the shrimp, so it is important to control the water temperature. A solar thermal water heating system is designed to supply an aquaculture pond with the required hot water in Mersa Matruh in Egypt. Neural networks are massively parallel processors that have the ability to learn patterns through training experience. Because of this feature, they are often well suited for modeling complex and non-linear processes such as those commonly found in heating systems. An artificial neural network is proposed to control the water temperature because artificial intelligence (AI) techniques are becoming useful as alternative approaches to conventional techniques and have been used to solve complicated practical problems. Moreover, this paper introduces a complete mathematical model and a MATLAB SIMULINK model for the aquaculture system. The simulation results indicate that the control unit succeeds in keeping the water temperature constant at the desired value by controlling the hot water flow rate.
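
An illustrative lumped energy-balance simulation of the controlled pond is sketched below. The pond parameters are hypothetical, and a simple proportional flow rule stands in for the trained ANN controller and SIMULINK model described in the paper.

```python
# Sketch: lumped energy balance of the pond; the controller adjusts the hot-water
# flow rate to hold the 34 C setpoint.  Parameters are hypothetical, and the
# proportional rule is only a stand-in for the paper's ANN controller.
import numpy as np

c_p   = 4186.0        # J/(kg*K), specific heat of water
M     = 50_000.0      # kg, pond water mass (hypothetical)
UA    = 800.0         # W/K, pond heat-loss coefficient (hypothetical)
T_hot = 60.0          # C, solar-heated water temperature
T_amb = 20.0          # C, ambient temperature
T_set = 34.0          # C, optimal temperature for shrimp growth

def flow_controller(T_pond, k_p=0.5, max_flow=2.0):
    """Hot-water mass flow rate (kg/s); stand-in for the ANN controller."""
    return float(np.clip(k_p * (T_set - T_pond), 0.0, max_flow))

dt, T = 60.0, 26.0                      # 1-minute steps, initial pond temperature
for step in range(int(48 * 3600 / dt)): # simulate two days
    m_dot = flow_controller(T)
    q_in  = m_dot * c_p * (T_hot - T)   # heat delivered by the solar loop
    q_out = UA * (T - T_amb)            # losses to the environment
    T += (q_in - q_out) * dt / (M * c_p)
print(f"pond temperature after 48 h: {T:.2f} C (setpoint {T_set} C)")
```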

Model of High-Speed Train Energy Consumption

In a tightening energy context, the transport sector, which accounts for a large share of worldwide energy demand, has to be improved in order to decrease energy demand and global warming impacts. In a situation where demand for long-distance and high-speed travel keeps increasing, high-speed trains offer many advantages, as they consume significantly less energy than road or air transport. At the project phase of new rail infrastructure, it is nowadays important to accurately characterize the energy that will be consumed during the operation phase, in addition to more classical criteria such as construction costs and travel time. Consumption models in the current literature used to estimate the railway operation phase are obsolete or not accurate enough to take into account the newest train or railway technologies. In this paper, an updated consumption model for high-speed rail is proposed, based on experimental data obtained from full-scale tests performed on a new high-speed line. The assessment of the model is achieved by identifying train parameters and measured power consumption for more than one hundred train routes. Perspectives are then discussed on using this updated model to accurately assess the energy impact of future railway infrastructure.
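
For context, the classical starting point that such consumption models refine is a Davis-type running-resistance law integrated over the route; the paper's updated model, calibrated on full-scale measurements, is not reproduced here.

$$ R(v) = A + Bv + Cv^{2}, \qquad E = \frac{1}{\eta}\int_{0}^{T}\big(R(v) + m\,a + m\,g\,i\big)\,v\,\mathrm{d}t, $$

where $A$, $B$ and $C$ are train-specific resistance coefficients (the aerodynamic $Cv^{2}$ term dominating at high speed), $m$ is the train mass, $a$ its acceleration, $i$ the track gradient and $\eta$ the traction-chain efficiency; regenerative braking, where available, returns part of the braking energy.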