Analysis and Categorization of e-Learning Activities Based on Meaningful Learning Characteristics

Learning is the acquisition of new mental schemata, knowledge, abilities and skills which can be used to solve problems potentially more successfully. The learning process is optimal when it is assisted and personalized. Learning is not a single activity; it should involve many possible activities in order to become meaningful. Many e-learning applications provide facilities to support teaching and learning activities. One way to identify whether an e-learning system is being used by learners is through the number of hits obtained from the system's log data. However, we cannot rely solely on the number of hits to determine whether learning has occurred meaningfully, because meaningful learning should engage five characteristics, namely active, constructive, intentional, authentic and cooperative. This paper aims to analyze the e-learning activities that are meaningful to learning. Focusing on the meaningful learning characteristics, we match them to the corresponding Moodle e-learning activities. This analysis identifies the activities that have a high impact on meaningful learning, as well as activities that are less meaningful. High-impact activities are given high weights since they are important to meaningful learning, while low-impact activities receive lower weights and are regarded as supportive e-learning activities. The result of this analysis helps us categorize which e-learning activities are meaningful to learning and guides us in measuring the effectiveness of e-learning usage.
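As a minimal sketch of this weighting idea, the snippet below maps a few Moodle activity types to the characteristics they engage and derives a weight from that count; the activity-to-characteristic assignments and the 0.6 cutoff are illustrative assumptions, not the paper's actual mapping.

```python
# Illustrative sketch only: the activity-to-characteristic mapping and the
# high-impact cutoff below are assumptions, not the paper's actual analysis.
CHARACTERISTICS = {"active", "constructive", "intentional", "authentic", "cooperative"}

# Hypothetical assignment of Moodle activities to the meaningful-learning
# characteristics they engage.
activity_map = {
    "forum":      {"active", "constructive", "cooperative"},
    "workshop":   {"active", "constructive", "authentic", "cooperative"},
    "assignment": {"active", "intentional"},
    "resource":   {"intentional"},  # passively viewing a file
}

def weight(activity: str) -> float:
    """Weight = fraction of the five characteristics the activity engages."""
    return len(activity_map[activity] & CHARACTERISTICS) / len(CHARACTERISTICS)

high_impact = {a for a in activity_map if weight(a) >= 0.6}   # meaningful
supportive  = set(activity_map) - high_impact                 # supportive
```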

Experimental Modal Analysis and Model Validation of Antenna Structures

Numerical design optimization is a powerful tool that can be used by engineers during any stage of the design process. There are many different applications for structural optimization. A specific application discussed in this paper is experimental data matching: data obtained through tests on a physical structure are matched with data from a numerical model of that same structure. The data of interest are the dynamic characteristics of an antenna structure, focusing on the mode shapes and modal frequencies. The structure used was a scaled and simplified model of the Karoo Array Telescope-7 (KAT-7) antenna structure. This kind of data matching is a complex and difficult task. This paper discusses how optimization can assist an engineer during the process of correlating a finite element model with vibration test data.
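The abstract does not name a specific correlation metric; a standard measure for comparing experimental and finite element mode shapes is the Modal Assurance Criterion (MAC), sketched below under the assumption that both mode shapes are available as vectors sampled at matching sensor locations.

```python
import numpy as np

def mac(phi_fe, phi_test):
    """Modal Assurance Criterion between a finite element and an experimental
    mode shape vector: 1.0 means perfectly correlated, 0.0 means orthogonal."""
    num = np.abs(np.vdot(phi_fe, phi_test)) ** 2
    den = np.vdot(phi_fe, phi_fe).real * np.vdot(phi_test, phi_test).real
    return float(num / den)

# A MAC matrix over all mode pairs shows which test mode each FE mode matches;
# an updating loop would then adjust model parameters to push the diagonal
# toward 1 while also matching the modal frequencies.
```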

Re-Design of Load Shedding Schemes of the Kosovo Power System

This paper discusses aspects of the re-design of load-shedding schemes with respect to recent developments in the Kosovo power system. Load-shedding is a type of emergency control designed to ensure system stability by reducing power system load to match the available generation. This paper presents a new adaptive load-shedding scheme that provides emergency protection against excessive frequency decline in cases where the Kosovo power system might be disconnected from the regional transmission network. The proposed scheme uses local frequency-rate (df/dt) information to adapt the load-shedding pattern to the size and location of the disturbance. The scheme is tested in a software simulation on a large-scale PSS/E model representing nine power system areas of Southeast Europe, including the Kosovo power system.
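The abstract does not give thresholds or shed sizes; the sketch below only illustrates the general idea of scaling the shed block with the locally measured rate of frequency decline, and every numeric setting in it is hypothetical.

```python
def shed_fraction(f_hz: float, dfdt_hz_s: float) -> float:
    """Illustrative adaptive under-frequency load-shedding rule (50 Hz system).
    All thresholds and coefficients are hypothetical, not the paper's settings."""
    if f_hz >= 49.2:                 # above pickup threshold: no action
        return 0.0
    # By swing-equation reasoning, a steeper frequency decline implies a larger
    # generation deficit, so a proportionally larger block of load is shed.
    base = 0.05                               # minimum shed block: 5% of feeder load
    adaptive = 0.02 * max(0.0, -dfdt_hz_s)    # extra 2% per Hz/s of decline
    return min(0.5, base + adaptive)          # cap total shedding at 50%
```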

Dynamic Economic Dispatch Constrained by Wind Power Weibull Distribution: A Here-and-Now Strategy

In this paper, a Dynamic Economic Dispatch (DED) model is developed for a system consisting of both thermal generators and wind turbines. The inclusion of a significant amount of wind energy into power systems has resulted in additional constraints on DED to accommodate the intermittent nature of the output. The probability of stochastic wind power, based on the Weibull probability density function, is included in the model as a constraint (a here-and-now approach). The Environmental Protection Agency's hourly emission target, which gives the maximum emission during the day, is used as a constraint to reduce atmospheric pollution. A 69-bus test system with a non-smooth cost function is used to illustrate the effectiveness of the proposed model compared with a static economic dispatch model that includes wind power.
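The abstract does not state the Weibull parameterization; the standard two-parameter form used for wind-speed modeling, with shape $k$ and scale $c$, is

$$
f(v) = \frac{k}{c}\left(\frac{v}{c}\right)^{k-1} e^{-(v/c)^{k}},
\qquad
F(v) = 1 - e^{-(v/c)^{k}},
\qquad v \ge 0,
$$

from which the probability that the wind power output lies in a given range follows by mapping the turbine power curve back to the corresponding wind-speed interval.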

Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain

Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions: perturbations lead to a mismatch between the training and testing environments and degrade performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. However, Bayesian marginal models cannot capture the inter-scale statistical dependencies between different wavelet scales. Simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent, whereas wavelet coefficients in fact have significant inter-scale dependency. This paper models this inter-scale dependency with a Circularly Symmetric Probability Density Function (CS-PDF) from the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived via Maximum a Posteriori (MAP) estimation. Based on these, a framework is proposed to denoise speech signals for automatic speaker identification. The robustness of the proposed framework is tested for text-independent speaker identification on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy than other estimators with the popular Gaussian Mixture Model (GMM) based speaker model and Mel-Frequency Cepstral Coefficient (MFCC) features.
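A widely used estimator of this inter-scale kind is the bivariate MAP shrinkage rule of Şendur and Selesnick, which jointly models a coefficient and its parent at the next coarser scale; it is shown here only as a related sketch, not necessarily the paper's CS-PDF/SIRP-based rule.

```python
import numpy as np

def bivariate_shrink(y, y_parent, sigma_n, sigma):
    """Bivariate MAP shrinkage exploiting parent-child inter-scale dependency.
    y        : noisy wavelet coefficients at the current scale
    y_parent : corresponding coefficients at the next coarser scale
    sigma_n  : noise standard deviation; sigma : signal standard deviation"""
    r = np.sqrt(y**2 + y_parent**2)                        # joint magnitude
    gain = np.maximum(0.0, r - np.sqrt(3.0) * sigma_n**2 / sigma)
    return gain / np.maximum(r, 1e-12) * y                 # shrink toward zero
```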

An Energy Integration Approach to the UHDE Ammonia Process

In this paper, the energy performance of a selected UHDE ammonia plant is optimized by conducting heat integration through waste heat recovery and the synthesis of a heat exchanger network (HEN). Minimum hot and cold utility requirements were estimated with the IChemE spreadsheet, and supporting simulation was carried out using the HYSYS software. The results showed that no heating utility is needed, while the required cold utility was found to be around 268,714 kW; hence, a threshold pinch case was encountered. The hot and cold streams were then matched appropriately. Waste heat recovery resulted in savings in HP and LP steam of approximately 51.0% and 99.6%, respectively. An economic analysis of the proposed HEN showed a very attractive overall payback period not exceeding 3 years. In total, a net saving approaching 35% was achieved by implementing heat optimization of the studied UHDE ammonia process.
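As a minimal sketch of the utility-targeting step behind such a study (the paper itself used the IChemE spreadsheet and HYSYS), the snippet below applies the problem table algorithm to hypothetical stream data; a heat cascade whose minimum is zero reproduces the threshold case with no hot utility requirement.

```python
# Problem table algorithm on illustrative stream data (not the plant's streams).
dTmin = 10.0  # K, assumed minimum approach temperature

# (supply T, target T, heat capacity flowrate CP [kW/K]) - hypothetical values
hot_streams  = [(250.0, 40.0, 20.0)]
cold_streams = [(20.0, 180.0, 10.0)]

# Shift hot streams down and cold streams up by dTmin/2
shifted = [(ts - dTmin / 2, tt - dTmin / 2, cp, "hot") for ts, tt, cp in hot_streams] \
        + [(ts + dTmin / 2, tt + dTmin / 2, cp, "cold") for ts, tt, cp in cold_streams]

temps = sorted({t for s in shifted for t in s[:2]}, reverse=True)

# Cascade the net heat surplus/deficit down the temperature intervals
cascade, heat = [0.0], 0.0
for hi, lo in zip(temps, temps[1:]):
    net_cp = sum(cp if kind == "hot" else -cp
                 for ts, tt, cp, kind in shifted
                 if min(ts, tt) <= lo and max(ts, tt) >= hi)
    heat += net_cp * (hi - lo)
    cascade.append(heat)

qh_min = max(0.0, -min(cascade))   # minimum hot utility target
qc_min = cascade[-1] + qh_min      # minimum cold utility target
# qh_min == 0 here: the threshold situation reported in the paper.
```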

Physical, Textural and Sensory Properties of Noodles Supplemented with Tilapia Bone Flour (Tilapia nilotica)

Fishbone of the Nile Tilapia (Tilapia nilotica), a waste product of frozen Nile Tilapia fillet factories, is a potential calcium source. In order to increase the value of fish bone powder, this study investigated the effect of Tilapia bone flour (TBF) addition (5, 10, 15% by flour weight) on the cooking quality, texture and sensory attributes of noodles. The results indicated that tensile strength, color value (a*) and water absorption of noodles significantly decreased (p≤0.05) as the level of TBF increased from 0 to 15%, while cooking loss, cooking time and color values (L* and b*) significantly increased (p≤0.05). Sensory evaluation indicated that noodles with 5% TBF received the highest overall acceptability score.

Investigation of Tearing in Hydroforming Process with Analytical Equations and Finite Element Method

Today, hydroforming technology provides an attractive alternative to conventional matched-die forming, especially for cost-sensitive, lower-volume production and for parts with irregular contours. In this study the critical fluid pressures that lead to rupture in the workpiece have been investigated by theoretical and finite element methods. An axisymmetric analysis was developed to investigate the tearing phenomenon in cylindrical Hydroforming Deep Drawing (HDD). Using the obtained equations, the effects of anisotropy, drawing ratio, sheet thickness and strain-hardening exponent on the tearing diagram were investigated.

Use of Time-Dependent Effects for Mixing and Separation of Two-Phase Flows

This paper demonstrates how unsteady effects can be used to control two-phase flows. In the first case, we consider conditions under which fragmentation of the interface between the two components intensifies mixing. The problem is solved for temporal and linear scales that are too small for a developed mixing layer to appear. We show that there exist conditions on the unsteady flow velocity at the channel surface that lead to the creation and fragmentation of vortices at Reynolds numbers (Re) of order unity. We also show that Re alone is not a similarity criterion for this type of flow; instead, a criterion can be introduced that depends on both Re and the vortex-splitting frequency. A notable feature of this situation is that the streamlines remain stable, while the interface between the components exhibits all the properties of an unstable flow. In the second problem, we consider the behavior of solid impurities in an extensive system of channels, with an unsteady periodic flow simulating breathing. The behavior of the particles is analyzed along their trajectories. It is shown that, depending on their mass and diameter, the particles can collect in a caustic on the channel walls, stop in a certain place, or fly back. The frequency distribution of particle velocity is of particular interest. It turns out that an appropriate choice of the carrier-gas velocity field can affect the trajectories of individual particles, including forcing them to fly back.
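As an illustrative sketch of the second problem, the snippet below integrates a single heavy particle through a one-dimensional periodic "breathing" flow using Stokes drag; the flow form and all parameter values are assumptions, chosen only to show how the response time tau (set by particle density and diameter) controls whether the particle follows or lags the carrier gas.

```python
import numpy as np

rho_p, d = 1000.0, 20e-6         # particle density [kg/m^3], diameter [m] (assumed)
mu = 1.8e-5                      # dynamic viscosity of air [Pa s]
tau = rho_p * d**2 / (18 * mu)   # Stokes response time of the particle

def u(x, t):
    """Assumed carrier-gas velocity: 1 Hz periodic flow mimicking breathing."""
    return 0.5 * np.sin(2 * np.pi * 1.0 * t)

x, v, dt = 0.0, 0.0, 1e-4
history = []
for step in range(20000):        # 2 s of simulated time
    t = step * dt
    v += dt * (u(x, t) - v) / tau    # drag relaxes v toward the local gas velocity
    x += dt * v
    history.append((t, x, v))
# Varying rho_p and d changes tau and hence the trajectory regime, analogous to
# the paper's particles that collect on walls, stop, or fly back.
```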

An Investigation into the Effect of Water Quality on Flotation Performance

A study was carried out to determine the effect of water quality on flotation performance. The experimental test work comprised batch flotation tests using a Denver laboratory cell for a period of 10 minutes. Nine different test runs were carried out in triplicate to ensure reproducibility, using different water types from different thickener overflows, return and sewage effluent water (process water), and potable water. The water sources differed in pH, total dissolved solids, total suspended solids and conductivity. Process water was found to reduce the concentrate recovery and mass pull, while potable water increased the concentrate recovery and mass pull. Potable water reduced the concentrate grade, while process water increased the concentrate grade. It is proposed that a combination of process water and potable water supply be used in flotation circuits to balance the different effects that the water types have on flotation efficiency.

The Self-Propelled Model of a Boat, Based on the Wave Thrust

We investigate a boat model based on the conversion of surface wave energy into a sequence of unidirectional pulses of jet spurts; in other words, a model of a boat that is propelled by the wave field on the water surface. These pulses form an average reactive stream from the output nozzle on the stern of the boat. The suggested model provides the conversion of the boat's oscillatory motions (both pitching and rolling) into a jet flow. This becomes possible due to the special construction of the boat and several elements that are sensitive to the local wave field. The boat model constitutes a uniflow jet engine, without slow conversions of mechanical energy into intermediate forms and without any external energy sources besides the surface waves. The motion of the boat is characterized by fast jerks and an average forward velocity that exceeds the velocities of the liquid particles in the wave.

Synthesis of Peptide Amides using Sol-Gel Immobilized Alcalase in Batch and Continuous Reaction System

Two commercial proteases from Bacillus licheniformis (Alcalase 2.4 L FG and Alcalase 2.5 L, Type DX) were screened for the production of Z-Ala-Phe-NH2 in batch reactions. Alcalase 2.4 L FG was the more efficient enzyme for the C-terminal amidation of Z-Ala-Phe-OMe using ammonium carbamate as the ammonium source. Immobilization of the protease was achieved by the sol-gel method, using dimethyldimethoxysilane (DMDMOS) and tetramethoxysilane (TMOS) as precursors (unpublished results). In batch production, about 95% of Z-Ala-Phe-NH2 was obtained at 30°C after 24 hours of incubation. The reproducibility of different batches of commercial Alcalase 2.4 L FG preparations was also investigated by evaluating the amidation activity and, in the case of immobilization, the entrapment yields. A packed-bed reactor (0.68 cm ID, 15.0 cm long) was operated successfully for the continuous synthesis of peptide amides. The immobilized enzyme retained its initial activity over 10 cycles of repeated use in the continuous reactor at ambient temperature. At a 0.75 mL/min flow rate of the substrate mixture, total conversion of Z-Ala-Phe-OMe was achieved after 5 hours of substrate recycling. The product contained about 90% peptide amide and 10% hydrolysis byproduct.

On Pattern-Based Programming towards the Discovery of Frequent Patterns

The problem of frequent pattern discovery is defined as the process of searching for patterns, such as sets of features or items, that appear frequently in data. Finding such frequent patterns has become an important data mining task because it reveals associations, correlations, and many other interesting relationships hidden in a database. Most of the proposed frequent pattern mining algorithms have been implemented in imperative programming languages, a paradigm that is inefficient when the set of patterns is large and the frequent patterns are long. We suggest applying a high-level declarative style of programming to the problem of frequent pattern discovery, considering two languages: Haskell and Prolog. Our intuition is that the problem of finding frequent patterns should be efficiently and concisely implementable in a declarative paradigm, since pattern matching is a fundamental feature of most functional languages and of Prolog. Our frequent pattern mining implementations in Haskell and Prolog confirm our hypothesis about the conciseness of the programs. Comparative studies of lines of code, speed, and memory usage for declarative versus imperative programming are reported in the paper.
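For concreteness, here is a minimal sketch of the core computation (Apriori-style candidate generation and support counting). It is written in Python purely for illustration; the paper's actual implementations are in Haskell and Prolog, where the candidate join and the subset test map directly onto pattern matching.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all frequent itemsets (as frozensets) with their support counts."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    candidates = [frozenset([i]) for i in items]      # 1-itemsets
    frequent, k = {}, 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # join surviving k-itemsets into (k+1)-candidates
        candidates = list({a | b for a, b in combinations(survivors, 2)
                           if len(a | b) == k + 1})
        k += 1
    return frequent

# Example: apriori([{"a","b"}, {"a","b","c"}, {"a","c"}], min_support=2)
# yields {a}, {b}, {c}, {a,b}, {a,c} with their support counts.
```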

Protocol and Method for Preventing Attacks from the Web

Nowadays, computer worms, viruses and Trojan horses, collectively called malware, have become widespread. A decade ago, such malware merely damaged computers by deleting or rewriting important files; recent malware, however, seems designed to earn money. Some malware collects personal information so that malicious actors can obtain secrets such as online banking passwords, evidence of a scandal, or contact addresses related to the target. Moreover, the relationship between money and malware has become more complex: many kinds of malware spawn bots to serve as springboards for further attacks. Meanwhile, countermeasures available to ordinary Internet users have come up against a wall. Pattern matching wastes considerable computing resources, since matching tools must handle large numbers of signatures derived from malware variants, and virus construction kits can automatically generate such variants; metamorphic and polymorphic malware are no longer unusual. Recently, malware-checking sites have appeared that scan content on behalf of users' PCs, but a new type of malicious site has emerged that evades such checks. In this paper, existing protocols and methods related to the web are reconsidered in terms of protection against current attacks, and a new protocol and method are proposed to improve the security of the web.

Parallelization of Ensemble Kalman Filter (EnKF) for Oil Reservoirs with Time-lapse Seismic Data

In this paper we describe the design and implementation of a parallel algorithm for data assimilation with the ensemble Kalman filter (EnKF) for the oil reservoir history matching problem. The use of a large number of observations from time-lapse seismic data leads to a long turnaround time for the analysis step, in addition to the time-consuming simulations of the realizations. For efficient parallelization, it is important to consider parallel computation at the analysis step. Our experiments show that parallelizing the analysis step in addition to the forecast step has good scalability, exploiting the same set of resources with some additional effort.
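For reference, a minimal serial sketch of the stochastic-EnKF analysis step is given below; this is the ensemble matrix algebra whose products and inversion the paper distributes across processors (the array shapes are assumptions for illustration).

```python
import numpy as np

def enkf_analysis(Xf, HXf, D, R):
    """Stochastic EnKF analysis step.
    Xf  : (n, N) forecast ensemble of reservoir states
    HXf : (m, N) forecast ensemble mapped into observation space
    D   : (m, N) perturbed observations, one column per ensemble member
    R   : (m, m) observation error covariance"""
    N = Xf.shape[1]
    Xp = Xf - Xf.mean(axis=1, keepdims=True)        # state anomalies
    Yp = HXf - HXf.mean(axis=1, keepdims=True)      # observation-space anomalies
    C = Yp @ Yp.T / (N - 1) + R                     # innovation covariance (m x m)
    K = (Xp @ Yp.T / (N - 1)) @ np.linalg.inv(C)    # Kalman gain (n x m)
    return Xf + K @ (D - HXf)                       # updated (analysis) ensemble
```

With the many observations of time-lapse seismic, m is large, so forming and inverting C dominates the cost; that is why parallelizing the analysis step matters as much as running the N reservoir simulations in parallel.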

A Taguchi Approach to Investigate Impact of Factors for Reusability of Software Components

A quantitative investigation of how different factors contribute to the reusability of software components could be helpful in evaluating the quality of developed or developing reusable software components and in identifying reusable components in existing legacy systems, which can save the cost of developing software from scratch. However, the relative significance of the contributing factors has remained relatively unexplored. In this paper, we use the Taguchi approach to analyze the significance of different structural attributes, or factors, in deciding the reusability level of a particular component. The results obtained show that complexity is the most important factor in deciding the reusability of function-oriented software, while for object-oriented software, coupling and complexity collectively play a significant role in achieving high reusability.
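As a brief sketch of the underlying Taguchi calculation, the snippet below computes larger-the-better signal-to-noise ratios and ranks a factor by the swing in S/N across its levels; the factor data are hypothetical, not the paper's measurements.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better S/N ratio (higher reusability is desired)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical reusability scores observed at two levels of the complexity factor
complexity_low  = [0.82, 0.79, 0.85]
complexity_high = [0.55, 0.60, 0.58]

delta = abs(sn_larger_is_better(complexity_low) - sn_larger_is_better(complexity_high))
# Factors are ranked by delta: the larger the S/N swing across levels, the more
# significant the factor (complexity ranked first for function-oriented software).
```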

Sophorolipids Production by Candida bombicola using Synthetic Dairy Wastewater

Sophorolipids (SLs) production by the yeast Candida bombicola was studied in batch shake flasks using synthetic dairy wastewaters (SDWW) with or without added external carbon and nitrogen sources. A maximum SLs production of 38.76 g/l was observed with the SDWW supplemented with the low-cost substrates sugarcane molasses (50 g/l) and soybean oil (50 g/l). When the SDWW was supplemented with the more costly glucose, yeast extract, urea and soybean oil, production dropped to 29.49 g/l, but with a maximum biomass production of 17.38 g/l together with complete utilization of the carbon sources.

An Efficient Biometric Cryptosystem using Autocorrelators

Cryptography provides a secure manner of information transmission over insecure channels. It authenticates messages based on a key, not on the user, and it requires a lengthy key to encrypt and decrypt messages. Such keys, however, can be guessed or cracked, and maintaining and sharing lengthy random keys for enciphering and deciphering is a critical problem in cryptographic systems. A new approach is described for generating a crypto key acquired from a person's iris pattern. In the biometric field, a template created by a biometric algorithm can only be authenticated by the same person. Among biometric templates, iris features can efficiently distinguish individuals and produce fewer false positives in large populations. This iris code distribution provides low intra-class variability, which helps the cryptosystem confidently decrypt messages given an exact match of the iris pattern. In the proposed approach, the iris features are extracted using multiresolution wavelets, producing a 135-bit iris code for each subject that is used for encrypting and decrypting messages. Autocorrelators are used to recall the original messages from the partially corrupted data produced by the decryption process, which is intended to resolve the repudiation and key-management problems. Results were analyzed for both a conventional iris cryptography system (CIC) and a non-repudiation iris cryptography system (NRIC), showing that the new approach provides considerably high authentication accuracy in the enciphering and deciphering processes.
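As a minimal sketch of the recall stage, the snippet below implements an autocorrelator (Hopfield-style autoassociative memory) that restores a 135-bit bipolar pattern from a partially corrupted copy; the pattern count and corruption level are illustrative assumptions.

```python
import numpy as np

def train(patterns):
    """Autocorrelation weight matrix from bipolar (+/-1) patterns, zero diagonal."""
    n = patterns.shape[1]
    return sum(np.outer(p, p) for p in patterns) - len(patterns) * np.eye(n)

def recall(W, x, iters=10):
    """Iteratively settle a corrupted pattern toward the nearest stored one."""
    for _ in range(iters):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(3, 135)).astype(float)  # e.g. 135-bit blocks
W = train(stored)
noisy = stored[0].copy()
flips = rng.choice(135, size=15, replace=False)            # corrupt ~11% of bits
noisy[flips] *= -1
recovered = recall(W, noisy)
errors = int(np.sum(recovered != stored[0]))               # typically 0
```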

Analysis of the Iranian Wind Catcher and Its Effect on Natural Ventilation as a Solution towards Sustainable Architecture (Case Study: Yazd)

Wind catchers have served as cooling systems, providing acceptable ventilation by means of the renewable energy of the wind. In the present study, the city of Yazd, in an arid climate, is selected as the case study. From the architectural point of view, the wind catchers in this study are documented through field surveys. Cases were selected at random and examined analytically. The typology of wind catchers and the relationships governing their architecture are addressed here for the first time; 53 wind catchers were analyzed. The typology is based on physical analysis and on the patterns and common concepts incorporated in the wind catchers. The thermal behavior of archetypes of the selected wind catchers is analyzed to determine how their architecture influences their operation. Computational fluid dynamics, the FLUENT software and numerical analysis are used in this study as the most accurate analytical approach. The results obtained from these analyses identify the formal specifications of wind catchers with optimum operation in Yazd. The knowledge obtained from the optimum model can be used for the design and construction of wind catchers with improved operation.

Hexavalent Chromium Removal from Aqueous Solutions by Adsorption onto Synthetic Nano-Sized Zero-Valent Iron (nZVI)

The present work was conducted for the synthesis of nano-sized zero-valent iron (nZVI) and its use in the removal of hexavalent chromium (Cr(VI)), a highly toxic pollutant. Batch experiments were performed to investigate the effects of Cr(VI) concentration, nZVI concentration, solution pH and contact time on the removal efficiency of Cr(VI). nZVI was synthesized by reduction of ferric chloride using sodium borohydride, and SEM and XRD examinations were applied to determine the particle size and characterize the produced nanoparticles. The results showed that the removal efficiency decreased with increasing Cr(VI) concentration and solution pH, and increased with adsorbent dosage and contact time. The Langmuir and Freundlich isotherm models were fitted to the adsorption equilibrium data, and the Langmuir model gave the better fit. nZVI presented an outstanding ability to remove Cr(VI) due to its high surface area, small particle size and high inherent reactivity.
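For reference (the abstract does not spell them out), the two fitted isotherms have the standard forms

$$
q_e = \frac{q_m K_L C_e}{1 + K_L C_e} \quad \text{(Langmuir)},
\qquad
q_e = K_F\, C_e^{1/n} \quad \text{(Freundlich)},
$$

where $q_e$ is the equilibrium adsorption capacity, $C_e$ the equilibrium Cr(VI) concentration, and $q_m$, $K_L$, $K_F$, $n$ are fitted constants.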