Combining Minimum Energy and Minimum Direct Jerk of Linear Dynamic Systems

Both minimum energy consumption and smoothness, quantified as a function of jerk, are generally needed in many dynamic systems, such as the automobile and the pick-and-place robot manipulator that handles fragile equipment. Nevertheless, most researchers concern themselves with either the minimum-energy trajectory or the minimum-jerk trajectory alone. This paper proposes a simple yet interesting approach that combines the minimum energy and the minimum direct jerk in designing a time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of the minimum energy, the minimum jerk, and their combination are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time histories of the state inputs employed by the combined minimum energy and jerk designs. The numerical solution of the combined minimum direct jerk and energy problem is exactly the same as that of the minimum direct jerk problem, while the solution of the minimum energy problem is similar, especially in terms of its tendency.
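As a minimal sketch of such a combined objective (the weights, horizon, and symbols below are our own assumptions, not taken from the paper), the cost functional can be written as a weighted sum of a control-energy term and a direct-jerk term over the planning horizon:

\[
J(u) \;=\; \int_{0}^{t_f} \Big[\, w_e\, u^{2}(t) \;+\; w_j\, \dddot{x}^{2}(t) \,\Big]\, dt, \qquad w_e,\, w_j \ge 0,
\]

where \(u(t)\) is the control input, \(\dddot{x}(t)\) is the direct jerk (the third time derivative of the position state), and \(w_e\), \(w_j\) trade off energy against smoothness; setting \(w_j = 0\) or \(w_e = 0\) recovers the pure minimum-energy or pure minimum-jerk problem, respectively.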

Lower Energy Gait Pattern Generation in a 5-Link Biped Robot Using Image Processing

The purpose of this study is to find a natural gait for a biped robot, resembling that of a human being, by analyzing the COG (Center of Gravity) trajectory of human gait. It has been observed that human gait naturally maintains stability while using minimum energy. This paper therefore seeks a natural gait pattern for a biped robot that uses minimum energy while maintaining stability, by analyzing the human gait pattern measured from gait images on the sagittal plane and the COG trajectory on the frontal plane. It is not possible to apply the torques of human joints directly to those of a biped robot, because the two have different degrees of freedom; nonetheless, a human and a 5-link biped robot are kinematically similar. To this end, we generate the gait pattern of the 5-link biped robot using a genetic algorithm (GA) for gait-pattern adaptation that utilizes the human ZMP (Zero Moment Point) and the joint torques measured from the human gait pattern. The proposed algorithm creates a biped robot gait pattern that is as fluent as a human's and minimizes energy consumption, because the gait pattern of the 5-link biped robot is modeled after considering the torque of each human joint on the sagittal plane and the ZMP trajectory on the frontal plane. This paper demonstrates the superiority of the proposed algorithm by evaluating the 5-link biped robot under two gait patterns, one generated in the conventional way using inverse kinematics and one generated in the proposed way, and comparing them in terms of visual quality and efficiency.
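The following is only a hedged illustration of GA-based gait-pattern adaptation (the chromosome layout, cost weights, and the surrogate ZMP penalty are assumptions, not the authors' implementation): joint-trajectory knot points are encoded as a chromosome, and each candidate is scored by an energy-like cost plus a stability penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS, N_KNOTS = 5, 8           # 5-link biped: joint-angle knot points per gait cycle
POP, GENS, MUT = 60, 200, 0.05     # GA hyperparameters (assumed values)

def energy_cost(chromosome):
    """Proxy for consumed energy: sum of squared joint accelerations
    (a stand-in for joint torques) along the decoded trajectory."""
    q = chromosome.reshape(N_JOINTS, N_KNOTS)
    qdd = np.diff(q, n=2, axis=1)            # finite-difference accelerations
    return float(np.sum(qdd ** 2))

def zmp_penalty(chromosome):
    """Hypothetical stability penalty: distance of a surrogate ZMP signal
    from the support region (here simply a band around zero)."""
    q = chromosome.reshape(N_JOINTS, N_KNOTS)
    zmp = q.mean(axis=0)                     # placeholder ZMP surrogate
    return float(np.sum(np.clip(np.abs(zmp) - 0.05, 0.0, None) ** 2))

def fitness(chromosome):
    return energy_cost(chromosome) + 100.0 * zmp_penalty(chromosome)

pop = rng.normal(scale=0.2, size=(POP, N_JOINTS * N_KNOTS))
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[: POP // 2]]         # truncation selection
    children = parents[rng.integers(0, len(parents), POP - len(parents))].copy()
    children += MUT * rng.normal(size=children.shape)      # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(c) for c in pop])]
print("best cost:", fitness(best))
```

In the paper's setting, the cost terms would instead be derived from the measured human joint torques and ZMP trajectory; the loop structure stays the same.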

Congestion Control for Internet Media Traffic

In this paper we investigate a number of the Internet congestion control algorithms that have been developed in the last few years. It was found that many of these algorithms were designed to treat Internet traffic merely as a stream of consecutive packets. A few other algorithms were specifically tailored to handle the Internet congestion caused by media traffic that represents audiovisual content; this latter set of algorithms is considered aware of the nature of the media content. In this context, we briefly explain a number of congestion control algorithms and categorize them into the following two categories: i) media congestion control algorithms; ii) common congestion control algorithms. We recommend the use of media congestion control algorithms, because they are media content-aware, over the common type of algorithms that handle such traffic blindly. We show that the spread of such media content-aware algorithms over the Internet will lead to better congestion control in the coming years. This is due to the observed emergence of the era of digital convergence, in which media traffic will form the majority of Internet traffic.

Region-Based Image Fusion with Artificial Neural Network

Most image fusion algorithms treat the pixels in the image separately and more or less independently. In addition, their parameters must be re-adjusted for different times of day or weather conditions. In this paper, we propose a region-based image fusion method that combines aspects of feature-level and pixel-level fusion, rather than operating on pixels alone. The basic idea is to segment only the far-infrared image and to add the information of each region of the segmented image to the visual image, and then to determine different fusion parameters for each region. Finally, we adopt an artificial neural network to deal with varying time and weather conditions, because the relationship between the fusion parameters and the image features is nonlinear. This allows the fusion parameters to be produced automatically according to the different states. The experimental results show that the proposed method indeed has good adaptive capacity with automatically determined fusion parameters, and the architecture can be used for many applications.
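The sketch below is only illustrative (the segmentation rule, region features, training data, and network size are assumptions, not the parameters used in the paper): it segments the far-infrared image into regions, predicts a per-region fusion weight from simple region features with a small neural network, and blends each region of the infrared image into the visible image accordingly.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def segment_ir(ir):
    """Toy segmentation of the far-infrared image: quantize intensity into
    a few regions (the paper's actual segmentation method is not assumed)."""
    return np.digitize(ir, np.quantile(ir, [0.25, 0.5, 0.75]))

def region_features(ir, labels, r):
    mask = labels == r
    return [ir[mask].mean(), ir[mask].std(), mask.mean()]  # brightness, contrast, size

# Hypothetical training data: region features -> fusion weight chosen offline
X_train = rng.uniform(size=(200, 3))
y_train = 0.2 + 0.6 * X_train[:, 0]          # stand-in for expert-chosen weights
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

def fuse(visible, ir):
    labels = segment_ir(ir)
    fused = visible.astype(float).copy()
    for r in np.unique(labels):
        w = float(np.clip(net.predict([region_features(ir, labels, r)])[0], 0.0, 1.0))
        mask = labels == r
        fused[mask] = (1.0 - w) * visible[mask] + w * ir[mask]   # per-region blend
    return fused

visible = rng.uniform(0, 255, size=(64, 64))
ir = rng.uniform(0, 255, size=(64, 64))
print(fuse(visible, ir).shape)
```

The key design point mirrored here is that the network maps region features to fusion parameters, so new lighting or weather conditions only change the features fed to the network, not hand-tuned constants.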

Real-Time Visual Simulation and Interactive Animation of Shadow Play Puppets Using OpenGL

This paper describes a method for modeling shadow play puppets using sophisticated computer graphics techniques available in OpenGL, in order to allow interactive play in a real-time environment and to produce realistic animation. A novel real-time method is proposed for modeling the puppet and its shadow image that allows interactive play of a virtual shadow play using texture mapping and blending techniques. Special effects such as lighting and blurring effects for the virtual shadow play environment are also developed. Moreover, the use of geometric transformations and hierarchical modeling facilitates interaction among the different parts of the puppet during animation. Based on the experiments and the survey that were carried out, the respondents involved were very satisfied with the outcomes of these techniques.

Metaphorical Perceptions of Middle School Students Regarding Computer Games

The computer, among the most important inventions of the twentieth century, has become an increasingly important component in our everyday lives. Computer games also have become increasingly popular among people day by day, owing to their features based on realistic virtual environments, audio and visual features, and the roles they offer players. In the present study, the metaphors students have for computer games are investigated, in an effort to fill a gap in the literature. Students were asked to complete the sentence ‘Computer game is like/similar to….because….’ to determine the middle school students’ metaphorical images of the concept of ‘computer game’. The metaphors created by the students were grouped into six categories, based on the source of the metaphor. These categories were ordered as ‘computer game as a means of entertainment’, ‘computer game as a beneficial means’, ‘computer game as a basic need’, ‘computer game as a source of evil’, ‘computer game as a means of withdrawal’, and ‘computer game as a source of addiction’, according to the number of metaphors they included.

Development of a Jacobian Model for a 4-Axis Indigenously Developed SCARA System

This paper deals with the development of a Jacobian model for a 4-axis, indigenously developed SCARA robot arm in the laboratory. This model is used to study the relation between the velocities and the forces in the robot while it performs the pick-and-place operation.
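As a hedged sketch (the link lengths and the RRPR joint layout below are assumed, being a common SCARA arrangement rather than necessarily the laboratory arm described here), the geometric Jacobian relates joint rates to end-effector velocity, and its transpose relates end-effector forces to joint torques:

```python
import numpy as np

L1, L2 = 0.35, 0.25          # assumed link lengths [m]

def scara_jacobian(q1, q2):
    """Geometric Jacobian of a 4-axis SCARA (RRPR): rows are (vx, vy, vz, wz),
    columns correspond to joints (theta1, theta2, d3, theta4)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([
        [-L1 * s1 - L2 * s12, -L2 * s12, 0.0, 0.0],   # vx
        [ L1 * c1 + L2 * c12,  L2 * c12, 0.0, 0.0],   # vy
        [ 0.0,                 0.0,      1.0, 0.0],   # vz (prismatic joint)
        [ 1.0,                 1.0,      0.0, 1.0],   # wz (revolute joints about z)
    ])

J = scara_jacobian(np.deg2rad(30), np.deg2rad(45))

qdot = np.array([0.5, -0.2, 0.05, 0.1])   # joint rates [rad/s, rad/s, m/s, rad/s]
xdot = J @ qdot                            # end-effector twist (vx, vy, vz, wz)

F = np.array([5.0, 0.0, -10.0, 0.0])       # end-effector wrench (fx, fy, fz, mz)
tau = J.T @ F                              # static joint torques/forces: tau = J^T F
print(xdot, tau)
```

The velocity mapping (J) and the static force mapping (J transposed) are the two relations studied during the pick-and-place motion.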

Geometry Design Supported by Minimizing and Visualizing Collision in Dynamic Packing

This paper presents a method to support dynamic packing in cases when no collision-free path can be found. The method, which is primarily based on path planning and shrinking of geometries, suggests a minimal geometry design change that results in a collision-free assembly path. A supplementing approach to optimize the geometry design change with respect to redesign cost is described. Supporting this dynamic packing method, a new method to shrink geometry based on vertex translation, interwoven with retriangulation, is suggested. The shrinking method requires neither tetrahedralization nor calculation of the medial axis, and it preserves the topology of the geometry, i.e. holes are neither lost nor introduced. The proposed methods are successfully applied to industrial geometries.

Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and the peak signal-to-noise ratio (PSNR). The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In standard JPEG, both chrominance components are downsampled simultaneously; in this paper we compare the results when compression is performed by downsampling only a single chroma component. We demonstrate that a higher compression ratio is achieved when the blue chrominance (Cb) is downsampled than when the red chrominance (Cr) is downsampled, whereas the peak signal-to-noise ratio is higher when the red chrominance is downsampled than when the blue chrominance is downsampled. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visual difference under both methods.
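A minimal sketch of the comparison follows (the downsampling factor and the box-filter interpolation are assumptions; the paper's exact low-pass filter and full JPEG pipeline are not reproduced here): convert to YCbCr, downsample only one chroma channel, reconstruct, and report RMSE and PSNR against the original.

```python
import numpy as np
from PIL import Image

def downsample_one_chroma(path, channel="Cb", factor=2):
    """Downsample a single chroma channel (Cb or Cr) and report RMSE/PSNR."""
    rgb = Image.open(path).convert("RGB")
    ycbcr = np.asarray(rgb.convert("YCbCr")).astype(float)
    idx = {"Cb": 1, "Cr": 2}[channel]

    chroma = Image.fromarray(ycbcr[:, :, idx].astype(np.uint8))
    small = chroma.resize((chroma.width // factor, chroma.height // factor),
                          Image.BOX)                      # crude low-pass + subsample
    restored = small.resize(chroma.size, Image.BILINEAR)  # upsample back

    degraded = ycbcr.copy()
    degraded[:, :, idx] = np.asarray(restored, dtype=float)

    err = ycbcr - degraded
    rmse = float(np.sqrt(np.mean(err ** 2)))
    psnr = float(20 * np.log10(255.0 / rmse)) if rmse > 0 else float("inf")
    return rmse, psnr

# Hypothetical usage on the test image mentioned in the paper:
# print("Cb only:", downsample_one_chroma("hats.jpg", "Cb"))
# print("Cr only:", downsample_one_chroma("hats.jpg", "Cr"))
```

Running both variants on the same image allows the compression ratio and PSNR of Cb-only versus Cr-only downsampling to be compared directly.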

Potential of Energy Conservation of a Daylight-Linked Lighting System in India

The demand for energy is increasing faster than its generation, leading to a shortage of power in all sectors of society; at peak hours this shortage is even higher. Unless we utilize energy-efficient technology, it is very difficult to minimize the shortage of energy, so energy-efficiency programs and energy conservation have an important role. Energy-efficient technologies are cost intensive, and hence it is not always possible to implement them in a country like India. In the present study, an educational building with operating hours from 10:00 a.m. to 05:00 p.m. has been selected to quantify the potential for lighting energy conservation. As the operating hours fall in the daytime, integrating daylight with the artificial lighting system will definitely reduce lighting energy consumption. Moreover, the initial investment was given priority, and hence the existing lighting installation was left unaltered. An automatic controller has been designed that operates as a function of the daylight admitted through the windows, and the lighting system of the room functions accordingly. The results of the daylight-integration study were quite satisfactory with respect to both visual comfort and energy conservation.
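A minimal sketch of such a control rule is given below (the target illuminance, per-row contribution, and row-switching scheme are assumed values for illustration, not the controller designed in the study): the artificial lighting is switched on only enough to top up the measured daylight to the design level.

```python
import math

def lighting_command(daylight_lux, target_lux=300.0, n_rows=3):
    """Daylight-linked switching: given the daylight illuminance measured near
    the windows, decide how many of n_rows luminaire rows to switch on so that
    the combined illuminance roughly reaches the design target."""
    shortfall = max(target_lux - daylight_lux, 0.0)
    lux_per_row = target_lux / n_rows          # assumed contribution of one row
    return min(n_rows, math.ceil(shortfall / lux_per_row))

# Illustrative readings: bright afternoon vs. overcast morning
print(lighting_command(daylight_lux=420.0))    # -> 0 rows on
print(lighting_command(daylight_lux=140.0))    # -> 2 rows on
```

Because the existing installation is left unaltered, the controller only decides which existing luminaire rows to energize as the daylight level changes.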

Mirror Neuron System Study on the Elderly Using Dynamic Causal Modeling fMRI Analysis

Dynamic Causal Modeling (DCM) of functional Magnetic Resonance Imaging (fMRI) data is a promising technique for studying the connectivity among brain regions and the effects of stimuli by modeling neuronal interactions from time-series neuroimaging. The aim of this study is to characterize the mirror neuron system (MNS) in an elderly group (age: 60-70 years). Twenty volunteers were MRI-scanned with visual stimuli to study a functional brain network. DCM was employed to determine the mechanism of mirror neuron effects. The results revealed major activated areas including the precentral gyrus, inferior parietal lobule, inferior occipital gyrus, and supplementary motor area. When visual stimuli were presented, the feed-forward connectivity from the visual area to the conjunction area increased and was forwarded to the motor area. Moreover, the connectivity from the conjunction areas to the premotor area also increased. Such findings can be useful for future diagnostic processes for the elderly with diseases such as Parkinson's and Alzheimer's.

The Effects of Processing and Preservation on the Sensory Qualities of Prickly Pear Juice

Prickly pear juice has received renewed attention with regard to the effects of processing and preservation on its sensory qualities (colour, taste, flavour, aroma, astringency, visual browning and overall acceptability). Juice was prepared by homogenizing fruit and treating the pulp with pectinase (Aspergillus niger). The juice treatments applied were sugar addition, acidification, heat treatment, refrigeration, and freezing and thawing. Prickly pear pulp and juice had unique properties (low pH 3.88, soluble solids 3.68 °Brix and high titratable acidity 0.47). Sensory profiling and descriptive analyses revealed that non-treated juice had a bitter taste with high astringency, whereas treated prickly pear juice was significantly sweeter. All treated juices had a good sensory acceptance, with values approximating or exceeding 7. Regression analysis of the consumer sensory attributes indicated an overwhelming rejection of the non-treated prickly pear juice, while the treated prickly pear juice received overall acceptability. Thus, the treatments educed favourable sensory responses and may have positive implications for consumer acceptability.

A Visual Educational Modeling Language to Help Teachers in Learning Scenario Design

The success of an e-learning system is highly dependent on the quality of its educational content and on how effective, complete, and simple the design tool can be for teachers. Educational modeling languages (EMLs) are proposed as design languages intended for teachers for modeling diverse teaching-learning experiences, independently of the pedagogical approach and in different contexts. However, most existing EMLs are criticized for being too abstract and too complex to be understood and manipulated by teachers. In this paper, we present a visual EML that simplifies the process of designing learning scenarios for teachers with no programming background. Based on the conceptual framework of activity theory, the resulting visual EML uses domain-specific modeling techniques to provide a pedagogical level of abstraction in the design process.

Integrating Hedgerow into Town Planning: A Framework for Sustainable Residential Development

The vast rural landscape in the southern United States is conspicuously characterized by hedgerow trees and groves. The patchwork landscape of fields surrounded by high hedgerows is a traditional and familiar feature of the American countryside. Hedgerows are in effect linear strips of trees, groves, or woodlands, which are often critical habitats for wildlife and important for the visual quality of the landscape. As landscape interfaces, hedgerows define the spaces in the landscape, give the landscape life and meaning, and enrich the ecologies and cultural heritage of the American countryside. Although hedgerows were originally intended as fences and to mark property and townland boundaries, they are not merely natural or man-made additions to the landscape; they have gradually become "naturalized" into the landscape, are deeply rooted in rural culture, and now form an important component of the southern American rural environment. However, due to the ever-expanding real estate industry and high demand for new residential development, substantial areas of authentic hedgerow landscape in the southern United States are being urbanized. Using Hudson Farm as an example, this study illustrates guidelines for how hedgerows can be integrated into town planning as green infrastructure and landscape interfaces to innovate and direct sustainable land use, and suggests ways in which such vernacular landscapes can be preserved and integrated into new development without losing their contextual inspiration.

Evolution of Quality Function Deployment (QFD) via Fuzzy Concepts and Neural Networks

Quality Function Deployment (QFD) is a detailed, multi-step planning method for delivering commodities, services, and processes to customers, both external and internal to an organization. It is a way to translate between the diverse customer languages expressing demands (the Voice of the Customer) and the organization's languages expressing the results that satisfy those demands. The approach is to establish one or more matrices that inter-relate the reciprocal expectations of producer and consumer; because of its visual presentation, this structure is called the "House of Quality" (HOQ). In this paper, we cast the HOQ as a multi-attribute decision making (MADM) problem and, through a proposed MADM method, rank the technical specifications. We then compute the satisfaction degree of the customer requirements, applying fuzzy set theory to handle vagueness and uncertainty in the decision making. The approach also proposes a supervised neural network (perceptron) for solving the MADM problem.

Learning Style and Learner Satisfaction in a Course Delivery Context

This paper describes the results and implications of a correlational study of learning styles and learner satisfaction. The relationship between these empirical concepts was examined in the context of traditional versus e-blended modes of course delivery in an introductory graduate research course. Significant results indicated that the visual side of the visual-verbal dimension of students' learning style(s) was positively correlated with satisfaction with themselves as learners in the e-blended course delivery mode and negatively correlated with satisfaction with the classroom environment in the traditional classroom course delivery mode.

Selective Encryption Using ISMACryp in Real-Time Video Streaming of H.264/AVC for DVB-H Applications

The availability of multimedia information has increased dramatically with the advent of video broadcasting on handheld devices, but with this availability come problems of maintaining the security of information that is displayed in public. ISMA Encryption and Authentication (ISMACryp) is one of the chosen technologies for service protection in DVB-H (Digital Video Broadcasting - Handheld), the TV system for portable handheld devices. ISMACryp encrypts H.264/AVC (Advanced Video Coding) content while leaving all structural data as it is. Two modes of ISMACryp are available: the CTR (Counter) mode and the CBC (Cipher Block Chaining) mode. Both modes of ISMACryp are based on the 128-bit AES algorithm. The AES algorithm is relatively complex and requires a longer execution time, which is not suitable for real-time applications such as live TV. The proposed system aims to gain a deep understanding of video data security in multimedia technologies and to provide security for real-time video applications using selective encryption for H.264/AVC. Five levels of security are proposed in this paper, based on the content of the NAL units in the Constrained Baseline profile of H.264/AVC. The selective encryption at the different levels encrypts only the intra-prediction modes, the residue data, the inter-prediction modes, or the motion vectors. The experimental results show that the fifth level, which is full ISMACryp, provides the highest level of security at the cost of more encryption time, while the first level provides a lower level of security by encrypting only the motion vectors with a lower execution time, without compromising the compression or the quality of the visual content. This encryption scheme adds little cost to the compression process, keeps the file format unchanged, and supports some direct operations. Simulations were carried out in Matlab.
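The sketch below only illustrates the selective-encryption idea at the lowest proposed level, encrypting the motion-vector payload with AES-128 in CTR mode while leaving all other bytes untouched; the byte offsets, key handling, and the use of the Python cryptography package are assumptions, since the paper's simulation was done in Matlab.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_selected(payload: bytes, spans, key: bytes, nonce: bytes) -> bytes:
    """Encrypt only the byte ranges in `spans` (e.g. motion-vector data inside a
    NAL unit) with AES-128-CTR, leaving headers and other syntax elements intact."""
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    out = bytearray(payload)
    for start, end in spans:
        out[start:end] = enc.update(bytes(out[start:end]))
    enc.finalize()
    return bytes(out)

key, nonce = os.urandom(16), os.urandom(16)   # 128-bit key and counter block
nal_unit = bytes(range(64))                   # stand-in for a coded slice
mv_spans = [(20, 32), (40, 52)]               # hypothetical motion-vector ranges

protected = encrypt_selected(nal_unit, mv_spans, key, nonce)
assert protected[:20] == nal_unit[:20]        # structural bytes remain untouched
```

Because only selected syntax elements are ciphered, the bitstream structure and file format remain unchanged, which is exactly what makes the scheme compatible with low-cost, real-time handling.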

Dynamic Capitalization and Visualization Strategy in Collaborative Knowledge Management System for EI Process

Knowledge is attributed to humans, whose problem-solving behavior is subjective and complex. In today's knowledge economy, the need to manage knowledge produced by a community of actors cannot be overemphasized. This is due to the fact that actors possess some level of tacit knowledge which is generally difficult to articulate. Problem-solving requires searching and sharing of knowledge among a group of actors in a particular context. Knowledge expressed within the context of a problem resolution must be capitalized for future reuse. In this paper, an approach is proposed that permits dynamic capitalization of relevant and reliable actors' knowledge in solving decision problems following the Economic Intelligence process. A knowledge annotation method and temporal attributes are used to handle the complexity of the communication among actors and to contextualize the expressed knowledge. A prototype is built to demonstrate the functionalities of a collaborative Knowledge Management system based on this approach. It was tested with sample cases, and the results showed that dynamic capitalization leads to knowledge validation, hence increasing the reliability of captured knowledge for reuse. The system can be adapted to various domains.

A Case Study of 3D Stereoscopic Conversion in the Visual Effects Industry

This paper covers a series of key points of 2D-to-3D stereoscopic conversion and presents a stereoscopic conversion approach successfully applied in the current visual effects industry. The purpose of this paper is to describe a detailed workflow and the underlying concepts that have been used successfully in 3D stereoscopic conversion for feature films in the visual effects industry, and thereby to clarify the stereoscopic conversion production process. It aims to give entry-level artists a clearer overall understanding of 3D stereoscopy in the digital compositing field, to support higher education in visual effects, and hopefully to inspire further collaboration and participation, particularly between academia and industry.

Automatic Removal of Ocular Artifacts Using the JADE Algorithm and a Neural Network

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained using the JADE algorithm (the best-separating algorithm) and are classified as either artifact components or neural components. A neural network is used for the classification of the obtained independent components. A neural network requires input features that faithfully represent the true character of the input signals, so that it can classify the signals based on the key characteristics that differentiate them. In this work, autoregressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from EEG data: first, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and secondly, a feed-forward neural network classifier trained by the standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.
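An end-to-end sketch of this kind of pipeline is given below, with heavy caveats: FastICA stands in for the JADE algorithm (JADE is not available in scikit-learn), the data are synthetic, the component labels are hypothetical, and the AR order and network size are assumed, so this is not the authors' pipeline. It decomposes multi-channel EEG into independent components, fits AR coefficients to each component, and classifies components as ocular artifact or neural activity with a feed-forward network.

```python
import numpy as np
from sklearn.decomposition import FastICA        # stand-in for JADE
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def ar_coefficients(x, order=6):
    """Least-squares fit of an AR(order) model; returns the coefficient vector."""
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic 8-channel "EEG": a slow, blink-like source mixed with fast activity
t = np.arange(2000) / 250.0
sources = np.vstack([np.sin(2 * np.pi * 0.5 * t) * (np.sin(2 * np.pi * 0.1 * t) > 0.9),
                     *[rng.normal(size=t.size) for _ in range(7)]])
eeg = (rng.normal(size=(8, 8)) @ sources).T      # samples x channels

components = FastICA(n_components=8, random_state=0).fit_transform(eeg)
features = np.array([ar_coefficients(components[:, i]) for i in range(8)])

# Hypothetical labels (in practice assigned by an expert on training data)
labels = np.array([1, 0, 0, 0, 0, 0, 0, 0])      # 1 = ocular artifact component
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(features, labels)
print(clf.predict(features))                      # flag artifact components for removal
```

In the actual method, components flagged as ocular artifacts would be zeroed before the inverse ICA transform, reconstructing an EEG that retains the underlying brain activity.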