Toward an Efficient Framework for Designing, Developing, and Using Secure Mobile Applications

Nowadays, people are increasingly mobile, both in terms of the devices they carry and the applications running on them. Moreover, the services these devices offer are becoming broader and considerably more complex. Even though current handheld devices have considerable computing power, their contexts of use are different: they are affected by intermittent connectivity, the high latency of wireless networks, battery life, screen size, on-screen or hardware keyboards, and so on. Consequently, the development of mobile applications and any associated mobile Web services should follow a concise methodology so that they provide a high quality of service. The aim of this paper is to highlight and discuss the main issues to consider when developing mobile applications and mobile Web services, and then to propose a framework that leads developers through different steps and modules toward the development of efficient and secure mobile applications. First, the challenges in developing such applications are elicited and discussed in depth. Second, a development framework is presented, with distinct modules addressing each of these challenges. Third, the paper presents an example of a mobile application, Eivom Cinema Guide, which benefits from following our development framework.

Transformation of Vocal Characteristics: A Review of Literature

The transformation of vocal characteristics aims at modifying a voice either so that the intelligibility of an aphonic voice is increased, or so that the voice of one speaker (the source speaker) is perceived as if another speaker (the target speaker) had uttered it. In this paper, the current state of the art in voice characteristics transformation is reviewed. Special emphasis is placed on voice transformation methodology, and issues in improving the intelligibility and naturalness of the transformed speech are discussed. In particular, it is suggested to use the modulation theory of speech as a basis for research on high quality voice transformation. This approach allows one to separate the linguistic, expressive, organic and perspectival information of speech, based on an analysis of how they are fused when speech is produced. This theory therefore provides the fundamentals not only for manipulating non-linguistic, extra-/paralinguistic and intra-linguistic variables for voice transformation, but also for paving the way toward easily transposing existing voice transformation methods to emotion-related voice quality transformation and speaking style transformation. From the perspectives of human speech production and perception, the popular voice transformation techniques are described and classified according to their underlying principles, whether these are drawn from the speech production mechanism, the perception mechanism, or both. In addition, the advantages and limitations of voice transformation techniques and the experimental manipulation of vocal cues are discussed through examples from past and present research. Finally, conclusions and a road map toward more natural voice transformation algorithms are given.

The Risk Assessment of Nano-particles and Investigation of Their Environmental Impact

Nanotechnology is the science of creating, using and manipulating objects that have at least one dimension in the range of 0.1 to 100 nanometers. In other words, nanotechnology reconstructs a substance from its individual atoms, arranging them in a way that serves our purpose. The main reason nanotechnology has been attracting attention is the unique properties that objects exhibit when they are formed at the nano scale. These characteristics, which differ from those of the naturally occurring forms of the materials, are both useful for creating high quality products and dangerous when the particles come into contact with the body or spread into the environment. In order to control and lower the risk of such nano-scale particles, the following three topics should be considered: 1) These materials can cause long-term diseases whose effects may appear years after the particles have penetrated human organs, and since this science has only recently been developed at an industrial scale, not enough information is available about the hazards to the body. 2) These particles spread easily into the environment and remain in air, soil or water for a very long time, in addition to having a high ability to penetrate the skin and cause new kinds of diseases. 3) To protect the body and the environment against the danger of these particles, protective barriers must be finer than the particles themselves, and such defenses are hard to achieve. This paper reviews, discusses and assesses the risks that humans and the environment face as this new science develops at a high rate.

A Hyper-Domain Image Watermarking Method based on Macro Edge Block and Wavelet Transform for Digital Signal Processor

Watermarking is a primary approach to protecting the copyright of original digital content. However, algorithms that achieve high image quality are often too computationally complex to run on embedded systems, even though, given the rapid progress and falling price of integrated circuits, most of today's algorithms need to be deployed on consumer products. In this paper, we propose a novel algorithm that efficiently inserts a watermark into a digital image and is easy to implement on a digital signal processor. Furthermore, we select a general-purpose, inexpensive digital signal processor made by Analog Devices to fit consumer applications. The experimental results show that the watermarked image quality reaches 46 dB, which is acceptable to human vision, and that the algorithm executes in real time on the digital signal processor.
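
For reference, the 46 dB figure is a peak signal-to-noise ratio (PSNR) between the original and watermarked images; values above roughly 40 dB are generally regarded as visually transparent. A minimal sketch of the computation, assuming 8-bit grayscale images held in NumPy arrays:

```python
# Minimal PSNR computation for a pair of 8-bit grayscale images, as used to
# judge watermark imperceptibility. Array names are illustrative.
import numpy as np

def psnr(original: np.ndarray, watermarked: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    diff = original.astype(np.float64) - watermarked.astype(np.float64)
    mse = np.mean(diff ** 2)                   # mean squared error
    if mse == 0:
        return float("inf")                    # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)   # 255 = peak value of 8-bit data
```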

High Capacity Data Hiding based on Predictor and Histogram Modification

In this paper, we propose a high capacity image data hiding technique based on pixel prediction and modification of the difference histogram. The approach uses pixel prediction together with the difference histogram to locate the best embedding point; improving the predictive accuracy increases the pixel differences available and thus raises the hiding capacity. We also use histogram modification to prevent overflow and underflow. Experimental results demonstrate that, at the same average hiding capacity, our proposed method still maintains high image quality and low distortion.
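
To make the mechanism concrete, the following is a simplified sketch of reversible histogram-shifting embedding on left-neighbor prediction errors. It is illustrative only: the paper's predictor, embedding-point search, payload-length bookkeeping and overflow handling are more elaborate, and the function names and one-row scope are assumptions.

```python
# Simplified reversible embedding: shift the prediction-error histogram
# around its peak bin and hide one bit per peak-valued error.
import numpy as np

def embed_row(row: np.ndarray, bits: list[int], peak: int) -> np.ndarray:
    """Shift the left-neighbor difference histogram around `peak` and embed bits."""
    e = np.diff(row.astype(np.int64))          # prediction errors along the row
    it = iter(bits)
    for i, v in enumerate(e):
        if v > peak:
            e[i] = v + 1                       # shift bins right of the peak by one
        elif v == peak:
            e[i] = v + next(it, 0)             # hide one payload bit at the peak bin
    # Rebuild the row from the first pixel plus the modified errors.
    return np.concatenate(([int(row[0])], row[0] + np.cumsum(e)))

def extract_row(marked: np.ndarray, peak: int) -> list[int]:
    """Read the bits back; subtracting the shifts restores the original row."""
    e = np.diff(marked.astype(np.int64))
    return [int(v - peak) for v in e if v in (peak, peak + 1)]
```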

Reversible Watermarking on Stereo Image Sequences

In this paper, a new reversible watermarking method is presented that reduces the size of a stereoscopic image sequence while keeping its content visible. The proposed technique embeds the residuals of the right frames into the corresponding frames of the left sequence, halving the total storage required. The residual frames may result either from a disparity-compensated procedure between the two video streams or from joint motion and disparity compensation. The residuals are usually lossy-compressed before embedding because of the limited embedding capacity of the left frames. The watermarked frames remain visible at high quality, and at any instant the stereoscopic video may be recovered by an inverse process. In fact, the left frames may be recovered exactly, whereas the right ones are slightly distorted, as the residuals are not embedded intact. The employed embedding method reorders each left frame into an array of consecutive pixel pairs and embeds a number of bits according to their intensity difference. In this way, it hides a few bits in areas of smooth intensity and most of the data in textured areas, where the resulting distortions are less visible. The experimental evaluation demonstrates that the proposed scheme is quite effective.
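
A toy sketch of pair-based embedding in the spirit described above (not the authors' exact scheme; the capacity thresholds and helper names are invented for illustration): each non-overlapping pixel pair hides more bits when its intensity difference is large (textured area) and fewer when it is small (smooth area).

```python
# Difference-expansion style embedding for one pixel pair; overflow/underflow
# handling, which a real scheme needs, is omitted here.
def capacity(d: int) -> int:
    """Bits to hide in one pair, chosen from the difference magnitude."""
    return 1 if abs(d) < 8 else 3              # thresholds are illustrative

def embed_pair(a: int, b: int, payload: int, k: int) -> tuple[int, int]:
    """Expand the pair difference by k bits and append the payload."""
    d, m = a - b, (a + b) // 2                 # difference and integer mean
    d2 = (d << k) | payload                    # requires 0 <= payload < 2**k
    # Inverse at the decoder: d2 = a2 - b2, payload = d2 % 2**k, d = d2 >> k.
    return m + (d2 + 1) // 2, m - d2 // 2
```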

Vocal Communication in the Sooty-headed Bulbul, Pycnonotus aurigaster

Studies of vocal communication in the Sooty-headed Bulbul were carried out from January to December 2011. Vocal recordings and behavioral observations were made in the birds' natural habitats at several localities in Lampang, Thailand. After editing, cuts of high quality recordings were analyzed with the Avisoft-SASLab Pro (version 4.40) software. A repertoire of more than one thousand elements in five groups was found within two vocal structures: short sounds with a single element, and phrases composed of elements, with frequencies ranging from 1 to 10 kHz. Most phrases were composed of 2 to 5 elements that were often dissimilar in structure; however, these phrases were not as complex as song phrases. The elements and phrases were combined to form many patterns. The species used ten types of calls: alert, alarm, aggressive, begging, contact, courtship, distress, exciting, flying and invitation. Alert and contact calls were used more frequently than the others. Aggressive, alarm and distress calls could be used for interspecific communication with some other bird species in the same habitats.

Combined Simulated Annealing and Genetic Algorithm to Solve Optimization Problems

Combinatorial optimization problems arise in many scientific and practical applications, and many researchers therefore try to devise or improve methods that solve these problems with high quality results in less time. The Genetic Algorithm (GA) and Simulated Annealing (SA) have both been used to solve optimization problems. Both GA and SA search a solution space through a sequence of iterative states, but there are also significant differences between them: the GA operates in parallel on a set of solutions and exchanges information between them through the crossover operation, while SA works on a single solution at a time. In this work, SA and GA are combined using a new technique in order to overcome the disadvantages of both algorithms.
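
One common way to couple the two methods, shown here only as an illustrative sketch (the paper's specific combination technique may differ), is to let a GA evolve the population while accepting or rejecting each offspring with an SA-style Metropolis test whose temperature cools every generation. The bit-string encoding and the `fitness` callable are placeholders for the problem at hand.

```python
# Hybrid GA/SA sketch for minimizing an arbitrary fitness over bit strings.
import math
import random

def hybrid_ga_sa(fitness, n_bits=32, pop_size=30, gens=200, t0=1.0, alpha=0.95):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    temp = t0
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                  # one-point crossover
            child[random.randrange(n_bits)] ^= 1         # bit-flip mutation
            parent = min(p1, p2, key=fitness)
            delta = fitness(child) - fitness(parent)
            # SA acceptance: always keep improvements, sometimes keep worse ones.
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                nxt.append(child)
            else:
                nxt.append(parent)
        pop = nxt
        temp *= alpha                                    # geometric cooling schedule
    return min(pop, key=fitness)
```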

The Influence of using Compost Leachate on Soil Reaction

In areas where high quality water is not available, unconventional water sources are used for irrigation. Household leachate is one such source, used in arid and semi-arid areas to water trees and plants; it meets the plants' needs and also has some effects on the soil, but at the same time it may cause problems as well. This study was conducted to evaluate the effect of using compost leachate on the concentration of iron in the soil, following a split-plot statistical design with two main treatments, one subsidiary treatment and three replications over a three-month period. The main N treatment was irrigation with well water, serving as the control, and the main I treatment was concurrent irrigation with leachate and well water. The subsidiary treatments were DI (drip irrigation) and SDI (sub-surface drip irrigation). In the established plots, 36 two-year-old pine and cypress shrubs were then planted at random, and the treatments began two months later. The results revealed a significant difference between the main treatment and the control with respect to the decline of soil pH, which was related to the amount of leachate injected into the soil: with continued use of leachate the pH level fell by as much as 0.46, the drop increasing with larger amounts of leachate. Drip irrigation gave better results than sub-surface drip irrigation, since it keeps the soil texture intact.

Detecting Interactions between Behavioral Requirements with OWL and SWRL

High quality requirements analysis is one of the most crucial activities in ensuring the success of a software project, and requirements verification has accordingly become more and more important in Requirements Engineering (RE) as one of the most helpful strategies for improving the quality of software systems. Related work shows that requirements elicitation and analysis can be facilitated by ontological approaches and semantic web technologies. In this paper, we propose a hybrid method that verifies requirements with structural and formal semantics in order to detect interactions. The proposed method is twofold: one part models requirements with the semantic web language OWL to construct a semantic context; the other is a set of interaction detection rules derived from scenario-based analysis and represented in the Semantic Web Rule Language (SWRL). The SWRL rules work with rule engines such as Jess to reason over the semantic context of the requirements and thus detect interactions. The benefits of the proposed method lie in three aspects: the method (i) provides systematic steps for modeling requirements with an ontological approach, (ii) offers a synergy of requirements elicitation and domain engineering for knowledge sharing, and (iii) supplies rules that can systematically assist in requirements interaction detection.
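
As an illustration of the second part of the method: the paper pairs SWRL with the Jess engine, but for a self-contained sketch the example below instead encodes a single, hypothetical interaction-detection rule with the owlready2 Python library. The ontology IRI, classes and properties are invented for the example and are not taken from the paper.

```python
# Hypothetical requirements ontology plus one SWRL rule: two requirements
# that lock the same resource are inferred to conflict with each other.
from owlready2 import get_ontology, Thing, ObjectProperty, Imp

onto = get_ontology("http://example.org/requirements.owl")  # invented IRI

with onto:
    class Requirement(Thing): pass
    class Resource(Thing): pass

    class locks(ObjectProperty):          # requirement takes exclusive hold of a resource
        domain = [Requirement]
        range  = [Resource]

    class conflictsWith(ObjectProperty):  # the interaction we want to infer
        domain = [Requirement]
        range  = [Requirement]

    rule = Imp()                          # a real rule set would also exclude r1 = r2
    rule.set_as_rule(
        "Requirement(?r1), Requirement(?r2), locks(?r1, ?x), locks(?r2, ?x)"
        " -> conflictsWith(?r1, ?r2)"
    )
# Running a reasoner (e.g. owlready2's sync_reasoner_pellet) would then
# materialize the conflictsWith pairs for inspection.
```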

Development Prospects of the Education System in Modernization

The article analyzes the development prospects of the education system in Kazakhstan. Education is among the key sources of culture and social mobility. Modern education must become civic, which means the availability of high quality education to all people irrespective of racial, ethnic, religious, social, gender or any other differences. The socially focused nature of the modernization of Kazakhstan's society is predicated upon the formation of a civic education model in the future. Kazakhstan's education system is undergoing intensive reforms intended first of all to achieve international education standards and integration into the global educational and information space.

A Scatter Search and Help Policies Approach for a New Mixed-Model Assembly Line Sequencing Problem

Mixed model production is the practice of assembling several distinct models of a product on the same assembly line without changeovers, and then sequencing those models in a way that smooths the demand for upstream components. In this paper, we consider an objective function that minimizes total stoppage time and total idle time in the presence of sequence-dependent setup times. Many studies have been carried out on mixed model assembly lines, but here we focus specifically on reducing the idle times, which is possible through various help policies. To improve the solutions, several cases were developed and about 40 test problems were considered. We use scatter search for the optimization, and the experimental results show the behavior and efficiency of our algorithm: scatter search combined with help policies can produce high quality answers, which is why it is used in this paper.
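
As a rough illustration of the objective (not the paper's exact formulation), the sketch below scores one sequence on a paced single-station line: with launch interval c, work that overruns the interval is counted as stoppage and slack is counted as idle time, including the sequence-dependent setup incurred between consecutive models. All names and numbers are invented.

```python
# Toy evaluation of total idle + stoppage time for one model sequence.
def evaluate(seq, proc, setup, c):
    idle = stop = 0.0
    for prev, cur in zip(seq, seq[1:]):
        load = proc[cur] + setup[prev][cur]   # work content plus changeover
        if load > c:
            stop += load - c                  # line stops to finish the unit
        else:
            idle += c - load                  # operator waits for the next launch
    return idle + stop                        # objective: total idle + stoppage

# Example: two models, symmetric setups, launch interval of 10 time units.
proc = {"A": 8.0, "B": 11.0}
setup = {"A": {"A": 0.0, "B": 1.5}, "B": {"A": 1.5, "B": 0.0}}
print(evaluate(["A", "B", "A", "B"], proc, setup, c=10.0))
```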

Micro-Penetrator for Canadian Planetary Exploration

Space exploration is a highly visible endeavour of humankind to seek profound answers to questions about the origins of our solar system, whether life exists beyond Earth, and how we could live on other worlds. Different platforms have been utilized in planetary exploration missions, such as orbiters, landers, rovers, and penetrators. Having low mass, good mechanical contact with the surface, the ability to acquire high quality scientific subsurface data, and the ability to be deployed in areas that may not be suitable for landers or rovers, penetrators provide an alternative and complementary solution that makes possible the scientific exploration of sites that are otherwise hard to access (icy areas, gully sites, highlands, etc.). The Canadian Space Agency (CSA) has made space exploration one of the pillars of its space program, and has established the ExCo program to prepare Canada for future international planetary exploration. ExCo sets surface mobility as its focus and priority, and invests mainly in the development of rovers because of Canada's niche space robotics technology. Meanwhile, CSA is also investigating how micro-penetrators can help Canada fulfill its scientific objectives for planetary exploration. This paper presents a review of micro-penetrator technologies, past missions, and lessons learned. It gives a detailed analysis of the technical challenges of micro-penetrators, such as high impact survivability, high precision guidance, navigation and control, thermal protection, and communications. A Canadian perspective on a possible micro-penetrator mission is then given, including Canadian scientific objectives and priorities, potential instruments, and flight opportunities.

Towards a Suitable and Systematic Approach for Component Based Software Development

The software crisis refers to the situation in which developers are not able to complete projects within time and budget constraints, and in which these over-schedule, over-budget projects are moreover of low quality. Several methodologies have been adopted from time to time to overcome this situation, and the current focus is component based software engineering, an approach whose emphasis is on the reuse of already existing software artifacts. But the desired results cannot be achieved just by preaching the principles; they need to be practiced as well. This paper highlights some of the very basic elements of this approach which have to be in place to attain the desired goals of high quality, low cost and shorter time-to-market software products.

Design Histories for Enhanced Concurrent Structural Design

The leisure boatbuilding industry has tight profit margins that demand that boats be built to a high quality but at low cost. This requirement means that reduced design times, combined with increased use of design-for-production, can lead to large benefits. The evolutionary nature of the boatbuilding industry leads to extensive reuse of previous vessels in new designs. With the increase in automated tools for concurrent engineering within structural design, it is important that these tools can reuse this information and subsequently feed it to designers. The ability to accurately gather materials and parts data is also a key component of these tools. This paper therefore aims to develop an architecture made up of neural networks and databases to feed information effectively to designers based on previous design experience.

Minimization of Non-Productive Time during 2.5D Milling

In modern manufacturing systems, thermal cutting techniques using oxyfuel, plasma and laser have become indispensable for the shaping of high quality complex components; however, conventional chip-removal production techniques still have their own widespread place in the manufacturing industry. Both types of machining operation require positioning the end-effector tool at the edge where the cutting process commences. This repositioning of the cutting tool is repeated several times in every machining operation and is termed non-productive time or airtime motion. Minimizing this non-productive machining time plays an important role in mass production with high speed machining. Since the tool moves from one region to another by rapid traverse and visits a particular region only once in the whole operation, the non-productive time can be minimized by synchronizing the tool movements. In this work, the problem is formulated as a general travelling salesman problem (TSP) and a genetic algorithm (GA) approach is applied to solve it. To improve the efficiency of the algorithm, the GA has been hybridized with a novel special-purpose heuristic, developed here for synchronizing toolpath movements during repositioning of the tool, and with simulated annealing (SA). A comparative analysis of the new metaheuristic techniques against the simple genetic algorithm has been performed. The proposed metaheuristic approach shows better performance than the simple genetic algorithm in minimizing the non-productive toolpath length, and the results obtained with the hybrid simulated annealing genetic algorithm (HSAGA) are also found to be better than those obtained using the simple genetic algorithm alone.
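
For concreteness, the sketch below shows two TSP building blocks such a GA needs, with invented names and no claim to match the authors' operators: a tour-length objective over 2-D tool repositioning points, and order crossover, a common permutation crossover that recombines tours without duplicating cities.

```python
# TSP helpers for a GA over toolpath repositioning points.
import math
import random

def tour_length(tour, pts):
    """Total rapid-traverse (airtime) length of a closed tour over 2-D points."""
    return sum(math.dist(pts[a], pts[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def order_crossover(p1, p2):
    """OX variant: copy a slice from p1, fill the remaining slots in p2's order."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]                       # inherit a contiguous slice from p1
    fill = iter(c for c in p2 if c not in p1[i:j])
    for idx in list(range(0, i)) + list(range(j, n)):
        child[idx] = next(fill)                # preserve p2's relative order elsewhere
    return child
```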

High Quality Speech Coding using Combined Parametric and Perceptual Modules

A novel approach to speech coding using a hybrid architecture is presented. The advantages of parametric and perceptual coding methods are exploited together in order to create a speech coding algorithm that assures better signal quality than a traditional CELP parametric codec. Two approaches are discussed. The first is based on the selection of voiced signal components, which are encoded using the parametric algorithm, unvoiced components, which are encoded perceptually, and transients, which remain unencoded. The second approach uses perceptual encoding of the residual signal in a CELP codec. The algorithm applied for precise transient selection is described. The signal quality achieved using the proposed hybrid codec is compared to the quality of some standard speech codecs.
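
A crude stand-in for the component selection that the hybrid architecture implies (illustrative only; the paper's transient selection algorithm is more precise, and the frame size and thresholds here are invented): frames with a sharp energy jump are flagged as transients, high zero-crossing-rate frames as unvoiced, and the rest as voiced.

```python
# Naive frame classifier using short-time energy and zero-crossing rate.
import numpy as np

def classify_frames(x: np.ndarray, frame: int = 320):
    """Label fixed-length frames of a speech signal as voiced/unvoiced/transient."""
    x = x.astype(np.float64)
    labels, prev_e = [], None
    for i in range(0, len(x) - frame + 1, frame):
        f = x[i:i + frame]
        e = float(np.mean(f ** 2))                             # short-time energy
        zcr = float(np.mean(np.abs(np.diff(np.sign(f))) > 0))  # zero-crossing rate
        if prev_e is not None and e > 8.0 * prev_e:
            labels.append("transient")     # sharp energy onset
        elif zcr > 0.3:
            labels.append("unvoiced")      # noise-like, many zero crossings
        else:
            labels.append("voiced")
        prev_e = e
    return labels
```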

Rheological Modeling for Production of High Quality Polymeric Articles

The fundamental defect inherent in thermoforming technology is wall-thickness variation of the products, caused by inadequate thermal processing during production of the polymer sheet. A nonlinear viscoelastic rheological model is implemented to develop the process model, which describes the deformation of a sheet during the thermoforming process. Owing to a relaxation pause after the plug-assist stage, and to the implementation of a two-stage thermoforming process, the polymeric articles exhibit less wall-thickness variation and consequently better mechanical properties. For model validation, a comparative analysis of the theoretical and experimental data is presented.
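
The abstract does not name the constitutive law used; purely as an illustration, a widely used nonlinear viscoelastic form in thermoforming simulation is the single-integral K-BKZ/Wagner model, which may or may not be the model the authors implemented:

```latex
% Illustrative only: the K-BKZ/Wagner single-integral model, a common
% nonlinear viscoelastic law in thermoforming simulation (not necessarily
% the authors' exact model).
\sigma(t) = \int_{-\infty}^{t} m(t - t')\, h(I_1, I_2)\,
            \mathbf{C}_t^{-1}(t')\, \mathrm{d}t',
\qquad
m(s) = \sum_{k} \frac{g_k}{\lambda_k}\, e^{-s/\lambda_k}
```

Here C_t^{-1} is the Finger strain tensor, h a damping function of its invariants I_1 and I_2, and {g_k, lambda_k} the discrete relaxation spectrum of the polymer melt.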