Fairness and Quality of Service Issues and Analysis of IEEE 802.11e Wireless LAN

IEEE 802.11e, an enhanced version of the legacy 802.11 WLAN standard, incorporates Quality of Service (QoS) support, which makes it a better choice for multimedia and real-time applications. In this paper we study various aspects of the 802.11e standard and compare the analysis results with those of the legacy 802.11 standard. Simulation results show that IEEE 802.11e outperforms legacy IEEE 802.11 in terms of quality of service, owing to its flow-differentiated channel allocation and better queue management architecture. We also propose a method to mitigate the unfair allocation of bandwidth between the downlink and uplink channels by varying the medium access priority level.
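As a hedged illustration of the priority mechanism involved, the following Python sketch (not taken from the paper) encodes the standard EDCA parameters per access category and a hypothetical adjustment that shrinks the access point's contention window, so that downlink frames win the medium more often than uplink frames of the same class, which is the spirit of the proposed rebalancing.

```python
# Illustrative sketch: default 802.11e EDCA parameters per access category,
# plus a hypothetical tweak that raises the AP's (downlink) priority.
import random

EDCA = {                     # (AIFSN, CWmin, CWmax) per access category
    "AC_VO": (2, 3, 7),      # voice
    "AC_VI": (2, 7, 15),     # video
    "AC_BE": (3, 15, 1023),  # best effort
    "AC_BK": (7, 15, 1023),  # background
}

def backoff_slots(ac, params=EDCA):
    """Draw a random backoff (in slots) for one contention attempt."""
    aifsn, cwmin, _ = params[ac]
    return aifsn + random.randint(0, cwmin)

# Hypothetical downlink boost: give the AP a smaller AIFSN/CWmin for AC_BE so
# the downlink wins the medium more often than the stations' uplink AC_BE.
AP_EDCA = dict(EDCA, AC_BE=(2, 7, 1023))

ap = [backoff_slots("AC_BE", AP_EDCA) for _ in range(10000)]
sta = [backoff_slots("AC_BE") for _ in range(10000)]
print(sum(ap) / len(ap), "<", sum(sta) / len(sta))  # AP waits fewer slots
```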

The Advent of Electronic Logbook Technology - Reducing Cost and Risk to Both Marine Resources and the Fishing Industry

Fisheries management all around the world is hampered by the lack, or poor quality, of critical data on fish resources and fishing operations. The main reasons for the chronic inability to collect good quality data during fishing operations are the culture of secrecy common among fishers and the lack of modern data-gathering technology onboard most fishing vessels. In response, OLRAC-SPS, a South African company, developed fisheries datalogging software (eLog in short) and named it Olrac. The Olrac eLog solution is capable of collecting, analysing, plotting, mapping, reporting, tracing and transmitting all data related to fishing operations. Olrac can be used by skippers, fleet/company managers, offshore mariculture farmers, scientists, observers, compliance inspectors and fisheries management authorities. The authors believe that using eLog onboard fishing vessels has the potential to revolutionise the entire process of data collection and reporting during fishing operations and, if properly deployed and utilised, could transform the entire commercial fleet into a provider of good quality data and forever change the way fish resources are managed. In addition, it will make it possible to trace catches back to the individual fishing operation, to improve fishing efficiency and to dramatically improve control of fishing operations and enforcement of fishing regulations.

Ghazal Ozon River and Preserving Its Aquatic Life While Constructing the Siazakh Dam

The main purpose of a dam is to control surface streams and rivers across the country. Dam construction and the formation of large water reservoirs in a valley are major events that change the surrounding area considerably. In fact, when a dam is constructed the valley is closed off and fish can no longer migrate between upstream and downstream, which ultimately leads to their death. To resolve this, it is necessary to create a passage for fish during the construction of the dam. Establishing a set of stepped pools overlooking each other, known as a fishway or fish ladder, provides a proper pathway for fish movement. In this article we first examine the surrounding environment, and then the Ghazal Ozon River and the preservation of its aquatic life.

Theoretical Considerations for Software Component Metrics

We have defined two suites of metrics covering the static and dynamic aspects of component assembly. The static metrics measure the complexity and criticality of component assembly, where complexity is measured using the Component Packing Density and Component Interaction Density metrics. Further, four criticality conditions, namely Link, Bridge, Inheritance and Size criticality, have been identified and quantified. The complexity and criticality metrics are combined to form a Triangular Metric, which can be used to classify the type and nature of applications. Dynamic metrics are collected during the runtime of a complete application; they are useful for identifying super-components and for evaluating the degree of utilisation of various components. In this paper both the static and dynamic metrics are evaluated using Weyuker's set of properties. The results show that the metrics provide a valid means of measuring issues in component assembly. We also relate our metrics suite to McCall's Quality Model and illustrate its impact on product quality and on the management of component-based product development.
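As a minimal sketch under the commonly cited definitions (assumed here, not quoted from the paper), Component Packing Density is the ratio of constituents to components and Component Interaction Density is the fraction of available interactions actually used:

```python
# Sketch of the two static density metrics under assumed definitions:
# CPD = constituents per component; CID = actual / available interactions.

def packing_density(constituent_counts):
    """CPD: total constituents (e.g. classes, LOC) over number of components."""
    return sum(constituent_counts) / len(constituent_counts)

def interaction_density(actual_interactions, available_interactions):
    """CID: fraction of the available component interactions actually used."""
    return actual_interactions / available_interactions

# Example assembly: 4 components with 12, 7, 30 and 5 classes each,
# using 9 of the 20 interactions their interfaces make available.
print(packing_density([12, 7, 30, 5]))   # 13.5 constituents per component
print(interaction_density(9, 20))        # 0.45
```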

Confronting the Uncertainty of Systemic Innovation in Public Welfare Services

Faced with social and health system capacity constraints and rising and changing demand for welfare services, governments and welfare providers are increasingly relying on innovation to help support and enhance services. However, the evidence reported by several studies indicates that realizing that potential is not an easy task. Innovations can be deemed inherently complex to implement and operate, because many of them involve a combination of technological and organizational renewal within an environment featuring a diversity of stakeholders. Many public welfare service innovations are markedly systemic in nature, which means that they emerge from, and must address, the complex interplay between political, administrative, technological, institutional and legal issues. This paper suggests that stakeholders dealing with systemic innovation in welfare services must handle ambiguous and incomplete information under conditions of uncertainty. Employing a literature review and a case study, this paper identifies, categorizes and discusses different aspects of the uncertainty of systemic innovation in public welfare services, and argues that uncertainty can be classified into eight categories: technological uncertainty, market uncertainty, regulatory/institutional uncertainty, social/political uncertainty, acceptance/legitimacy uncertainty, managerial uncertainty, timing uncertainty and consequence uncertainty.

Expressive Modes and Species of Language

Computer languages are usually lumped together into broad "paradigms", leaving us in want of a finer classification of kinds of language. Theories distinguishing between "genuine differences" in language have been called for, and we propose that such differences can be observed through a notion of expressive mode. We outline this concept, propose how it could be operationalized, and indicate a possible context for the development of a corresponding theory. Finally, we consider a possible application in connection with the evaluation of language revision. We illustrate this with a case study investigating possible revisions of the relational algebra in order to overcome weaknesses of the division operator in connection with universal queries.
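For readers unfamiliar with the division operator, the following sketch (ours, not the paper's) expresses relational division over Python sets for the classic universal query "which students passed all required courses":

```python
# Illustrative sketch: relational division over Python sets for a universal
# query. The paper discusses revising this operator; shown here is only the
# baseline behavior it seeks to improve on.

passed = {("ann", "db"), ("ann", "os"), ("bob", "db")}   # Passed(student, course)
required = {"db", "os"}                                  # Required(course)

def divide(r, s):
    """r(A, B) / s(B): all A related to every B in s."""
    candidates = {a for (a, _) in r}
    return {a for a in candidates if all((a, b) in r for b in s)}

print(divide(passed, required))  # {'ann'} -- bob lacks 'os'
```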

Radiation Damage as Nonlinear Evolution of Complex System

Irradiated material is a typical example of a complex system with nonlinear coupling between its elements. During irradiation, radiation damage develops, and this development exhibits bifurcations and qualitatively different kinds of behavior. The accumulation of primary defects in irradiated crystals is considered within the framework of the nonlinear evolution of a complex system. Thermo-concentration nonlinear feedback is identified as the mechanism behind the development of self-oscillations. It is shown that there are two regimes of defect density evolution under stationary irradiation. In the first, for some system parameters, defects simply accumulate: the defect density grows monotonically and tends to its stationary value. In the second, which occurs for suitable parameters, self-oscillations of the defect density develop. The stationary state, its stability and its type are found, and the bifurcation values of the parameters (environment temperature, defect generation rate, etc.) are obtained. The frequency of the self-oscillations and the conditions for their development are found and estimated. It is shown that the defect density, heat fluxes and temperature during self-oscillations can reach much higher values than the expected steady-state values, which can lead to a departure from normal operation and to accidents, e.g. in nuclear equipment.
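The following is a deliberately simplified model of the thermo-concentration feedback (the parameter values and exact equations are illustrative assumptions, not the paper's): defects accumulate at a constant rate and anneal at a temperature-activated rate, while annealing releases heat that accelerates further annealing. Depending on the parameters, the numerical solution either settles to the stationary state or develops self-oscillations:

```python
# Illustrative thermo-concentration feedback model (not the paper's exact
# equations): n = defect density, T = temperature, all in arbitrary units.
import numpy as np
from scipy.integrate import solve_ivp

K, E, q, h, C, T_env = 1.0, 8.0, 50.0, 1.0, 1.0, 1.0  # assumed parameters

def rhs(t, y):
    n, T = y
    anneal = n * np.exp(-E / T)                  # activated defect annihilation
    return [K - anneal,                          # defect balance
            (q * anneal - h * (T - T_env)) / C]  # heat balance

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, T_env], max_step=0.05)
print("final n, T:", sol.y[0][-1], sol.y[1][-1])
```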

Artificial Neural Networks for Identification and Control of a Lab-Scale Distillation Column Using LABVIEW

LABVIEW is a graphical programming language that has its roots in automation control and data acquisition. In this paper we utilize this platform to provide a powerful toolset for the identification and control of nonlinear systems based on artificial neural networks (ANNs). The tool has been applied to the monitoring and control of a lab-scale distillation column, the DELTALAB DC-SP. The proposed control scheme offers a fast response to set-point changes and zero steady-state error for dual composition control, and shows robustness in the presence of externally imposed disturbances.
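Since the paper's implementation is a graphical LABVIEW toolset, the following NumPy sketch only illustrates the underlying identification idea on a synthetic plant: a small NARX-style network with one tanh hidden layer is trained by gradient descent to predict the next output from delayed input/output samples. All names and values here are illustrative:

```python
# Illustrative ANN identification of a synthetic nonlinear plant (stand-in
# for the distillation column); trained by plain batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)                       # input (e.g. reflux ratio)
y = np.zeros(501)
for k in range(500):                              # unknown "plant" to identify
    y[k + 1] = 0.6 * y[k] + 0.3 * np.tanh(2 * u[k])

X = np.column_stack([y[:-1], u])                  # regressors [y(k), u(k)]
t = y[1:]                                         # target y(k+1)
W1, b1 = rng.normal(0, 0.5, (8, 2)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, 8), 0.0

lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1.T + b1)                    # hidden layer
    pred = H @ W2 + b2                            # predicted next output
    err = pred - t
    gW2 = H.T @ err / len(t); gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H**2)           # backprop through tanh
    gW1 = dH.T @ X / len(t); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("RMS identification error:", np.sqrt(np.mean(err**2)))
```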

A Hybrid Search Algorithm for Solving Constraint Satisfaction Problems

In this paper we present a hybrid search algorithm for solving constraint satisfaction and optimization problems. The algorithm combines ideas from the two basic approaches, complete and incomplete algorithms, also known as systematic search and local search. The different characteristics of systematic search and local search methods are complementary, so we have tried to capture the advantages of both approaches in the presented algorithm. Its major advantage is finding a partial, sound solution for complicated problems whose complete solution cannot be found in a reasonable time. The algorithm's results are compared with those of other algorithms using the well-known n-queens problem.
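As one hedged way to realize such a hybrid (not necessarily the paper's exact scheme), the sketch below runs bounded backtracking on n-queens and, if the node budget runs out, hands the deepest consistent partial assignment to a min-conflicts local search as its starting point:

```python
# Illustrative hybrid of systematic and local search on n-queens.
import random

def consistent(q, col, row):
    return all(r != row and abs(r - row) != col - c
               for c, r in enumerate(q[:col]))

def backtrack(n, budget):
    """Bounded DFS; returns a complete or deepest consistent partial solution."""
    best, stack = [], [[]]
    while stack and budget > 0:
        q = stack.pop(); budget -= 1
        if len(q) > len(best):
            best = q
        if len(q) == n:
            return q
        stack += [q + [r] for r in range(n) if consistent(q, len(q), r)]
    return best                        # sound partial solution

def conflicts(q, col, row):
    return sum(r == row or abs(r - row) == abs(col - c)
               for c, r in enumerate(q) if c != col)

def min_conflicts(q, n, steps=10000):
    """Local search seeded with the systematic search's partial result."""
    q = q + [random.randrange(n) for _ in range(n - len(q))]
    for _ in range(steps):
        bad = [c for c in range(n) if conflicts(q, c, q[c]) > 0]
        if not bad:
            return q                   # complete solution found
        c = random.choice(bad)
        q[c] = min(range(n), key=lambda r: conflicts(q, c, r))
    return q                           # best-effort assignment

partial = backtrack(32, budget=2000)
print("systematic phase placed", len(partial), "queens")
print(min_conflicts(partial, 32))
```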

An Improved Method to Watermark Images Sensitive to Blocking Artifacts

A new digital watermarking technique for images that are sensitive to blocking artifacts is presented. Experimental results show that the proposed approach, based on the modified discrete cosine transform (MDCT), produces highly imperceptible watermarked images and is robust to attacks such as compression, noise, filtering and geometric transformations. The proposed MDCT watermarking technique is applied to fingerprints to ensure security, with the face image and demographic text data of an individual used as multiple watermarks. An automated fingerprint identification system (AFIS) was used to quantitatively evaluate the matching performance of the MDCT-watermarked fingerprints. The high fingerprint matching scores show that the MDCT approach is resilient to blocking artifacts. The quality of the extracted face and text images was computed using two human visual system metrics, and the results show that the image quality was high.
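To make the embedding idea concrete, here is a hedged transform-domain sketch; SciPy ships no MDCT routine, so the plain DCT-II stands in for it, and the signal, frequency band and embedding strength are illustrative choices rather than the paper's:

```python
# Illustrative transform-domain additive watermarking on one image row,
# with DCT-II standing in for the paper's MDCT.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
row = rng.uniform(0, 255, 256)            # one line of a host image
wm = rng.choice([-1.0, 1.0], 64)          # +/-1 watermark bits

C = dct(row, norm="ortho")
alpha = 2.0                               # embedding strength (assumed)
C[32:96] += alpha * wm                    # spread bits over mid frequencies
marked = idct(C, norm="ortho")

# Non-blind extraction: compare marked vs. original coefficients.
diff = dct(marked, norm="ortho")[32:96] - dct(row, norm="ortho")[32:96]
print("bits recovered:", np.all(np.sign(diff) == wm))
```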

Designing a Framework for Network Security Protection

As the Internet continues to grow rapidly as the primary medium for communications and commerce, and as telecommunication networks and systems continue to expand their global reach, digital information has become the most popular and important information resource, and our dependence on the underlying cyber infrastructure has increased significantly. Unfortunately, as our dependency has grown, so has the threat to the cyber infrastructure from spammers, attackers and criminal enterprises. In this paper, we propose a new machine-learning-based network intrusion detection framework for cyber security. The detection process of the framework consists of two stages: model construction and intrusion detection. In the model construction stage, a semi-supervised machine learning algorithm is applied to a collected set of network audit data to generate a profile of normal network behavior. In the intrusion detection stage, input network events are analyzed and compared with the patterns gathered in the profile, and events are flagged as anomalies if they are sufficiently far from the expected normal behavior. The proposed framework is particularly applicable to situations where only a small amount of labeled network training data is available, which is typical in real-world network environments.
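A minimal sketch of the two stages, under strong simplifying assumptions (the "profile" is just a mean and covariance, and the score is a Mahalanobis distance; the paper's semi-supervised learner is richer):

```python
# Illustrative two-stage anomaly detection: build a normal profile, then
# flag events that are far from it. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
normal_audit = rng.normal(0, 1, (5000, 4))        # stage 1: collected audit data

mu = normal_audit.mean(axis=0)                    # model construction
prec = np.linalg.inv(np.cov(normal_audit.T))

def anomaly_score(event):
    d = event - mu
    return float(d @ prec @ d)                    # squared Mahalanobis distance

threshold = np.quantile([anomaly_score(e) for e in normal_audit], 0.99)

probe = np.array([6.0, 0.0, -5.5, 4.0])           # stage 2: incoming event
print("anomaly" if anomaly_score(probe) > threshold else "normal")
```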

Comparison between Higher-Order SVD and Third-Order Orthogonal Tensor Product Expansion

In digital signal processing it is important to approximate multi-dimensional data by rank reduction, in which the rank of multi-dimensional data is reduced from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, an outer product expansion extending SVD to multi-dimensional data was proposed and implemented, and has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion is computationally expensive and lacks orthogonality between the expansion terms. We therefore proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion (3-OTPE), which uses the power method instead of a nonlinear optimization method to reduce the computing time. Around the same time, the group of L. De Lathauwer proposed the Higher-Order SVD (HOSVD), which is likewise developed as an extension of SVD to multi-dimensional data. 3-OTPE and HOSVD take similar approaches to the rank reduction of multi-dimensional data, and the results obtained with the two methods are partly identical and partly slightly different. In this paper, we compare 3-OTPE with HOSVD in terms of calculation accuracy and computing time, and clarify the difference between the two methods.
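For reference, a compact NumPy sketch of HOSVD as commonly formulated (3-OTPE itself is not reproduced here): factor matrices come from SVDs of the three mode unfoldings, and truncating them yields the rank reduction both methods target:

```python
# Illustrative HOSVD of a third-order tensor via mode unfoldings.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from mode-k SVDs, core by projection."""
    U = [np.linalg.svd(unfold(T, k))[0][:, :r] for k, r in enumerate(ranks)]
    core = np.einsum("ijk,ia,jb,kc->abc", T, U[0], U[1], U[2])
    return core, U

T = np.random.default_rng(0).normal(size=(6, 7, 8))
core, U = hosvd(T, (3, 3, 3))
approx = np.einsum("abc,ia,jb,kc->ijk", core, U[0], U[1], U[2])
print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))
```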

GPT Onto: A New Beginning for Malaysia Gross Pollutant Trap Ontology

Ontology is widely used as a tool for organizing information and creating relations between subjects within a defined knowledge domain. Various fields such as civil engineering, biology and management have successfully integrated ontologies into decision support systems for managing domain knowledge and assisting their decision makers. Gross pollutant traps (GPTs) are devices used to trap large items or hazardous particles and prevent them from entering and polluting our waterways. However, choosing a suitable GPT is a challenge in Malaysia, as too few GPT data repositories are captured and shared. Hence an ontology is needed to capture, organize and represent this knowledge as meaningful information that can contribute to the efficiency of GPT selection in Malaysian urbanization. A GPT ontology framework is therefore built as the first step towards capturing GPT knowledge, which will then be integrated into a decision support system. This paper provides several examples from the GPT ontology and explains how it is constructed using the Protégé tool.
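Purely as an illustration of the kind of class/property structure such an ontology captures, the sketch below uses the owlready2 Python library instead of Protégé; all class, property and instance names are invented examples, not the actual GPT Onto vocabulary:

```python
# Illustrative ontology fragment with owlready2; names are hypothetical.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/gpt.owl")

with onto:
    class GrossPollutantTrap(Thing): pass
    class Pollutant(Thing): pass

    class traps(ObjectProperty):          # GPT --traps--> Pollutant
        domain = [GrossPollutantTrap]
        range = [Pollutant]

    class LitterBasket(GrossPollutantTrap): pass  # a specific device type

basket = LitterBasket("basket_01")
basket.traps = [Pollutant("plastic_bottle")]
onto.save(file="gpt.owl")                 # exportable to Protégé as OWL
```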

Semi-Automatic Approach for Semantic Annotation

The third phase of the web, the Semantic Web, requires many web pages to be annotated with metadata, so a crucial question is where to acquire these metadata. In this paper we propose a semi-automatic method that annotates the text of documents and web pages, employing a fairly comprehensive knowledge base to categorize instances with regard to an ontology. The approach is evaluated against manual annotations and against one of the most popular annotation tools, which works in the same way as ours. The approach is implemented in the .NET framework and uses WordNet as its knowledge base, yielding an annotation tool for the Semantic Web.
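A hedged sketch of the knowledge-base lookup step, using NLTK's WordNet interface: a noun found in a document is categorized by walking up its hypernym chains to a small set of assumed ontology classes (the paper's own pipeline and categories are more elaborate):

```python
# Illustrative WordNet-based categorization of a term against assumed
# top-level ontology classes.
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

TOP_CLASSES = {"person", "animal", "location", "artifact", "organization"}

def categorize(term):
    """Return the first top-level class found on any hypernym path."""
    for syn in wn.synsets(term, pos=wn.NOUN):
        for path in syn.hypernym_paths():
            for ancestor in path:
                for lemma in ancestor.lemma_names():
                    if lemma in TOP_CLASSES:
                        return lemma
    return None

print(categorize("salmon"))    # e.g. -> animal
print(categorize("hammer"))    # e.g. -> artifact
```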

Similarity Detection in Collaborative Development of Object-Oriented Formal Specifications

The complexity of today's software systems makes collaborative development necessary to accomplish tasks, and frameworks are needed to allow developers to perform their tasks independently yet collaboratively. Similarity detection is one of the major issues to consider when developing such frameworks: it allows developers to mine existing repositories when developing their own views of a software artifact, and it is necessary for identifying the correspondences between views so that they can be merged and checked for consistency. Given the importance of the requirements specification stage in software development, this paper proposes a framework for the collaborative development of object-oriented formal specifications, along with a similarity detection approach to support the creation, merging and consistency checking of specifications. The paper also explores the impact of using additional concepts on improving the matching results. Finally, the proposed approach is empirically evaluated.
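One simple ingredient of similarity detection can be sketched as lexical matching between the element names of two developers' views; the threshold and names below are illustrative, not the paper's actual matching procedure:

```python
# Illustrative lexical matching of elements across two specification views.
from difflib import SequenceMatcher

view_a = ["customerName", "accountBalance", "openAccount"]
view_b = ["custName", "balance", "openAcct", "closeAcct"]

def similarity(x, y):
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

for a in view_a:
    b, score = max(((b, similarity(a, b)) for b in view_b), key=lambda p: p[1])
    if score >= 0.5:                       # candidate correspondence
        print(f"{a} <-> {b} ({score:.2f})")
```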

A Simplified Adaptive Decision Feedback Equalization Technique for π/4-DQPSK Signals

We present a simplified equalization technique for a π/4 differential quadrature phase shift keying (π/4-DQPSK) modulated signal in a multipath fading environment. The proposed equalizer is realized as a fractionally spaced adaptive decision feedback equalizer (FS-ADFE), employing an exponential step-size least mean square (LMS) algorithm as the adaptation technique. The main advantage of the scheme stems from the use of the exponential step-size LMS algorithm, which achieves convergence behavior similar to that of a recursive least squares (RLS) algorithm at significantly reduced computational complexity. To investigate the finite-precision performance of the proposed equalizer together with the π/4-DQPSK modem, the entire system is evaluated in a 16-bit fixed-point digital signal processor (DSP) environment. The proposed scheme is found to be attractive even when equalization must be performed within a restricted number of training samples.
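A hedged sketch of the adaptation idea follows: LMS equalization in which the step size decays exponentially over iterations, shown on a real-valued toy channel rather than a full fractionally spaced DFE with π/4-DQPSK symbols (the structure and constants are illustrative assumptions):

```python
# Illustrative exponential step-size LMS equalization of a toy channel.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], 2000)            # training symbols
channel = np.array([0.8, 0.4, -0.2])               # toy multipath channel
received = np.convolve(symbols, channel)[: len(symbols)]
received += 0.01 * rng.normal(size=len(symbols))   # additive noise

taps = 7
w = np.zeros(taps)
mu0, decay, mu_min = 0.2, 0.995, 0.01              # assumed schedule
mu = mu0
for n in range(taps, len(symbols)):
    x = received[n - taps : n][::-1]               # regressor (newest first)
    e = symbols[n - 2] - w @ x                     # delayed training error
    w += mu * e * x                                # LMS update
    mu = max(mu * decay, mu_min)                   # exponential step-size decay

print("final |error|:", abs(e))
```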

Elimination of Redundant Links in Web Pages – A Mathematical Approach

With the enormous growth of the web, users easily get lost in its rich hyper structure. Developing user-friendly, automated tools that provide relevant information without redundant links is therefore a primary task for website owners. Most existing web mining algorithms concentrate on finding frequent patterns while neglecting the less frequent ones, which are likely to contain outlying data such as noise and irrelevant or redundant data. This paper proposes a new algorithm for mining web content that detects redundant links in web documents using set-theoretic operations from classical mathematics, such as subset, union and intersection. The redundant links are then removed from the original web content, leaving the information the user requires.
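A minimal sketch of the set-theoretic idea (with invented page data and a simple subset rule standing in for the paper's full algorithm):

```python
# Illustrative redundant-link detection: each page's outgoing links form a
# set; a link set that is a subset of another page's is flagged as redundant.
pages = {
    "home":    {"about", "products", "contact", "blog"},
    "sidebar": {"about", "products", "contact"},          # subset of home
    "blog":    {"post1", "post2", "about"},
}

def redundant_sets(pages):
    names = list(pages)
    flagged = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if pages[a] <= pages[b]:      # subset: a adds nothing new
                flagged.add(a)
            elif pages[b] <= pages[a]:
                flagged.add(b)
    return flagged

dup = redundant_sets(pages)
cleaned = {p: links for p, links in pages.items() if p not in dup}
print("removed:", dup, "| kept:", sorted(cleaned))
```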

Construction of cDNA Library and EST Analysis of Tenebrio molitor Larvae

To further advance research on immune-related genes from T. molitor, we constructed a cDNA library and analyzed expressed sequence tag (EST) sequences from 1,056 clones. After removing vector sequences and quality checking through the Phred program (trim_alt 0.05, P-score > 20), 1,039 sequences were generated, with an average insert length of 792 bp. In addition, we identified 162 clusters, 167 contigs and 391 singletons after the clustering and assembly process using the TGICL package. EST sequences were searched against the NCBI nr database by local BLAST (blastx, E

Machine Vision for the Inspection of Surgical Tasks: Applications to Robotic Surgery Systems

The use of machine vision to inspect the outcome of surgical tasks is investigated, with the aim of incorporating this approach into robotic surgery systems. Machine vision is a non-contact form of inspection, i.e. no part of the vision system is in direct contact with the patient, and it is therefore well suited to surgery, where sterility is an important consideration. As a proof of concept, three primary surgical tasks for a common neurosurgical procedure were inspected using machine vision. Experiments were performed on cadaveric pig heads to simulate the two possible outcomes, i.e. satisfactory or unsatisfactory, for the tasks involved in making a burr hole, namely incision, retraction and drilling. We identify low-level image features that distinguish the two outcomes, and report results that validate the proposed approach. The potential of using machine vision in a surgical environment, and the challenges that must be addressed, are identified and discussed.
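As a hedged sketch of the low-level-feature idea, the snippet below computes intensity and edge-density statistics over a synthetic region of interest and applies a simple threshold; the features, threshold and data are illustrative stand-ins for the paper's actual images and features:

```python
# Illustrative low-level features for classifying a task outcome.
import numpy as np

def features(roi):
    gy, gx = np.gradient(roi.astype(float))
    edge_density = np.mean(np.hypot(gx, gy) > 10.0)   # fraction of edge pixels
    return roi.mean(), roi.std(), edge_density

rng = np.random.default_rng(0)
smooth_cut = rng.normal(120, 3, (64, 64))             # stand-in "satisfactory"
ragged_cut = rng.normal(120, 40, (64, 64))            # stand-in "unsatisfactory"

for name, roi in [("smooth", smooth_cut), ("ragged", ragged_cut)]:
    mean, std, edges = features(roi)
    verdict = "ok" if edges < 0.2 else "flag for review"
    print(name, f"edge_density={edges:.2f} ->", verdict)
```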

A Learning Agent for Knowledge Extraction from an Active Semantic Network

This paper outlines the development of a learning retrieval agent whose task is to extract knowledge from an Active Semantic Network in response to user requests. Based on a reinforcement learning approach, the agent learns to interpret the user's intention; in particular, the learning algorithm focuses on the retrieval of complex, long-distance relations. By increasing its learnt knowledge with every request-result-evaluation sequence, the agent enhances its capability of finding the intended information.
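The sketch below illustrates the learning loop under simple assumptions (a toy semantic network, tabular Q-learning, and a binary user evaluation as reward); it is a schematic of the idea, not the paper's agent:

```python
# Illustrative retrieval agent: Q-learning over (node, relation) choices in a
# toy semantic network, reinforced by the user's evaluation of each result.
import random

graph = {  # toy Active Semantic Network: node -> relation -> node
    "query":   {"about": "project", "by": "author"},
    "project": {"uses": "method", "hasDoc": "report"},
    "author":  {"wrote": "report"},
}
target = "report"                 # what the user actually intended

Q, alpha, eps = {}, 0.5, 0.2
for episode in range(500):
    node, path = "query", []
    while node in graph and len(path) < 4:
        rels = list(graph[node])
        if random.random() < eps:          # explore
            rel = random.choice(rels)
        else:                              # exploit learnt preferences
            rel = max(rels, key=lambda r: Q.get((node, r), 0.0))
        path.append((node, rel))
        node = graph[node][rel]
    reward = 1.0 if node == target else 0.0   # user's evaluation of the result
    for state in path:                        # reinforce the traversed path
        Q.setdefault(state, 0.0)
        Q[state] += alpha * (reward - Q[state])

print("most reinforced step:", max(Q, key=Q.get))
```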