An Evaluation of the Usability of the IT Faculty Educational Portal at the University of Benghazi

Evaluation of educational portals is an important subject area that needs more attention from researchers. A university whose educational portal is difficult for teachers, students, or administrative staff to use and interact with can see its standing and reputation suffer. It is therefore important to be able to evaluate the quality of the e-services the university provides so that they can be improved over time. The present study evaluates the usability of the Information Technology Faculty portal at the University of Benghazi. Two evaluation methods were used: a questionnaire-based method and an online automated tool-based method. The first method was used to measure the portal's external usability attributes (information, content and organization of the portal; navigation, links and accessibility; aesthetic and visual appeal; performance and effectiveness; and educational purpose) from the users' perspective, while the second was used to measure the portal's internal usability attributes (number and size of HTML files, number and size of images, load time, HTML check errors, browser compatibility problems, and number of bad and broken links), which cannot be perceived by users. The study showed that some usability aspects reached an acceptable level of performance and quality, while others did not. Overall, it was concluded that the usability of the IT Faculty educational portal is generally acceptable. Recommendations and suggestions for addressing the weaknesses and improving the quality of the portal's usability are presented.
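
To make the tool-based internal measurements concrete, the sketch below checks a few of the listed attributes (load time, HTML size, image count, and broken links) with Python's requests library; the portal URL is a placeholder, and the checks are far simpler than those of a commercial evaluation tool.

```python
# Minimal sketch of tool-based (internal) usability checks: page size,
# load time, image count, and broken links. The portal URL is a placeholder.
import re
import time
import requests

def check_page(url):
    start = time.time()
    response = requests.get(url, timeout=10)
    load_time = time.time() - start
    html = response.text
    report = {
        "status": response.status_code,
        "load_time_s": round(load_time, 2),
        "html_size_kb": round(len(html.encode("utf-8")) / 1024, 1),
        "image_count": len(re.findall(r"<img\b", html, re.IGNORECASE)),
    }
    # Follow each absolute link once and record those that fail (broken links).
    links = re.findall(r'href=["\'](http[^"\']+)["\']', html)
    broken = []
    for link in set(links):
        try:
            if requests.head(link, timeout=5, allow_redirects=True).status_code >= 400:
                broken.append(link)
        except requests.RequestException:
            broken.append(link)
    report["broken_links"] = broken
    return report

if __name__ == "__main__":
    print(check_page("https://example.edu/it-faculty"))  # placeholder URL
```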

Automatic Lip Contour Tracking and Visual Character Recognition for Computerized Lip Reading

Computerized lip reading has been one of the most actively researched areas of computer vision in the recent past because of its crime-fighting potential and its invariance to the acoustic environment. However, several factors such as fast speech, poor pronunciation, poor illumination, face movement, moustaches, and beards make lip reading difficult. In the present work, we propose a solution for automatically tracking the lip contour and recognizing letters of the English language spoken by speakers, using only the information available from lip movements. A level set method with a contour velocity model is used to track the lip contour, and a feature vector of lip movements is then obtained. Character recognition is performed using a modified k-nearest neighbor algorithm that assigns more weight to nearer neighbors. The proposed system achieves an accuracy of 73.3% for character recognition with the speaker's lip movements as the only input and without any speech recognition system running in parallel. The approach is found to serve the purpose of lip reading well when the size of the database is small.
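
The modified k-nearest neighbor rule that weights nearer neighbors more heavily can be sketched with inverse-distance voting; the random feature vectors below are placeholders for the lip-movement features produced by the level set tracker.

```python
# Sketch of a distance-weighted k-NN classifier: nearer neighbours get a
# larger vote (inverse-distance weighting). Random vectors stand in for
# the lip-movement features extracted by the level set tracker.
import numpy as np

def weighted_knn_predict(train_X, train_y, query, k=5, eps=1e-9):
    distances = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(distances)[:k]
    votes = {}
    for idx in nearest:
        weight = 1.0 / (distances[idx] + eps)   # closer neighbour -> bigger weight
        votes[train_y[idx]] = votes.get(train_y[idx], 0.0) + weight
    return max(votes, key=votes.get)

# Toy usage with random 10-dimensional "lip feature" vectors for 3 letters.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(30, 10))
train_y = np.repeat(list("ABC"), 10)
print(weighted_knn_predict(train_X, train_y, rng.normal(size=10)))
```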

Coded Transmission in Synthetic Transmit Aperture Ultrasound Imaging Method

The paper presents a study of the synthetic transmit aperture method applying Golay coded transmission to medical ultrasound imaging. Longer coded excitation allows the total energy of the transmitted signal to be increased without increasing the peak pressure. The signal-to-noise ratio and penetration depth are improved while high ultrasound image resolution is maintained. In this work, a 128-element linear transducer array with 0.3 mm inter-element spacing was excited by a single cycle and by 8- and 16-bit Golay coded sequences at a nominal frequency of 4 MHz. A single-element transmit aperture was used to generate a spherical wave covering the full image region, and all elements received the echo signals. A comparison of 2D ultrasound images of a wire phantom and of a tissue-mimicking phantom is presented to demonstrate the benefits of coded transmission. The results were obtained using the synthetic aperture algorithm with transmit and receive signal correction based on a single-element directivity function.
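
The property that makes Golay codes attractive here, namely that the sidelobes of the two autocorrelations cancel exactly, can be checked in a few lines; the 8- and 16-bit lengths match those used in the paper, while the recursive construction is just one standard way to generate such a pair.

```python
# Generate a Golay complementary pair of length 2^n and verify that the sum
# of the two autocorrelations is an ideal pulse (all sidelobes cancel).
import numpy as np

def golay_pair(n_bits):
    a, b = np.array([1.0]), np.array([1.0])
    while len(a) < n_bits:                      # standard recursive construction
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

for length in (8, 16):                          # code lengths used in the paper
    a, b = golay_pair(length)
    acorr = np.correlate(a, a, mode="full") + np.correlate(b, b, mode="full")
    # Only the central peak of height 2*length remains; sidelobes cancel.
    assert abs(acorr[length - 1] - 2 * length) < 1e-9
    assert np.allclose(np.delete(acorr, length - 1), 0.0)
    print(length, "-> peak", acorr[length - 1],
          "max sidelobe", np.abs(np.delete(acorr, length - 1)).max())
```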

Evaluation of a Multi-Resolution Dyadic Wavelet Transform Method for Usable Speech Detection

Many applications of speech communication and speaker identification suffer from the problem of co-channel speech. This paper presents a multi-resolution dyadic wavelet transform method for detecting usable segments of co-channel speech that could be processed by a speaker identification system. The method is evaluated on the TIMIT database using the Target-to-Interferer Ratio (TIR) measure, with co-channel speech constructed by mixing speakers of all possible gender combinations. The results show little difference across the different mixtures. Over all mixtures, 95.76% of usable speech is correctly detected, with a false-alarm rate of 29.65%.
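
The detector itself is wavelet-based, but the TIR labelling used in the evaluation can be illustrated with a short sketch that mixes a target and an interferer, computes the frame-wise Target-to-Interferer Ratio, and marks frames above an assumed 20 dB threshold as usable.

```python
# Sketch of how ground-truth "usable" segments are labelled when evaluating a
# co-channel detector: compute the frame-wise Target-to-Interferer Ratio (TIR)
# and mark frames above a threshold. The 20 dB threshold and the random
# signals are assumptions for illustration, not values from the paper.
import numpy as np

def frame_tir_labels(target, interferer, frame_len=320, tir_threshold_db=20.0):
    n_frames = min(len(target), len(interferer)) // frame_len
    labels = []
    for i in range(n_frames):
        seg = slice(i * frame_len, (i + 1) * frame_len)
        e_t = np.sum(target[seg] ** 2) + 1e-12
        e_i = np.sum(interferer[seg] ** 2) + 1e-12
        tir_db = 10.0 * np.log10(e_t / e_i)
        labels.append(tir_db >= tir_threshold_db)   # True -> usable frame
    return np.array(labels)

rng = np.random.default_rng(1)
target = rng.normal(scale=1.0, size=16000)          # stand-ins for TIMIT utterances
interferer = rng.normal(scale=0.05, size=16000)
print(frame_tir_labels(target, interferer).mean(), "fraction of usable frames")
```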

Intuition Operator: Providing Genomes with Reason

In this contribution, the use of a new genetic operator is proposed. Its main advantage is that it helps the evolutionary procedure converge faster towards the optimal solution of a problem. This new genetic operator is called the "intuition" operator. Generally speaking, it provides a way to include any heuristic or other local knowledge about the problem that cannot be embedded in the fitness function. Simulation results show that using this operator significantly increases the performance of the classic genetic algorithm by increasing the convergence speed of its population.
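
Since the abstract does not specify the operator's internals, the sketch below treats intuition as a heuristic local-improvement step applied to a fraction of the offspring each generation, here a greedy bit flip on a toy bit-counting problem.

```python
# Sketch of a genetic algorithm with an extra "intuition" step, modelled here
# as a heuristic local improvement (a greedy bit flip guided by problem
# knowledge) applied to some offspring. This is an assumed stand-in for the
# operator described in the paper, whose internals the abstract does not give.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 40, 30, 60

def fitness(g):                       # toy problem: maximise the number of 1s
    return sum(g)

def intuition(g):                     # heuristic knowledge: flipping a 0 always helps
    zeros = [i for i, bit in enumerate(g) if bit == 0]
    if zeros:
        g = g[:]
        g[random.choice(zeros)] = 1
    return g

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]
        children = []
        while len(children) < POP_SIZE:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < 0.05:                 # mutation
                i = random.randrange(GENOME_LEN)
                child[i] ^= 1
            if random.random() < 0.2:                  # intuition operator
                child = intuition(child)
            children.append(child)
        pop = children
    return max(map(fitness, pop))

print("best fitness:", evolve())
```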

Network Coding-based ARQ Scheme with Overlapping Selection for Resource-Limited Multicast/Broadcast Services

Network coding has recently attracted attention as an efficient technique for multicast/broadcast services. The problem of finding the optimal network coding mechanism that maximizes bandwidth efficiency is hard to solve and hard to approximate. Many network coding-based schemes have been suggested in the literature to improve bandwidth efficiency, especially network coding-based automatic repeat request (NCARQ) schemes. However, existing schemes have several limitations that cause performance degradation in resource-limited systems. To improve performance in such systems, we propose an NCARQ scheme with overlapping selection (OS-NCARQ). The advantages of the OS-NCARQ scheme over the traditional ARQ scheme and existing NCARQ schemes are shown through analysis and simulations.
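
The basic NCARQ idea behind such schemes can be sketched as follows: packets lost by different receivers are XOR-combined so that one retransmission repairs several receivers at once. The greedy selection below merely stands in for the paper's overlapping-selection rule, which the abstract does not detail.

```python
# Sketch of the basic network-coding ARQ idea: packets lost by *different*
# receivers are XOR-combined so a single retransmission repairs several
# receivers. In a full scheme one would also verify that every receiver
# already holds the other packets in the combination; that check is omitted.
packets = {1: b"\x11" * 4, 2: b"\x22" * 4, 3: b"\x44" * 4}
# loss_map[receiver] = set of packet ids that receiver failed to decode
loss_map = {"A": {1}, "B": {2}, "C": {3}}

def xor_bytes(blocks):
    out = blocks[0]
    for blk in blocks[1:]:
        out = bytes(x ^ y for x, y in zip(out, blk))
    return out

def coded_retransmission(loss_map):
    chosen = []
    for rx, lost in loss_map.items():
        # greedily pick one lost packet per receiver, avoiding duplicates,
        # so each receiver can XOR out what it already has to recover its loss
        for pid in lost:
            if pid not in chosen:
                chosen.append(pid)
                break
    return chosen, xor_bytes([packets[p] for p in chosen])

ids, coded = coded_retransmission(loss_map)
print("one coded packet repairs packets", ids, "->", coded.hex())
```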

Economic Analysis of Thermal Energy Storage under Partial Operation

The building sector is a major electricity consumer, and electricity is costly to building owners. The application of thermal energy storage (TES) has therefore become attractive as a way to reduce energy cost. Many attractive tariff packages are offered by electricity providers to promote TES; these packages charge a higher electricity rate during the peak period and a lower rate during the off-peak period. This paper presents the return on the initial investment of a centralized air-conditioning plant integrated with thermal energy storage under partial-operation strategies. The building load profile is calculated hourly according to the building specification and usage trend, and the TES operating conditions are designed according to the building load demand profile, storage capacity, tariff package, and peak/off-peak periods. The payback period method was used for the economic analysis. The investment is considered a good investment when the initial cost is recovered in less than seven years.
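
The payback calculation itself is straightforward, as the sketch below shows; all tariff rates, shifted-energy figures, and capital costs are illustrative placeholders rather than values from the paper.

```python
# Sketch of the payback-period check: annual savings come from shifting
# chiller energy from the peak tariff to the cheaper off-peak tariff via TES.
# All numbers are illustrative placeholders, not values from the paper.
def simple_payback(initial_cost, annual_saving):
    return initial_cost / annual_saving

peak_rate, offpeak_rate = 0.35, 0.22          # $/kWh, placeholder tariff
shifted_kwh_per_day = 4000                    # cooling energy moved to off-peak
annual_saving = shifted_kwh_per_day * 365 * (peak_rate - offpeak_rate)
tes_capital_cost = 1_200_000                  # placeholder installed cost

years = simple_payback(tes_capital_cost, annual_saving)
print(f"payback of {years:.1f} years -> {'acceptable' if years < 7 else 'marginal'}")
```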

Regional Differences in the Effect of Immigration on Poverty Rates in Spain

This paper explores the extent of the gap in poverty rates between immigrant and native households in the Spanish regions and assesses to what extent regional differences in individual and contextual characteristics can explain the divergences in this gap. Using multilevel techniques and the European Union Survey on Income and Living Conditions, we estimate that an immigrant household experiences an increase of 76 per cent in the odds of being poor compared with a native household when we control for individual variables. With respect to regional differences in the risk of poverty, region-level variables have a greater effect in reducing these differences than individual variables.

An Efficient Data Collection Approach for Wireless Sensor Networks

One of the most important applications of wireless sensor networks is data collection. This paper proposes an efficient approach for data collection in wireless sensor networks by introducing the Member Forward List. This list contains the nodes with the highest priority for forwarding the data; when a node fails or dies, the list is used to select the next node with the highest remaining priority. The benefit of this list is that it prevents the algorithm from being repeated when a node fails or dies. The results show that the Member Forward List decreases power consumption and latency in wireless sensor networks.
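
The idea can be sketched as a priority-ordered list of forwarder candidates with simple fall-through on failure; the residual-energy priority metric used below is an assumption for illustration only.

```python
# Sketch of the Member Forward List idea: each node keeps a priority-ordered
# list of candidate forwarders and simply falls through to the next live
# candidate when a node fails, instead of re-running the selection algorithm.
# The priority metric (here residual energy) is an assumption for illustration.
class Node:
    def __init__(self, node_id, residual_energy):
        self.node_id = node_id
        self.residual_energy = residual_energy
        self.alive = True

def build_member_forward_list(neighbors):
    # highest residual energy first = highest forwarding priority
    return sorted(neighbors, key=lambda n: n.residual_energy, reverse=True)

def next_forwarder(mfl):
    for node in mfl:                 # fall through dead nodes without recomputation
        if node.alive:
            return node
    return None

neighbors = [Node("n1", 0.9), Node("n2", 0.7), Node("n3", 0.4)]
mfl = build_member_forward_list(neighbors)
print("forward to:", next_forwarder(mfl).node_id)   # n1
mfl[0].alive = False                                 # n1 dies
print("forward to:", next_forwarder(mfl).node_id)   # n2, no re-selection needed
```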

CASTE: a Cloud-Based Automatic Software Test Environment

This paper presents the design and implementation of CASTE, a cloud-based automatic software test environment. We first present the architecture of CASTE and then describe its main packages and classes in detail. CASTE is built upon a private Infrastructure-as-a-Service platform. Through concentrated resource management of the virtualized testing environment and automatic execution control of test scripts, we obtain a better solution to the problems of testing-resource utilization and test automation. Experiments on CASTE yield very appealing results.

Improved Text-Independent Speaker Identification using Fused MFCC and IMFCC Feature Sets based on Gaussian Filter

A state-of-the-art speaker identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for a generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for speech-related applications. In a recent contribution by the authors, it was shown that Inverted Mel-Frequency Cepstral Coefficients (IMFCC) form a useful feature set for SI, as they capture complementary information present in the high-frequency region. This paper introduces a Gaussian-shaped filter (GF) in place of the typical triangular bins when calculating MFCC and IMFCC. The objective is to introduce a higher degree of correlation between subband outputs. With GF, the performance of both MFCC and IMFCC improves over the conventional triangular filter (TF) based implementation, individually as well as in combination. With GMM as the speaker modeling paradigm, the performance of the proposed GF-based MFCC and IMFCC, in individual and fused modes, has been verified on two standard databases, YOHO (microphone speech) and POLYCOST (telephone speech), each of which contains more than 130 speakers.
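
A Gaussian-shaped mel filterbank of the kind described can be sketched as follows; the filter centres are mel-spaced exactly as for triangular MFCC bins, while the width factor tying each Gaussian to the spacing of its neighbours is an assumption, since the paper's exact parameterisation is not given in the abstract.

```python
# Sketch of a Gaussian-shaped mel filterbank: centres are mel-spaced as for
# conventional triangular MFCC filters, but each bin is a Gaussian whose width
# follows the spacing of its neighbouring centres (the width factor is assumed).
import numpy as np

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def gaussian_filterbank(n_filters=20, n_fft=512, sr=16000, width=0.5):
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    hz_pts = mel_to_hz(mel_pts)
    freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    bank = np.zeros((n_filters, len(freqs)))
    for i in range(1, n_filters + 1):
        centre = hz_pts[i]
        sigma = width * (hz_pts[i + 1] - hz_pts[i - 1]) / 2.0  # width follows mel spacing
        bank[i - 1] = np.exp(-0.5 * ((freqs - centre) / sigma) ** 2)
    # Gaussian tails overlap more than triangular bins, giving the extra
    # inter-subband correlation mentioned above. An IMFCC-style bank would use
    # the same structure mirrored along the frequency axis (inverted mel scale).
    return bank

fb = gaussian_filterbank()
print(fb.shape)   # (20, 257)
```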

Formulation Development and Moisturising Effects of a Topical Cream of Aloe vera Extract

This study was designed to formulate and pharmaceutically evaluate a topical skin-care cream (w/o emulsion) of Aloe vera versus its vehicle (Base) as a control, and to determine their effects on stratum corneum (SC) water content and transepidermal water loss (TEWL). A Base containing no extract and a Formulation containing 3% concentrated Aloe vera extract were developed by entrapping the extract in the inner aqueous phase of the w/o emulsion (cream). Lemon oil was incorporated to improve the odor. Both the Base and the Formulation were stored at 8°C ± 0.1°C (in a refrigerator), 25°C ± 0.1°C, 40°C ± 0.1°C, and 40°C ± 0.1°C with 75% RH (in an incubator) for a period of 4 weeks to predict their stability. The evaluation parameters consisted of color, smell, type of emulsion, phase separation, electrical conductivity, centrifugation, liquefaction, and pH. Both the Base and the Formulation were then applied to the cheeks of 21 healthy human volunteers for a period of 8 weeks, and SC water content and TEWL were monitored every week to measure any effect produced by these topical creams. The expected organoleptic stability of the creams was confirmed over the 4-week in-vitro study period. The odor faded with the passage of time due to volatilization of the lemon oil. Both the Base and the Formulation produced significant (p ≤ 0.05) changes in TEWL with respect to time. SC water content was significantly (p ≤ 0.05) increased by the Formulation, while the Base had an insignificant (p > 0.05) effect on SC water content. The newly formulated Aloe vera cream is suitable for improving and quantitatively monitoring skin hydration (SC water content/moisturizing effect) and for reducing TEWL in people with dry skin.

The Sustainable Value Model: A Comparative Analysis of Romania and the EU

For Romania, fulfilling the obligations undertaken as a member state of the European Union in accordance with the Treaty of Accession requires the effective implementation of sustainable development principles and practices, this being the only reasonable development option that adequately draws on economic, social, and environmental resources. Achieving this objective requires a profound analysis of the realities of the Romanian economy, reflecting the existing situation and the directions of action for the future. The paper presents an analysis of Romania's economic performance compared with that of the EU economy, based on the sustainable value (SV) model. The analysis highlights the considerable gap between Romania and the EU regarding the sustainable capitalization of resources, and the resulting information is useful for justifying strategic development decisions at the micro and macro levels.
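
The sustainable value computation itself is compact; the sketch below follows the standard Figge and Hahn formulation, in which each resource is valued at the benchmark's efficiency and the differences are averaged over the resources considered, with made-up figures in place of the Romanian and EU data.

```python
# Sketch of the sustainable value (SV) computation in the Figge-Hahn style:
# each resource is valued at the benchmark's efficiency (value added per unit
# of resource), and the opportunity-cost differences are averaged over the
# resources considered. All figures are placeholders, not data from the paper.
def sustainable_value(value_added, resource_use, bench_value_added, bench_resource_use):
    contributions = []
    for r in resource_use:
        own_eff = value_added / resource_use[r]
        bench_eff = bench_value_added / bench_resource_use[r]
        contributions.append(resource_use[r] * (own_eff - bench_eff))
    return sum(contributions) / len(contributions)

romania = {"value_added": 150.0,                       # bn EUR, placeholder
           "resources": {"CO2": 80.0, "labour": 8.5, "energy": 33.0}}
eu_benchmark = {"value_added": 13000.0,
                "resources": {"CO2": 3500.0, "labour": 220.0, "energy": 1600.0}}

sv = sustainable_value(romania["value_added"], romania["resources"],
                       eu_benchmark["value_added"], eu_benchmark["resources"])
print(f"sustainable value = {sv:.1f} (negative -> below-benchmark resource use)")
```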

Detection, Tracking and Classification of Vehicles and Aircraft based on Magnetic Sensing Technology

Existing ground movement surveillance technologies at airports are subject to limitations due to shadowing effects or multiple reflections. There is therefore a strong demand for a new sensing technology that is cost-effective and provides detection of non-cooperative targets under any weather conditions. This paper presents a new intelligent system, developed within the framework of the EC-funded ISMAEL project, which is based on a new magnetic sensing technology and provides detection, tracking, and automatic classification of targets moving on the airport surface. The system is currently being installed at two European airports. Initial experimental results under real airport traffic demonstrate the great potential of the proposed system.

Packet Forwarding with Multiprotocol Label Switching

MultiProtocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at high speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header and independently determines the next hop for the packet using that address and the interior gateway protocol. This process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on their Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain according to their associated FECs by swapping labels at the routers in the core of the network, called label switch routers. Simply swapping the label, instead of referencing the IP header of the packet in the routing table at each hop, provides a more efficient way of forwarding packets, which in turn allows traffic to be forwarded at very high speeds and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS data path, and test results comparing the performance of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks, by far the most common application of MPLS today.
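
The label-swapping data path can be sketched with two small tables, an ingress FEC table and per-router label forwarding tables; the prefixes, labels, and router names below are illustrative only.

```python
# Sketch of MPLS forwarding: the ingress label edge router (LER) maps a packet
# to a FEC by prefix match (longest-prefix in a real router) and pushes a
# label; core label switch routers (LSRs) then forward by a simple label
# lookup-and-swap, never touching the IP header again. Values are illustrative.
import ipaddress

# Ingress FEC table: destination prefix -> (outgoing label, next hop)
fec_table = {
    ipaddress.ip_network("10.1.0.0/16"): (100, "LSR-A"),
    ipaddress.ip_network("10.2.0.0/16"): (200, "LSR-B"),
}
# Per-LSR label forwarding tables: incoming label -> (outgoing label, next hop)
lfib = {
    "LSR-A": {100: (101, "LSR-C")},
    "LSR-C": {101: (None, "egress")},   # None = pop the label at the end of the LSP
}

def ingress_push(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    for prefix, (label, next_hop) in fec_table.items():   # FEC classification
        if addr in prefix:
            return label, next_hop
    raise LookupError("no FEC for destination")

def lsr_swap(router, label):
    return lfib[router][label]                             # O(1) exact-match swap

label, hop = ingress_push("10.1.5.9")
while label is not None:
    print(f"at {hop}: incoming label {label}")
    label, hop = lsr_swap(hop, label)
print("label popped, delivered via", hop)
```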

An N×M Version of the 5×5 Playfair Cipher for Any Natural Language (Urdu as a Special Case)

In this paper, a modified N×M version of the traditional 5×5 Playfair cipher is introduced, which enables the user to encrypt messages in any natural language by choosing a matrix size appropriate to the size of the language's alphabet. The 5×5 matrix can hold only the 26 characters of the English alphabet and cannot store the characters of any language with more than 26 characters; the N×M matrix overcomes this limitation. The special case of the Urdu language is discussed, where # is used to complete an odd pair and * is used to separate repeated letters.
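
A generic N×M Playfair routine can be sketched directly from these rules; a small Latin alphabet arranged in a 4×7 grid is used here purely for illustration in place of the Urdu character set, with # completing an odd pair and * separating repeated letters as described above.

```python
# Sketch of the N x M Playfair generalisation: the alphabet is laid out in an
# N-row, M-column grid and the classic digraph rules (same row -> shift right,
# same column -> shift down, otherwise swap corners) work unchanged. As in the
# paper, '#' completes an odd pair and '*' separates repeated letters. A small
# Latin alphabet (4 x 7 grid) is used here instead of the Urdu character set.
def build_grid(alphabet, n_cols):
    rows = [alphabet[i:i + n_cols] for i in range(0, len(alphabet), n_cols)]
    pos = {ch: (r, c) for r, row in enumerate(rows) for c, ch in enumerate(row)}
    return rows, pos

def make_pairs(message):
    out, i = [], 0
    while i < len(message):
        a = message[i]
        b = message[i + 1] if i + 1 < len(message) else "#"      # pad odd pair
        if a == b:
            out.append((a, "*"))                                  # split repeats
            i += 1
        else:
            out.append((a, b))
            i += 2
    return out

def encrypt(message, alphabet, n_cols):
    rows, pos = build_grid(alphabet, n_cols)
    n_rows = len(rows)
    cipher = []
    for a, b in make_pairs(message):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:                                   # same row: shift right
            cipher += [rows[ra][(ca + 1) % n_cols], rows[rb][(cb + 1) % n_cols]]
        elif ca == cb:                                 # same column: shift down
            cipher += [rows[(ra + 1) % n_rows][ca], rows[(rb + 1) % n_rows][cb]]
        else:                                          # rectangle: swap columns
            cipher += [rows[ra][cb], rows[rb][ca]]
    return "".join(cipher)

alphabet = "abcdefghijklmnopqrstuvwxyz#*"              # 28 symbols -> 4 x 7 grid
print(encrypt("hello", alphabet, 7))
```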

Influence of Ambiguity Cluster on Quality Improvement in Image Compression

Image coding based on clustering provides immediate access to targeted features of interest in a high-quality decoded image. This approach is useful for intelligent devices as well as for multimedia content-based description standards. The result of image clustering cannot be precise at some positions, especially at pixels carrying edge information, which creates ambiguity among the clusters. Even with a good PDE-based enhancement operator, the quality of the decoded image depends heavily on the clustering process. In this paper, we introduce an ambiguity cluster in image coding to represent pixels with vagueness properties. The presence of such a cluster allows details inherent to edges, as well as uncertain pixels, to be preserved. It is also very useful during the decoding phase, in which an anisotropic diffusion operator, such as Perona-Malik, enhances the quality of the restored image. This work also offers a comparative study demonstrating the effectiveness of a fuzzy clustering technique in detecting the ambiguity cluster without losing much of the essential image information. Several experiments have been carried out to demonstrate the usefulness of the ambiguity concept in image compression. The coding results and the performance of the proposed algorithms are discussed in terms of the peak signal-to-noise ratio and the quantity of ambiguous pixels.
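
One way to realise the ambiguity cluster is to run fuzzy c-means on pixel intensities and flag pixels whose two largest memberships are close, which is where edge pixels typically fall; the membership-gap threshold below is an assumption, as the paper's exact criterion is not stated in the abstract.

```python
# Sketch of detecting an "ambiguity cluster": run fuzzy c-means on pixel
# intensities and mark pixels whose two largest memberships are close.
# The 0.2 membership-gap threshold is an assumption for illustration.
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                                   # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)              # weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))               # standard FCM update
        u /= u.sum(axis=0)
    return centers, u

pixels = np.concatenate([np.full(500, 40.0), np.full(500, 200.0),
                         np.linspace(40, 200, 100)])     # two regions + an "edge" ramp
centers, u = fuzzy_c_means(pixels)
top2 = np.sort(u, axis=0)[-2:, :]
ambiguous = (top2[1] - top2[0]) < 0.2                    # small gap -> ambiguity cluster
print("centres:", np.round(centers, 1), "| ambiguous pixels:", int(ambiguous.sum()))
```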

Are Asia-Pacific Stock Markets Predictable? Evidence from Wavelet-based Fractional Integration Estimator

This paper examines the predictability of stock returns in developed and emerging markets by testing for long memory in stock returns using a wavelet approach. The wavelet-based maximum likelihood estimator of the fractional integration parameter is superior to the conventional Hurst exponent and the Geweke and Porter-Hudak estimator in terms of asymptotic properties and mean squared error. We use 4-year moving windows to estimate the fractional integration parameter. The evidence suggests that stock returns may not be predictable in the developed countries of the Asia-Pacific region. However, predictability of stock returns in some developing countries in this region, such as Indonesia, Malaysia, and the Philippines, cannot be ruled out. Stock returns in the Thai stock market appear not to be predictable after the political crisis in 2008.
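
A simple wavelet log-variance regression, which stands in for the paper's wavelet-based maximum likelihood estimator, illustrates how the fractional integration parameter d is read off the scale-by-scale variance of the detail coefficients; the sketch assumes the PyWavelets package is available.

```python
# Sketch of a wavelet log-variance estimator of the fractional integration
# parameter d: for a long-memory process the log2 variance of the detail
# coefficients grows across octaves with a slope of roughly 2d. This simple
# regression is a stand-in for the paper's wavelet-based maximum likelihood
# estimator, whose details are not given in the abstract.
import numpy as np
import pywt

def wavelet_d_estimate(returns, wavelet="db4", max_level=6):
    coeffs = pywt.wavedec(returns, wavelet, level=max_level)
    details = coeffs[1:][::-1]                 # order from finest (j=1) to coarsest
    octaves, log_var = [], []
    for j, d_j in enumerate(details, start=1):
        octaves.append(j)
        log_var.append(np.log2(np.var(d_j)))
    slope, _ = np.polyfit(octaves, log_var, 1)
    return slope / 2.0                         # slope is approximately 2d

rng = np.random.default_rng(2)
white = rng.normal(size=4096)                  # white noise: d should be near 0
print(f"estimated d for white noise: {wavelet_d_estimate(white):+.2f}")
```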

A Study on the Location and Range of Obstacle Region in Robot's Point Placement Task based on the Vision Control Algorithm

This paper is concerned with the application of a vision control algorithm to a robot's point placement task along a discontinuous trajectory caused by an obstacle. The presented vision control algorithm consists of four models: the robot kinematic model, the vision system model, the parameter estimation model, and the robot joint angle estimation model. When the robot moves toward a target along a discontinuous trajectory, several types of obstacles appear in two obstacle regions, and this study investigates how these changes affect the presented vision control algorithm. The practicality of the vision control algorithm is then demonstrated experimentally by performing the robot's point placement task along a discontinuous trajectory caused by an obstacle.

Alignment of Emission Gamma Ray Sources with NaI(Tl) Scintillation Detectors by Two Laser Beams Prior to Operation using an Alternating Minimization Technique

Accurate timing alignment and stability are important to maximize the true counts and minimize the random counts in positron emission tomography. Therefore, before operation, the signals output from the detectors must be centered with respect to the two isotopes and fed into four pulse-processing units, each of which can accept up to eight inputs. The dual-source computed tomography setup consists of two units on the left for the 15 detector signals of the Cs-137 isotope and two units on the right for the 15 detector signals of the Co-60 isotope. The gamma spectrum consists of either a single photopeak or multiple photopeaks. This allows energy-discrimination electronics associated with the data acquisition system to acquire photon count data at a specific energy, even if detectors with poor energy resolution are used. It also helps to avoid counting Compton-scatter events, especially when a single discrete gamma photopeak is emitted by the source, as in the case of Cs-137. In this study, the polyenergetic version of the alternating minimization algorithm is applied to the dual-energy gamma computed tomography problem.
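
The energy-discrimination step can be sketched as a photopeak window test around the known emission lines (662 keV for Cs-137; 1173 and 1332 keV for Co-60); the 10% window width and the simulated spectrum are assumptions for illustration.

```python
# Sketch of the energy-discrimination step described above: keep only counts
# whose measured energy falls inside a window around the isotope's photopeak,
# rejecting Compton-scatter events. The 10% window width and the crude
# simulated spectrum are assumptions, not values from the paper.
import numpy as np

PHOTOPEAKS_KEV = {"Cs-137": [662.0], "Co-60": [1173.0, 1332.0]}

def in_photopeak_window(energies_kev, isotope, window_fraction=0.10):
    energies = np.asarray(energies_kev)
    accepted = np.zeros(energies.shape, dtype=bool)
    for peak in PHOTOPEAKS_KEV[isotope]:
        half_width = window_fraction * peak / 2.0
        accepted |= np.abs(energies - peak) <= half_width
    return accepted

rng = np.random.default_rng(3)
# crude simulated spectrum: photopeak counts plus a flat Compton continuum
measured = np.concatenate([rng.normal(662.0, 20.0, 800), rng.uniform(50.0, 600.0, 1200)])
mask = in_photopeak_window(measured, "Cs-137")
print(f"accepted {mask.sum()} of {measured.size} counts in the Cs-137 window")
```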