A Comparative Analysis of E-Government Quality Models

Many quality models have been used to measure the quality of e-government portals. However, the absence of an international consensus on e-government portal quality models results in many differences in quality attributes and measures. The aim of this paper is to compare and analyze the e-government quality models proposed in the literature (both those based on ISO standards and those that are not) in order to propose guidelines for building a sound and useful e-government portal quality model. Our findings show that no existing e-government portal quality model is based on the new international standard ISO 25010. In addition, the existing models are not grounded in a best-practice model that would allow agencies both to measure e-government portal quality and to identify missing best practices for those portals.

Comparative Analysis of Two Approaches to Joint Signal Detection, ToA and AoA Estimation in Multi-Element Antenna Arrays

In this paper, two approaches to joint signal detection, time of arrival (ToA) estimation, and angle of arrival (AoA) estimation in a multi-element antenna array are investigated. Two scenarios are considered: in the first, the waveform of the useful signal is known a priori; in the second, the waveform of the desired signal is unknown. For the first scenario, antenna array signal processing based on multi-element matched filtering (MF), followed by a non-coherent detection scheme and maximum likelihood (ML) parameter estimation blocks, is exploited. For the second scenario, signal processing based on estimation of the antenna array element covariance matrix, followed by eigenvector analysis and ML parameter estimation blocks, is applied. The performance characteristics of both signal processing schemes are thoroughly investigated and compared for different useful signal and noise parameters.
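
As an illustration of the second (unknown-waveform) branch, the following Python fragment is a minimal sketch of AoA estimation from the sample covariance matrix of a uniform linear array via eigenvector (subspace) analysis. The array size, spacing, SNR, single-source assumption, and MUSIC-style pseudo-spectrum are illustrative choices, not the exact processing chain of the paper.

    import numpy as np

    # Minimal sketch: AoA estimation from the sample covariance matrix of a
    # uniform linear array (ULA) via subspace (MUSIC-style) analysis.
    # All parameters below are illustrative assumptions, not the paper's setup.
    M = 8                    # number of array elements
    d = 0.5                  # element spacing in wavelengths
    true_aoa_deg = 20.0
    snr_db = 10.0
    n_snapshots = 200
    rng = np.random.default_rng(0)

    def steering(theta_deg):
        theta = np.deg2rad(theta_deg)
        return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

    # Simulate snapshots: one unknown-waveform source plus white noise
    s = (rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)) / np.sqrt(2)
    noise = (rng.standard_normal((M, n_snapshots)) + 1j * rng.standard_normal((M, n_snapshots))) / np.sqrt(2)
    x = np.outer(steering(true_aoa_deg), s) * 10 ** (snr_db / 20) + noise

    # Sample covariance matrix and its eigen-decomposition
    R = x @ x.conj().T / n_snapshots
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, :-1]                  # noise subspace (one source assumed)

    # MUSIC pseudo-spectrum over a grid of candidate angles
    grid = np.arange(-90, 90.1, 0.1)
    p = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in grid]
    print("Estimated AoA:", grid[int(np.argmax(p))], "deg")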

Impact of Vehicle Travel Characteristics on Level of Service: A Comparative Analysis of Rural and Urban Freeways

The effect of trucks on the level of service (LOS) is determined by considering passenger car equivalents (PCE) of trucks. The current version of the Highway Capacity Manual (HCM) uses a single PCE value for all trucks combined. However, the composition of truck traffic varies from location to location; therefore, a single PCE value for all trucks may not correctly represent the impact of truck traffic at specific locations. Consequently, the present study developed separate PCE values for single-unit and combination trucks on different freeways to replace the single value provided in the HCM. Site-specific PCE values were developed using the concept of spatial lagging headways (the distance between the rear bumpers of two consecutive vehicles in a traffic stream) measured from field traffic data. The study used data from four locations, on a single urban freeway and three different rural freeways in Indiana. Three-stage least squares (3SLS) regression techniques were used to generate models that predict lagging headways for passenger cars, single-unit trucks (SUT), and combination trucks (CT). The estimated PCE values for single-unit and combination trucks on basic urban freeway segments (level terrain) were 1.35 and 1.60, respectively. For rural freeways, the estimated PCE values for single-unit and combination trucks were 1.30 and 1.45, respectively. As expected, traffic variables such as vehicle flow rates and speed have significant impacts on vehicle headways. Study results revealed that the use of separate PCE values for different truck classes can have a significant influence on LOS estimation.
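
For intuition only, the sketch below shows one common headway-based definition of a PCE: the ratio of a truck class's mean spatial lagging headway to that of passenger cars. The headway values are hypothetical, and the paper itself estimates headways with 3SLS regression models rather than simple class means.

    # Minimal sketch of a headway-ratio PCE: mean spatial lagging headway of a
    # truck class divided by that of passenger cars. Values are hypothetical,
    # not the paper's field data.
    lagging_headway_m = {
        "passenger_car": 40.0,       # metres (illustrative)
        "single_unit_truck": 54.0,
        "combination_truck": 64.0,
    }

    base = lagging_headway_m["passenger_car"]
    for veh, h in lagging_headway_m.items():
        print(f"PCE({veh}) = {h / base:.2f}")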

Reference Management Software: Comparative Analysis of RefWorks and Zotero

This paper presents a comparison of two reference management software packages, RefWorks and Zotero. The results were drawn by comparing the two packages, and the novelty of this paper is its comparative analysis, which shows that RefWorks can import more information from Google Scholar for researchers. This finding can help researchers decide which reference management software to use.

A Comparative Analysis of Zotero and Mendeley Reference Management Software

This paper presents a comparison of two reference management software packages, Zotero and Mendeley, with results drawn by comparing the two. The novelty of this paper is its comparative analysis, which shows that Mendeley can import more information from Google Scholar for researchers. This finding can help researchers decide which reference management software to use.

Comparative Analysis of Diverse Collection of Big Data Analytics Tools

In recent years, considerable effort and many studies have been devoted to developing proficient tools for performing various tasks on big data. Big data has received a great deal of attention, and for good reason. Because the datasets involved are large and complex, they are difficult to process with traditional data processing applications, which makes the development of dedicated big data tools all the more necessary. The main aim of big data analytics is to apply advanced analytic techniques to very large, heterogeneous datasets whose sizes range from terabytes to zettabytes and whose types include structured and unstructured as well as batch and streaming data. Big data technologies are useful for datasets whose size or type exceeds the capability of traditional relational databases to capture, manage, and process with low latency. The resulting challenges have led to the emergence of powerful big data tools. In this survey, a diverse collection of big data tools is described and compared in terms of their salient features.

Application of Adaptive Neural Network Algorithms for Determination of Salt Composition of Waters Using Laser Spectroscopy

In this study, a comparative analysis is performed of approaches that use neural network algorithms to effectively solve a complex inverse problem: identifying and determining the individual concentrations of inorganic salts in multicomponent aqueous solutions from their Raman scattering spectra. It is shown that artificial neural networks determine the concentration of each salt with an average accuracy no worse than 0.025 M. The results of a comparative analysis of input data compression methods are presented. It is demonstrated that uniform aggregation of input features decreases the error in determining the individual concentrations of the components by 16-18% on average.
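
A minimal sketch of the general pipeline is given below: spectra are compressed by uniform aggregation (averaging blocks of neighbouring channels) and a small neural network regresses the individual concentrations. The synthetic data, bin width, and network size are illustrative assumptions, not the study's Raman data or architecture.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Illustrative sketch: uniform aggregation of spectral channels followed by
    # neural-network regression of salt concentrations. Synthetic data only.
    rng = np.random.default_rng(0)
    n_samples, n_channels, n_salts, bin_width = 500, 1024, 5, 16

    # Synthetic "spectra": linear mixtures of component signatures plus noise
    signatures = rng.random((n_salts, n_channels))
    concentrations = rng.random((n_samples, n_salts))            # targets, in M
    spectra = concentrations @ signatures + 0.01 * rng.standard_normal((n_samples, n_channels))

    # Uniform aggregation of input features: average each block of bin_width channels
    compressed = spectra.reshape(n_samples, n_channels // bin_width, bin_width).mean(axis=2)

    model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    model.fit(compressed[:400], concentrations[:400])
    pred = model.predict(compressed[400:])
    print("Mean absolute error (M):", np.abs(pred - concentrations[400:]).mean())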

A Comprehensive Review on Different Mixed Data Clustering Ensemble Methods

An extensive amount of work has been done in data clustering research under the unsupervised learning paradigm in data mining during the past two decades. Several approaches and methods have emerged that focus on clustering diverse data types, the features of cluster models, and the similarity measures between clusters. However, no single clustering algorithm performs best at extracting good clusters across all problems. To address this issue, a technique called the cluster ensemble method was introduced, which offers an alternative approach to the cluster analysis problem. The main objective of a cluster ensemble is to aggregate diverse clustering solutions so as to attain higher accuracy and to improve on the quality of the individual clustering algorithms. Given the massive and rapid development of new methods in data mining, a critical analysis of existing techniques and future directions is essential. This paper presents a comparative analysis of different cluster ensemble methods along with their methodologies and salient features. This analysis will be useful to the clustering community and will help in selecting the most appropriate method for the problem at hand.
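
As a concrete illustration of the idea, the sketch below implements one classic ensemble scheme, evidence accumulation over a co-association matrix, using scikit-learn. The dataset, base clusterer, and ensemble size are illustrative assumptions rather than any specific method surveyed in the paper.

    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering
    from sklearn.datasets import make_blobs

    # Evidence-accumulation sketch: run several base clusterings, count how often
    # each pair of points shares a cluster, then cluster the co-association matrix.
    # (Uses the `metric="precomputed"` parameter of scikit-learn >= 1.2.)
    X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
    n = len(X)
    coassoc = np.zeros((n, n))

    for seed in range(10):                              # diverse base partitions
        labels = KMeans(n_clusters=3, n_init=5, random_state=seed).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= 10.0

    # Consensus partition: hierarchical clustering on 1 - co-association (a distance)
    consensus = AgglomerativeClustering(
        n_clusters=3, metric="precomputed", linkage="average"
    ).fit_predict(1.0 - coassoc)
    print(np.bincount(consensus))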

Zero Carbon & Low Energy Housing: Comparative Analysis of Two Persian Vernacular Architectural Solutions to Increase Energy Efficiency

In order to respond to human needs, regional, social, and economic factors must all be brought to bear to achieve residents' comfort and an ideal architecture. There is no doubt that thermal comfort must satisfy occupants not only during daily physical activities but also by creating pleasant spaces for mental activity and relaxation. Providing it, however, consumes energy and increases greenhouse gas emissions. Reducing energy use in buildings is a critical component of meeting carbon reduction commitments, and housing design therefore represents a major opportunity to cut energy use and CO2 emissions. In terms of energy efficiency, it is vital to propose and research modern building design methods; however, vernacular architecture techniques are proven, empirically refined practices that also have to be considered. This research compares two architectural solutions proposed by Persian vernacular architecture for achieving energy efficiency in hot regions. The aim is to analyze two forms of traditional Persian architecture in different locations in order to develop systematic research and sustainable technologies for adapting them to contemporary living standards.

A CFD Analysis of Hydraulic Characteristics of the Rod Bundles in the BREST-OD-300 Wire-Spaced Fuel Assemblies

This paper presents the findings from a numerical simulation of the flow in 37-rod fuel assembly models spaced by a double-wire trapezoidal wrapping, as applied to the BREST-OD-300 experimental nuclear reactor. Data on the static pressure distribution within the models have been obtained, along with equations for determining the fuel bundle flow friction factors. Recommendations are provided on the use of the turbulence closure models available in ANSYS Fluent. A comparative analysis has been performed against the existing empirical equations for determining the flow friction factors, and good agreement between the calculated and experimental data has been shown. An analysis of the experimental data and of the numerical simulation results for the hydrodynamic performance of the BREST-OD-300 fuel rod assembly is presented.
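
For reference, the sketch below shows the Darcy-Weisbach form typically used to extract a bundle friction factor from a computed or measured static pressure drop, f = dp*D_h / (L*rho*u^2/2). The numerical values are illustrative assumptions, not BREST-OD-300 data.

    # Minimal sketch of extracting a Darcy friction factor from a pressure drop.
    # All numbers below are illustrative assumptions, not BREST-OD-300 data.
    dp = 12_000.0      # static pressure drop over the bundle section, Pa
    L = 1.0            # axial length of the section, m
    D_h = 0.0095       # hydraulic diameter of the rod bundle subchannel, m
    rho = 10_500.0     # coolant density (lead-like), kg/m^3
    u = 1.8            # mean axial velocity, m/s

    f = dp * D_h / (L * rho * u ** 2 / 2)
    print(f"Darcy friction factor: {f:.4f}")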

The Comparative Analysis of Micro-reading and Traditional Reading Based On Schema Theory

Micro-reading is a new way of reading that relies on mobile phone text messages, online articles, and short literary forms, and it has a great impact on the traditional way of reading. The effect of micro-reading is especially pronounced among middle school and college students. Addressing the issues raised by the growth of college students' micro-reading, and drawing on the influence of schema theory on research into reading cognition, this paper compares micro-reading with traditional reading and explores reading strategies for the micro-era in light of the positive and negative effects that schema theory identifies in micro-reading.

Effects of SRT and HRT on Treatment Performance of MBR and Membrane Fouling

A 40 L hollow-fiber membrane bioreactor was set up with solids retention times (SRT) of 30, 15, and 4 days to treat synthetic wastewater at hydraulic retention times (HRT) of 12, 8, and 4 hours. The objective of the study was to investigate the effects of SRT and HRT on membrane fouling. A comparative analysis was carried out for physicochemical quality parameters (turbidity, suspended solids, COD, NH3-N, and PO43-). Scanning electron microscopy (SEM), energy-dispersive X-ray (EDX) analysis, and particle size distribution (PSD) were used to characterize the membrane fouling properties. The influence of SRT on effluent quality, activated sludge quality, and membrane fouling was also correlated. Lower membrane fouling and a slower rise in trans-membrane pressure (TMP) were observed at the longest SRT and HRT of 30 d and 12 h, respectively. Increasing the SRT resulted in a noticeable reduction of dissolved organic matter. The best removal efficiencies of COD, TSS, NH3-N, and PO43- were 93%, 98%, 80%, and 30%, respectively. A high HRT combined with a shorter SRT induced a faster fouling rate, and the main fouling resistance was the cake layer. The most severe membrane fouling was observed at an SRT of 4 days and an HRT of 12 hours, with a cake layer thickness of 17 mm, as reflected by a higher TMP, lower removal efficiency, and a thick sludge cake layer.

Transcriptional Evidence for the Involvement of MyD88 in Flagellin Recognition: Genomic Identification of Rock Bream MyD88 and Comparative Analysis

MyD88 is an evolutionarily conserved, host-expressed adaptor protein that is essential for proper TLR/IL1R immune-response signaling. A previously identified complete cDNA (1626 bp) of OfMyD88 comprised an ORF of 867 bp encoding a protein of 288 amino acids (32.9 kDa). The gDNA (3761 bp) of OfMyD88 revealed a quinquepartite organization composed of five exons (of 310, 132, 178, 92, and 155 bp) separated by four introns. All the introns displayed splice signals consistent with the consensus GT/AG rule. In silico analysis identified a bipartite domain structure comprising a death domain (residues 24-103) encoded by the first exon and a TIR domain (residues 151-288) encoded by the last three exons. Moreover, homology modeling of these two domains revealed a similar quaternary folding pattern between the human and rock bream homologs. A comprehensive comparison of vertebrate MyD88 genes showed that they possess a five-exon structure in which the last three exons are strongly conserved, suggesting that a rigid structure has been maintained during vertebrate evolution. A cluster of TATA box-like sequences was found 0.25 kb upstream of the cDNA start position. In addition, the putative 5'-flanking region of OfMyD88 was predicted to contain transcription factor binding sites (TFBS) implicated in TLR signaling, including copies of NFkB1, APRF/STAT3, Sp1, IRF1 and 2, and Stat1/2. Using qPCR, ubiquitous mRNA expression was detected, including in liver and blood. Furthermore, significantly up-regulated transcription of OfMyD88 was detected in head kidney (12-24 h; >2-fold), spleen (6 h; 1.5-fold), liver (3 h; 1.9-fold), and intestine (24 h; ~2-fold) after flagellin (Fla) challenge. These data suggest a crucial role for MyD88 in the antibacterial immunity of teleosts.

Numerical Methods versus Bjerksund and Stensland Approximations for American Options Pricing

Numerical methods such as binomial and trinomial trees and finite difference methods can be used to price a wide range of option contracts for which no analytical solutions are known. American options are the best-known options of this kind. Besides numerical methods, American options can be valued with approximation formulas such as the Bjerksund-Stensland formulas from 1993 and 2002. When the value of an American option is approximated by the Bjerksund-Stensland formulas, the computation time is very short, whereas the computation time of numerical methods can vary from less than one second to several minutes or even hours. To conduct a fair comparative analysis of the numerical methods and the Bjerksund-Stensland formulas, we therefore limit the computation time of the numerical methods to less than one second and ask: which method is most accurate at nearly the same computation time?
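
To make the numerical side concrete, the following Python fragment is a minimal Cox-Ross-Rubinstein binomial tree pricer for an American put with an early-exercise check at every node. The contract parameters are illustrative assumptions, and the Bjerksund-Stensland closed-form approximations are not reproduced here.

    import numpy as np

    # Minimal Cox-Ross-Rubinstein binomial tree for an American put.
    # Contract parameters below are illustrative, not values from the paper.
    def american_put_crr(S0, K, r, sigma, T, steps):
        dt = T / steps
        u = np.exp(sigma * np.sqrt(dt))          # up factor
        d = 1.0 / u                               # down factor
        p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
        disc = np.exp(-r * dt)

        # Stock prices and payoffs at maturity
        j = np.arange(steps + 1)
        S = S0 * u ** j * d ** (steps - j)
        V = np.maximum(K - S, 0.0)

        # Backward induction with early-exercise check at every node
        for n in range(steps - 1, -1, -1):
            S = S0 * u ** np.arange(n + 1) * d ** (n - np.arange(n + 1))
            V = np.maximum(K - S, disc * (p * V[1:] + (1 - p) * V[:-1]))
        return V[0]

    print(american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=500))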

Comparative Analysis of DTC Based Switched Reluctance Motor Drive Using Torque Equation and FEA Models

Since torque ripple is the main cause of noise and vibration, the performance of a Switched Reluctance Motor (SRM) can be improved by minimizing its torque ripple using a control technique called Direct Torque Control (DTC). In the DTC technique, torque is controlled directly by controlling the magnitude of the stator flux and the rate of rotation of the stator flux vector; the flux and torque are maintained within set hysteresis bands. The DTC of the SRM is analyzed by two methods. In the first, the actual torque is computed by conducting Finite Element Analysis (FEA) on the design specifications of the motor. In the second, the torque is computed from a simplified torque equation. The variation of peak current, average current, torque ripple, and speed settling time obtained with the simplified torque equation model is compared with that of the FEA-based model.
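
The hysteresis-band logic at the core of DTC can be sketched as follows: torque and flux errors are passed through two-level hysteresis comparators whose outputs index a switching table that selects the stator voltage vector. The band widths, references, and the omission of the sector-dependent switching table are simplifying assumptions for illustration.

    # Minimal sketch of the hysteresis-band decisions in DTC. Band widths and
    # reference values are illustrative; a real SRM DTC also needs the
    # sector-dependent switching table that maps decisions to voltage vectors.
    def hysteresis(error, band, previous_state):
        """Two-level hysteresis comparator: +1 = increase, -1 = decrease."""
        if error > band:
            return +1
        if error < -band:
            return -1
        return previous_state          # inside the band: keep the last decision

    def dtc_step(torque_ref, torque_meas, flux_ref, flux_meas,
                 torque_state, flux_state,
                 torque_band=0.1, flux_band=0.01):
        torque_state = hysteresis(torque_ref - torque_meas, torque_band, torque_state)
        flux_state = hysteresis(flux_ref - flux_meas, flux_band, flux_state)
        # The pair (flux_state, torque_state) would index the switching table
        # that picks the stator voltage vector; here we just return the decisions.
        return torque_state, flux_state

    print(dtc_step(5.0, 4.7, 0.3, 0.305, torque_state=+1, flux_state=-1))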

A Comprehensive Survey and Comparative Analysis of Black Hole Attack in Mobile Ad Hoc Network

A Mobile Ad-hoc Network (MANET) is a self-managing network consisting of mobile nodes that are capable of communicating with each other without any fixed infrastructure. These nodes may be routers and/or hosts. Due to the dynamic nature of the network, routing protocols are vulnerable to various kinds of attacks. The black hole attack is one of the most conspicuous security threats in MANETs. Because the route discovery process is obligatory and routine, attackers exploit this loophole to disrupt the network. In a black hole attack, packets are redirected to a node that does not actually exist in the network. Many researchers have proposed different techniques to detect and prevent this type of attack. In this paper, we analyze various routing protocols in this context and present a critical comparison among them. We also identify the routing metrics required for a proper and meaningful analysis of these protocols.

On the Representation of Actuator Fault Diagnosis and System Invertibility

In this work, the main problem considered is the detection and isolation of actuator faults. A new formulation of the linear system is derived to obtain the conditions for actuator fault diagnosis. The proposed method is based on representing the actuator as a subsystem connected in cascade with the process system. The resulting formulation yields the conditions for actuator fault detection and isolation, and the detectability conditions are expressed in terms of invertibility notions. An example and a comparative analysis with the classic formulation illustrate the performance of this approach for simple actuator fault diagnosis using a linear model of a nuclear reactor.
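
The cascade representation can be sketched in state-space form: the actuator is modelled as its own linear subsystem whose output drives the process subsystem, and the two are stacked into one augmented model. The matrices below are illustrative assumptions, not the paper's nuclear reactor model.

    import numpy as np

    # Actuator subsystem:  x_a' = Aa x_a + Ba u,   y_a = Ca x_a
    Aa = np.array([[-2.0]]); Ba = np.array([[2.0]]); Ca = np.array([[1.0]])

    # Process subsystem:   x_p' = Ap x_p + Bp y_a, y = Cp x_p
    Ap = np.array([[0.0, 1.0], [-1.0, -0.5]])
    Bp = np.array([[0.0], [1.0]])
    Cp = np.array([[1.0, 0.0]])

    # Series (cascade) interconnection: the actuator output drives the process
    A = np.block([[Aa, np.zeros((1, 2))],
                  [Bp @ Ca, Ap]])
    B = np.vstack([Ba, np.zeros((2, 1))])
    C = np.hstack([np.zeros((1, 1)), Cp])

    print("Augmented A:\n", A)
    print("Augmented B:\n", B)
    print("Augmented C:\n", C)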

A Review: Comparative Analysis of Different Categorical Data Clustering Ensemble Methods

Over the past decades, a substantial amount of work has been done in data clustering research under the unsupervised learning paradigm in data mining. Several algorithms and methods have been proposed that focus on clustering different data types, the representation of cluster models, and the accuracy of the resulting clusters. However, no single clustering algorithm proves most effective across all problems. Accordingly, to address this issue, the cluster ensemble method was introduced; it is a good alternative approach to the cluster analysis problem. The main aim of a cluster ensemble is to merge different clustering solutions so as to achieve higher accuracy and to improve on the quality of the individual clusterings. Given the continual development of new methods in data mining and the sustained interest in inventing new algorithms, a critical analysis of existing techniques and future directions is necessary. This paper presents a comparative study of different cluster ensemble methods along with their features, working processes, and the average accuracy and error rates of each method. This comprehensive analysis will be useful to the community of clustering practitioners and will help in deciding the most suitable method for the problem at hand.
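
As a concrete illustration, the sketch below implements another common consensus scheme, relabeling followed by majority voting: each base partition is aligned to a reference partition with the Hungarian algorithm, and a per-point vote produces the consensus labels. The numeric data and k-means base clusterer are illustrative assumptions; for categorical data a base clusterer such as k-modes would be used instead.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # Relabeling + majority-vote consensus over several base partitions.
    X, _ = make_blobs(n_samples=300, centers=3, random_state=1)
    k, n_members = 3, 7
    partitions = [KMeans(n_clusters=k, n_init=5, random_state=s).fit_predict(X)
                  for s in range(n_members)]

    reference = partitions[0]
    aligned = [reference]
    for labels in partitions[1:]:
        # Confusion matrix between this partition and the reference partition
        cm = np.zeros((k, k), dtype=int)
        for a, b in zip(labels, reference):
            cm[a, b] += 1
        # Hungarian assignment maximizing overlap gives the relabeling map
        row, col = linear_sum_assignment(-cm)
        mapping = {int(r): int(c) for r, c in zip(row, col)}
        aligned.append(np.array([mapping[int(a)] for a in labels]))

    votes = np.stack(aligned)                          # shape (n_members, n_points)
    consensus = np.array([np.bincount(votes[:, i], minlength=k).argmax()
                          for i in range(votes.shape[1])])
    print(np.bincount(consensus))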

Comparative Analysis of Turbulent Plane Jets from a Sharp-Edged Orifice, a Beveled-Edge Orifice and a Radially Contoured Nozzle

This article investigates experimentally the flow characteristics of plane jets issuing from a sharp-edged orifice plate, a beveled-edge orifice, and a radially contoured nozzle. The first two configurations exhibit saddle-backed velocity profiles, while the third shows a top-hat profile. A vena contracta is found for the jet emanating from the orifice at x/h ≈ 3, while the contoured case displays a potential core extending to x/h = 5. A rise in jet pressure on the centerline supports the presence of a vena contracta for the orifice jet. Momentum thicknesses and integral length scales grow linearly with x, although the growth of the shear layer and of the large-scale eddies is greater for the orifice than for the contoured nozzle. The near-field spectrum exhibits a higher primary-eddy frequency, which concurs with the enhanced turbulence intensity. Importantly, the highly "turbulent" state of the orifice jet persists into the far field, where the spectra confirm more energetic secondary eddies associated with a greater flapping amplitude of the orifice jet.
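
For reference, the momentum thickness quoted for such shear layers is typically evaluated as theta = ∫ (u/Uc)(1 − u/Uc) dy from the measured mean-velocity profile. The sketch below performs this integration on a model tanh profile; the profile and coordinates are illustrative assumptions, not the article's data.

    import numpy as np

    # Minimal sketch: momentum thickness of a shear layer from a mean-velocity
    # profile, theta = integral of (u/Uc)*(1 - u/Uc) dy. Model profile only.
    y = np.linspace(-0.05, 0.05, 401)                 # transverse coordinate, m
    Uc = 20.0                                         # centerline velocity, m/s
    u = 0.5 * Uc * (1.0 - np.tanh(y / 0.01))          # model shear-layer profile

    integrand = (u / Uc) * (1.0 - u / Uc)
    theta = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(y))   # trapezoid rule
    print(f"Momentum thickness: {theta * 1000:.2f} mm")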

Thermochemical Conversion: Jatropha curcus in Fixed Bed Reactor Using Slow Pyrolysis

Thermochemical conversion of non-edible biomass offers an efficient and economical process for providing valuable fuels and chemicals derived from biomass in the context of developing countries. Pyrolysis has advantages over other thermochemical conversion techniques because it can convert biomass directly into solid, liquid, and gaseous products by thermal decomposition in the absence of oxygen. The present paper focuses on the slow thermochemical conversion of non-edible Jatropha curcus seed cake, and in particular on the effect of nitrogen gas flow rate on product composition (wt%). In addition, a comparative analysis of product composition has been performed for different particle mesh sizes. Results show that slow pyrolysis of Jatropha curcus seed cake in a fixed bed reactor yields 18.42 wt% bio-oil at a pyrolysis temperature of 500°C, a particle size of -6+8 mesh, and a nitrogen gas flow rate of 150 ml/min.