Abstract: The optimal operation of a proton exchange membrane fuel cell (PEMFC) requires good management of water, which is present in two forms, vapor and liquid. Moreover, fuel cells that have to reach higher output require the integration of accessories that consume electrical power. In order to analyze fuel cell operation and the transport phenomena of the different species, a biphasic mathematical model is presented as a set of governing equations. The numerical solution of these conservation equations is computed in a Matlab program. A multi-criteria optimization with weighting between two opposing objectives is used to determine compromise solutions between maximum output and minimal stack size. The obtained results are in good agreement with available literature data.
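A weighted-sum scalarization is one common way to trade off two opposing objectives of this kind. The sketch below is a minimal illustration, not the authors' model: the two objective functions (normalized power output and stack size as functions of a single design variable, here a hypothetical cell count) are made-up stand-ins.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical stand-in objectives over one design variable n (number of cells):
# output rises with diminishing returns, stack size grows linearly.
def power_output(n):
    return 1.0 - np.exp(-n / 20.0)   # normalized output, to maximize

def stack_size(n):
    return n / 100.0                 # normalized size, to minimize

# Weighted-sum scalarization: sweep the weight to trace compromise solutions.
for w in np.linspace(0.1, 0.9, 5):
    cost = lambda n: -w * power_output(n) + (1.0 - w) * stack_size(n)
    res = minimize_scalar(cost, bounds=(1.0, 100.0), method="bounded")
    print(f"w = {w:.1f}: n* = {res.x:5.1f}, "
          f"output = {power_output(res.x):.3f}, size = {stack_size(res.x):.3f}")
```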
Abstract: The worldwide prevalence of the H3N2 influenza virus and its increasing resistance to existing drugs necessitate the development of an improved, better-targeting anti-influenza drug. H3N2 influenza neuraminidase is one of the two membrane-bound proteins belonging to the group-2 neuraminidases. It is a key player in viral pathogenicity and hence an important target of anti-influenza drugs. Oseltamivir is one of the potent drugs targeting this neuraminidase. In the present work, we have taken the subtype N2 neuraminidase as the receptor and probable analogs of oseltamivir as drug molecules to study the protein-drug interaction, in anticipation of finding an efficient modified candidate compound. The oseltamivir analogs were made by modifying the functional groups using the Marvin Sketch software and were docked using Schrodinger's Glide. Oseltamivir analog 10 was found to have a significantly better energy value (16% lower than that of oseltamivir) and could be the probable lead molecule. This suggests that some of the modified compounds can interact in a novel manner, with increased hydrogen bonding at the active site of neuraminidase, and might be better than the original drug. Further work, such as enzymatic inhibition studies, synthesis, and crystallization of the drug-target complex, can be carried out to analyze the interactions biologically.
Abstract: The scarcity of resources for biodiversity conservation gives rise to the need for strategic investment, with priorities informed by the cost of conservation. While the literature provides abundant methodological options for biodiversity conservation, estimating the true cost of conservation remains abstract and simplistic, without recognising the dynamic nature of that cost. Some recent works demonstrate the value of economic theory in informing biodiversity decisions, particularly on the costs and benefits of biodiversity; however, the integration of the concept of true cost into biodiversity actions and planning has been very slow to come by, especially at the farm level. Conservation planning studies often use area as a proxy for costs, neglecting differences in land values as well as protected areas. These studies consider only heterogeneous benefits, while land costs are treated as homogeneous. Analysis under the assumption of cost homogeneity results in biased estimation: not only does it fail to capture the true total cost of biodiversity actions and plans, but it also fails to screen out lands that are more (or less) expensive and/or difficult (or more suitable) for biodiversity conservation purposes, hindering the validity and comparability of the results. "Economies of scope" is another of the most neglected aspects in the conservation literature. The concept of economies of scope introduces the existence of cost complementarities within a multiple-output production system, and it suggests a lower cost during the concurrent production of multiple outputs by a given farm. If there are indeed economies of scope, then a simplistic representation of costs will tend to overestimate the true cost of conservation, leading to suboptimal outcomes. The aim of this paper, therefore, is to provide a first broad review of the various theoretical ways in which economies of scope are likely to occur in conservation. Consequently, the paper identifies gaps that need to be filled in future analysis.
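For reference, the standard textbook measure of this effect (not taken from the paper) is the degree of scope economies of Baumol, Panzar, and Willig: with a cost function C and two outputs q1 (say, agricultural production) and q2 (conservation output), economies of scope exist when joint production is cheaper than separate production:

```latex
% Degree of scope economies: S_c > 0 indicates cost complementarity,
% i.e. producing q_1 and q_2 jointly is cheaper than producing them separately.
S_c \;=\; \frac{C(q_1, 0) + C(0, q_2) - C(q_1, q_2)}{C(q_1, q_2)} \;>\; 0
```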
Abstract: Stochastic comparison has been an important direction of research in various areas. It can be carried out using the notion of stochastic ordering, which gives a qualitative rather than purely quantitative estimation of the system under study. In this paper we present applications of comparisons based on entropy-related uncertainty in reliability analysis, for example to design better systems. These results can be used as a priori information in simulation studies.
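As a minimal illustration of entropy-based comparison (an assumed example, not one from the paper), the differential entropies of two exponential lifetime models can be compared directly; the component whose lifetime has the higher entropy is, in this sense, the more uncertain design.

```python
from scipy.stats import expon

# Two hypothetical component lifetime models (mean lifetimes 2.0 and 5.0).
lifetime_a = expon(scale=2.0)
lifetime_b = expon(scale=5.0)

# Differential entropy of an exponential with rate lam is 1 - ln(lam),
# i.e. 1 + ln(scale), so the longer-lived component is more uncertain.
h_a = float(lifetime_a.entropy())
h_b = float(lifetime_b.entropy())
print(f"h(A) = {h_a:.4f}, h(B) = {h_b:.4f}")
print("B is more uncertain than A" if h_b > h_a else "A is more uncertain than B")
```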
Abstract: The operating control parameters of the injection flushing type of electrical discharge machining (EDM) process on a stainless steel 304 workpiece with copper tools are optimized according to an individual machining characteristic, the material removal rate (MRR). A lower MRR during the EDM machining process may decrease machining productivity. Hence, the quality characteristic for MRR is set to higher-the-better to achieve optimum machining productivity. The Taguchi method has been used for the construction, layout, and analysis of the experiment for the MRR machining characteristic. The use of the Taguchi method saves considerable time and cost in preparing and machining the experiment samples. An L18 orthogonal array, a fundamental component of the statistical design of experiments, has been used to plan the experiments, and analysis of variance (ANOVA) is used to determine the optimum machining parameters for this machining characteristic. The control parameters selected for these optimization experiments are polarity, pulse-on duration, discharge current, discharge voltage, machining depth, machining diameter, and dielectric liquid pressure. The results show that the higher the discharge voltage, the higher the MRR.
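For reference, the higher-the-better quality characteristic corresponds to the standard Taguchi signal-to-noise ratio S/N = -10 log10((1/n) Σ 1/y_i²). A minimal sketch of its computation (the MRR replicate values are made up, not the experiment's data):

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi larger-the-better S/N ratio in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical MRR replicates (mm^3/min) for two L18 trial runs.
trial_1 = [12.1, 11.8, 12.5]
trial_2 = [15.3, 14.9, 15.6]
for name, y in [("trial 1", trial_1), ("trial 2", trial_2)]:
    print(f"{name}: S/N = {sn_larger_the_better(y):.2f} dB")
```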
Abstract: Nanotechnology based on epitaxial systems involves single or arranged misfit dislocations. In general, whatever the type of dislocation or the geometry of the array formed by the dislocations, it is important for experimental studies to know exactly the stress distribution, for which there is no analytical expression [1, 2]. This work uses a numerical analysis to study the relaxation of epitaxial layers having at their interface a periodic network of edge misfit dislocations. The stress distribution is estimated using isotropic elasticity. The results show that the thickness of the two sheets is a crucial parameter in the stress distributions and hence in the profile of the two sheets. A comparative study between the case of a single dislocation and the case of a parallel network shows that the layers relax better when the interface is covered by a parallel arrangement of misfit dislocations. Consequently, a single dislocation at the interface produces a significant stress field, which can be reduced by inserting a parallel network of dislocations with suitable periodicity.
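As background to the isotropic-elasticity estimate, the in-plane stress of a single edge dislocation in an infinite isotropic medium has a classical closed form (the textbook bulk solution; the paper's numerical analysis additionally handles the finite thickness of the sheets, which this expression ignores). A sketch that sums this field over a periodic row of period p, with illustrative material parameters:

```python
import numpy as np

# Classical bulk solution for an edge dislocation along z with Burgers
# vector b along x (isotropic elasticity). Parameters are illustrative.
mu, nu, b = 30e9, 0.3, 0.4e-9        # shear modulus (Pa), Poisson ratio, |b| (m)
D = mu * b / (2.0 * np.pi * (1.0 - nu))

def sigma_xx(x, y):
    """sigma_xx of a single edge dislocation at the origin."""
    r2 = x**2 + y**2
    return -D * y * (3.0 * x**2 + y**2) / r2**2

def sigma_xx_network(x, y, p, n_terms=500):
    """Superpose single-dislocation fields over a row at x = n*p.
    Beyond a distance of order p from the interface, the array field
    decays exponentially, while the single-dislocation field decays as 1/r."""
    total = 0.0
    for n in range(-n_terms, n_terms + 1):
        total += sigma_xx(x - n * p, y)
    return total

# Stress at a probe point above the interface.
x0, y0, p = 0.2e-9, 2.0e-9, 10e-9
print(f"single dislocation: {sigma_xx(x0, y0):.3e} Pa")
print(f"periodic network : {sigma_xx_network(x0, y0, p):.3e} Pa")
```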
Abstract: Nowadays, due to the globalization of the economy and the competitive environment, innovation and technology play a key role in the creation of wealth and the economic growth of countries. In fact, the rapid growth of practical and technological knowledge can yield social benefits for countries when it is turned into effective innovation. Considering the importance of innovation for the development of countries, this study addresses the radical technological innovation introduced by nanopapers at different stages of paper production, including stock preparation, the use of authorized additives, fillers, and pigments, the use of retention aids, calendering, and the stages of producing conductive paper, porous nanopaper, and layer-by-layer self-assembly. Research results show that in the coming years forest-related products will lose a considerable portion of their market share unless they embrace radical innovation. Although incremental innovations can keep this industry competitive in the mid-term, radical innovations are necessary for economic growth and competitive advantage in the long term. Radical innovations can lead to new products and materials whose applications in the packaging industry can produce added value. Although the application of nanotechnology in this industry can be costly, it can be carried out in cooperation with other industries to make maximum use of nanotechnology. This technology can then be used throughout the production process, resulting in the mass production of simple and flexible papers at low cost and with special properties such as ease of shaping and forming, easy transportation, light weight, recovery and recycling abilities, and sealing. Improving the resistance of packaging materials without reducing their performance enhances the quality and added value of packaging. Improving cellulose at the nanoscale can produce considerable electronic, optical, and magnetic effects, leading to improvements in packaging and added value. Compared to thermoplastic products and ordinary papers, nanopapers show much better performance in terms of effective mechanical indexes such as the modulus of elasticity, tensile strength, and stress-strain behavior. At densities lower than 640 kg/m³, due to the network structure of the nanofibers and the balanced and randomized distribution of NFC in the plane, these properties improve even further. For nanopapers, the modulus of elasticity, tensile strength, and strain are 1.4 GPa, 84 MPa, and 17%, and 13.3 GPa, 214 MPa, and 10%, respectively. In the layer-by-layer (LbL) self-assembly method, the tensile strengths of nanopaper with TiO2 particles, SiO2, and halloysite clay nanotubes are 30.4 ± 7.6 Nm/g, 13.6 ± 0.8 Nm/g, and 14 ± 0.3 Nm/g, respectively, which fall within the acceptable range of similar samples made with virgin fiber. The improved brightness and porosity index of nanopapers can create further competitive advantages in the packaging industry.
Abstract: E-services have significantly changed the way of doing business in recent years. We can, however, observe poor use of these services: there is a large gap between supply and actual e-services usage. This is why we started a project to provide an environment that will encourage the use of e-services. We believe that merely providing an e-service does not automatically mean consumers will use it. This paper shows the origins of our project and its current position. We discuss the decision to use semantic web technologies and their potential to improve e-services usage. We also present the current knowledge base and its real-world classification, and we discuss further work to be done in the project. The current state of the project is promising.
Abstract: In this paper, we propose a new robust and secure watermarking system based on the combination of two different transforms, the Discrete Wavelet Transform (DWT) and the Contourlet Transform (CT). The combined transforms compensate for the drawbacks of using each transform separately. The proposed algorithm has been designed, implemented, and tested successfully. The experimental results showed that selecting the best sub-band for embedding from both transforms improves the imperceptibility and robustness of the new combined algorithm. The combined DWT-CT algorithm achieved a PSNR value of 88.11 dB and improved robustness, producing better resistance against Gaussian noise attacks. In addition, the implemented system showed a successful extraction method that extracts the watermark efficiently.
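A minimal sketch of the DWT half of such an embedding scheme (the contourlet stage is omitted since it has no standard Python implementation; the wavelet, the sub-band choice, and the embedding strength alpha are illustrative assumptions, not the paper's values):

```python
import numpy as np
import pywt

def embed_dwt(cover, watermark_bits, alpha=2.0):
    """Additively embed a bit sequence into the cV sub-band of a 1-level DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), "haar")
    flat = cV.flatten()
    for i, bit in enumerate(watermark_bits):
        flat[i] += alpha if bit else -alpha
    cV = flat.reshape(cV.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB, used to judge imperceptibility."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64))   # stand-in cover image
bits = rng.integers(0, 2, size=32)            # stand-in watermark bits
marked = embed_dwt(cover, bits)
print(f"PSNR = {psnr(cover, marked):.2f} dB")
```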
Abstract: This paper employs a variable returns to scale DEA model that takes account of risky assets to estimate the operating efficiencies of the 21 domestic listed securities firms during the period 2005-2009. Evidence is found that, on average, the operating efficiencies of brokerage securities firms are better than those of integrated securities firms. Evidence is also found that technical inefficiency arising from inappropriate management constitutes the main source of operating inefficiency for both types of securities firms. Moreover, economies of scale prevail in both brokerage and integrated securities firms; in other words, both exhibit increasing returns to scale.
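For reference, the input-oriented variable-returns-to-scale (BCC) DEA score of one decision-making unit is a small linear program. A minimal sketch with made-up input/output data (not the paper's sample), solved with scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: rows = DMUs (firms), columns = inputs X and outputs Y.
X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 100.0], [25.0, 250.0]])
Y = np.array([[100.0], [120.0], [80.0], [110.0]])

def bcc_input_efficiency(o):
    """Input-oriented BCC score for DMU o: min theta subject to
    X.T @ lam <= theta * x_o,  Y.T @ lam >= y_o,  sum(lam) = 1, lam >= 0."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                   # variables: [theta, lam_1..lam_n]
    A_ub = np.vstack([
        np.c_[-X[o], X.T],                        # inputs:  X.T @ lam - theta*x_o <= 0
        np.c_[np.zeros(s), -Y.T],                 # outputs: -Y.T @ lam <= -y_o
    ])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # VRS convexity: sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {bcc_input_efficiency(o):.3f}")
```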
Abstract: This paper presents a comparison of two metaheuristic algorithms, the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), in producing the Freeman chain code (FCC). The main problem in representing characters using FCC is that the length of the FCC depends on the starting point. Isolated characters, especially upper-case characters, usually have branches that make the traversal process difficult. FCC construction using one continuous route has not been widely explored, which is our motivation for using population-based metaheuristics. The experimental results show that the route length obtained using GA is better than that of ACO; however, ACO has a better computation time than GA.
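For reference, the Freeman chain code itself maps each move between consecutive boundary pixels to one of eight direction codes. A minimal sketch (the example path is made up):

```python
# 8-connectivity Freeman codes in image coordinates (row grows downwards):
# 0 = east, then counter-clockwise, so (0, -1) = north = code 2.
FREEMAN = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
           (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def freeman_chain_code(path):
    """Encode a pixel path [(col, row), ...] as a list of direction codes."""
    codes = []
    for (c0, r0), (c1, r1) in zip(path, path[1:]):
        codes.append(FREEMAN[(c1 - c0, r1 - r0)])
    return codes

# Hypothetical traversal of a small character stroke.
path = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 2), (0, 2)]
print(freeman_chain_code(path))  # -> [0, 0, 6, 5, 4]
```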
Abstract: In order to achieve better road utilization and traffic
efficiency, there is an urgent need for a travel information delivery
mechanism to assist the drivers in making better decisions in the
emerging intelligent transportation system applications. In this paper,
we propose a relayed multicast scheme under heterogeneous networks
for this purpose. In the proposed system, travel information consisting
of summarized traffic conditions, important events, real-time traffic
videos, and local information service contents is formed into layers
and multicasted through an integration of WiMAX infrastructure and
Vehicular Ad hoc Networks (VANET). With the support of adaptive modulation and coding in WiMAX, the radio resources can be optimally allocated during multicast so as to dynamically adjust the number of data layers received by the users. In addition to the multicast supported by WiMAX, a knowledge propagation and information relay scheme over VANET is designed. The experimental
results validate the feasibility and effectiveness of the proposed
scheme.
Abstract: Prediction of the highly nonlinear behavior of suspended sediment flow in rivers is of prime importance in the field of water resources engineering. In this study, the predictive performance of two Artificial Neural Networks (ANNs), namely the Radial Basis Function (RBF) network and the Multi-Layer Feed-Forward (MLFF) network, has been compared. Time series data of daily suspended sediment discharge and water discharge at the Pari River were used for training and testing the networks. A number of statistical parameters, i.e. root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE), and coefficient of determination (R2), were used for performance evaluation of the models. Both models produced satisfactory results and showed good agreement between the predicted and observed data. The RBF network model provided slightly better results than the MLFF network model in predicting suspended sediment discharge.
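The four evaluation statistics named above have standard definitions (CE being the Nash-Sutcliffe coefficient commonly used in hydrology). A minimal numpy sketch with illustrative sample values, not the study's data:

```python
import numpy as np

def evaluate(obs, pred):
    """RMSE, MAE, Nash-Sutcliffe CE, and R2 for observed vs. predicted series."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    mae = np.mean(np.abs(obs - pred))
    ce = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    return rmse, mae, ce, r2

obs = [120.0, 95.0, 210.0, 180.0, 60.0]      # observed sediment discharge
pred = [110.0, 100.0, 205.0, 170.0, 75.0]    # model predictions
rmse, mae, ce, r2 = evaluate(obs, pred)
print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  CE={ce:.3f}  R2={r2:.3f}")
```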
Abstract: In this paper, the effect of adding dune sand powder (DSP) on the development of compressive strength and the hydration of cement pastes was investigated as a function of the water/binder ratio, varying, on the one hand, the percentage of DSP and, on the other, the fineness of DSP. In order to better understand the pozzolanic effect of dune sand powder in cement pastes, we followed the hydration of mixtures (50% pure lime + 50% DSP) by X-ray diffraction. These mixture pastes exhibit a hydraulic setting due to the formation of a semi-crystallized C-S-H phase (calcium silicate hydrate). This study is a simplified approach to that of the mixtures (80% ordinary Portland cement + 20% DSP), in which the main reaction is the fixing of the lime released by cement hydration in the presence of DSP to form second-generation semi-crystallized calcium silicate hydrate. The results proved that up to 20% DSP could be used as Portland cement replacement, with a fineness of 4000 cm²/g, without adversely affecting the compressive strength. After 28 days, the compressive strength at 5, 10, and 15% DSP is superior to that of Portland cement, with an optimum effect for a percentage of the order of 5% to 10%, irrespective of the w/b ratio and the fineness of DSP.
Abstract: The counting of cell colonies is always a long and laborious process that depends on the judgment and ability of the operator, and that judgment can vary with fatigue. Moreover, since this activity is time-consuming, it can limit the usable number of dishes per experiment. For these reasons, an automatic cell colony counting system is needed. This article introduces a new automatic counting system based on the processing of digital images of cellular colonies grown on Petri dishes. The system is mainly based on region-growing algorithms for the recognition of the regions of interest (ROI) in the image and on a Sanger neural network for the characterization of such regions. The best final classification is provided by a Feed-Forward Neural Network (FF-NN) and compared with a K-Nearest Neighbour (K-NN) classifier and a Linear Discriminant Function (LDF). Preliminary results are presented.
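A minimal sketch of intensity-based region growing of the kind named above (the seed point, tolerance, and toy image are illustrative assumptions, not the article's pipeline):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=20):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity differs from the seed by at most `tol`. Returns a boolean mask."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy "dish" image: a bright 3x3 colony on a dark background.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 2:5] = 200
roi = region_grow(img, seed=(3, 3), tol=20)
print("ROI pixel count:", roi.sum())   # -> 9
```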
Abstract: This study investigates the performance of radial basis function networks (RBFN) in forecasting the monthly CO2 emissions of an electric power utility. We also propose a method for input variable selection based on identifying general relationships between groups of input candidates and the output. The effect that each input has on the forecasting error is examined by removing all inputs except the variable under investigation from its group, recomputing the network's parameters, and performing the forecast. The resulting forecasting error is then compared with that of the reference model. Eight input variables were identified as the most relevant, which is significantly fewer than the 30 input variables of our reference model. The simulation results demonstrate that the model with the 8 inputs selected using the proposed method forecasts as accurately as the reference model, while also being the most parsimonious.
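A minimal sketch of this kind of group-wise screening loop, with a generic RBF-kernel regressor standing in for the RBFN (the data, groups, train/test split, and model are all illustrative assumptions, not the study's setup):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))                      # 6 candidate inputs
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=120)
groups = {"load": [0, 1, 2], "weather": [3, 4, 5]}  # hypothetical input groups

def forecast_rmse(cols):
    """Fit on the first 80 samples with the given columns, score on the rest."""
    model = KernelRidge(kernel="rbf", alpha=1.0)
    model.fit(X[:80][:, cols], y[:80])
    pred = model.predict(X[80:][:, cols])
    return np.sqrt(np.mean((y[80:] - pred) ** 2))

reference = forecast_rmse(list(range(6)))          # all-inputs reference model
for gname, members in groups.items():
    other = [c for g, m in groups.items() if g != gname for c in m]
    for v in members:
        # Keep only variable v from its group, plus all other groups.
        err = forecast_rmse(sorted(other + [v]))
        print(f"group {gname}, keep x{v}: RMSE {err:.3f} (reference {reference:.3f})")
```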
Abstract: Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure; you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural network based studies. In this study we explore the reasons for those disappointing results and implement different neural network models using an augmented set of new metrics. The results obtained are compared with those of previous studies using traditional metrics. To enable comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make more thorough use of the samples collected, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied to software cost estimation studies with success.
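A minimal sketch of k-fold cross-validation of an MLP effort estimator, of the kind described above (the synthetic metrics, effort values, and hyperparameters are illustrative, not the study's):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.uniform(1, 100, size=(60, 4))              # hypothetical project metrics
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] / 50   # hypothetical effort (person-months)

# k-fold CV makes thorough use of a small sample: every project is
# used for testing exactly once.
kf = KFold(n_splits=5, shuffle=True, random_state=1)
errors = []
for train_idx, test_idx in kf.split(X):
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=1))
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    errors.append(np.mean(np.abs(y[test_idx] - pred)))

print(f"5-fold mean absolute error: {np.mean(errors):.2f} person-months")
```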
Abstract: On the basis of the linearized Phillips-Heffron model of a single-machine power system, a novel method for designing a unified power flow controller (UPFC) based output feedback controller is presented. The design problem of the output feedback controller for the UPFC is formulated as an optimization problem with a time-domain-based objective function, which is solved by iteration particle swarm optimization (IPSO), a method with a strong ability to find the most optimistic results. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results prove the effectiveness and robustness of the proposed method in achieving high power system performance. The simulation study shows that the controller designed by iteration PSO performs better than one designed by classical PSO in finding the solution.
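A minimal sketch of the classical PSO loop that such a design wraps around a time-domain objective (the objective below is a made-up stand-in for the controller's damping criterion, and the iteration-dependent coefficients of the IPSO variant are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(k):
    """Stand-in time-domain cost: ITAE of a damped oscillation whose
    damping ratio is set by the gains k[0], k[1] (purely illustrative)."""
    t = np.linspace(0, 10, 500)
    zeta = np.clip(0.1 + 0.2 * k[0] - 0.05 * k[1], 0.05, 0.99)
    err = np.exp(-zeta * t) * np.cos(2 * np.pi * t)
    return np.sum(t * np.abs(err)) * (t[1] - t[0])   # ITAE approximation

n_particles, dim, iters = 20, 2, 50
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration factors
pos = rng.uniform(0, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 5)
    f = np.array([objective(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best gains:", gbest, "cost:", objective(gbest))
```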
Abstract: University websites are considered one of the primary brand touch points for multiple stakeholders, but most of them do not have designs that create favorable impressions. Some of the elements that web designers should carefully consider are appearance, content, functionality, usability, and search engine optimization. However, priority should be placed on website simplicity and negative space. In terms of content, previous research suggests that universities should include reputation, learning environment, graduate career prospects, destination image, cultural integration, and a virtual tour on their websites. This study examines how top-200 world-ranking science and technology based universities present their brands online and whether their websites capture these content dimensions. Content analysis of the websites revealed that the top-ranking universities captured these dimensions to varying degrees. Moreover, the UK-based university placed higher priority on website simplicity and negative space compared to the Malaysian-based university.
Abstract: In this paper we present a novel approach to face image coding. The proposed method makes use of features of video encoders such as motion prediction. First, the encoder selects an appropriate prototype from the database and warps it according to the features of the face being encoded. The warped prototype is placed as the first frame, an I frame, and the face being encoded is placed as the second frame, a P frame. Information about the feature positions, the color change, the selected prototype, and the data flow of the P frame is sent to the decoder. The condition is that both the encoder and the decoder hold the same database of prototypes. We ran an experiment with an H.264 video encoder and compared the results with those achieved by JPEG and JPEG2000. The results show that our approach achieves a three times lower bitrate and a two times higher PSNR in comparison with JPEG. In comparison with JPEG2000, the bitrate was very similar, but the subjective quality achieved by the proposed method is better.