Abstract: Titanium nitride (TiN) has been synthesized using the sheet plasma negative ion source (SPNIS). The parameters for its effective synthesis have been determined from previous experiments and studies. In this study, further enhancement of the TiN deposition rate and advancement of SPNIS operation are presented. This is achieved primarily by adding Sm-Co permanent magnets and modifying the configuration of the TiN deposition process. The magnetic enhancement aims to optimize the sputtering rate and sputtering yield of the process. The Sm-Co permanent magnets are placed below the Ti target for better sputtering by argon. The Ti target is biased from –250 V to –350 V and is sputtered by an Ar plasma produced at a discharge current of 2.5–4 A and a discharge potential of 60–90 V. Steel substrates of dimensions 20 × 20 × 0.5 mm³ were prepared at N₂:Ar volumetric ratios of 1:3, 1:5, and 1:10. Visual inspection of the samples shows the bright gold color associated with TiN. XRD characterization confirmed effective TiN synthesis, as all samples exhibit the (200) and (311) peaks of TiN and the non-stoichiometric Ti₂N (220) facet. Cross-sectional SEM results showed an increase in the TiN deposition rate of up to 0.35 μm/min, double what was previously obtained [1]. Scanning electron micrographs give a comparative morphological picture of the samples. Vickers hardness tests gave a maximum hardness of 21.094 GPa.
Abstract: Because of the importance of energy, optimization of power generation systems is necessary. Gas turbine cycles are suitable for fast power generation, but their efficiency is relatively low. To achieve higher efficiencies, several enhancements are commonly proposed, such as recovery of heat from the exhaust gases in a regenerator, use of an intercooler in a multistage compressor, and steam injection into the combustion chamber. Even with these components, however, thermodynamic optimization of the gas turbine cycle is necessary. In this article, multi-objective genetic algorithms are employed for Pareto-approach optimization of the Regenerative-Intercooling Gas Turbine (RIGT) cycle. In multi-objective optimization, a number of conflicting objective functions are to be optimized simultaneously. The objective functions considered for optimization are the entropy generation of the RIGT cycle (Ns), derived using exergy analysis and the Gouy-Stodola theorem, the thermal efficiency, and the net output power of the RIGT cycle. These objectives usually conflict with each other. The design variables consist of thermodynamic parameters such as the compressor pressure ratio (Rp), excess air in combustion (EA), turbine inlet temperature (TIT), and inlet air temperature (T0). In the first stage, single-objective optimization is investigated; the Non-dominated Sorting Genetic Algorithm (NSGA-II) is then used for multi-objective optimization. Optimization procedures are performed for two and three objective functions, and the results are compared for the RIGT cycle. To investigate the optimal thermodynamic behavior of two objectives, different sets, each including two of the objective functions, are considered individually, and for each set the Pareto front is depicted. The sets of decision variables selected on this Pareto front yield the best possible combinations of the corresponding objective functions. No point on the Pareto front is superior to any other point on the front, but every point on the front is superior to any point off it. In the case of three-objective optimization, the results are given in tables.
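As an illustration of the Pareto concept described above, the following is a minimal Python sketch of non-dominated filtering for two conflicting objectives. The surrogate functions `entropy_gen` and `net_power` are hypothetical stand-ins, not the paper's exergy-based RIGT model, and the sampling ranges for Rp, EA, TIT, and T0 are assumptions.

```python
import random

# Hypothetical surrogate objectives (NOT the paper's RIGT model):
# minimize entropy generation, maximize net output power.
def entropy_gen(rp, ea, tit, t0):
    return 1.0 / rp + 0.002 * ea + 0.1 * (t0 / tit)

def net_power(rp, ea, tit, t0):
    return tit * (1 - (1 / rp) ** 0.28) - 0.5 * ea - 0.3 * t0

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one.
    Objectives are stored as (entropy, -power) so both are minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Sample design variables over assumed ranges.
random.seed(0)
designs = [(random.uniform(5, 30),       # Rp, compressor pressure ratio
            random.uniform(0, 100),      # EA, excess air [%]
            random.uniform(1100, 1600),  # TIT [K]
            random.uniform(280, 320))    # T0 [K]
           for _ in range(500)]
points = [(entropy_gen(*d), -net_power(*d)) for d in designs]

# Keep only non-dominated points: an approximation of the Pareto front.
front = [d for d, p in zip(designs, points)
         if not any(dominates(q, p) for q in points if q != p)]
print(f"{len(front)} of {len(designs)} sampled designs are non-dominated")
```

In practice, NSGA-II evolves a population using non-dominated sorting and crowding distance rather than this brute-force filter, but the dominance test shown is the same building block.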
Abstract: For complete support of Quality of Service, it is preferable that the Grid computing environment itself predict the resource requirements of a job using special methods. Exact and correct prediction allows exact matching of required resources with available resources. After the execution of each job, the resources used are saved in an active database named "History". First, some attributes are extracted from the submitted job; then, according to a defined similarity algorithm, the most similar previously executed jobs are retrieved from "History", and the resource requirements are predicted using statistical measures such as linear regression or the average. The new idea in this research is its basis in an active database and centralized history maintenance. Implementation and testing of the proposed architecture yields prediction accuracies of 96.68% for CPU usage, 91.29% for memory usage, and 89.80% for bandwidth usage.
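The history-plus-similarity idea can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the job attributes, the inverse-Euclidean similarity measure, and averaging over the k most similar records are assumptions for the example.

```python
import math

# Hypothetical "History" records: (job attributes, observed CPU usage).
# Attributes assumed here: (input size MB, estimated runtime s, n_files).
history = [
    ((120.0,  30.0,  4), 0.42),
    ((500.0, 110.0, 12), 0.81),
    ((130.0,  35.0,  5), 0.45),
    ((480.0, 100.0, 10), 0.78),
]

def similarity(a, b):
    """Inverse Euclidean distance; larger means more similar."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

def predict_cpu(job, k=2):
    """Predict CPU usage as the average over the k most similar past jobs."""
    ranked = sorted(history, key=lambda rec: similarity(job, rec[0]), reverse=True)
    nearest = ranked[:k]
    return sum(usage for _, usage in nearest) / len(nearest)

print(predict_cpu((125.0, 32.0, 4)))  # ~0.435, averaging the two closest jobs
```

A linear regression fitted over the nearest records, as the abstract also mentions, would replace the final average with a fitted prediction.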
Abstract: A two-dimensional moving mesh algorithm is developed to simulate the general motion of two rotating bodies with relative translational motion. The grid consists of a background grid and two sets of grids around the moving bodies. With this grid arrangement, the rotational and translational motions of the two bodies are handled separately, with no complications. Inter-grid boundaries are determined based on their distances from the two bodies. In this method, the overset concept is applied to a hybrid grid, and flow variables are interpolated using a simple stencil. To evaluate this moving mesh algorithm, unsteady Euler flow is solved for different cases using Jameson's dual-time method. Numerical results show excellent agreement with experimental data and with other numerical results. To demonstrate the capability of the present algorithm for accurate solution of flow fields around moving bodies, some benchmark problems are defined in this paper.
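As a hedged illustration of "simple stencil" interpolation at overset inter-grid boundaries, the sketch below performs bilinear interpolation of a flow variable from a uniform donor grid to a receptor point. The uniform donor grid and the test field are assumptions; the paper's actual stencil and grid topology are not specified in the abstract.

```python
import numpy as np

def bilinear_interp(field, x0, y0, dx, dy, xp, yp):
    """Interpolate `field` (values on a uniform donor grid with origin
    (x0, y0) and spacings dx, dy) to the receptor point (xp, yp)."""
    i = int((xp - x0) / dx)          # indices of the cell containing the point
    j = int((yp - y0) / dy)
    s = (xp - x0) / dx - i           # local coordinates in [0, 1]
    t = (yp - y0) / dy - j
    return ((1 - s) * (1 - t) * field[i, j]     + s * (1 - t) * field[i + 1, j]
          + (1 - s) * t       * field[i, j + 1] + s * t       * field[i + 1, j + 1])

# Donor grid carrying, e.g., a density field; receptor point from another grid.
x = np.linspace(0.0, 1.0, 11)
y = np.linspace(0.0, 1.0, 11)
rho = np.add.outer(x, y)             # rho(x, y) = x + y, a linear test field
print(bilinear_interp(rho, 0.0, 0.0, 0.1, 0.1, 0.37, 0.52))  # -> 0.89 exactly
```

A linear field is reproduced exactly by this stencil, which is a convenient check before applying it to real flow variables.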
Abstract: A new voltage-mode triple-input single-output multifunction filter using only two current conveyors is presented. The proposed filter, which possesses three inputs and a single output, can generate all biquadratic filtering functions at the output terminal by selecting different input signal combinations. The validity of the proposed filter is verified through PSPICE simulations.
Abstract: This study examined the underlying dimensions of brand equity in the chocolate industry. For this purpose, the researchers developed a model to identify which factors are influential in building brand equity. A second purpose was to assess the mediating effect of brand loyalty and brand image on the relationships of brand attitude, brand personality, and brand association with brand equity. The study employed structural equation modeling to investigate the causal relationships between the dimensions of brand equity and brand equity itself. It specifically measured the way in which consumers' perceptions of the dimensions of brand equity affected overall brand equity evaluations. Data were collected from a sample of chocolate consumers in Iran. The results of this empirical study indicate that brand loyalty and brand image are important components of brand equity in this industry. Moreover, the roles of brand loyalty and brand image as mediating factors in the formation of brand equity are supported. The principal contribution of the present research is that it provides empirical evidence of the multidimensionality of consumer-based brand equity, supporting Aaker's and Keller's conceptualizations of brand equity. The present research also enriches brand equity building by incorporating brand personality and brand image, as recommended by previous researchers. Moreover, the creation of a brand equity index for the chocolate industry of Iran in particular is novel.
Abstract: This paper estimates the economic value of household preferences for enhanced solid waste disposal services in Malaysia. The contingent valuation (CV) method estimates an average additional monthly willingness-to-pay (WTP) in solid waste management charges of €0.77 to €0.80 for improved waste disposal service quality. The finding of a slightly higher WTP from the generic CV question than from the label-specific one further reveals a higher WTP for sanitary landfill, at €0.90, than for incineration, at €0.63. This suggests that sanitary landfill is the more preferred alternative. The logistic regression estimation procedure reveals that households' concern about where their rubbish is disposed, age, house ownership, household income, and the format of the CV question are significant factors influencing WTP.
Abstract: The purpose of the present study is to investigate the relationship between knowledge management and the empowerment of managers to achieve proper performance. This research is descriptive and investigative. The sample includes all male and female high school managers of the first and second regions of Urmia, comprising 98 schools and accordingly 98 managers. The instrument applied was a questionnaire. In summary, there is a statistically significant relationship between knowledge management and the empowerment of managers. Finally, several suggestions are provided.
Abstract: This paper shows that the application of probabilistic-statistical methods is unfounded at the early stage of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information is fuzzy, limited, and uncertain. Hence, the efficiency of applying the new Soft Computing technology at these diagnostic stages, using Fuzzy Logic and Neural Network methods, is considered. Fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To build a more adequate model of GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analyzed. Research on the changes in skewness and kurtosis coefficient values shows that the distributions of GTE operating parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engine's technical condition. Research on changes in correlation coefficient values also shows their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is proposed for model selection. To check model adequacy, the Fuzzy Multiple Correlation Coefficient of the Fuzzy Multiple Regression is considered. When sufficient information is available, a recurrent algorithm for identifying the aviation GTE technical condition (using Hard Computing technology) is proposed, based on measurements of the input and output parameters of the multiple linear and non-linear generalized models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the given technique, the temperature condition of a new operating aviation engine was estimated.
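The recursive least squares identification mentioned above can be illustrated with a minimal sketch. This is the standard RLS recursion on a hypothetical single-output linear model, not the paper's fuzzy generalized model; the coefficients and noise level are assumptions.

```python
import numpy as np

def rls_step(theta, P, x, y, lam=1.0):
    """One recursive least squares (RLS) update for y = x @ theta + noise.
    theta: parameter estimate, P: covariance matrix, lam: forgetting factor."""
    k = P @ x / (lam + x @ P @ x)          # gain vector
    theta = theta + k * (y - x @ theta)    # correct estimate with new sample
    P = (P - np.outer(k, x) @ P) / lam     # update covariance
    return theta, P

# Identify a hypothetical 3-parameter linear model from noisy measurements.
rng = np.random.default_rng(1)
true_theta = np.array([0.8, -0.3, 2.0])    # assumed "true" coefficients
theta, P = np.zeros(3), 1e3 * np.eye(3)
for _ in range(200):
    x = rng.normal(size=3)                     # input (regressor) vector
    y = x @ true_theta + 0.05 * rng.normal()   # noisy output measurement
    theta, P = rls_step(theta, P, x, y)
print(theta)   # converges close to [0.8, -0.3, 2.0]
```

A forgetting factor lam slightly below 1 would let the estimate track a slowly drifting engine condition rather than a fixed one.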
Abstract: Web applications have become complex and crucial for many firms, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering). The scientific community has focused attention on Web application design, development, analysis, and testing by studying and proposing methodologies and tools. Static and dynamic techniques may be used to analyze existing Web applications. Traditional static source code analysis may be very difficult because of the presence of dynamically generated code and the multi-language nature of the Web. Dynamic analysis may be useful, but it has an intrinsic limitation: the low number of program executions used to extract information. Our reverse engineering analysis, used in our WAAT (Web Applications Analysis and Testing) project, applies mutational techniques in order to exploit server-side execution engines to accomplish part of the dynamic analysis. This paper studies the effects of mutation source code analysis applied to Web software to build application models. Mutation-based generated models may contain more information than necessary, so a pruning mechanism is needed.
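As a hedged illustration of mutational techniques (not the WAAT project's actual operators), the sketch below generates mutants of a source string by swapping relational operators; each mutant is a candidate input for the server-side execution engine, widening the set of executions available to dynamic analysis.

```python
import re

# Assumed mutation operator: swap each relational operator once,
# producing one mutant per occurrence (a classic mutation scheme).
SWAPS = {"==": "!=", "!=": "==", "<": ">=", ">": "<="}
PATTERN = re.compile("|".join(re.escape(op) for op in ("==", "!=", "<", ">")))

def mutants(source):
    """Yield one mutant per relational-operator occurrence in `source`."""
    for m in PATTERN.finditer(source):
        op = m.group(0)
        yield source[:m.start()] + SWAPS[op] + source[m.end():]

code = "if ($role == 'admin' && $age > 18) { grant(); }"
for variant in mutants(code):
    print(variant)   # each variant exercises a different execution path
```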
Abstract: Environmental awareness and recent environmental policies have forced many electric utilities to restructure their operational practices to account for their emission impacts. One way to accomplish this is by reformulating the traditional economic dispatch problem so that emission effects are included in the mathematical model. This paper presents a Particle Swarm Optimization (PSO) algorithm to solve the Economic-Emission Dispatch (EED) problem, which has gained recent attention due to the deregulation of the power industry and strict environmental regulations. The problem is formulated as a multi-objective one with two competing functions, namely the economic cost and emission functions, subject to different constraints. The inequality constraints considered are the generating unit capacity limits, while the equality constraint is the generation-demand balance. A novel equality constraint handling mechanism is proposed in this paper. The PSO algorithm is tested on a 30-bus standard test system. The results obtained show that the PSO algorithm has great potential in handling multi-objective optimization problems and is capable of capturing the Pareto optimal solution set under different loading conditions.
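A minimal PSO sketch for a dispatch-style problem is given below. The two-unit system, the quadratic cost and emission coefficients, and the repair step that rescales outputs toward the demand are illustrative assumptions; the paper's novel equality-constraint mechanism and its cost-emission trade-off handling are not specified in the abstract.

```python
import random

random.seed(42)
DEMAND = 300.0                              # MW, assumed system demand
PMIN, PMAX = [50.0, 50.0], [200.0, 200.0]   # unit capacity limits (assumed)

def cost(p):       # assumed quadratic fuel cost, $/h
    return 0.004*p[0]**2 + 2.0*p[0] + 0.006*p[1]**2 + 1.5*p[1]

def emission(p):   # assumed quadratic emission function, kg/h
    return 0.0009*p[0]**2 + 0.005*p[0] + 0.0004*p[1]**2 + 0.009*p[1]

def repair(p):
    """Push the particle toward generation-demand balance by proportional
    rescaling, then clip to capacity limits (a simple stand-in for the
    paper's equality-constraint handling mechanism)."""
    total = sum(p)
    p = [x * DEMAND / total for x in p]
    return [min(max(x, lo), hi) for x, lo, hi in zip(p, PMIN, PMAX)]

def fitness(p, w=0.5):   # weighted-sum scalarization of the two objectives
    return w * cost(p) + (1 - w) * 10.0 * emission(p)

# Standard PSO loop over 2-dimensional particles.
N, ITERS = 30, 200
pos = [repair([random.uniform(l, h) for l, h in zip(PMIN, PMAX)]) for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=fitness)
for _ in range(ITERS):
    for i in range(N):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        pos[i] = repair(pos[i])
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=fitness)
print(gbest, cost(gbest), emission(gbest))
```

Sweeping the weight w between 0 and 1 traces out an approximation of the Pareto set between cost and emissions.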
Abstract: With the deepening of software reuse, component-related technologies have been widely applied in the development of large-scale complex applications. Component identification (CI) is one of the primary research problems in software reuse: analyzing domain business models to obtain a set of business components with high reuse value and good reuse performance to support effective reuse. Based on the concept and classification of CI, its technical stack is briefly discussed from four views, i.e., the form of the input business models, identification goals, identification strategies, and the identification process. Various CI methods presented in the literature are then classified into four types, i.e., domain analysis based methods, cohesion-coupling based clustering methods, CRUD matrix based methods, and other methods, and these methods are compared in terms of their advantages and disadvantages. Additionally, some insufficiencies of CI research are discussed and their causes explained. Finally, some significantly promising research directions on this problem are identified.
Abstract: This article is an extension and a practical application approach of Wheeler's NEBIC theory (Net Enabled Business Innovation Cycle). NEBIC theory is a new approach in IS research and can be used in dynamic environments related to new technology. Firms can follow market changes rapidly with the support of IT resources. Flexible firms adapt their market strategies and respond more quickly to customers' changing behaviors. When every leading firm in an industry has access to the same IT resources, the way these IT resources are managed will determine the competitive advantage or disadvantage of a firm. From the Dynamic Capabilities Perspective and from the newly introduced NEBIC theory by Wheeler, we know that IT resources alone cannot deliver customer value, but a good configuration of those resources can guarantee customer value by choosing the right emerging technology and grasping economic opportunities through business innovation and growth. We found evidence in the literature that SOA (Service Oriented Architecture) is a promising emerging technology which can deliver the desired economic opportunity through modularity, flexibility, and loose coupling. SOA can also help firms connect in networks, which can open a new window of opportunity to collaborate in innovation and the right kind of outsourcing.
Abstract: This study explores how the mechanics of learning pave the way to engineering innovation. Theories related to learning in new product/service innovation are reviewed from an organizational perspective, a behavioral perspective, and an engineering perspective. From this, an engineering team's external interactions for knowledge brokering and its internal composition for skill balance are examined from learning and innovation viewpoints. As a result, an integrated learning model is developed by reconciling the theoretical perspectives and by developing propositions that emphasize the centrality of learning, and its drivers, in engineering product/service development. The paper also provides a review and partial validation of the propositions using the results of a previously published field study in the aerospace industry.
Abstract: We present in this paper a new approach for specific JPEG steganalysis and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both and also control the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we introduced new statistical features and combined them with the Multiple Embedding Method. This approach is motivated by the Avalanche Criterion of the JPEG lossless compression step. This criterion makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5, and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10⁻⁵), and according to the avalanche criterion of the RLE and Huffman compression step, its efficiency is independent of the quantity of hidden information.
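A Fisher linear discriminant over two classes of feature vectors can be sketched as below. The two-dimensional features (standing in for entropy-deviation statistics) and the Gaussian toy data are assumptions for illustration, not the paper's actual feature set.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-D features (e.g., entropy deviations after re-embedding)
# for cover images (class 0) and stego images (class 1).
cover = rng.normal([1.0, 0.5], 0.2, size=(200, 2))
stego = rng.normal([0.6, 0.3], 0.2, size=(200, 2))

# Fisher discriminant: w = Sw^{-1}(m1 - m0), threshold at the projected midpoint.
m0, m1 = cover.mean(axis=0), stego.mean(axis=0)
Sw = np.cov(cover.T) + np.cov(stego.T)        # pooled within-class scatter
w = np.linalg.solve(Sw, m1 - m0)
threshold = w @ (m0 + m1) / 2.0

def classify(x):
    """Return 1 (stego) if the projection falls on the stego side."""
    return int(w @ x > threshold)

test = np.vstack([cover[:50], stego[:50]])
labels = np.array([0] * 50 + [1] * 50)
acc = np.mean([classify(x) for x in test] == labels)
print(f"training-set accuracy: {acc:.2f}")
```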
Abstract: Imperfect transmission conditions modeling a thin reactive 2D interphase layer between two dissimilar bonded strips are derived. In this paper, the soundness of these transmission conditions for heat conduction problems is examined by the finite element method for a strong temperature-dependent source or sink and non-monotonic temperature distributions around the faces.
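As a hedged illustration of imperfect (jump-type) transmission conditions, the sketch below evaluates 1D steady conduction across two bonded strips coupled by an assumed contact conductance h, so that the interface carries a temperature jump with flux q = h(T⁻ − T⁺). The geometry, conductivities, and this classic contact-resistance form are assumptions, not the paper's derived reactive-layer conditions; a finite element solution imposing the same condition should reproduce these values.

```python
# Two bonded strips: [0, L1] with conductivity k1, [L1, L1+L2] with k2,
# fixed boundary temperatures, and an imperfect interface with contact
# conductance h enforcing q = h * (T_minus - T_plus).
k1, k2 = 15.0, 50.0              # W/(m K), assumed
L1, L2 = 0.5, 0.5                # m, assumed strip thicknesses
h = 200.0                        # W/(m^2 K), assumed interface conductance
T_left, T_right = 400.0, 300.0   # K, boundary temperatures

# Steady 1D conduction without sources is a series resistance network:
# R_total = L1/k1 + 1/h + L2/k2, with the same flux q through each element.
R_total = L1 / k1 + 1.0 / h + L2 / k2
q = (T_left - T_right) / R_total       # heat flux, W/m^2
T_minus = T_left - q * L1 / k1         # interface temperature, strip-1 side
T_plus = T_minus - q / h               # interface temperature, strip-2 side
print(f"q = {q:.1f} W/m^2, interface jump = {T_minus - T_plus:.2f} K")
```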
Abstract: In this paper, the direct kinematic model of a three-degree-of-freedom industrial manipulator for multiple applications was developed using homogeneous transformation matrices and the Denavit-Hartenberg parameters; the inverse kinematic model was developed using the same method. Having verified that the inverse kinematics presents considerable errors at the workspace border, a genetic algorithm was implemented to optimize the model, greatly improving its efficiency.
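A minimal sketch of direct kinematics via homogeneous transformation matrices is given below, using an assumed Denavit-Hartenberg table for a generic articulated 3-DOF arm; the paper's actual manipulator parameters are not given in the abstract.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transformation."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def forward_kinematics(q, dh_table):
    """Chain the joint transforms; returns the end-effector pose (4x4)."""
    T = np.eye(4)
    for theta_i, (d, a, alpha) in zip(q, dh_table):
        T = T @ dh_matrix(theta_i, d, a, alpha)
    return T

# Assumed DH table (d, a, alpha) for a generic articulated 3-DOF arm.
DH = [(0.30, 0.00, np.pi / 2),   # joint 1: vertical base rotation
      (0.00, 0.25, 0.0),         # joint 2: shoulder link
      (0.00, 0.20, 0.0)]         # joint 3: elbow link

pose = forward_kinematics([0.1, -0.4, 0.7], DH)
print(pose[:3, 3])   # end-effector position in the base frame
```

For the inverse problem, a genetic algorithm of the kind the paper mentions would search joint angles q minimizing the distance between forward_kinematics(q, DH) and the desired pose.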
Abstract: E-government projects have the potential to increase the efficiency and effectiveness of government operations. For this reason, many developing country governments have invested heavily in this agenda, and an increasing number of e-government projects are being implemented. However, there is a lack of clear case material describing the potentialities and consequences experienced by organizations trying to manage this change. The Ministry of State for Administrative Development (MSAD) has been the organization responsible for the e-Government program in Egypt since early 2004. This paper presents a case study of the process of admission to public universities and institutions in Egypt, which is led by MSAD. Underlining the key benefits resulting from the initiative, explaining the strategies and development steps used to implement it, and highlighting the main obstacles encountered and how they were overcome will help repeat the experience in other useful e-government projects.
Abstract: Cell formation is the first step in the design of cellular manufacturing systems. In this study, a general-purpose computational scheme employing a hybrid tabu search algorithm as its core is proposed to solve the cell formation problem and its variants. The proposed scheme leaves considerable flexibility to its users: the core solution-searching algorithm embedded in the scheme can easily be changed to any other meta-heuristic, such as simulated annealing or a genetic algorithm, based on the characteristics of the problems to be solved or the preferences the users might have. In addition, several counters are designed to control the timing of the intensified and diversified solution-searching strategies, which are conducted interactively.
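A generic tabu search skeleton of the kind such a scheme could embed is sketched below. The bit-flip neighborhood, the toy objective, and the tabu tenure are assumptions for illustration; the paper's actual cell formation encoding and move operators are not specified in the abstract.

```python
import random

random.seed(7)

def objective(x):
    """Toy objective to minimize (stand-in for a cell formation measure
    such as inter-cell movements): distance from an assumed target vector."""
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    return sum(a != b for a, b in zip(x, target))

def tabu_search(n=8, tenure=3, iters=50):
    x = [random.randint(0, 1) for _ in range(n)]   # random start
    best, best_val = x[:], objective(x)
    tabu = {}                                      # move -> iteration it expires
    for it in range(iters):
        # Evaluate all bit-flip neighbors, skipping tabu moves unless
        # they improve on the best solution found (aspiration criterion).
        candidates = []
        for i in range(n):
            y = x[:]; y[i] ^= 1
            val = objective(y)
            if tabu.get(i, 0) <= it or val < best_val:
                candidates.append((val, i, y))
        val, i, x = min(candidates)
        tabu[i] = it + tenure                      # forbid reversing this move
        if val < best_val:
            best, best_val = x[:], val
    return best, best_val

print(tabu_search())   # reaches the assumed target vector, objective 0
```

Swapping the inner loop for a simulated annealing or genetic algorithm step is the kind of core replacement the proposed scheme allows.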
Abstract: Structural redundancy is an interesting issue in the seismic design of structures. Initially, structural redundancy was described as the degree of static indeterminacy of a system. Although many definitions of redundancy in structures have been presented, recently the definition has been related to the configuration of the structural system and the number of lateral load transferring directions in the structure. Steel frames with infill walls are common systems in the construction of ordinary residential buildings in some countries, and it is well recognized that the performance of such structures is affected by the addition of masonry infill walls. To investigate the effect of infill walls on the redundancy of steel frames constructed with masonry walls, the components of redundancy, including the redundancy variation index, the redundancy strength index, and the redundancy response modification factor, were extracted for frames with masonry infills. Several steel frames with typical storey numbers and various numbers of bays were designed and considered. The redundancy of the frames with and without infill walls was evaluated by the proposed method. The results show that the presence of infill walls increases redundancy.