Abstract: The seismic response of a steel shear wall system, including nonlinearity effects, is investigated in this paper using the finite element method. With the availability of modern computer technology, nonlinear finite element analysis has become a usable and reliable means of analyzing civil structures. In this research, a finite element code is developed to capture both the large displacements and the materially nonlinear behavior of the shear wall. The code is based on the standard Galerkin weighted residual formulation. A two-dimensional plane stress model with a total Lagrangian formulation is used to represent the shear wall response, and the Newton-Raphson method is applied to solve the nonlinear transient equations. The presented model can be extended to the analysis of civil engineering structures with different material behavior and complicated geometry.
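The Newton-Raphson iteration named in the abstract can be sketched in a few lines. The scalar version below is only an illustration of the loop (the finite element code applies the same idea to the assembled residual vector and tangent stiffness matrix); the function names and tolerances are assumptions, not the paper's code.

```python
def newton_raphson(residual, tangent, u0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson loop: repeatedly correct the unknown u
    by residual/tangent until the residual vanishes."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            break
        u -= r / tangent(u)   # in FE codes: solve K_T * du = -r
    return u
```

For example, solving u**3 - 8 = 0 from u0 = 3 converges to u = 2 in a few iterations.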
Abstract: Data Envelopment Analysis (DEA) is one of the most
widely used techniques for evaluating the relative efficiency of a set
of homogeneous decision making units. Traditionally, it assumes that
input and output variables are known in advance, ignoring the critical
issue of data uncertainty. In this paper, we deal with the problem
of efficiency evaluation under uncertain conditions by adopting the
general framework of stochastic programming. We assume that
output parameters are represented by discretely distributed random
variables, and we propose two different models defined according to a
risk-neutral and a risk-averse perspective. The models have been validated
on a real case study concerning the evaluation of the
technical efficiency of a sample of individual firms operating in
the Italian leather manufacturing industry. Our findings show the
validity of the proposed approach as an ex-ante evaluation technique,
providing the decision maker with useful insights that depend on
his or her degree of risk aversion.
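For readers unfamiliar with the deterministic baseline that such stochastic models extend, the classical input-oriented CCR efficiency score of a decision making unit can be computed with a small linear program. The sketch below (multiplier form, using `scipy`; all names and the toy data are illustrative assumptions) omits the paper's stochastic output scenarios.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (multiplier form):
    maximize u.y0 subject to v.x0 = 1 and u.yj - v.xj <= 0 for all j."""
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs; variables are [u (s), v (m)]
    c = np.concatenate([-Y[j0], np.zeros(m)])             # maximize u.y0
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None, :]  # v.x0 = 1
    A_ub = np.hstack([Y, -X])                             # u.yj - v.xj <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun
```

In the stochastic variants described in the abstract, the single output vector would be replaced by scenario-dependent realizations with associated probabilities.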
Abstract: Despite extensive study of wireless sensor network
security, defending against internal attacks and detecting abnormal
sensor behaviour remain difficult, unsolved tasks. Conventional
cryptographic techniques provide neither robust security against, nor
a detection process for, internal attacks caused by abnormally
behaving sensors. This paper presents a framework for identifying
an insider attacker or abnormally behaving sensor and detecting its
location, using false message detection and Time Difference of
Arrival (TDoA). It is shown that the new framework can efficiently
identify the insider attacker and detect its location, so that the
attacker can be reprogrammed or removed from the network to protect
it from internal attack.
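To make the TDoA step concrete, the sketch below localizes a transmitting node from time differences of arrival at known monitor positions by a brute-force grid search over candidate positions. The sensor layout, propagation speed and grid resolution are all illustrative assumptions, not the paper's actual framework.

```python
import numpy as np

def tdoa_locate(sensors, tdoas, c=343.0, extent=100.0, step=1.0):
    """Find the grid point whose predicted TDoAs (relative to the first
    sensor) best match the measured ones in the least-squares sense."""
    best, best_err = None, float('inf')
    for x in np.arange(0.0, extent + step, step):
        for y in np.arange(0.0, extent + step, step):
            r = [np.hypot(x - sx, y - sy) for sx, sy in sensors]
            pred = [(ri - r[0]) / c for ri in r[1:]]
            err = sum((p - t) ** 2 for p, t in zip(pred, tdoas))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

Practical systems solve the underlying hyperbolic equations in closed form or iteratively; the grid search merely shows the geometry of the problem.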
Abstract: Service innovation is a central concern in a fast-changing
environment. Owing to shifting customer demands and
advances in information technology (IT) in service management, an
expanded conceptualization of e-service innovation is required.
In particular, innovation practices have become increasingly
challenging, driving managers to employ open innovation
models to maintain competitive advantages. At the same time, firms
need to interact with external and internal customers in innovative
environments, such as open innovation networks, to co-create value.
Based on these issues, a conceptual framework of e-service
innovation is developed. This paper aims to examine the factors
contributing to e-service innovation and firm performance, including
financial and non-financial aspects. The study concludes by showing
how e-service innovation can play a significant role in growing the
overall value of the firm. The discussion and conclusion lead to a
stronger understanding of e-service innovation and of co-creating
value with customers within open innovation networks.
Abstract: Digital watermarking is one of the techniques for
copyright protection. In this paper, a normalization-based robust
image watermarking scheme which encompasses singular value
decomposition (SVD) and discrete cosine transform (DCT)
techniques is proposed. For the proposed scheme, the host image is
first normalized to a standard form and divided into non-overlapping
image blocks. SVD is applied to each block. By concatenating the
first singular values (SVs) of adjacent blocks of the normalized image,
an SV block is obtained. DCT is then carried out on the SV blocks to
produce SVD-DCT blocks. A watermark bit is embedded in the
high-frequency band of an SVD-DCT block by imposing a particular
relationship between two pseudo-randomly selected DCT
coefficients. An adaptive frequency mask is used to adjust the local
watermark embedding strength. Watermark extraction is essentially
the inverse process; the extraction method is blind
and efficient. Experimental results show that the quality degradation
of the watermarked image caused by the embedded watermark is visually
imperceptible. Results also show that the proposed scheme is robust
against various image processing operations and geometric attacks.
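The core embedding rule, imposing a relationship between two DCT coefficients of an SV block, can be sketched as follows. The block length, coefficient indices and strength `delta` are illustrative assumptions, and the normalization, block SVD and adaptive mask steps of the full scheme are omitted.

```python
import numpy as np
from scipy.fft import dct, idct

def embed_bit(sv_block, bit, i=5, j=6, delta=4.0):
    """Embed one bit in a 1-D block of singular values by forcing an
    order relation between two high-frequency DCT coefficients
    (indices i, j and strength delta are illustrative choices)."""
    c = dct(sv_block, norm='ortho')
    lo, hi = min(c[i], c[j]), max(c[i], c[j])
    if bit:                          # bit 1 -> c[i] exceeds c[j]
        c[i], c[j] = hi + delta, lo
    else:                            # bit 0 -> c[j] exceeds c[i]
        c[i], c[j] = lo, hi + delta
    return idct(c, norm='ortho')     # back to (modified) singular values

def extract_bit(sv_block, i=5, j=6):
    """Blind extraction: only the coefficient order is inspected."""
    c = dct(sv_block, norm='ortho')
    return int(c[i] > c[j])
```

Because only the order relation between the two coefficients is tested, no original image is needed at the detector, which is what makes the extraction blind.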
Abstract: Document image processing has become an
increasingly important technology in the automation of office
documentation tasks. During document scanning, skew is inevitably
introduced into the incoming document image. Since algorithms
for layout analysis and character recognition are generally very
sensitive to page skew, skew detection and correction in
document images are critical steps preceding layout analysis. In this
paper, a novel skew detection method is presented for binary
document images. The method applies thinning and the Hough
transform to selected characters of the text to estimate the skew
angle accurately. Several experiments have been conducted on
various types of documents, including English documents, journals,
textbooks, documents in different languages, documents with
different fonts and documents with different resolutions, to
demonstrate the robustness of the proposed method. The experimental
results show that the proposed method
is accurate compared with well-known existing methods.
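The Hough-based estimation step can be illustrated compactly: each candidate angle projects the thinned character pixels onto the direction normal to that angle, and the angle whose projection histogram has the sharpest peak (text baselines collapsing into few bins) is reported as the skew. The angle range, step and scoring rule below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def estimate_skew(points, angles=np.deg2rad(np.arange(-15, 15.25, 0.25))):
    """Estimate skew from (row, col) coordinates of thinned character
    pixels: for each candidate angle, project pixels along that angle;
    the angle producing the tallest projection peak wins."""
    ys, xs = points[:, 0].astype(float), points[:, 1].astype(float)
    best_angle, best_peak = 0.0, -1
    for th in angles:
        rho = np.round(ys * np.cos(th) - xs * np.sin(th)).astype(int)
        peak = np.bincount(rho - rho.min()).max()
        if peak > best_peak:
            best_peak, best_angle = peak, th
    return np.rad2deg(best_angle)
```

Restricting the transform to thinned strokes of selected characters, as the abstract describes, keeps the accumulator small and fast.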
Abstract: This paper discusses E-government, in particular the
challenges that face adoption in Saudi Arabia. E-government can be
defined based on an existing set of requirements. In this research we
define E-government as a matrix of stakeholders: governments to
governments, governments to business and governments to citizens,
using information and communications technology to deliver and
consume services. E-government has been implemented for a
considerable time in developed countries. However, E-government
services still face many challenges in their implementation and
general adoption in many countries including Saudi Arabia. It has
been noted that the introduction of E-government is a major
challenge facing the government of Saudi Arabia, due to possible
concerns raised by its citizens. In addition, the literature review and
the discussion identify the influential factors that affect the citizens’
intention to adopt E-government services in Saudi Arabia.
Consequently, these factors have been defined and categorized
followed by an exploratory study to examine the importance of these
factors. This research has thus identified factors that determine
whether citizens will adopt E-government services, thereby aiding
governments in assessing what is required to increase adoption.
Abstract: This paper discusses the causal explanation capability
of QRIOM, a tool aimed at supporting learning of organic chemistry
reactions. The development of the tool is based on the hybrid use of
Qualitative Reasoning (QR) technique and Qualitative Process
Theory (QPT) ontology. Our simulation combines symbolic,
qualitative description of relations with quantity analysis to generate
causal graphs. The pedagogy embedded in the simulator is to both
simulate and explain organic reactions. Qualitative reasoning through
a causal chain will be presented to explain the overall changes made
to the substrate, from the initial substrate to the production of the final
outputs. Several uses of the QPT modeling constructs in supporting
behavioral and causal explanation during run-time will also be
demonstrated. Explaining organic reactions through causal graph
trace can help improve the reasoning ability of learners in that their
conceptual understanding of the subject is nurtured.
Abstract: Structural representation and technology mapping of
a Boolean function is an important problem in the design of
non-regenerative digital logic circuits (also called combinational logic
circuits). Library-aware function manipulation offers a solution to
this problem. Compact multi-level representations of binary networks,
based on simple circuit structures such as AND-Inverter Graphs
(AIGs) [1][5], NAND Graphs, OR-Inverter Graphs (OIGs), AND-OR
Graphs (AOGs), AND-OR-Inverter Graphs (AOIGs), AND-XOR-Inverter
Graphs and Reduced Boolean Circuits [8], exist in the
literature. In this work, we discuss a novel and efficient graph
realization for combinational logic circuits, represented using a
NAND-NOR-Inverter Graph (NNIG), which is composed of only
two-input NAND (NAND2), NOR (NOR2) and inverter (INV) cells.
The networks are constructed on the basis of irredundant disjunctive
and conjunctive normal forms, after factoring, comprising terms with
minimum support. Construction of a NNIG for a non-regenerative
function in normal form would be straightforward, whereas for the
complementary phase, it would be developed by considering a virtual
instance of the function. However, the choice of best NNIG for a
given function would be based upon literal count, cell count and
DAG node count of the implementation at the technology
independent stage. In case of a tie, the final decision would be made
after extracting the physical design parameters.
We have considered AIG representation for reduced disjunctive
normal form and the best of OIG/AOG/AOIG for the minimized
conjunctive normal forms. This is necessitated due to the nature of
certain functions, such as Achilles-heel functions. NNIGs are found
to exhibit a 3.97% lower node count than AIGs and
OIG/AOG/AOIGs, and consume 23.74% and 10.79% fewer library cells
than AIGs and OIG/AOG/AOIGs, respectively, for the various samples considered.
We compare the power efficiency and delay improvement achieved
by optimal NNIGs over minimal AIGs and OIG/AOG/AOIGs for
various case studies. In comparison with functionally equivalent,
irredundant and compact AIGs, NNIGs report mean savings in power
and delay of 43.71% and 25.85% respectively, after technology
mapping with a 0.35 micron TSMC CMOS process. For a
comparison with OIG/AOG/AOIGs, NNIGs demonstrate average
savings in power and delay of 47.51% and 24.83%. With respect to
the device count needed for implementation in static CMOS logic
style, NNIGs utilize 37.85% and 33.95% fewer transistors than their
AIG and OIG/AOG/AOIG counterparts.
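To make the NNIG representation concrete, the sketch below models a netlist restricted to NAND2, NOR2 and INV cells as a list of gates evaluated in topological order, and maps a two-input XOR onto four NAND2 cells, a classic construction used when comparing such graph representations. The encoding and helper names are illustrative assumptions, not the paper's data structure.

```python
# A netlist is a list of (op, a, b) cells referring to earlier signal
# indices (primary inputs occupy the first slots; b is ignored by INV).
# Only NAND2, NOR2 and INV are allowed, matching the NNIG cell library.
def evaluate(gates, inputs):
    sig = list(inputs)
    for op, a, b in gates:
        if op == 'NAND':
            sig.append(1 - (sig[a] & sig[b]))
        elif op == 'NOR':
            sig.append(1 - (sig[a] | sig[b]))
        else:                       # 'INV'
            sig.append(1 - sig[a])
    return sig[-1]                  # the last cell drives the output

# XOR(a, b) mapped onto four NAND2 cells.
XOR_GATES = [('NAND', 0, 1),        # signal 2 = nand(a, b)
             ('NAND', 0, 2),        # signal 3 = nand(a, 2)
             ('NAND', 1, 2),        # signal 4 = nand(b, 2)
             ('NAND', 3, 4)]        # signal 5 = output
```

Node count, cell count and literal count, the tie-breaking metrics named above, can all be read directly off such a netlist.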
Abstract: User-Centered Design (UCD), Usability Engineering (UE) and Participatory Design (PD) are the common Human-Computer Interaction (HCI) approaches practiced in the software development process, focusing on issues concerning user involvement. They overlook, however, the organizational perspective of HCI integration within the software development organization. The Management Information Systems (MIS) perspective of HCI takes a managerial and organizational context in viewing the effectiveness of integrating HCI into the software development process. Human-Centered Design (HCD), which encompasses all human aspects including the aesthetic and the ergonomic, is claimed to provide a better approach to strengthening HCI practice in the software development process. In determining the effectiveness of HCD in the software development process, this paper presents the findings of a content analysis of HCI approaches, viewing those approaches as a technology that integrates user requirements from the top management down to the other stakeholders in the software development process. The findings show that HCD is an approach that emphasizes humans, tools and knowledge in strengthening HCI practice, and thereby the software development process, in the quest to produce sustainable, usable and useful software products.
Abstract: Various models have been derived by studying a large number of completed software projects from various organizations and applications to explore how project size maps into project effort. However, the prediction accuracy of these models still needs improvement. Since a neuro-fuzzy system is able to approximate nonlinear functions with high precision, it is used here as a soft computing approach to generate a model by formulating the size-effort relationship through training. In this paper, the neuro-fuzzy technique is used to build software estimation models on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models reported in the literature.
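For reference, the four benchmark models named above are simple power-law (or affine) functions of size. The sketch below uses the coefficient values commonly quoted in the effort estimation literature (effort in person-months, size in KLOC), which should be checked against the original sources, together with the MMRE criterion often used for such comparisons.

```python
# Classical size-to-effort models, coefficients as commonly quoted.
def halstead(kloc):       return 5.2 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047   # quoted for kloc > 9

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error, a standard comparison metric."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
```

A trained neuro-fuzzy model would be compared against these baselines by computing MMRE on the held-out NASA project data.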
Abstract: Recently, RFID (Radio Frequency
Identification) technology has attracted the world market's attention
as an essential technology for the ubiquitous environment. The RFID
market has so far focused on transponder and reader development,
but this concern is shifting to RFID software such as
high-value e-business applications, RFID middleware and
related development tools. However, due to the high sensitivity
of the data and service transactions within an RFID network,
security considerations must be addressed. In order to guarantee
trusted e-business based on RFID technology, we propose a
security-enhanced RFID middleware system. Our proposal is
compliant with EPCglobal ALE (Application Level Events),
which is the standard interface between middleware and its clients. We
show how to provide strengthened security and trust by
protecting both the data transported between the middleware and its
clients and the data stored in the middleware. Moreover, we achieve
identification and service access control against illegal service
abuse. Our system enables a secure RFID middleware service
and a trusted e-business service.
Abstract: The problem of frequent itemset mining is considered in this paper. A new technique is proposed to generate frequent patterns in large databases without time-consuming candidate generation. The technique focuses on transactions instead of itemsets: intersections between one transaction and the others are computed, and the maximal sets of items shared between transactions are obtained, instead of creating candidate itemsets and counting their frequency. Applying the algorithm to real-life transaction data shows that association rules can be generated from databases with significant efficiency.
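The transaction-intersection idea can be sketched in a few lines: candidate patterns come from intersecting transactions rather than from Apriori-style candidate generation, and each candidate's support is then counted directly against the database. This is a simplified illustration, not the paper's exact algorithm; in particular, pairwise intersections recover many, but not necessarily all, frequent patterns.

```python
from itertools import combinations

def frequent_patterns(transactions, min_support):
    """Candidates are the itemsets shared by pairs of transactions
    (plus the transactions themselves); support is counted directly."""
    tx = [frozenset(t) for t in transactions]
    candidates = {a & b for a, b in combinations(tx, 2)} | set(tx)
    candidates.discard(frozenset())
    return {p: sup for p in candidates
            if (sup := sum(p <= t for t in tx)) >= min_support}
```

The avoided cost is the level-wise generation and pruning of candidate itemsets; the trade-off is the quadratic number of pairwise intersections.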
Abstract: Nowadays, quick technological changes force companies
to develop innovative products in an increasingly competitive
environment. Therefore, shortening the time of new product
development is very important. This design problem often lacks
an exact formula and depends heavily on the past experience of
human designers. For these reasons, a Case-Based
Reasoning (CBR) system to assist in new product development
is proposed in this work. When a case is recovered from the case
base, the system takes into account not only an attribute's specific
value and how important it is, but also whether the attribute
has a positive influence on the product development, so that the
manufacturing time can be improved. This information is
introduced as a new concept called "adaptability". An application
of this method to the design of new hearing instruments illustrates
the proposed approach.
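The retrieval step of a CBR system of this kind can be sketched as a weighted nearest-neighbour search. The "adaptability" adjustment described in the abstract is not fully specified there, so the sketch below only shows plain importance-weighted matching over numeric attributes; all names and data are illustrative assumptions.

```python
def retrieve(case_base, query, weights):
    """Return the stored case minimizing the importance-weighted
    distance to the query over the shared numeric attributes."""
    def distance(case):
        return sum(w * abs(case[attr] - query[attr])
                   for attr, w in weights.items())
    return min(case_base, key=distance)
```

An adaptability-aware variant would additionally bias the score toward cases whose differing attributes are cheap to adapt during manufacturing.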
Abstract: With the emergence of embedded and portable systems, the power consumption of circuits has become a major challenge. On the other hand, latency, which determines the operating frequency of circuits, is also a vital concern, so a trade-off between the two is desirable. Modulo 2^n+1 adders are an important part of residue number system (RNS) based arithmetic units with the popular moduli set (2^n-1, 2^n, 2^n+1). In this manuscript we introduce a novel binary representation for the design of the modulo 2^n+1 adder. VLSI realization of the proposed architecture in a 180 nm full static CMOS technology reveals its superiority in terms of area, power consumption and power-delay product (PDP) over several existing peer structures.
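A behavioural model of modulo 2^n+1 addition clarifies the arithmetic such hardware must implement. The sketch below contrasts a direct modulo with the diminished-one encoding commonly used in these adders; this is textbook background, not the paper's novel representation, and a zero result needs a separate flag in real diminished-one designs.

```python
def mod_add(a, b, n):
    """Reference: direct modulo 2**n + 1 addition, operands in [0, 2**n]."""
    return (a + b) % ((1 << n) + 1)

def dim1_add(a1, b1, n):
    """Diminished-one addition: A, B in [1, 2**n] are stored as A-1, B-1.
    The sum (A+B) mod (2**n + 1), also diminished by one, is obtained
    with an inverted end-around carry instead of a division."""
    s = a1 + b1
    carry = s >> n
    return (s + (1 - carry)) & ((1 << n) - 1)
```

The inverted end-around carry is what lets an n-bit adder realize the (n+1)-valued modulus without a costly modulo operation.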
Abstract: A comprehensive discussion of feasible strategies for a sustainable energy supply is urgently needed to achieve a turnaround of the current energy situation. The fundamentals required for the development of a long-term energy vision are lacking to a great extent, owing to the absence of reasonable long-term scenarios that fulfill the requirements of climate protection and sustainable energy use. The contribution of this study is a search for sustainable long-run energy paths for Austria. The analysis predominantly makes use of secondary data. The measures developed to avoid CO2 emissions and other ecological risk factors vary to a great extent among the economic sectors, as shown by the calculation of CO2 abatement cost curves. The study demonstrates that the most effective technical measures with the lowest CO2 abatement costs yield solutions to the current energy problems. Various scenarios are presented concerning what the technological and environmental options for a sustainable energy system for Austria might look like in the long run. It is shown how sustainable energy can be supplied even with today's technological knowledge and options. The scenarios developed include an evaluation of the economic costs and ecological impacts. The results are not only applicable to Austria but also demonstrate feasible and cost-efficient ways towards a sustainable future.
Abstract: Sediment load transfer in hydraulic installations and its consequences for the operation and maintenance of modern canal systems is emerging as one of the most important considerations in hydraulic engineering projects, particularly those intended to feed the irrigation and drainage schemes of large command areas such as the Dez and Moghan in Iran. The aim of this paper is to investigate the applicability of the vortex tube as a viable means of extracting the sediment load entering canal systems in general and water intake structures in particular. The western conveyance canal of the Dez Diversion weir, which feeds the Karkheh Flood Plain in southwestern Dezful, is used as the case study, with data from the Dastmashan Hydrometric Station. The SHARC software is used as an analytical framework to interpret the data. Results show that, given the grain size D50 and the canal turbulence, the adaptation length from the beginning of the canal after the diversion dam is estimated at 477 m, a point suitable for installing the vortex tube.
Abstract: Optical flow has been a research topic of interest for many
years. It has, until recently, been largely inapplicable to real-time
applications due to its computationally expensive nature. This paper
presents a new reliable flow technique which is combined with a
motion detection algorithm, applied to image streams from a
stationary camera, to allow flow-based analyses of moving entities,
such as rigidity, in real time. Combining the optical flow analysis
with the motion detection technique greatly reduces the expensive
computation of flow vectors compared with standard approaches,
rendering the method applicable to real-time implementation. This
paper also describes the hardware implementation of a proposed
pipelined system to estimate the flow vectors from image sequences
in real time. The design can process 768 x 576 images at a very high
frame rate, reaching 156 fps on a single low-cost FPGA chip, which
is adequate for most real-time vision applications.
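The coupling of motion detection with flow estimation can be illustrated in software: a frame-difference mask restricts where the comparatively expensive per-pixel least-squares flow solve is run. The window size and threshold below are illustrative assumptions, and Lucas-Kanade stands in for the paper's reliable flow technique.

```python
import numpy as np

def motion_mask(prev, curr, thresh=0.05):
    """Stationary-camera motion detection by frame differencing; flow
    is computed only where this mask is set, which is the cost saving."""
    return np.abs(curr - prev) > thresh

def lk_flow_at(prev, curr, y, x, win=7):
    """Lucas-Kanade flow (u, v) at one pixel: least-squares fit of the
    brightness-constancy constraint over a small window, with spatial
    gradients averaged across the two frames."""
    r = win // 2
    Ix = (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1)
          + np.roll(curr, -1, axis=1) - np.roll(curr, 1, axis=1)) / 4.0
    Iy = (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0)
          + np.roll(curr, -1, axis=0) - np.roll(curr, 1, axis=0)) / 4.0
    It = curr - prev
    sl = np.s_[y - r:y + r + 1, x - r:x + r + 1]
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    return np.linalg.lstsq(A, -It[sl].ravel(), rcond=None)[0]
```

Skipping the solve wherever the mask is empty is exactly the saving that makes a pipelined hardware realization feasible at high frame rates.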
Abstract: In this paper a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient while giving better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off at a certain resolution. In the proposed model, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the decimated face image. A subset of DCT coefficients from the low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A trade-off among the decimation factor, the number of DCT coefficients retained and the recognition rate with minimum computation is obtained. Preprocessing of the image increases its robustness against variations in pose and illumination level. The new model has been tested on several databases, including the ORL database, the Yale database and a color database, and has performed much better than other techniques. The significance of the model is two-fold: (1) dimension reduction to an effective and suitable face image resolution, and (2) retention of the appropriate DCT coefficients to achieve the best recognition results under varying pose, intensity and illumination.
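The two-stage reduction can be sketched as follows: decimate the face image, take its 2-D DCT, keep low-to-mid-frequency coefficients, and classify by nearest neighbour in the resulting feature space. The decimation factor, coefficient count and diagonal (zig-zag style) selection order are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(img, decim=2, keep=64):
    """Decimate, transform, and keep the first `keep` coefficients in
    diagonal order; decim and keep are illustrative parameters."""
    small = img[::decim, ::decim]        # crude decimation
    c = dctn(small, norm='ortho')
    h, w = c.shape
    order = sorted((i + j, i, j) for i in range(h) for j in range(w))
    feats = np.array([c[i, j] for _, i, j in order[:keep]])
    return feats[1:]                     # drop DC (overall brightness)

def nearest_neighbor(query, gallery):
    """Classify a probe by Euclidean distance to stored feature vectors."""
    return int(np.argmin([np.linalg.norm(query - g) for g in gallery]))
```

Dropping the DC term gives some robustness to global illumination changes, one of the preprocessing goals mentioned above.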
Abstract: The purpose of this study is to explore the impact of
computer games on mathematics instruction. First, the researcher
designed and implemented web-based games according to the
content of the existing textbook, and then collected and analyzed
information related to mathematics instruction that integrates the
computer games. The study focused on motivation to learn
mathematics, mathematics achievement, and pupil-teacher
interactions in the classroom. The results showed that students
taught with instruction integrating computer games improved
significantly in motivation and achievement. The teacher tended to
use less direct teaching and to provide more time for students'
active learning.