Abstract: Ambient Intelligence (AmI) proposes a new way of thinking about computers, following Mark Weiser's vision of Ubiquitous Computing. Within this vision lies the Disappearing Computer Initiative, in which users are immersed in intelligent environments. Technologies therefore need to be adapted so that they can replace the traditional inputs to the system by embedding them in everyday artifacts. In this work, we present an approach that uses Radio Frequency Identification (RFID) and Near Field Communication (NFC) technologies; the latter enables a new form of interaction by contact. We compare both technologies by analyzing their requirements and advantages, and we propose using a combination of RFID and NFC.
Abstract: The introduction of a multitude of new and interactive
e-commerce information technology (IT) artifacts has impacted
adoption research. Rather than solely functioning as productivity
tools, new IT artifacts assume the roles of interaction mediators and
social actors. This paper describes the varying roles assumed by IT
artifacts, and proposes and distinguishes four distinct foci for
evaluating these artifacts. It further proposes a theoretical
model that maps the different views of IT artifacts to four distinct
types of evaluations.
Abstract: The need to merge software artifacts seems inherent
to modern software development. Distribution of development over
several teams and breaking tasks into smaller, more manageable
pieces are effective means of dealing with this kind of complexity. In
each case, the separately developed artifacts need to be assembled as
efficiently as possible into a consistent whole in which the parts still
function as described. Moreover, the earlier changes are introduced into
the life cycle, the easier they are for designers to manage.
Interaction-based specifications such as UML sequence diagrams
have been found effective in this regard. As a result, sequence
diagrams can be used not only for capturing system behaviors but
also for merging changes in order to create a new version. The
objective of this paper is to suggest a new approach to the problem
of software merging at the level of sequence diagrams, using
dependence analysis to formally capture the mappings and
differences between elements of sequence diagrams; this serves as
the key concept for creating a new version of a sequence diagram.
Abstract: This study aims to increase understanding of the
transition of business models in servitization. The significance of
service in all business has increased dramatically during the past
decades. Service-dominant logic (SDL) describes this change in the
economy and questions the goods-dominant logic on which business
has primarily been based in the past. The business model canvas is one
of the most cited and widely used tools for defining and developing
business models. The starting point of this paper lies in the notion that
the traditional business model canvas is inherently goods-oriented and
best suited to product-based business. However, the basic differences
between goods and services necessitate changes in business model
representations when proceeding in servitization. Therefore, new
knowledge is needed on how the conception of business model and
the business model canvas as its representation should be altered in
servitized firms in order to better serve business developers and
inter-firm co-creation. That is to say, compared to products, services are
intangible and they are co-produced between the supplier and the
customer. Value is always co-created in interaction between a
supplier and a customer, and customer experience primarily depends
on how well the interaction succeeds between the actors. The role of
service experience is even stronger in service business compared to
product business, as services are co-produced with the customer. This paper provides business model developers with a service
business model canvas, which takes into account the intangible,
interactive, and relational nature of service. The study employs a
design science approach that contributes to theory development via
design artifacts. This study utilizes qualitative data gathered in
workshops with ten companies from various industries. In particular,
key differences between goods-dominant logic (GDL) and SDL-based
business models are identified when an industrial firm
proceeds in servitization. As a result of the study, an updated version of the business
model canvas is provided based on service-dominant logic. The
service business model canvas ensures a stronger customer focus and
includes aspects salient for services, such as interaction between
companies, service co-production, and customer experience. It can be
used for the analysis and development of a current service business
model of a company or for designing a new business model. It
facilitates customer-focused new service design and service
development. It aids in the identification of development needs, and
facilitates the creation of a common view of the business model.
Therefore, the service business model canvas can be regarded as a
boundary object, which facilitates the creation of a common
understanding of the business model between several actors involved.
The study contributes to the business model and service business
development disciplines by providing a managerial tool for
practitioners in service development. It also provides research insight
into how servitization challenges companies’ business models.
Abstract: Software fault prediction models are created by using
the source code, processed metrics from the same or previous version
of code, and related fault data. Some companies do not store and keep
track of all the artifacts required for software fault prediction.
To construct a fault prediction model for such companies, training
data from other projects can be one potential solution. The earlier a
fault is predicted, the less it costs to correct. The training
data consists of metrics data and related fault data at function/module
level. This paper investigates fault predictions at early stage using the
cross-project data focusing on the design metrics. In this study,
empirical analysis is carried out to validate design metrics for
cross-project fault prediction. The machine learning technique used for
evaluation is Naïve Bayes. The design-phase metrics of other projects
can be used as an initial guideline for projects where no previous
fault data is available. We analyze seven datasets from the NASA
Metrics Data Program which offer design as well as code metrics.
Overall, the results of cross-project prediction are comparable to those
of within-company learning.
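The cross-project setup described above (train a Naïve Bayes model on one project's design metrics, score the modules of another) can be sketched as follows. This is a minimal illustration: the metric values and fault labels are invented placeholders, not NASA MDP data, and the hand-rolled Gaussian Naïve Bayes merely stands in for whatever implementation the study used.

```python
import math

def train_gnb(X, y):
    """Fit a Gaussian Naive Bayes model: per-class prior, means, variances."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n, d = len(rows), len(rows[0])
        means = [sum(r[j] for r in rows) / n for j in range(d)]
        variances = [sum((r[j] - means[j]) ** 2 for r in rows) / n + 1e-9
                     for j in range(d)]
        model[c] = (len(rows) / len(X), means, variances)
    return model

def predict_gnb(model, x):
    """Pick the class with the highest log-posterior."""
    best, best_score = None, -math.inf
    for c, (prior, means, variances) in model.items():
        score = math.log(prior)
        for xj, mu, var in zip(x, means, variances):
            score += -0.5 * math.log(2 * math.pi * var) - (xj - mu) ** 2 / (2 * var)
        if score > best_score:
            best, best_score = c, score
    return best

# Hypothetical per-module design metrics, e.g. (fan-in, cyclomatic complexity):
project_a_X = [(2, 3), (1, 2), (3, 4), (9, 12), (8, 15), (10, 11)]
project_a_y = [0, 0, 0, 1, 1, 1]          # 1 = fault-prone

model = train_gnb(project_a_X, project_a_y)

# Cross-project use: score a different project measured with the same metrics.
project_b_X = [(2, 2), (9, 13)]
predictions = [predict_gnb(model, x) for x in project_b_X]
```

The point of the sketch is only the data flow: the model is fitted entirely on one project's labeled history and then applied to a project that has none.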
Abstract: High density electrical prospecting has been widely
used in groundwater investigation, civil engineering and
environmental survey. For efficient inversion, the forward modeling
routine, sensitivity calculation, and inversion algorithm must be
efficient. This paper attempts to provide a brief summary of the past
and ongoing developments of the method. It includes reviews of the
procedures used for data acquisition, processing and inversion of
electrical resistivity data based on compilation of academic literature.
In recent times there has been a significant evolution in field survey
designs and data inversion techniques for the resistivity method. In
general, 2-D inversion of resistivity data is carried out using the
linearized least-squares method with a local optimization technique.
Multi-electrode and multi-channel systems have made it possible to
conduct large 2-D, 3-D and even 4-D surveys efficiently to resolve
complex geological structures that were not possible with traditional
1-D surveys. 3-D surveys play an increasingly important role in very
complex areas where 2-D models suffer from artifacts due to off-line
structures. Continued developments in computation technology, as
well as fast data inversion techniques and software, have made it
possible to use optimization techniques to obtain model parameters to
a higher accuracy. A brief discussion on the limitations of the
electrical resistivity method has also been presented.
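The linearized least-squares scheme mentioned above can be written, as a hedged illustration, in the smoothness-constrained Gauss-Newton form that is standard in 2-D resistivity inversion (not necessarily the exact variant every survey in the review uses):

```latex
% Smoothness-constrained (Occam-style) Gauss-Newton model update:
%   J_k     - Jacobian (sensitivity) matrix at model m_k
%   d       - observed apparent-resistivity data, f(m_k) - forward response
%   C       - roughness (smoothing) operator, \lambda - damping factor
\left( J_k^{\top} J_k + \lambda\, C^{\top} C \right) \Delta m_k
  = J_k^{\top} \bigl( d - f(m_k) \bigr),
\qquad m_{k+1} = m_k + \Delta m_k
```

Each iteration thus requires one forward modeling run for $f(m_k)$ and one sensitivity calculation for $J_k$, which is why the efficiency of those two routines dominates the cost of the inversion.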
Abstract: The focal aim of e-Government (eGovt) is to offer
citizen-centered service delivery. Accordingly, citizens consume
services from multiple government agencies through a national
portal. Thus, eGovt is an enterprise whose primary business motive
is transparent, efficient, and effective public services for its
citizenry, and its logical structure is the e-Government Enterprise
Architecture (eGEA). Since eGovt is an IT-oriented, multifaceted,
service-centric system, EA does not do much for an automated
enterprise beyond the business artifacts. The rise of Service-Oriented
Architecture (SOA) led some governments to apply it in their eGovts,
but SOA alone limits the source of business artifacts. The concurrent
use of EA and SOA in eGovt achieves interoperability and
integration and leads to a Service-Oriented e-Government Enterprise
(SOeGE). Consequently, an agile eGovt system becomes a reality. From
an IT perspective, eGovt comprises centralized public service artifacts
together with existing application logic belonging to various departments
at the central, state, and local levels. The eGovt is renovated into a
SOeGE by applying Service-Orientation (SO) principles across the entire
system. This paper explores the IT perspective of SOeGE in India,
encompassing the public service models, and illustrates it with a case
study of the Passport service of India.
Abstract: Phased-array ultrasound transducers are used in
medical ultrasonography as well as optical imaging. However, their
discontinuity characteristic limits applications because of the
artifacts contaminating the reconstructed images. Given the effects
of the ultrasound pressure field pattern on the echo ultrasonic waves
as well as on the optically modulated signal, the side lobes of the
focused ultrasound beam induced by the discontinuity of the
phased-array transducer may be the reason for the artifacts. In this
paper, a simple numerical simulation method was used to investigate
the limitation imposed by the discontinuity of the elements in a
phased-array ultrasound transducer and its effects on the ultrasound
pressure field. Taking into account the change of ultrasound pressure
field patterns as the pitch between elements of the phased-array
transducer varies, appropriate parameters for phased-array transducer
design were asserted quantitatively.
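The link between element pitch and side/grating lobes can be sketched with a far-field array-factor computation. This is a textbook uniform-line-array model assumed here purely for illustration (the paper's own pressure-field simulation is more detailed); the element count, pitches, and main-lobe exclusion window are arbitrary choices.

```python
import cmath
import math

def array_factor(n_elements, pitch_wavelengths, u):
    """Far-field array factor |sum_n exp(i*2*pi*d*n*u)| of a uniform line
    array; u = sin(theta), pitch d given in wavelengths."""
    phase = 2 * math.pi * pitch_wavelengths * u
    return abs(sum(cmath.exp(1j * phase * n) for n in range(n_elements)))

N = 16
us = [-1 + 2 * i / 400 for i in range(401)]     # scan u = sin(theta) over [-1, 1]

def worst_side_lobe(pitch):
    """Peak level outside the main beam (|u| > 0.2 excludes it here)."""
    return max(array_factor(N, pitch, u) for u in us if abs(u) > 0.2)

fine_pitch = worst_side_lobe(0.45)   # pitch < lambda/2: ordinary side lobes only
coarse_pitch = worst_side_lobe(1.0)  # pitch = lambda: a grating lobe at u = 1
                                     # reaches the full main-lobe height N
```

The comparison illustrates the design trade-off the abstract alludes to: enlarging the pitch lets a grating lobe enter the visible region and rise to the main-lobe level, which is the kind of beam contamination that maps into reconstruction artifacts.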
Abstract: This research proposes a novel reconstruction protocol
for restoring missing surfaces and low-quality edges and shapes in
photos of artifacts at historical sites. The protocol starts with the
extraction of a point cloud. This extraction process is based on
four subordinate algorithms, which differ in their robustness and
in the amount of resulting data. Moreover, they apply different,
but complementary, degrees of accuracy to related features and to
the way they build a quality mesh. The performance of our proposed protocol
is compared with other state-of-the-art algorithms and toolkits. The
statistical analysis shows that our algorithm significantly outperforms
its rivals in the resultant quality of its object files used to reconstruct
the desired model.
Abstract: Anti-scatter grids used in radiographic imaging for contrast enhancement leave specific artifacts. These artifacts may be visible or may cause a Moiré effect when the digital image is resized on a diagnostic monitor. In this paper we propose an automated grid artifact detection and suppression algorithm, which remains an open problem. Grid artifact detection is based on a statistical approach in the spatial domain. Grid artifact suppression is based on designing and applying a Kaiser band-stop filter transfer function while avoiding ringing artifacts. Experimental results are discussed and concluded with a description of the advantages over existing approaches.
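A Kaiser-windowed FIR band-stop filter of the kind referenced above can be sketched as follows. All band edges, tap count, and the β parameter are illustrative choices, not the paper's values, and the statistical detection of the actual grid frequency is not reproduced here.

```python
import math

def bessel_i0(x):
    """Modified Bessel function I0(x) via its power series."""
    total, term = 1.0, 1.0
    for k in range(1, 30):
        term *= (x / (2 * k)) ** 2
        total += term
    return total

def kaiser_bandstop(numtaps, w1, w2, beta):
    """Windowed-sinc band-stop FIR: a delta minus a Kaiser-windowed
    band-pass. w1, w2 are stop-band edges in rad/sample; numtaps is odd."""
    M = (numtaps - 1) // 2
    h = []
    for n in range(numtaps):
        m = n - M
        if m == 0:
            bp = (w2 - w1) / math.pi                 # ideal band-pass, center tap
        else:
            bp = (math.sin(w2 * m) - math.sin(w1 * m)) / (math.pi * m)
        win = bessel_i0(beta * math.sqrt(1 - (m / M) ** 2)) / bessel_i0(beta)
        h.append((1.0 if m == 0 else 0.0) - bp * win)  # spectral inversion
    return h

def magnitude(h, w):
    """|H(e^{jw})| of the FIR filter h."""
    re = sum(c * math.cos(w * n) for n, c in enumerate(h))
    im = sum(-c * math.sin(w * n) for n, c in enumerate(h))
    return math.hypot(re, im)

# Notch out a band around a hypothetical grid frequency of 0.4*pi rad/sample:
h = kaiser_bandstop(numtaps=101, w1=0.3 * math.pi, w2=0.5 * math.pi, beta=6.0)
```

The Kaiser window is a natural choice here because β trades stop-band depth against transition width explicitly, and its smooth taper is what keeps the ringing (Gibbs) artifacts that the abstract mentions under control.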
Abstract: Enhancing the quality of two-dimensional signals is one of the most important factors in the fields of video surveillance and computer vision. In real-life video surveillance, false detections usually occur due to the presence of random noise, illumination variations,
and shadow artifacts. Detection methods based on background subtraction face several problems in accurately detecting objects in realistic environments. In this paper, we propose a noise removal algorithm using a neighborhood comparison method with thresholding. Illumination variation correction is performed on the detected foreground objects using an amalgamation of techniques such as homomorphic decomposition, curvelet transformation, and a gamma adjustment operator. Shadow is removed using a chromaticity estimator with a local relation estimator. Results are compared with existing methods and demonstrate high robustness in video surveillance.
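The neighborhood-comparison step can be sketched as follows. This is a minimal interpretation, assuming a 3x3 neighborhood and a fixed deviation threshold; the paper's exact rule, and the later homomorphic/curvelet and shadow stages, are not reproduced.

```python
def denoise(image, threshold):
    """Replace a pixel with the median of its 3x3 neighbors when it deviates
    from the neighborhood mean by more than `threshold`."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbors = [image[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)]
            mean = sum(neighbors) / len(neighbors)
            if abs(image[y][x] - mean) > threshold:
                out[y][x] = sorted(neighbors)[len(neighbors) // 2]
    return out

# A flat 5x5 frame with one impulse ("salt") pixel:
frame = [[10] * 5 for _ in range(5)]
frame[2][2] = 255
clean = denoise(frame, threshold=40)
```

The comparison-then-replace structure is what distinguishes this from a plain median filter: pixels that agree with their neighborhood are left untouched, so fine detail is not smoothed away.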
Abstract: Controlling software evolution requires a deep
understanding of changes and of their impact on the system's
heterogeneous artifacts. An understanding of the descriptive
knowledge of the developed software artifacts is a prerequisite
for the success of the evolutionary process.
Implementing an evolutionary process means making more or less
important changes to many heterogeneous software artifacts such
as source code, analysis and design models, unit tests, XML
deployment descriptors, user guides, and others. These changes can
be a source of degradation of the modified software in functional,
qualitative, or behavioral terms. Hence the need for a unified
approach for extracting and representing the different heterogeneous
artifacts, in order to ensure a unified and detailed description of
them that is exploitable by several software tools and that allows
those responsible for evolution to reason about the changes
concerned.
Abstract: Petroglyphs, stone sculptures, burial mounds, and
other memorial religious structures are ancient artifacts which find
reflection in contemporary world culture, including the culture of
Kazakhstan. In this article, the problem of the influence of ancient
artifacts on contemporary culture is researched, using as an example
Kazakhstan's sculpture and painting. The practice of creating
petroglyphs, stone sculptures, and memorial religious structures was
closely connected to all fields of human existence, which fostered the
formation of and became an inseparable part of a traditional
worldview. The ancient roots of Saka-Scythian and Turkic nomadic
culture have been studied, and integrated into the foundations of the
contemporary art of Kazakhstan. The study of the ancient cultural
heritage of Kazakhstan by contemporary artists, sculptors and
architects, as well as the influence of European art and cultures on the
art of Kazakhstan are furthering the development of a new national
art.
Abstract: Fishing has always been an essential component of
Polynesian life. Fishhooks, mostly made of pearl shell and found
during archaeological excavations, are the most numerous artifacts
related to this activity. Through them, we try to reconstruct the
ancient techniques of resource exploitation, inside the lagoons and offshore.
They can also be used as chronological and cultural indicators. The
shapes and dimensions of these artifacts allow comparisons and
classifications used in both functional approach and chrono-cultural
perspective. Hence it is very important for ethno-archaeologists
to have reliable methods and standardized measurements of
these artifacts. Such a reliable, objective, and standardized method
has been proposed previously. But this method cannot be applied
manually because of the considerable time required to measure
each fishhook by hand and the quantity of fishhooks to measure
(many hundreds). We propose in this paper a detailed acquisition
protocol of fishhooks and an automation of every step of this method.
We also provide some experimental results obtained on the fishhooks
coming from three archaeological excavation sites.
Abstract: An image compression method has been developed
using a fuzzy edge image together with the basic Block Truncation
Coding (BTC) algorithm. The fuzzy edge image has been validated
against classical edge detectors on the basis of the results of the
well-known Canny edge detector, prior to its use in the proposed method. The
bit plane generated by the conventional BTC method is replaced with
the fuzzy bit plane generated by the logical OR operation between
the fuzzy edge image and the corresponding conventional BTC bit
plane. The input image is encoded with the block mean and standard
deviation and the fuzzy bit plane. The proposed method has been
tested with test images of 8 bits/pixel and size 512×512 and found to
be superior with better Peak Signal to Noise Ratio (PSNR) when
compared to the conventional BTC, and adaptive bit plane selection
BTC (ABTC) methods. The raggedness and jagged appearance, and
the ringing artifacts at sharp edges are greatly reduced in
reconstructed images by the proposed method with the fuzzy bit
plane.
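The conventional BTC step that the method builds on can be sketched as follows. The 4x4 block and the substitute edge bit plane are illustrative only; the fuzzy edge image itself is not reproduced here, merely the logical-OR combination the abstract describes.

```python
import math

def btc_encode(block):
    """Classic Block Truncation Coding: keep the block mean, standard
    deviation, and a one-bit plane (pixel >= mean)."""
    pixels = [p for row in block for p in row]
    m = len(pixels)
    mean = sum(pixels) / m
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / m)
    bitplane = [[1 if p >= mean else 0 for p in row] for row in block]
    return mean, std, bitplane

def btc_decode(mean, std, bitplane):
    """Reconstruct with two levels chosen to preserve block mean and variance."""
    bits = [b for row in bitplane for b in row]
    m, q = len(bits), sum(bits)
    if q in (0, m):
        return [[mean] * len(bitplane[0]) for _ in bitplane]
    low = mean - std * math.sqrt(q / (m - q))
    high = mean + std * math.sqrt((m - q) / q)
    return [[high if b else low for b in row] for row in bitplane]

block = [[12, 14, 80, 85],
         [11, 15, 82, 84],
         [10, 13, 79, 86],
         [12, 16, 81, 83]]
mean, std, plane = btc_encode(block)

# The proposed variant would transmit (edge_bits OR plane) instead of plane;
# with a purely hypothetical edge bit plane that combination is simply:
edge_bits = [[0, 1, 1, 0]] * 4
fuzzy_plane = [[e | b for e, b in zip(er, br)]
               for er, br in zip(edge_bits, plane)]

recon = btc_decode(mean, std, plane)
```

A useful property of the classic quantizer shown here is that the two reconstruction levels preserve the block mean exactly, so the fuzzy bit-plane substitution only redistributes which pixels get the high and low levels around edges.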
Abstract: The social ideology, cultural values, and principles that shape an environment can be inferred from that environment and from the structural characteristics of its built sites. In other words, a built work manifests the ideology and culture of its foundation, applies their principles and values, and thereby plays an important role in cultural transformation. All human behaviors and artifacts are influenced by culture. Culture is not an abstract concept; it is a spiritual domain in which individuals and societies grow and develop. Social behaviors are affected by comprehension of the environment, so a work of architecture influences its audience, and it is the environment that fosters social behaviors. Indeed, sustainable architecture should be considered against the background of culture in order to establish an optimal sustainable culture. Since unidentified architecture is rooted in cultural non-identity and abnormality, a society needs identity characteristics and vitality; consequently, society and architecture are changed by transformations of lifestyle. This article investigates the interaction of architecture, society, environment, and the formation of sustainable architecture on its cultural basis, and analyzes the results with respect to behavior and sustainable culture in the present era.
Abstract: Removing noise from processed images is very important. Noise should be removed in such a way that important image information is preserved. A decision-based nonlinear algorithm for the elimination of band lines, drop lines, marks, band loss, and impulses in images is presented in this paper. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and evaluation of new pixels to replace the corrupted ones. Removal of these artifacts is achieved without damaging edges and details. However, the restricted window size renders the median operation less effective whenever noise is excessive; in that case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio Improved (SNRI), Percentage of Noise Attenuated (PONA), and Percentage of Spoiled Pixels (POSP). This is compared with standard algorithms already in use, and the improved performance of the proposed algorithm is presented. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms otherwise required for the removal of different artifacts.
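The detect-and-replace logic with the median-to-mean fallback can be sketched as follows. This is a minimal interpretation that flags only extreme values (0 and 255) as corrupted; the paper's detectors for band lines, drop lines, and band loss are not reproduced.

```python
def restore(image, corrupted=(0, 255)):
    """Decision-based filter: leave good pixels untouched; replace each
    corrupted pixel (assumed salt/pepper values here) with the median of
    the uncorrupted pixels in its 3x3 window, switching to the plain
    window mean when the window holds no uncorrupted pixel."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if image[y][x] not in corrupted:
                continue                                 # detection: keep good pixels
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w]
            good = sorted(v for v in window if v not in corrupted)
            if good:
                out[y][x] = good[len(good) // 2]         # median replacement
            else:
                out[y][x] = sum(window) // len(window)   # mean-filter fallback
    return out

frame = [[50, 50, 50, 50],
         [50, 255, 0, 50],
         [50, 50, 50, 50],
         [50, 50, 50, 50]]
restored = restore(frame)
```

Because detection and replacement are decided per pixel, uncorrupted pixels pass through unchanged, which is how edges and details survive the filtering.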
Abstract: Discrete Cosine Transform (DCT) based transform coding is very popular in image, video, and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT, with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing, and the availability of fast implementation algorithms. However, no proper study has been reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ² test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers, which may lead to minimum distortion and hence optimal coding efficiency.
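The KS goodness-of-fit step can be sketched as follows, using a one-sample KS statistic against a zero-mean Gaussian candidate. This is a simplification: the Generalized Gaussian family the study actually fits also requires estimating a shape parameter, and the synthetic samples here stand in for real transform coefficients.

```python
import math
import random

def ks_statistic(samples, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDF of `samples` and the candidate `cdf`."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

def gaussian_cdf(x, sigma=1.0):
    return 0.5 * (1 + math.erf(x / (sigma * math.sqrt(2))))

random.seed(0)
gauss_like = [random.gauss(0, 1) for _ in range(2000)]
uniform_like = [random.uniform(-2, 2) for _ in range(2000)]

d_gauss = ks_statistic(gauss_like, gaussian_cdf)      # small D: good fit
d_uniform = ks_statistic(uniform_like, gaussian_cdf)  # large D: poor fit
```

In the study's setting, the candidate CDF would be swept over candidate families (Gaussian, Laplacian, Generalized Gaussian with varying shape), and the family minimizing D would be reported as the best model for each coefficient band.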
Abstract: Cardiac pulse-related artifacts in the EEG recorded
simultaneously with fMRI are complex and highly variable. Their
effective removal is an unsolved problem. Our aim is to develop an
adaptive removal algorithm based on the matching pursuit (MP)
technique and to compare it to established methods using a visual
evoked potential (VEP). We recorded the VEP inside the static
magnetic field of an MR scanner (with artifacts) as well as in an
electrically shielded room (artifact free). The MP-based artifact
removal outperformed average artifact subtraction (AAS) and
optimal basis set removal (OBS) in terms of restoring the EEG field
map topography of the VEP. Subsequently, a dipole model was fitted
to the VEP under each condition using a realistic boundary element
head model. The source location of the VEP recorded inside the MR
scanner was closest to that of the artifact free VEP after cleaning
with the MP-based algorithm as well as with AAS. While none of the
tested algorithms offered complete removal, MP showed promising
results due to its ability to adapt to variations of latency, frequency
and amplitude of individual artifact occurrences while still utilizing a
common template.
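The matching-pursuit core of such an adaptive approach can be sketched as follows. This is a generic MP over a toy dictionary of unit-norm shifted spike atoms; the study's actual dictionary, built from a common artifact template with variable latency, frequency, and amplitude, is only mimicked by the two shifts here.

```python
def matching_pursuit(signal, dictionary, n_iter):
    """Greedy MP: repeatedly project the residual onto the unit-norm atom
    with the largest inner product and subtract that component."""
    residual = list(signal)
    approx = [0.0] * len(signal)
    for _ in range(n_iter):
        best_atom, best_coef = None, 0.0
        for atom in dictionary:
            coef = sum(r * a for r, a in zip(residual, atom))
            if abs(coef) > abs(best_coef):
                best_atom, best_coef = atom, coef
        residual = [r - best_coef * a for r, a in zip(residual, best_atom)]
        approx = [s + best_coef * a for s, a in zip(approx, best_atom)]
    return approx, residual

# Toy "artifact template" at two latencies (unit-norm atoms of length 8):
atom1 = [0, 1, 0, 0, 0, 0, 0, 0]
atom2 = [0, 0, 0, 0, 1, 0, 0, 0]
dictionary = [atom1, atom2]

# A signal containing both artifact occurrences with different amplitudes:
signal = [0, 3.0, 0, 0, -2.0, 0, 0, 0]
artifact_estimate, cleaned = matching_pursuit(signal, dictionary, n_iter=2)
```

Subtracting `artifact_estimate` from the recorded EEG would be the cleaning step; in practice the atoms would be pulse-artifact waveforms rather than spikes, and it is the per-occurrence choice of atom and coefficient that gives MP its ability to adapt to latency and amplitude variation while still using a common template.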
Abstract: The stereophotogrammetry modality is gaining more widespread use in the clinical setting. Registration and visualization of these data, in conjunction with conventional 3D volumetric image modalities, provide virtual human data with textured soft tissue together with internal anatomical and structural information. In this investigation, computed tomography (CT) and stereophotogrammetry data are acquired from four anatomical phantoms and registered using the trimmed iterative closest point (TrICP) algorithm. This paper fully addresses the issue of imaging artifacts around the stereophotogrammetry surface edge, using the registered CT data as a reference. Several iterative algorithms are implemented to automatically identify and remove stereophotogrammetry surface edge outliers, improving the overall visualization of the combined stereophotogrammetry and CT data. This paper shows that outliers at the surface edge of stereophotogrammetry data can be successfully removed automatically.
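The edge-outlier removal after registration can be sketched as an iterative trimming pass. This is a hedged simplification: here an outlier is a stereo surface point whose nearest-neighbor distance to the reference CT point set exceeds a multiple of the current mean distance, which is not necessarily the paper's exact criterion, and the toy point sets below are invented.

```python
import math

def nearest_distance(p, cloud):
    """Distance from point p to its nearest neighbor in `cloud`."""
    return min(math.dist(p, q) for q in cloud)

def trim_outliers(points, reference, factor=2.0, max_iter=10):
    """Iteratively drop points whose distance to the reference surface
    exceeds `factor` times the current mean distance, until stable."""
    kept = list(points)
    for _ in range(max_iter):
        dists = [nearest_distance(p, reference) for p in kept]
        mean_d = sum(dists) / len(dists)
        survivors = [p for p, d in zip(kept, dists) if d <= factor * mean_d]
        if len(survivors) == len(kept):
            break
        kept = survivors
    return kept

# Toy reference "CT" surface: a line of points; the stereo points hug it
# except for two hypothetical edge artifacts far from the surface.
ct = [(float(i), 0.0, 0.0) for i in range(10)]
stereo = ([(i + 0.05, 0.02, 0.0) for i in range(10)]
          + [(4.0, 9.0, 0.0), (7.0, 8.0, 0.0)])
trimmed = trim_outliers(stereo, ct)
```

Iterating matters because removing gross outliers shrinks the mean distance, tightening the threshold for the next pass until the surviving set is self-consistent with the CT reference.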