Abstract: Growing interest in processing data produced by sensor networks has led to approaches that implement sensor networks as databases. The aggregation operator, which computes a value such as an average or a sum over a large group of data, is an essential function in such sensor network databases. This work proposes adding a DURING clause to TinySQL to compute values over a specified long period, and suggests a way to implement the aggregation service in sensor networks by applying the materialized view and incremental view maintenance techniques used in data warehouses. In sensor networks, data values are passed from child nodes to parent nodes, and an aggregate value is computed at the root node. Because such nodes must be memory-efficient and consume little power, recomputing aggregate values from all past and current data is problematic. Applying incremental view maintenance techniques therefore reduces memory consumption and supports fast computation of aggregate values.
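As a minimal illustration of the incremental view maintenance idea (a sketch, not the authors' TinySQL implementation), a long-period average can be maintained at the root node from a constant-size state instead of the full reading history; the readings below are hypothetical:

    # Incremental maintenance of an AVG aggregate: only the running
    # sum and count are stored, never the past readings themselves.
    class IncrementalAverage:
        def __init__(self):
            self.total = 0.0
            self.count = 0

        def update(self, reading):
            # Fold one new reading into the stored state in O(1)
            self.total += reading
            self.count += 1

        def value(self):
            return self.total / self.count if self.count else None

    avg = IncrementalAverage()
    for reading in [21.5, 22.0, 21.8]:   # hypothetical sensor values
        avg.update(reading)
    print(avg.value())                    # 21.76...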
Abstract: As Architecture, Engineering and Construction (AEC) industry projects have grown larger and more complex, the use of BIM for 3D design and simulation has increased significantly. Typical BIM applications, such as clash detection and alternative measures based on three-dimensional planning, have therefore expanded into process management, cost and quantity management, structural analysis, regulation checking, and various other domains of virtual design and construction. At present, commercial BIM software runs in a single-user environment, so the initial cost is high and the investment is frequently wasted. Cloud computing, a next-generation Internet technology, enables simple Internet devices (such as PCs, tablets and smartphones) to use the services and resources of BIM software. In this paper, we suggest a development method for BIM software based on a cloud computing environment, in order to expand the use of BIM and reduce the cost of BIM software. First, as a benchmark, we surveyed successful cases of BIM and cloud computing. We then analyzed the needs and opportunities for BIM and cloud computing in the AEC industry. Finally, we proposed the main functions of BIM software based on a cloud computing environment and developed a simple prototype of cloud-based BIM software for basic BIM model viewing.
Abstract: System-level design based on high-level abstractions is becoming increasingly important in hardware and embedded system design. This paper analyzes meta-design techniques aimed at developing meta-programs and meta-models for well-understood domains. Meta-design techniques include meta-programming and meta-modeling. At the programming level of the design process, meta-design means developing generic components that are usable in a wider context of application than the original domain components. At the modeling level, meta-design means developing design patterns that describe general solutions to common recurring design problems, and meta-models that describe the relationships between different types of design models and abstractions. The paper describes and evaluates the implementation of meta-design in the hardware design domain using object-oriented and meta-programming techniques. The presented ideas are illustrated with a case study.
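As a minimal sketch of meta-programming in the sense used above (the generator and the emitted component are hypothetical, not the paper's case study), a small Python meta-program can generate a whole family of hardware components instead of a single fixed one:

    # A meta-program: a generator whose parameter widens the
    # component's context of application beyond one fixed width.
    def make_register(width: int) -> str:
        return (
            f"module reg{width} (input clk, input [{width-1}:0] d,\n"
            f"                   output reg [{width-1}:0] q);\n"
            f"  always @(posedge clk) q <= d;\n"
            f"endmodule\n"
        )

    for w in (8, 16, 32):       # three components from one meta-program
        print(make_register(w))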
Abstract: The purpose of this study was to develop and examine a Teaching Commitment Scale of Health and Physical Education (TCS-HPE) for Taiwanese elementary school teachers. First, an original scale with 40 items was developed on the basis of teaching commitment theory and the related literature; both stratified random sampling and cluster sampling were then used to recruit participants. In the first stage, 300 teachers were sampled and 251 valid scales (83.7%) were returned. The data were analyzed by exploratory factor analysis, which accounted for 74.30% of the total variance, supporting the construct validity. The Cronbach's alpha coefficient for the full scale was 0.94, and the subscale coefficients were between 0.80 and 0.96. In the second stage, 400 teachers were sampled and 318 valid scales (79.5%) were returned. Finally, confirmatory factor analysis was used to test the validity and reliability of the TCS-HPE. The results showed that the fit indexes reached acceptable criteria (χ²(246) = 557.64, p …).
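The Cronbach's alpha coefficients reported above follow the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the sum scores); a minimal sketch (with randomly generated demo data, not the study's responses) is:

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        # rows = respondents, columns = scale items
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)
    demo = rng.integers(1, 6, size=(30, 10)).astype(float)  # random Likert data
    print(round(cronbach_alpha(demo), 2))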
Abstract: In today's global and competitive market, manufacturing companies are working hard to improve the performance of their production systems. Most companies develop production systems that can help reduce costs. Manufacturing systems consist of different elements, including production methods, machines, processes, and control and information systems. Human issues are an important part of manufacturing systems, yet most companies do not pay sufficient attention to them. In this paper, a workforce planning (WP) model is presented. A non-linear programming model is developed to minimize hiring, firing, training and overtime costs. The purpose is to determine the number of workers of each worker type, the number of workers trained, and the number of overtime hours. Moreover, a decision support system (DSS) based on the proposed model is introduced using the Excel-Lingo software interfacing feature. This model will help to improve the interaction between workers, managers and the technical systems in manufacturing.
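A minimal sketch of this kind of model (a deliberately simplified, single-worker-type, linear relative of the paper's non-linear program; the costs, demands and the relaxation to fractional workers are all assumptions) can be solved with scipy:

    import numpy as np
    from scipy.optimize import linprog

    T, w0, H = 3, 10, 160                         # periods, initial workers, hours/worker
    demand = np.array([1800.0, 2000.0, 1500.0])   # required hours per period
    c_hire, c_fire, c_ot = 500.0, 800.0, 12.0     # assumed unit costs

    # Variables: hires h[0..T), fires f[0..T), overtime hours o[0..T)
    c = np.concatenate([np.full(T, c_hire), np.full(T, c_fire), np.full(T, c_ot)])
    A, b = [], []
    for t in range(T):
        row = np.zeros(3 * T)
        row[:t + 1] = -H                   # hires up to t add capacity
        row[T:T + t + 1] = H               # fires up to t remove capacity
        row[2 * T + t] = -1.0              # overtime adds hours in period t
        A.append(row)
        b.append(H * w0 - demand[t])       # enforces H*w_t + o_t >= demand_t
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (3 * T))
    print(res.x.round(2), round(res.fun, 2))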
Abstract: In modern distributed software systems, communication among the composing parts is a critical issue, but extending conventional programming languages with general-purpose communication constructs seems difficult to realize. As a consequence, there is a growing gap between the abstraction level required by distributed applications and the concepts provided by the platforms that enable communication. This work discusses how the Model Driven Software Development approach can be considered a mature technology for automatically generating the schematic, communication-related part of applications, while providing high-level specialized languages useful in all phases of software production. To achieve this goal, a stack of languages (meta-metamodels) is introduced to describe, at different levels of abstraction, the collaborative behavior of generic entities in terms of communication actions related to a taxonomy of messages. Finally, the generation of platforms for communication is viewed as a form of specification of language semantics that provides executable models of applications together with model-checking support and effective runtime environments.
Abstract: An empirical study of web applications that use software frameworks is presented here. The analysis is based on two approaches. In the first, developers using such frameworks are asked, based on their experience, to assign weights to parameters such as database connection. In the second approach, a performance testing tool, OpenSTA, is used to measure start time and other such quantities. From this analysis, it is concluded that open source software is superior to proprietary software. The motivation behind this research is to examine ways in which a quantitative assessment can be made of software in general and of frameworks in particular. Concepts such as metrics and architectural styles are discussed along with previously published research.
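The first, weight-based approach can be sketched as follows (the parameter names, weights and ratings are all hypothetical; the study itself aggregates developer judgments):

    # Weighted-sum score per framework from developer-assigned weights
    weights = {"database_connection": 0.4, "startup_time": 0.35, "docs": 0.25}
    ratings = {
        "FrameworkA": {"database_connection": 4, "startup_time": 3, "docs": 5},
        "FrameworkB": {"database_connection": 3, "startup_time": 4, "docs": 2},
    }
    for name, r in ratings.items():
        score = sum(weights[p] * r[p] for p in weights)
        print(f"{name}: {score:.2f}")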
Abstract: Some meta-schedulers query the information systems of individual supercomputers in order to submit jobs to the least busy supercomputer on a computational Grid. However, this information can become outdated by the time a job starts, due to changes in scheduling priorities. The MSR scheme is based on Multiple Simultaneous Requests and can take advantage of opportunities resulting from these priority changes. This paper presents the SWARM meta-scheduler, which can speed up the execution of large sets of tasks by minimizing job queuing time through the submission of multiple requests. Performance tests have shown that this new meta-scheduler is faster than an implementation of the MSR scheme and the gLite meta-scheduler. SWARM has been used through the GridQTL project beta-testing portal during the past year. Statistics are provided for this usage and demonstrate its capacity to reliably achieve a substantial reduction of execution time under production conditions.
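The multiple-simultaneous-requests idea can be sketched as follows (a toy model, not the SWARM code: threads stand in for Grid submissions, the first replica to leave its queue runs the job, and the rest withdraw):

    import threading, time, random

    def submit(site, state, lock, job):
        time.sleep(random.uniform(0.1, 1.0))   # simulated queuing delay
        with lock:
            if state["winner"] is not None:    # another replica started first,
                return                         # so this request withdraws
            state["winner"] = site
        job()                                  # the job runs exactly once

    lock, state = threading.Lock(), {"winner": None}
    job = lambda: print("job running")
    threads = [threading.Thread(target=submit, args=(s, state, lock, job))
               for s in ("siteA", "siteB", "siteC")]
    for t in threads: t.start()
    for t in threads: t.join()
    print("ran on", state["winner"])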
Abstract: Computers are being integrated into various aspects of everyday human life in different shapes and capabilities. This fact has intensified the need for software development technologies that are: 1) portable, 2) adaptable, and 3) simple to develop. This is also known as the Pervasive Computing Problem (PCP); it can be addressed in different ways, each with its own pros and cons, and Context Oriented Programming (COP) is one of them. In this paper a design for a COP framework, a context-aware framework, is presented that eliminates the weak points of a previous design based on interpreted languages, while bringing the power of compiled languages to the implementation of such frameworks. The key point of this improvement is combining COP with Dependency Injection (DI) techniques. Both the old and the new framework are analyzed to show their advantages and disadvantages. Finally, a simulation of both designs is presented, indicating that the practical results agree with the theoretical analysis, with the new design running almost 8 times faster.
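A minimal sketch of the COP-plus-DI combination (class and context names are hypothetical; the paper's framework is far more elaborate): the active context decides which implementation the injector wires in, so context-specific behavior needs no interpreter support:

    class WifiRenderer:
        def render(self, doc): return f"full-quality: {doc}"

    class CellularRenderer:
        def render(self, doc): return f"low-bandwidth: {doc}"

    class Injector:
        def __init__(self):
            self._bindings = {}
        def bind(self, context, impl):
            self._bindings[context] = impl
        def get(self, context):
            return self._bindings[context]   # dependency chosen by context

    injector = Injector()
    injector.bind("wifi", WifiRenderer())
    injector.bind("cellular", CellularRenderer())

    current_context = "cellular"             # e.g. detected at runtime
    print(injector.get(current_context).render("report.pdf"))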
Abstract: This study proposes three methods to evaluate the Tokyo Cap and Trade Program when emissions trading is performed virtually among enterprises, focusing on carbon dioxide (CO2), the only emitted greenhouse gas that tends to increase. The first method clarifies the optimum reduction rate for the highest cost benefit, the second discusses emissions trading among enterprises through market trading, and the third verifies long-term emissions trading over the term of the plan (2010-2019), checking the validity of emissions trading partly by using Geographic Information Systems (GIS). The findings of this study can be summarized in the following three points.
1. Since the total cost benefit is greatest at a 44% reduction rate, the rate can be set higher than that of the Tokyo Cap and Trade Program to obtain a greater total cost benefit.
2. At a 44% reduction rate, among 320 enterprises, 8 purchasing enterprises and 245 selling enterprises gain profits from emissions trading, and 67 enterprises perform voluntary reduction without conducting emissions trading. Therefore, to further promote emissions trading, it is necessary to increase the sales volume of emissions trading, in addition to the number of selling enterprises, by increasing the number of purchasing enterprises.
3. Compared to short-term emissions trading, few enterprises benefit in each year under the long-term emissions trading of the Tokyo Cap and Trade Program; at most 81 enterprises can gain profits from emissions trading in FY 2019. Therefore, by setting the reduction rate higher, it is necessary to increase the number of enterprises that participate in emissions trading and benefit from the restraint of CO2 emissions.
Abstract: The Shanghai Cooperation Organization (SCO) is one of the successful outcomes of China's foreign policy since the end of the Cold War. The expansion of multilateral ties all over the world through institutional strategies such as the SCO identifies China as a more constructive power. The SCO became a new model of cooperation, formed on the remains of the collapsed Soviet system, and predetermined China's geopolitical role in the region. As a fast-developing, effective regional mechanism, the SCO today has a greater external impact on the international system and forms a new type of interaction for promoting China's grand strategy of 'peaceful rise'.
Abstract: This paper describes a system-level SoC energy consumption estimation method based on the dynamic behavior of embedded software in the early stages of SoC development. A major problem in SoC development is rework caused by unreliable energy consumption estimates at these early stages. The energy consumption of an SoC used in embedded systems is strongly affected by the dynamic behavior of the software. In the early stages of SoC development, modeling at a high level of abstraction is required both for the dynamic behavior of the software and for the behavior of the SoC. We estimate the energy consumption by a UML model-based simulation. The proposed method is applied to an actual embedded system in an MFP. The energy consumption estimate for the SoC is more accurate than with conventional methods, and the proposed method promises to reduce the chance of rework in SoC development.
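The underlying accounting can be sketched as follows (a toy trace-based estimator, not the authors' UML flow; the states, power draws and trace are assumptions): each simulated software state carries an assumed power draw, and energy accumulates as power times time over the behavior trace:

    POWER_MW = {"idle": 5.0, "compute": 120.0, "dma": 45.0}  # assumed draws

    def estimate_energy(trace):
        """trace: list of (state, duration_ms) from a behavior simulation."""
        return sum(POWER_MW[state] * ms for state, ms in trace)  # mW*ms = uJ

    trace = [("idle", 40), ("compute", 12), ("dma", 6), ("idle", 25)]
    print(f"{estimate_energy(trace):.0f} uJ")  # energy for this trace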
Abstract: In recent years, everything has been trending toward digitalization and, with the rapid development of Internet technologies, digital media needs to be transmitted conveniently over the network. Attacks, misuse and unauthorized access to information are of great concern today, which makes the protection of documents transmitted through digital media a priority. This urges us to devise new data hiding techniques to protect and secure data of vital significance. In this respect, steganography often comes to the fore as a tool for hiding information. Steganography is a process that involves hiding a message in an appropriate carrier, such as an image or audio file; the word is of Greek origin and means "covered or hidden writing". The goal of steganography is covert communication: the carrier can be sent to a receiver such that no one except the authenticated receiver knows of the existence of the information. A considerable amount of work on steganography has been carried out by different researchers. In this work the authors propose a novel steganographic method for hiding information within the spatial domain of a grayscale image. The proposed approach selects the embedding pixels using a mathematical function, finds the 8-neighborhood of each selected pixel, and maps each bit of the secret message to a neighbor pixel coordinate position in a specified manner. Before embedding, a check is made to determine whether the selected pixel or any of its neighbors lies at the boundary of the image. This solution is independent of the nature of the data to be hidden and produces a stego image with minimal degradation.
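The general scheme can be sketched as follows (with hypothetical choices where the abstract leaves details open: a simple stride stands in for the pixel-selection function, and each message bit goes into the least significant bit of one neighbor of the selected pixel):

    import numpy as np

    NEIGHBORS = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]

    def embed(img: np.ndarray, bits: str, stride: int = 7) -> np.ndarray:
        out, (h, w) = img.copy(), img.shape
        k = 0
        for i in range(0, h * w, stride):      # stand-in selection function
            if k >= len(bits):
                break
            r, c = i // w, i % w
            if r in (0, h - 1) or c in (0, w - 1):
                continue                       # boundary check from the abstract
            dr, dc = NEIGHBORS[k % 8]          # walk the 8-neighborhood
            out[r + dr, c + dc] = (out[r + dr, c + dc] & 0xFE) | int(bits[k])
            k += 1
        return out

    cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
    stego = embed(cover, "0100100001101001")   # bits of "Hi"
    print(int(np.abs(stego.astype(int) - cover).sum()))  # minimal degradation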
Abstract: Many agent-oriented software engineering methodologies have been proposed for software development; however, their application is still limited by their lack of maturity. Evaluating the strengths and weaknesses of these methodologies plays an important role in improving them and in developing new, stronger methodologies. This paper presents an evaluation framework for agent-oriented methodologies that addresses six major areas: concepts, notation, process, pragmatics, support for software engineering, and marketability. The framework is then used to evaluate the Gaia methodology, to identify its strengths and weaknesses, and to demonstrate the framework's ability to advance agent-oriented methodologies by detecting their weaknesses in detail.
Abstract: In this paper, we propose a single-sample-path algorithm with state aggregation to optimize the average reward of singularly perturbed Markov reward processes (SPMRPs) with large state spaces. The reward process is assumed to depend on a set of parameters. Unlike general Markov chains, SPMRPs have a hierarchical structure, and our algorithm exploits this special structure to reduce the computational load of performance optimization. Moreover, the method can be applied online because it evolves with the simulated sample path. In contrast to the original algorithm for general MRPs, we derive a new gradient formula for the average reward performance metric of SPMRPs, which is proved in the Appendix. Based on these gradients, we present the schedule of the iterative algorithm, which relies on a single sample path, and then analyze a special case in which the parameters dominate only the disturbance matrices. A detailed comparison is given between our algorithm and earlier algorithms aimed at general Markov reward processes; when applied to SPMRPs, our method converges faster in these cases. Furthermore, to illustrate the practical value of SPMRPs, a simple example of multiprogramming in computer systems is presented and simulated, and the physical meaning of SPMRPs in networks of queues is clarified with respect to this practical model.
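For orientation (this is the standard perturbation-analysis gradient for a general Markov reward process, not the paper's new SPMRP formula, and the symbols are introduced here for illustration), the derivative of the average reward \eta with respect to a parameter \theta of the transition matrix P(\theta) can be written with the stationary distribution \pi and the performance potential vector g as

    \frac{\partial \eta}{\partial \theta} = \pi \, \frac{\partial P(\theta)}{\partial \theta} \, g ,

which single-sample-path methods estimate from one simulated trajectory.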
Abstract: The importance of ensuring safe meat handling and processing practices has been demonstrated in global reports on food safety scares and the related illnesses and deaths, which have necessitated stricter meat safety control strategies. Today, many countries have regulated toward preventative and systematic control of safe meat processing at abattoirs using the Hazard Analysis Critical Control Point (HACCP) principles. HACCP systems have been reported to be effective in managing food safety risks, if correctly implemented. South Africa has regulated a Hygiene Management System (HMS) based on HACCP principles applicable to abattoirs, and regulators use the Hygiene Assessment System (HAS) to audit compliance at abattoirs. These systems were benchmarked on those of the United Kingdom (UK), yet little research has been done on them since their inception in 2004. This paper reviews the two systems and their implementation, and compares them with HACCP. Recommendations are made for future research to demonstrate the utility of the HMS and HAS in assuring safe meat for consumers.
Abstract: Software testing is an important stage of the software development cycle. The current testing process involves a tester and electronic documents containing test case scenarios. In this paper we focus on a new approach to the testing process that uses automated test case generation and tester guidance based on a model of the system. Test case generation and model-based testing are not possible without a proper system model. We aim to provide better feedback from the testing process, thus eliminating unnecessary paperwork.
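A minimal sketch of model-based test case generation (the paper's model notation is not given, so a plain state-transition dictionary stands in for the system model): every action path up to a given depth becomes a test case script that can also guide the tester through the system:

    MODEL = {  # hypothetical login-form model: state -> {action: next state}
        "start":     {"open_form": "form"},
        "form":      {"submit_valid": "logged_in", "submit_invalid": "error"},
        "error":     {"retry": "form"},
        "logged_in": {},
    }

    def test_cases(state="start", path=(), depth=4):
        if depth == 0 or not MODEL[state]:
            yield path                      # a finished test case
            return
        for action, nxt in MODEL[state].items():
            yield from test_cases(nxt, path + (action,), depth - 1)

    for case in test_cases():
        print(" -> ".join(case))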
Abstract: Deformable active contours are widely used in computer vision and image processing applications for image segmentation, especially in biomedical image analysis. The active contour, or "snake", deforms towards a target object under the control of internal, image and constraint forces. However, if the contour is initialized with too few control points, there is a high probability of missing the sharp corners of the object as the contour deforms. In this paper, a new technique is proposed to construct the initial contour by incorporating prior knowledge of the object's significant corners, detected using the Harris operator. This reconstructed contour then deforms, attracting the snake towards the target object without missing the corners. Experimental results on several synthetic images show that the new technique handles sharp corners with higher accuracy than traditional methods.
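The corner-seeding step can be sketched as follows (the threshold and detector parameters are assumptions, and "shape.png" is a hypothetical input): Harris corners are detected first and ordered by angle around their centroid, so the initial snake already passes through the sharp corners:

    import cv2
    import numpy as np

    img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
    resp = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)  # block, ksize, k
    ys, xs = np.where(resp > 0.01 * resp.max())           # strong corners
    pts = np.stack([xs, ys], axis=1).astype(float)

    cx, cy = pts.mean(axis=0)                             # centroid
    order = np.argsort(np.arctan2(pts[:, 1] - cy, pts[:, 0] - cx))
    initial_contour = pts[order]       # corner-aware starting snake
    print(initial_contour[:5])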
Abstract: Undoubtedly, the chassis is one of the most important parts of a vehicle. Chassis produced for today's vehicles are made up of four parts joined together by screws; the transverse parts are called cross members. This study reviews the stress generated by cyclic laboratory loads in the front cross member of a Peugeot 405. The finite element method is used to simulate the welding process and to determine the physical response of the spot-welded joints; the analysis is done with the Abaqus software. The stresses generated in the cross member structure fall into two groups: the residual stresses that remain after the welding process, and the mechanical stresses generated by the cyclic load. Accordingly, the total stress must be obtained by determining the residual stress and the mechanical stress separately and then summing them according to the superposition principle. To improve accuracy, the material properties, including physical, thermal and mechanical properties, were taken to be temperature-dependent. The simulation shows that the maximum von Mises stresses are located at particular points. The model results are then compared with the experimental results reported by the manufacturer, and good agreement is observed.
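In the notation of the superposition step above (the symbols are introduced here for illustration), the total stress at a point is simply

    \sigma_{\text{total}} = \sigma_{\text{residual}} + \sigma_{\text{cyclic}}

with the two terms obtained separately from the welding simulation and the cyclic-load analysis.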
Abstract: The oil and gas industry has moved towards Load and Resistance Factor Design through API RP2A-LRFD and the recently published international standard, ISO 19902, for the design of fixed steel offshore structures. ISO 19902 is intended to provide a harmonized design practice that offers a balanced structural fitness for purpose, economy and safety. As part of ongoing work, a reliability analysis of the tubular joints of a jacket structure has been carried out to calibrate the load and resistance factors for the design of offshore platforms in Malaysia, as proposed in the ISO standard. Probabilistic models have been established for the load effects (wave, wind and current) and for the tubular joint strengths. In this study, the First Order Reliability Method (FORM), coded in MATLAB, has been employed to evaluate the reliability indices of typical joints designed using API RP2A-WSD and ISO 19902.
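FORM itself can be sketched as follows (a minimal Hasofer-Lind/Rackwitz-Fiessler iteration for an illustrative linear limit state in standard normal space; the paper's joint-strength and load models are not reproduced here):

    import numpy as np

    def g(u):      return 3.0 - u[0] - u[1]        # illustrative limit state
    def grad_g(u): return np.array([-1.0, -1.0])

    u = np.zeros(2)                                # start at the origin
    for _ in range(20):
        gv, gr = g(u), grad_g(u)
        u_new = ((gr @ u - gv) / (gr @ gr)) * gr   # HL-RF update
        if np.linalg.norm(u_new - u) < 1e-10:
            u = u_new
            break
        u = u_new

    beta = np.linalg.norm(u)                       # reliability index
    print(round(beta, 4))                          # 3/sqrt(2) ~= 2.1213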