Abstract: Every organization is continually exposed to new damages and threats, which may result from its operations or from the pursuit of its goals. With the increasing application and development of information technology (IT), the methods and tools used to secure this space have changed considerably. From this viewpoint, information security management systems have evolved to capture experience and to prevent repeating past mistakes. In general, a correct response in information security management systems requires correct decision making, which in turn requires the comprehensive effort of managers and of everyone involved in each plan or decision. Obviously, not every aspect of a task or decision is defined under all decision-making conditions; therefore, possible or certain risks should be considered when making decisions. This is the subject of risk management, and it can influence decisions. An investigation of the different approaches in the field of risk management demonstrates their progress from quantitative to qualitative methods with a process approach.
Abstract: Knowledge sharing in general, and contextual access to knowledge in particular, still represent a key challenge in the knowledge management framework. Researchers in the semantic web and human-machine interface fields study techniques to enhance this access. For instance, in the semantic web, information retrieval is based on domain ontologies; in human-machine interfaces, keeping track of the user's activity provides some elements of the context that can guide the access to information. We suggest an approach based on these two key guidelines, whilst avoiding some of their weaknesses. The approach permits a representation of both the context and the design rationale of a project for efficient access to knowledge. In fact, the method consists of an information retrieval environment that, on the one hand, can infer knowledge modeled as a semantic network and, on the other hand, is based on the context and the objectives of a specific activity (the design). The environment we defined can also be used to gather similar project elements in order to build classifications of the tasks, problems, arguments, etc. produced in a company. These classifications can show the evolution of design strategies in the company.
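To make the idea of context-guided retrieval over a semantic network concrete, the following minimal Python sketch shows one possible representation; the node names, relation labels, and the related_to traversal are illustrative assumptions, not the environment described above.

```python
# Minimal sketch of a semantic network linking project context, design
# rationale, and knowledge items; names and relations are illustrative only.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        self.edges = defaultdict(list)      # node -> [(relation, target), ...]

    def add(self, source, relation, target):
        self.edges[source].append((relation, target))

    def related_to(self, node, max_depth=3):
        """Collect nodes reachable from `node` within max_depth hops."""
        seen, frontier = {node}, [(node, 0)]
        while frontier:
            current, depth = frontier.pop()
            if depth == max_depth:
                continue
            for _, target in self.edges[current]:
                if target not in seen:
                    seen.add(target)
                    frontier.append((target, depth + 1))
        return seen - {node}

net = SemanticNetwork()
net.add("task:define_gearbox", "raises", "problem:noise_level")
net.add("problem:noise_level", "argued_by", "argument:helical_gears")
net.add("argument:helical_gears", "documented_in", "doc:gear_design_report")

# Given the current design task as context, retrieve related rationale and documents.
print(net.related_to("task:define_gearbox"))
```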
Abstract: The purpose of this Classifying Bird Sounds (chip notes) project is to reduce unwanted noise in recorded bird-sound chip notes, to design a scheme for detecting differences and similarities between recorded chip notes, and to classify bird-sound chip notes. Technologies for determining the similarity of sound waves have been used in communication, sound engineering, and wireless audio applications for many years. Our research focuses on the similarity of chip notes, which are the sounds produced by different birds. The program we use is written in Microsoft C#.
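As a rough illustration of the kind of waveform-similarity measure involved (not the authors' implementation), a normalized cross-correlation between two chip-note signals can be computed as follows; the sampling rate and the synthetic notes are assumptions.

```python
import numpy as np

def chip_note_similarity(a, b):
    """Peak of the normalized cross-correlation between two waveforms."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / min(len(a), len(b))
    return float(corr.max())

fs = 22050                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.05, 1.0 / fs)
note1 = np.sin(2 * np.pi * 4000 * t)         # synthetic 4 kHz chip note
note2 = np.sin(2 * np.pi * 4000 * t + 0.3)   # same note, phase-shifted
note3 = np.sin(2 * np.pi * 6500 * t)         # a different bird's note

print(chip_note_similarity(note1, note2))    # close to 1.0
print(chip_note_similarity(note1, note3))    # noticeably lower
```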
Abstract: Previous algorithms for generating and mapping 3D model textures from multi-view images have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They may also suffer from problems caused by occluded areas, such as the inner parts of the thighs. In this paper, we propose a texture mapping technique for 3D models that uses multi-view images on the GPU. We perform texture mapping directly in the GPU fragment shader, per pixel, without generating a texture map, and we resolve the occluded areas using the depth information of the 3D model. Our method requires more computation on the GPU than previous work, but it achieves real-time performance and the previously mentioned problems do not occur.
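A minimal CPU-side sketch of the occlusion test described above (per-pixel visibility decided by comparing a projected depth against the view's depth map) is given below; the view data layout, projection callable, and tolerance are assumptions for illustration, not the authors' shader code. Here `views` is assumed to be a list of dictionaries, each holding an RGB image, a per-view depth map, and a projection function.

```python
import numpy as np

def best_view_color(point, views, eps=1e-2):
    """Pick a color for a 3D surface point from the first view in which it is
    not occluded, i.e. where its projected depth matches the view's depth map."""
    for view in views:
        u, v, depth = view["project"](point)        # project the point into this view
        u, v = int(round(u)), int(round(v))
        h, w = view["depth"].shape
        if not (0 <= u < w and 0 <= v < h):
            continue                                 # point falls outside this image
        if abs(depth - view["depth"][v, u]) < eps:   # depths agree: the point is visible
            return view["image"][v, u]
    return None                                      # occluded (or out of frame) everywhere

# Tiny demo with one synthetic orthographic view looking down the z-axis.
image = np.full((4, 4, 3), 0.5)
depth = np.full((4, 4), 2.0)
view = {"image": image, "depth": depth,
        "project": lambda p: (p[0], p[1], p[2])}    # orthographic: (x, y) -> pixel, z -> depth
print(best_view_color((1.0, 2.0, 2.0), [view]))     # visible: returns the pixel color
print(best_view_color((1.0, 2.0, 3.5), [view]))     # occluded: returns None
```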
Abstract: A key to the success of high-quality software development is defining a valid and feasible requirements specification. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user-interface mock-up from the UML requirements analysis model, so that we can confirm the validity of the input/output data for each page and of the page transitions by directly operating the mock-up. This paper proposes a support method for checking the validity of a data life cycle with the model checking tool UPPAAL, focusing on CRUD (Create, Read, Update, and Delete) operations. Exhaustive checking improves the quality of the requirements analysis model, which is validated by the customers through the automatically generated mock-up. The effectiveness of our method is discussed through a case study of requirements modeling for two small projects: a library management system and a sales support system for textbooks at a university.
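The UPPAAL models themselves are not reproduced here, but the flavor of the data life-cycle property being checked can be sketched with a simple sequential CRUD validator in Python; the rule set and the example trace are illustrative assumptions, not the proposed method.

```python
def check_crud_life_cycle(operations):
    """Check a sequence of (operation, entity) pairs for basic CRUD validity:
    an entity must be created before it is read, updated or deleted,
    and must not be used again after deletion."""
    created, deleted, errors = set(), set(), []
    for op, entity in operations:
        if op == "C":
            if entity in created and entity not in deleted:
                errors.append(f"{entity}: created twice")
            created.add(entity)
            deleted.discard(entity)
        elif op in ("R", "U", "D"):
            if entity not in created or entity in deleted:
                errors.append(f"{entity}: '{op}' before create (or after delete)")
            if op == "D":
                deleted.add(entity)
    return errors

# Hypothetical page-transition trace of a library management system.
trace = [("C", "Book"), ("R", "Book"), ("U", "Loan"), ("D", "Book"), ("R", "Book")]
print(check_crud_life_cycle(trace))   # reports the invalid 'U' and the read-after-delete
```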
Abstract: In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main constraint is the limited energy of the sensors. In fact, protocols that minimize the power consumption of the sensors receive the most attention in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data aggregation, a technique for collecting data that combines related data and prevents the transmission of redundant packets, can be effective in reducing the number of transmitted packets. Since processing information consumes less power than transmitting it, data aggregation is of great importance, and for this reason the technique is used in many protocols [5]. One data aggregation technique is to use a data aggregation tree; however, finding an optimal data aggregation tree for collecting data in a network with one sink is an NP-hard problem. In the data aggregation technique, related information packets are combined at intermediate nodes into a single packet, so the number of packets transmitted in the network decreases, less energy is consumed, and the lifetime of the network improves. Heuristic methods are used to tackle this NP-hard problem, and Simulated Annealing is one such optimization method. In this article, we propose a new method for building the data collection tree in wireless sensor networks using the Simulated Annealing algorithm and evaluate its efficiency against a Genetic Algorithm.
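A simplified sketch of how Simulated Annealing can search the space of aggregation trees is shown below; the energy function (sum of squared link lengths), the neighbor move, and the cooling schedule are assumptions made for illustration, not the exact formulation of the article.

```python
import math
import random

random.seed(1)

# Random sensor field; node 0 is the sink. Nodes are indexed by distance to
# the sink so that a node's parent is always a node closer to the sink,
# which keeps every candidate solution a valid tree (no cycles).
nodes = [(0.0, 0.0)] + [(random.uniform(-50, 50), random.uniform(-50, 50)) for _ in range(29)]
nodes.sort(key=lambda p: p[0] ** 2 + p[1] ** 2)

def cost(parent):
    """Total transmission energy, modeled as the sum of squared link lengths."""
    return sum((nodes[i][0] - nodes[parent[i]][0]) ** 2 +
               (nodes[i][1] - nodes[parent[i]][1]) ** 2
               for i in range(1, len(nodes)))

def neighbor(parent):
    """Reattach one random node to another valid (closer-to-sink) parent."""
    child = random.randrange(1, len(nodes))
    new = parent[:]
    new[child] = random.randrange(0, child)
    return new

parent = [0] + [random.randrange(0, i) for i in range(1, len(nodes))]
best, temperature = parent, 1000.0
while temperature > 0.1:
    candidate = neighbor(parent)
    delta = cost(candidate) - cost(parent)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        parent = candidate
        if cost(parent) < cost(best):
            best = parent
    temperature *= 0.995               # geometric cooling schedule

print("best aggregation tree cost:", round(cost(best), 1))
```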
Abstract: This paper considers the effect of heat generation proportional to (T - T∞)^p, where T is the local temperature and T∞ is the ambient temperature, on unsteady free convection flow near the stagnation-point region of a three-dimensional body. The body is immersed in an ambient fluid, and a step change in the surface temperature of the body is assumed. The non-linear coupled partial differential equations governing the free convection flow are solved numerically using an implicit finite-difference method for different values of the governing parameters entering these equations. The results for the flow and heat-transfer characteristics when p ≤ 2 show that the transition from the initial unsteady-state flow to the final steady-state flow takes place smoothly. The behavior of the flow is seen to depend strongly on the exponent p.
Abstract: We investigated oxidative DNA damage caused by
radio frequency radiation using 8-oxo-7, 8-dihydro-2'-
deoxyguanosine (8-oxodG) generated in mouse tissues after exposure
to 900 MHz mobile phone radio frequency in three independent
experiments. The RF was generated by a Global System for Mobile
Communication (GSM) signal generator. The radio frequency field
was adjusted to 25 V/m. The whole body specific absorption rate
(SAR) was 1.0 W/kg. Animals were exposed to this field for 30 min
daily for 30 days. 24 h post-exposure, blood serum, brain and spleen
were removed and DNA was isolated. Enzyme-linked
immunosorbent assay (ELISA) was used to measure 8-oxodG
concentration. All animals survived the whole experimental period.
The body weight of animals did not change significantly at the end of
the experiment. No statistically significant differences observed in
the levels of oxidative stress. Our results are not in favor of the
hypothesis that 900 MHz RF induces oxidative damage.
Abstract: Researchers have been applying artificial and computational intelligence (AI/CI) methods to computer games. In this research field, further studies are required to compare AI/CI methods with respect to each game application. In this paper, we present our experimental results on the comparison of three evolutionary algorithms – evolution strategy (ES), genetic algorithm (GA), and their hybrid – applied to evolving controller agents for the CIG 2007 Simulated Car Racing competition. Our experimental results show that premature convergence of solutions was observed in the case of ES, and GA outperformed ES in the latter half of the generations. Moreover, a hybrid that uses GA first and ES next evolved the best solution among all generated solutions. This result shows the ability of GA to globally search promising areas in the early stage and the ability of ES to locally search a focused area (fine-tuning solutions).
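A toy sketch of the GA-first, ES-next hybrid on a simple continuous test function is given below; the fitness function, operators, and parameter values are illustrative assumptions, not the competition controller setup.

```python
import random

random.seed(0)
DIM = 5

def fitness(x):
    """Sphere function; lower is better (stands in for controller evaluation)."""
    return sum(v * v for v in x)

def ga_stage(pop, generations=30):
    """Genetic algorithm: tournament selection, arithmetic crossover, Gaussian mutation."""
    for _ in range(generations):
        nxt = []
        while len(nxt) < len(pop):
            a, b = (min(random.sample(pop, 3), key=fitness) for _ in range(2))
            nxt.append([(ai + bi) / 2 + random.gauss(0, 0.3) for ai, bi in zip(a, b)])
        pop = nxt
    return min(pop, key=fitness)

def es_stage(x, generations=100, sigma=0.2):
    """(1+lambda) evolution strategy that fine-tunes the GA result."""
    for _ in range(generations):
        offspring = [[xi + random.gauss(0, sigma) for xi in x] for _ in range(10)]
        best = min(offspring, key=fitness)
        if fitness(best) < fitness(x):
            x = best
        else:
            sigma *= 0.9          # shrink the step size when no improvement occurs
    return x

population = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(40)]
ga_best = ga_stage(population)
hybrid_best = es_stage(ga_best)
print("GA best:", round(fitness(ga_best), 4), "hybrid best:", round(fitness(hybrid_best), 6))
```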
Abstract: Despite the widespread application of finite mixture models in segmentation, finite mixture model selection is still an important issue. In fact, the selection of an adequate number of segments is a key issue in deriving latent segment structures, and it is desirable that the selection criteria used for this purpose are effective. In order to choose among several information criteria that may support the selection of the correct number of segments, we conduct a simulation study. In particular, this study is intended to determine which information criteria are more appropriate for mixture model selection when considering data sets with only categorical segmentation base variables. The generation of mixtures of multinomial data supports the proposed analysis. As a result, we establish a relationship between the level of measurement of the segmentation variables and the performance of eleven information criteria. The criterion AIC3 shows the best performance (it indicates the correct number of segments of the simulated structure more often) for mixtures of multinomial segmentation base variables.
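For reference, the information criteria compared in such studies are simple functions of the maximized log-likelihood; the sketch below computes AIC, AIC3, and BIC for a multinomial mixture with S segments, where the parameter count assumes locally independent categorical variables and the numbers are hypothetical.

```python
import math

def n_free_parameters(n_segments, n_categories_per_variable):
    """Mixing proportions plus multinomial probabilities per segment."""
    return (n_segments - 1) + n_segments * sum(c - 1 for c in n_categories_per_variable)

def information_criteria(log_likelihood, n_segments, n_categories_per_variable, sample_size):
    k = n_free_parameters(n_segments, n_categories_per_variable)
    return {
        "AIC":  -2 * log_likelihood + 2 * k,
        "AIC3": -2 * log_likelihood + 3 * k,          # the criterion favored in the study
        "BIC":  -2 * log_likelihood + k * math.log(sample_size),
    }

# Hypothetical fit: 3 segments, four categorical variables with 4 categories each.
print(information_criteria(log_likelihood=-2514.7, n_segments=3,
                           n_categories_per_variable=[4, 4, 4, 4], sample_size=600))
```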
Abstract: Image compression using artificial neural networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks trained with the back-propagation algorithm, which adopts the method of steepest descent for error minimization, are popular, widely adopted, and directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images and high-intensity images. When these images are compressed using a back-propagation network, convergence takes a long time. The reason is that the given image may contain a number of distinct gray levels with narrow differences from their neighborhood pixels. If the gray levels of the pixels in an image and of their neighbors are mapped in such a way that the difference in gray level between a pixel and its neighbors is minimized, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
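A minimal sketch of the CDF-based gray-level mapping step (essentially a histogram-equalization-style remapping) is given below; the toy image and the 8-bit gray-level range are assumptions.

```python
import numpy as np

def cdf_map(image, levels=256):
    """Remap gray levels through the image's empirical cumulative distribution,
    so neighboring gray levels are spread more evenly before compression."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / image.size
    lookup = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lookup[image]

rng = np.random.default_rng(0)
dark_image = rng.integers(0, 60, size=(64, 64), dtype=np.uint8)  # low-intensity image
mapped = cdf_map(dark_image)
print(dark_image.min(), dark_image.max(), "->", mapped.min(), mapped.max())
```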
Abstract: This paper presents a region-based segmentation method for ultrasound images using local statistics. In this segmentation approach, the homogeneous regions depend on the image granularity features, where the structures of interest, with dimensions comparable to the speckle size, are to be extracted. The method uses a look-up table comprising the local statistics of every pixel, which consist of the homogeneity and similarity bounds according to the kernel size. The shape and size of the growing regions depend on these look-up table entries. The algorithm is implemented using a connected seeded region growing procedure in which each pixel is taken as a seed point. Region merging after the region growing also suppresses high-frequency artifacts. The updated merged regions produce the output in the form of a segmented image. This algorithm produces results that are less sensitive to the pixel location, and it also allows the segmentation of accurate homogeneous regions.
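A simplified, single-seed version of the connected seeded region growing step is sketched below; the fixed similarity bound and 4-connectivity stand in for the per-pixel look-up table of local statistics described above, and the synthetic image is an assumption.

```python
from collections import deque
import numpy as np

def grow_region(image, seed, similarity_bound=10.0):
    """Grow a 4-connected region from `seed`, accepting neighbors whose
    intensity stays within `similarity_bound` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(image[ny, nx]) - total / count) <= similarity_bound:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

rng = np.random.default_rng(0)
ultrasound_like = rng.normal(100, 5, size=(32, 32))   # synthetic speckle-like patch
ultrasound_like[8:20, 8:20] += 60                     # brighter structure of interest
print(grow_region(ultrasound_like, seed=(12, 12)).sum(), "pixels in the grown region")
```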
Abstract: Knowledge development in companies relies on knowledge-intensive business processes, which are characterized by high complexity in their execution, weak structuring, communication-oriented tasks, high decision autonomy, and often the need for creativity and innovation. A foundation for knowledge development is provided, based on a new conception of knowledge and knowledge dynamics. This conception consists of a three-dimensional model of knowledge with types, kinds, and qualities. Built on this knowledge conception, knowledge dynamics is modeled with the help of general knowledge conversions between knowledge assets. Here, knowledge dynamics is understood to cover the acquisition, conversion, transfer, development, and usage of knowledge. Through this conception we gain a sound basis for knowledge management and development in an enterprise. Especially the type dimension of knowledge, which categorizes it according to its internality and externality with respect to the human being, is crucial for enterprise knowledge management and development, because knowledge should be made available by converting it to more external types. Built on this conception, a modeling approach for knowledge-intensive business processes is introduced, be they human-driven, e-driven, or task-driven processes. As an example of this approach, a model of the creative activity for the renewal planning of a product is given.
Abstract: Concrete strength evaluated from compression tests
on cores is affected by several factors causing differences from the
in-situ strength at the location from which the core specimen was
extracted. Among these factors is the damage that may occur during the drilling phase, which generally leads to an underestimation of the actual in-situ strength. In order to quantify this effect, in this study
two wide datasets have been examined, including: (i) about 500 core
specimens extracted from Reinforced Concrete existing structures,
and (ii) about 600 cube specimens taken during the construction of
new structures in the framework of routine acceptance control. The
two experimental datasets have been compared in terms of
compressive strength and specific weight values, accounting for the main factors affecting the concrete properties, namely the type and amount of cement, aggregate grading, type and maximum size of aggregates, water/cement ratio, placing and curing modality, and concrete age. The
results show that the magnitude of the strength reduction due to
drilling damage is strongly affected by the actual properties of
concrete, being inversely proportional to its strength. Therefore, the
application of a single value of the correction coefficient, as generally
suggested in the technical literature and in structural codes, appears
inappropriate. A set of values of the drilling damage coefficient is
suggested as a function of the strength obtained from compressive
tests on cores.
Abstract: In this study, a synthetic pathway was created by
assembling genes from Clostridium butyricum and Escherichia coli
in different combinations. Among the genes were dhaB1 and dhaB2
from C. butyricum VPI1718 coding for glycerol dehydratase (GDHt)
and its activator (GDHtAc), respectively, involved in the conversion
of glycerol to 3-hydroxypropionaldehyde (3-HPA). The yqhD gene from E. coli BL21 was also included, which codes for an NADPH-dependent 1,3-propanediol oxidoreductase isoenzyme (PDORI)
reducing 3-HPA to 1,3-propanediol (1,3-PD). Molecular modeling
analysis indicated that the conformation of fusion protein of YQHD
and DHAB1 was favorable for direct molecular channeling of the
intermediate 3-HPA. According to the simulation results, the yqhD
and dhaB1 genes were assembled upstream of dhaB2 to express a fusion protein, yielding the recombinant strain E. coli BL21 (DE3)//pET22b+::yqhD-dhaB1_dhaB2 (strain BP41Y3). Strain BP41Y3 gave a 10-fold higher 1,3-PD concentration than E. coli BL21 (DE3)//pET22b+::yqhD-dhaB1_dhaB2 (strain BP31Y2) expressing
the recombinant enzymes simultaneously but in a non-fusion mode.
This is the first report using a gene fusion approach to enhance the
biological conversion of glycerol to the value-added compound 1,3-PD.
Abstract: Grid computing is a group of clusters connected over
high-speed networks that involves coordinating and sharing
computational power, data storage and network resources operating
across dynamic and geographically dispersed locations. Resource
management and job scheduling are critical tasks in grid computing.
Resource selection becomes challenging due to heterogeneity and
dynamic availability of resources. Job scheduling is an NP-complete problem, and different heuristics may be used to reach an optimal or near-optimal solution. This paper proposes a model for resource and job scheduling in a dynamic grid environment. The main focus is to
maximize the resource utilization and minimize processing time of
jobs. The grid resource selection strategy is based on a Max Heap Tree (MHT), which best suits large-scale applications, and the root node of the MHT is selected for job submission. A job grouping concept is used to maximize resource utilization when scheduling jobs in grid computing. The proposed resource selection model and job grouping concept are used to enhance the scalability, robustness, efficiency, and load-balancing ability of the grid.
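A toy sketch of the two ideas (heap-ordered resource selection and job grouping) is given below; Python's heapq is a min-heap, so resources are pushed with negated capacity to emulate the max-heap, and the capacities, job lengths, and grouping threshold are assumptions.

```python
import heapq

# (negated MIPS capacity, resource name): negation turns heapq into a max-heap.
resources = [(-2400, "grid-node-A"), (-3100, "grid-node-B"), (-1800, "grid-node-C")]
heapq.heapify(resources)

def group_jobs(job_lengths, capacity):
    """Pack fine-grained jobs into groups no larger than the chosen capacity."""
    groups, current, current_size = [], [], 0
    for length in job_lengths:
        if current and current_size + length > capacity:
            groups.append(current)
            current, current_size = [], 0
        current.append(length)
        current_size += length
    if current:
        groups.append(current)
    return groups

jobs = [150, 400, 220, 90, 510, 330, 120, 610]     # job lengths in MI (assumed)
power, best_resource = heapq.heappop(resources)     # root of the max-heap
print("submit to", best_resource)
print("job groups:", group_jobs(jobs, capacity=1000))
```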
Abstract: We present a method for fast volume rendering using
graphics hardware (GPU). To our knowledge, it is the first implementation
on the GPU. Based on the Shear-Warp algorithm, our
GPU-based method provides real-time frame rates and outperforms
the CPU-based implementation. When the number of slices is not
sufficient, we add in-between slices computed by interpolation, which then improves the quality of the rendered images. We have also
implemented the ray marching algorithm on the GPU. The results
generated by the three algorithms (CPU-based and GPU-based Shear-
Warp, GPU-based Ray Marching) for two test models have shown that the ray marching algorithm outperforms the shear-warp methods in terms of speed-up and image quality.
Abstract: Image-based 3D scenes can now be found in many popular vision systems, computer games, and virtual reality tours. It is therefore important to segment the ROI (region of interest) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and a Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that the pixels of a homogeneous region in an image usually lie close together on a smooth region, and therefore the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method that uses a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method can label regions of the input image into coarse categories: "ground", "sky", and "vertical" for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D.
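The Dirichlet process mixture step can be sketched with scikit-learn's truncated DP Gaussian mixture applied to simple per-pixel features (color plus normalized position); this omits the tensor voting stage, and the synthetic image and feature choice are assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
h, w = 60, 80
image = np.zeros((h, w, 3))
image[:20] = [0.4, 0.6, 1.0]        # synthetic "sky" band
image[20:45] = [0.5, 0.5, 0.5]      # synthetic "vertical" band
image[45:] = [0.3, 0.7, 0.3]        # synthetic "ground" band
image += rng.normal(0, 0.03, image.shape)

# Per-pixel feature vector: RGB color plus normalized row/column position.
ys, xs = np.mgrid[0:h, 0:w]
features = np.column_stack([image.reshape(-1, 3), (ys / h).ravel(), (xs / w).ravel()])

# Truncated Dirichlet process mixture: the model decides how many of the
# 10 candidate components actually receive weight.
dpgmm = BayesianGaussianMixture(n_components=10,
                                weight_concentration_prior_type="dirichlet_process",
                                covariance_type="full", random_state=0).fit(features)
labels = dpgmm.predict(features).reshape(h, w)
print("components actually used:", len(np.unique(labels)))
```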
Abstract: Iterative learning control aims to achieve zero tracking
error of a specific command. This is accomplished by iteratively
adjusting the command given to a feedback control system, based on
the tracking error observed in the previous iteration. One would like
the iterations to converge to zero tracking error in spite of any error
present in the model used to design the learning law. First, this need for stability robustness is discussed, followed by the need for robustness of the property that the transients are well behaved. Methods of
producing the needed robustness to parameter variations and to
singular perturbations are presented. Then a method involving reverse-time runs is given that lets real-world behavior produce the ILC gains in such a way as to eliminate the need for a mathematical model. Since the real world produces the gains, there is no issue of model error. Provided the world behaves linearly, the approach
gives an ILC law with both stability robustness and good transient
robustness, without the need to generate a model.
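A minimal numerical sketch of the basic iterative learning update (a simple P-type law u_{k+1}(t) = u_k(t) + φ·e_k(t+1) applied to a toy first-order plant) is given below; the plant, gain, and desired trajectory are illustrative assumptions, and the sketch does not implement the reverse-time-run method of the paper.

```python
import numpy as np

a, b = 0.8, 0.5                       # toy first-order plant y(t+1) = a*y(t) + b*u(t)
T = 50
desired = np.sin(np.linspace(0, np.pi, T + 1))   # command to be tracked
u = np.zeros(T)                       # control input for one iteration
phi = 1.0                             # learning gain (convergent since |1 - phi*b| < 1)

def run_plant(u):
    y = np.zeros(T + 1)
    for t in range(T):
        y[t + 1] = a * y[t] + b * u[t]
    return y

for iteration in range(30):
    y = run_plant(u)
    error = desired - y
    u = u + phi * error[1:]           # P-type ILC update using the next-step error
    print(f"iteration {iteration:2d}  max |error| = {np.abs(error[1:]).max():.2e}")
```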
Abstract: This article applies the monthly final energy yield and failure data of 202 PV systems installed in Taiwan to analyze PV operational performance and system availability. The data were collected by the Industrial Technology Research Institute through manual records. Bad-data detection and failure-data estimation approaches are proposed to guarantee the quality of the received information. The performance ratio and system availability are then calculated and compared with those of other countries. The results indicate that the average performance ratio of Taiwan's PV systems is 0.74 and the availability is 95.7%. These results are similar to those of Germany, Switzerland, Italy, and Japan.
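For reference, the performance ratio and availability metrics used in such analyses can be computed as follows; the monthly figures are made-up illustrative values, not the Taiwan dataset.

```python
def performance_ratio(energy_kwh, rated_kwp, irradiation_kwh_per_m2, g_stc=1.0):
    """PR = final yield (kWh/kWp) / reference yield (in-plane irradiation / 1 kW/m2 STC)."""
    final_yield = energy_kwh / rated_kwp
    reference_yield = irradiation_kwh_per_m2 / g_stc
    return final_yield / reference_yield

def availability(total_hours, downtime_hours):
    """Fraction of the period during which the system was operational."""
    return 1.0 - downtime_hours / total_hours

# Hypothetical month for a 5 kWp system: 520 kWh produced, 140 kWh/m2 in-plane
# irradiation, and 6 hours of recorded outage out of 720 hours.
print(round(performance_ratio(520, 5.0, 140), 2))      # -> 0.74
print(round(availability(720, 6) * 100, 1), "%")        # -> 99.2 %
```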