Abstract: This paper highlights an innovative and nontraditional
violence prevention program that is making a noticeable impact in
what was once one of the country’s most violent communities. With
unique and tailored strategies, the Operation Peacemaker Fellowship,
established in Richmond, California, combines components of
evidence-based practices with a community-oriented focus on
relationships and mentoring to fill a gap in services and increase
community safety. In an effort to highlight these unique strategies
and provide a blueprint for other communities with violent crime
problems, the authors of this paper hope to clearly delineate how one
community is moving forward with vanguard approaches to invest in
the lives of young men who once were labeled their community’s
most violent, even most deadly, youth. The impact of this program is
evidenced through the fellows’ own voices as they illuminate the
experience of being in the Fellowship. In interviews, fellows describe
how participating in this program has transformed their lives and the
lives of those they love. The authors of this article spent more than
two years researching this Fellowship program in order to conduct an
evaluation of it and, ultimately, to demonstrate how this program is a
testament to the power of relationships and love combined with
evidence-based practices, consequently enriching the lives of youth
and the community that embraces them.
Abstract: There is a real need to integrate the various types of Open
Educational Resources (OER) with an intelligent system that extracts
information and knowledge at the semantic search level. This need
arises because most current learning standards adopt web-based
learning, and e-learning systems do not always serve all educational
goals. Semantic Web systems provide educators, students, and
researchers with intelligent queries based on a semantic
knowledge-management learning system. An ontology-based learning
system is an advanced system in which ontology forms the core of the
Semantic Web in a smart learning environment. The objective of this
paper is to discuss the potential of ontologies and the mapping of
different kinds of ontologies, heterogeneous or homogeneous, to
manage and control different types of Open Educational Resources. The
important contribution of this research is that it uses logical rules
and conceptual relations to map between ontologies of different
educational resources. We expect this methodology to establish an
intelligent educational system supporting student tutoring as well as
self-directed and lifelong learning.
Abstract: Floorplanning plays a vital role in the physical design
process of Very Large Scale Integrated (VLSI) chips. It is an
essential design step to estimate the chip area prior to the optimized
placement of digital blocks and their interconnections. Since VLSI
floorplanning is an NP-hard problem, many optimization techniques
have been adopted in the literature. In this work, a music-inspired
Harmony Search (HS) algorithm is used for the fixed die outline
constrained floorplanning, with the aim of reducing the total chip
area. HS draws inspiration from the musical improvisation process of
searching for a perfect state of harmony. Initially, a B*-tree is used
to generate the primary floorplan for the given rectangular hard
modules, and then the HS algorithm is applied to obtain an optimal
solution for an efficient floorplan. The experimental results of the
HS algorithm are obtained for the MCNC benchmark circuits.
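The HS improvisation loop named above can be sketched generically. The following minimal Python sketch minimizes an arbitrary cost function and omits the floorplan-specific machinery (B*-tree decoding, module packing, MCNC benchmarks); the parameter defaults and the toy sphere objective are illustrative assumptions, not the authors' implementation.

```python
import random

def harmony_search(cost, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimize cost() over a box using basic Harmony Search."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Harmony memory: hms random candidate solutions
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    costs = [cost(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:          # memory consideration
                x = hm[rng.randrange(hms)][d]
                if rng.random() < par:       # pitch adjustment
                    x = min(max(x + rng.uniform(-bw, bw), lo), hi)
            else:                            # random improvisation
                x = rng.uniform(lo, hi)
            new.append(x)
        c = cost(new)
        worst = max(range(hms), key=costs.__getitem__)
        if c < costs[worst]:                 # replace the worst harmony
            hm[worst], costs[worst] = new, c
    best = min(range(hms), key=costs.__getitem__)
    return hm[best], costs[best]

# Toy usage: minimize the sphere function over [-5, 5]^3
sol, val = harmony_search(lambda v: sum(x * x for x in v),
                          dim=3, bounds=(-5.0, 5.0))
```

In a floorplanning setting, `cost` would instead decode a candidate into a placement (e.g. via a B*-tree) and return chip area.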
Abstract: In this paper, we propose a new method for three-dimensional
object indexing based on the D.A.M.C-S.H.C descriptor
(Direct and Analytical Method for Calculating the Spherical
Harmonics Coefficients). To this end, we propose a direct
calculation of the spherical harmonics coefficients with exact
precision. The aims of the method are to minimize the processing
time on the 3D object database and the time needed to search for
objects similar to a query object.
Firstly, we define the new descriptor using a new division of the
3-D object within a sphere. Then we define a new distance, whose
efficiency we test and prove in the search for similar objects in a
database containing objects of widely varying and large sizes.
Abstract: The problems arising from unbalanced data sets
generally appear in real-world applications. Due to unequal class
distribution, many researchers have found that the performance of
existing classifiers tends to be biased towards the majority class. The
k-nearest neighbors’ nonparametric discriminant analysis is a method
that was proposed for classifying unbalanced classes with good
performance. In this study, the methods of discriminant analysis are
of interest in investigating misclassification error rates for
class-imbalanced data of three diabetes risk groups. The purpose of this
study was to compare the classification performance between
parametric discriminant analysis and nonparametric discriminant
analysis in a three-class classification of class-imbalanced data of
diabetes risk groups. Data from a project maintaining healthy
conditions for 599 employees of a government hospital in Bangkok
were obtained for the classification problem. The employees were
divided into three diabetes risk groups: non-risk (90%), risk (5%),
and diabetic (5%). The original data including the variables of
diabetes risk group, age, gender, blood glucose, and BMI were
analyzed and bootstrapped for 50 and 100 samples, 599 observations
per sample, for additional estimation of the misclassification error
rate. Each data set was explored for the departure of multivariate
normality and the equality of covariance matrices of the three risk
groups. Both the original data and the bootstrap samples showed
non-normality and unequal covariance matrices. The parametric linear
discriminant function, quadratic discriminant function, and the
nonparametric k-nearest neighbors’ discriminant function were
performed over 50 and 100 bootstrap samples and applied to the
original data. In searching for the optimal classification rule, prior
probabilities were set to equal proportions (0.33:0.33:0.33) and to
unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10), and
(0.70:0.15:0.15). The results from 50 and 100 bootstrap samples
indicated that the k-nearest neighbors approach with k=3 or k=4 and
prior probabilities of non-risk:risk:diabetic set to 0.90:0.05:0.05
or 0.80:0.10:0.10 gave the smallest misclassification error rate. The
k-nearest neighbors approach is therefore suggested for classifying
three-class-imbalanced data of diabetes risk groups.
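The prior-weighted k-nearest-neighbor rule described above can be sketched as follows. Since the hospital data are not available, the synthetic imbalanced data, cluster centers, and names are illustrative assumptions; class j is scored by prior_j × (neighbors from j among the k nearest) / (size of class j), a standard nonparametric density estimate.

```python
import numpy as np

def knn_discriminant(X_train, y_train, x, k, priors):
    """Classify x by a k-NN density estimate weighted by class priors."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]      # labels of k nearest points
    scores = {}
    for c in priors:
        n_c = np.sum(y_train == c)            # class sample size
        k_c = np.sum(nearest == c)            # neighbors from class c
        scores[c] = priors[c] * k_c / n_c     # prior-weighted density
    return max(scores, key=scores.get)

# Toy imbalanced sample: 90% class 0, 5% class 1, 5% class 2
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (90, 2)),
               rng.normal((5, 5), 1, (5, 2)),
               rng.normal((-5, -5), 1, (5, 2))])
y = np.array([0] * 90 + [1] * 5 + [2] * 5)
label = knn_discriminant(X, y, np.array([5.0, 5.0]), k=3,
                         priors={0: 0.90, 1: 0.05, 2: 0.05})
```

Dividing by the class size n_c is what keeps the minority classes from being swamped by the majority class.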
Abstract: With the growth of computers and networks, digital
data can spread anywhere in the world quickly. Digital data can also
be copied or tampered with easily, so security has become an
important topic in the protection of digital data. Digital
watermarking is a method to protect the ownership of digital data,
although embedding the watermark inevitably affects image quality. In
this paper, Vector Quantization (VQ) is used to embed the watermark
into the image to achieve data hiding. This kind of watermarking is
invisible, meaning that users will not be conscious of the existence
of the embedded watermark even though the embedded image differs
slightly from the original image. Because VQ carries a heavy
computational burden, we adopt a fast VQ encoding scheme that uses
partial distortion search (PDS) and a mean approximation scheme to
speed up the data hiding process.
The watermarks we hide in the image can be gray-level, bi-level, or
color images; text can also be embedded as a watermark. To test the
robustness of the system, we use Photoshop to sharpen, crop, and
alter the embedded image and check whether the extracted watermark is
still recognizable. Experimental results demonstrate that the
proposed system can resist these three kinds of tampering in general
cases.
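The two speed-ups mentioned above can be sketched generically: a cheap mean-based rejection test discards a codeword before any per-dimension work, and partial distortion search abandons a codeword as soon as its accumulated squared error exceeds the best distortion found so far. This is an illustrative sketch, not the authors' encoder, and the codebook in the usage lines is a made-up example.

```python
def nearest_codeword(vec, codebook):
    """Find the index of the nearest codeword using mean rejection + PDS."""
    k = len(vec)
    vmean = sum(vec) / k
    best_i, best_d = -1, float("inf")
    for i, cw in enumerate(codebook):
        cmean = sum(cw) / k
        # Mean rejection: ||v - c||^2 >= k * (mean(v) - mean(c))^2
        if k * (vmean - cmean) ** 2 >= best_d:
            continue
        d = 0.0
        for a, b in zip(vec, cw):        # partial distortion search
            d += (a - b) ** 2
            if d >= best_d:              # early abandon this codeword
                break
        else:
            best_i, best_d = i, d
    return best_i, best_d

codebook = [[0, 0, 0, 0], [4, 4, 4, 4], [8, 8, 8, 8]]
idx, dist = nearest_codeword([3, 4, 4, 5], codebook)
```

Both tests are lossless: the mean inequality follows from Cauchy-Schwarz, so the returned codeword is always the true nearest one.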
Abstract: The aim of software maintenance is to keep the
software system in accordance with advances in software and hardware
technology. One of the early tasks in software maintenance is to
extract information at a higher level of abstraction. In this paper,
we present the process of designing an information extraction tool
for software maintenance. The tool can extract basic information from
legacy programs, such as variables, base classes, derived classes,
objects of classes, and functions. The tool has two main parts: a
lexical analyzer module that reads the input file character by
character, and a searching module through which users can obtain the
basic information from existing programs. We implemented this tool
for a patterned subset of C++ as the input language.
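As a rough illustration of the kind of extraction such a tool performs, the toy sketch below pulls class names, base classes, and function names from a C++-like snippet using regular expressions. It is not the authors' lexical-analyzer design: a real tool scans character by character and handles far more of the grammar, while these patterns cover only simple declarations.

```python
import re

# Hypothetical patterns for a small subset of C++ declarations
CLASS_RE = re.compile(
    r"class\s+(\w+)\s*(?::\s*(?:public|private|protected)?\s*(\w+))?")
FUNC_RE = re.compile(r"\b(?:void|int|float|double|bool)\s+(\w+)\s*\(")

def extract_info(source):
    """Collect class names, base classes, and function names."""
    info = {"classes": [], "bases": [], "functions": []}
    for m in CLASS_RE.finditer(source):
        info["classes"].append(m.group(1))
        if m.group(2):                      # a base class was declared
            info["bases"].append(m.group(2))
    info["functions"] = FUNC_RE.findall(source)
    return info

code = """
class Shape { };
class Circle : public Shape {
    double area();
};
int main() { return 0; }
"""
info = extract_info(code)
```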
Abstract: Skin aging is a slow multifactorial process influenced
by both internal and external factors. Ultraviolet (UV) radiation,
diet, smoking, and personal habits are the most common environmental
factors that affect skin aging. Fat content and fibrous proteins such
as collagen and elastin are core internal structural components. The
direct influence of UV on elastin integrity and health is central to
skin aging over time. The deposition of abnormal elastic material is
a major marker of photo-aged skin. Searching for compounds that may
protect against cutaneous photo-damage is therefore highly valued.
Retinoids and alpha hydroxy acids have been endorsed by some
researchers as possible candidates for protecting against and
repairing UV-damaged skin. To better characterize the protective and
anti-aging effects of such agents, we evaluated the combined effects
of various dosages of lactic acid and retinol on elastin levels in
dermal fibroblasts exposed to UV. The UV-exposed cells showed a
significant reduction in elastin levels. A drug combination with a
higher concentration of lactic acid (30-35 mM) and a lower
concentration of retinol (10-15 mg/mL) worked better at maintaining
elastin concentration in UV-exposed cells. We assume this
preservation could result from increased tropoelastin gene expression
stimulated by retinol, whereas lactic acid probably repaired the
UV-irradiation damage by enhancing the amount and integrity of the
elastin fibers.
Abstract: This article deals with geographical conditions of
terrain and their effect on the movement of vehicles and on the speed
and safety of movement of people and vehicles. Finding optimal routes
off the road network is studied chiefly in the Army environment, but
the problem occurs in civilian contexts as well, primarily in crisis
situations or when providing assistance after natural disasters such
as floods, fires, and storms. These movements require route
optimization that accounts for the effects of geographical factors.
The most important factor is the terrain surface, which depends on
several geographical factors such as slope, soil conditions,
micro-relief, surface type, and meteorological conditions. Their
combined impact is expressed by a coefficient of deceleration, which
can be used to support the commander's decision-making. New
approaches and methods of terrain testing, mathematical computing,
mathematical statistics, and cartometric investigation are necessary
parts of this evaluation.
Abstract: Quantification of cardiac function is performed by
calculating blood volume and ejection fraction in routine clinical
practice. However, this work has been done by manual contouring,
which is labor-intensive and varies with the observer. In this paper,
an automatic left ventricle segmentation algorithm for cardiac
magnetic resonance images (MRI) is presented. Using knowledge of
cardiac MRI, a K-means clustering technique is applied to segment the
blood region on a coil-sensitivity-corrected image. Then, a graph
searching technique is used to correct segmentation errors caused by
coil distortion and noise. Finally, blood volume and ejection
fraction are calculated. The presented algorithm is tested on cardiac
MRI from 15 subjects and compared with manual contouring by experts,
showing outstanding performance.
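The K-means step can be illustrated on scalar intensities: bright blood-pool pixels and darker tissue separate into two clusters. This is a generic 1-D sketch with made-up intensity values, not the authors' coil-sensitivity-corrected pipeline.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Basic K-means on scalar intensities; returns centers and labels."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # Assign each value to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):          # recompute non-empty centers
                centers[j] = values[labels == j].mean()
    return centers, labels

# Toy pixel intensities: dark tissue (~10) vs. bright blood pool (~200)
intensities = np.array([10.0, 12, 11, 200, 210, 205, 15, 198])
centers, labels = kmeans_1d(intensities)
```

In the segmentation setting, the cluster with the higher center would be taken as the blood region.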
Abstract: In this paper, we propose a method for three-dimensional
(3-D) model indexing based on defining a new descriptor built from
spherical harmonics. The purpose of the method is to minimize the
processing time on the database of object models and the time needed
to search for objects similar to a query object.
Firstly, we define the new descriptor using a new division of the
3-D object within a sphere. Then we define a new distance which is
used in the search for similar objects in the database.
Abstract: An algorithm is a well-defined procedure that takes
some input values, processes them, and gives the desired output.
Sorting forms the basis of many other algorithms, such as searching,
pattern matching, and digital filters, and finds applications in
database systems, data statistics and processing, data
communications, and pattern matching. This paper introduces the
"Enhanced Bidirectional Selection" sort algorithm, which is
bidirectional and stable. It is said to be bidirectional because it
selects two values, the smallest from the front and the largest from
the rear, and assigns them to their appropriate locations, thus
reducing the number of passes to half the total number of elements
compared to selection sort.
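A minimal sketch of a bidirectional selection pass might look as follows: one scan locates both the minimum and the maximum, with a guard for the case where the maximum sits at the front position and is displaced by the first swap. This illustrates the idea described above rather than the authors' exact algorithm.

```python
def bidirectional_selection_sort(a):
    """Selection sort from both ends: each pass places the minimum at
    the front and the maximum at the rear, halving the pass count."""
    left, right = 0, len(a) - 1
    while left < right:
        lo, hi = left, left
        for i in range(left, right + 1):   # one scan finds both extremes
            if a[i] < a[lo]:
                lo = i
            if a[i] > a[hi]:
                hi = i
        a[left], a[lo] = a[lo], a[left]    # place minimum at the front
        if hi == left:                     # maximum was moved by that swap
            hi = lo
        a[right], a[hi] = a[hi], a[right]  # place maximum at the rear
        left += 1
        right -= 1
    return a
```

Note that, like classic selection sort, this swap-based sketch does not by itself guarantee stability; the stability claimed above would need the authors' specific placement scheme.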
Abstract: The objective of the Economic Dispatch (ED) problem
in electric power generation is to schedule the outputs of committed
generating units so as to meet the required load demand at minimum
operating cost while satisfying all unit and system equality and
inequality constraints. This paper presents a new method for solving
ED problems using Max-Min Ant System optimization. Historically,
traditional optimization techniques have been used, such as linear
and non-linear programming, but within the past decade the focus has
shifted to evolutionary algorithms, for example Genetic Algorithms,
Simulated Annealing, and, more recently, Ant Colony Optimization
(ACO). In this paper we introduce the Max-Min Ant System, a variant
of the Ant System that encourages local searching around the best
solution found in each iteration. To show its efficiency and
effectiveness, the proposed Max-Min Ant System is applied to sample
ED problems composed of 4 generators, and a comparison with
conventional genetic algorithms is presented.
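The Max-Min Ant System ingredients named above, pheromone trails clamped to [tau_min, tau_max] and deposits by the best solution only, can be sketched on a toy discretized dispatch problem. The generator levels, cost coefficients, penalty weight, and parameter values below are illustrative assumptions, not the paper's formulation.

```python
import random

def mmas_dispatch(levels, cost_fn, demand, n_ants=20, iters=200,
                  rho=0.1, tau_min=0.1, tau_max=5.0, seed=1):
    """Max-Min Ant System on a discretized ED problem: each ant picks one
    output level per generator; only the best solution deposits pheromone,
    and trails stay clamped to [tau_min, tau_max]."""
    rng = random.Random(seed)
    n_gen = len(levels)
    tau = [[tau_max] * len(levels[g]) for g in range(n_gen)]  # start at max

    def total_cost(sol):
        power = [levels[g][sol[g]] for g in range(n_gen)]
        penalty = 1000.0 * abs(sum(power) - demand)   # demand constraint
        return sum(cost_fn(g, p) for g, p in enumerate(power)) + penalty

    best_sol, best_cost = None, float("inf")
    for _ in range(iters):
        for _ant in range(n_ants):
            sol = [rng.choices(range(len(levels[g])), weights=tau[g])[0]
                   for g in range(n_gen)]
            cst = total_cost(sol)
            if cst < best_cost:
                best_sol, best_cost = sol, cst
        # Evaporation plus best-solution deposit, clamped to the bounds
        for g in range(n_gen):
            for j in range(len(tau[g])):
                tau[g][j] = max(tau_min, (1 - rho) * tau[g][j])
            jb = best_sol[g]
            tau[g][jb] = min(tau_max, tau[g][jb] + 1000.0 / (1.0 + best_cost))
    return best_sol, best_cost

# Toy 4-generator problem: discrete output levels, quadratic costs
levels = [[0, 25, 50, 75, 100]] * 4
cost_fn = lambda g, p: 0.01 * (g + 1) * p * p + p
sol, c = mmas_dispatch(levels, cost_fn, demand=200)
```

The lower bound tau_min keeps every choice selectable, which is what prevents the premature stagnation seen in the plain Ant System.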
Abstract: The objective is to study student satisfaction with learning English through an online learning system. The online learning system mainly consists of English lessons, exercises, tests, web boards, and supplementary lessons for language practice. The sample group consists of 80 Thai students studying English for Business Communication and majoring in Hotel and Lodging Management. The questionnaire data are analyzed by mean and standard deviation (S.D.). The results show that the highest average satisfaction scores on academic aspects are for the technological search tool in the e-learning system that supports students' learning (4.51), knowledge evaluation before and after learning and teaching (4.45), and the freedom to select projects according to the students' interests, with subject content including practice in real situations (4.45), respectively.
Abstract: This study deals with advanced numerical techniques
to detect tensile forces in cable-stayed structures. The proposed
method allows us not only to avoid being trapped in a local minimum
at the initial search stage but also to find final solutions with
better numerical efficiency. The validity of the technique is
numerically verified using a set of dynamic data obtained from a
finite element simulation of the cable model. The results indicate
that the proposed method is computationally efficient in
characterizing the tensile force variation in cable-stayed structures.
Abstract: Search is the most obvious application of information
retrieval. The variety of widely obtainable biomedical data is
enormous and expanding fast. This expansion makes existing techniques
insufficient for extracting the most interesting patterns from the
collection according to user requirements. Recent research
concentrates more on semantic-based searching than on traditional
term-based searches. Algorithms for semantic search are implemented
based on the relations that exist between the words of the documents.
Ontologies are used as domain knowledge for identifying semantic
relations as well as for structuring the data for effective
information retrieval. Annotation of data with ontology concepts is
one of the widespread practices for clustering documents. In this
paper, indexing based on concepts and annotations is proposed for
clustering biomedical documents. The fuzzy c-means (FCM) clustering
algorithm is used to cluster the documents. The performance of the
proposed methods is compared with traditional term-based clustering
on PubMed articles from five different disease communities. The
experimental results show that the proposed methods outperform
term-based fuzzy clustering.
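The fuzzy c-means updates, weighted-mean centers and inverse-distance memberships with fuzzifier m, can be sketched as follows on toy 2-D points standing in for document vectors; the data and parameter values are illustrative, not the paper's annotation-based features.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns soft memberships U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # each row sums to 1
    centers = None
    for _ in range(iters):
        W = U ** m                           # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)             # guard against division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)       # membership update
    return U, centers

# Two tight toy "document" clusters in 2-D feature space
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
U, centers = fuzzy_cmeans(X, c=2)
hard = U.argmax(axis=1)                      # hard assignment if needed
```

Unlike hard K-means, every document keeps a graded membership in every cluster, which suits documents annotated with overlapping ontology concepts.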
Abstract: Object detection using a Wavelet Neural Network (WNN) makes a major contribution to image processing analysis. An existing cluster-based algorithm for co-saliency object detection operates on multiple images, but its co-saliency detection results are not adequate for handling multi-scale image objects in a WNN. An existing Super Resolution (SR) scheme for landmark images identifies corresponding regions in the images and reduces the mismatch rate, but its structure-aware matching criterion does not attend to detecting multiple regions in SR images and fails to improve the object detection rate. To detect objects in high-resolution remote sensing images, a Tagged Grid Matching (TGM) technique is proposed in this paper. The TGM technique consists of three main components in the WNN: object determination, object searching, and object verification. Initially, object determination specifies the position and size of objects in the current image; specifying position and size using a hierarchical grid makes it easy to determine multiple objects. The second component, object searching, is carried out using cross-point searching: the cross-point of the objects is selected to speed up the searching process and reduce detection time. The final component performs object verification, detecting the dissimilarity of objects in the current frame; the verification process matches the search-result grid points with the stored grid points to easily detect the objects using the Gabor wavelet transform. The implementation of the TGM technique offers a significant improvement in multi-object detection rate, processing time, precision factor, and detection accuracy.
Abstract: In general, algorithms for finding continuous k-nearest neighbors have been researched in location-based services, which periodically monitor moving objects such as vehicles and mobile phones. Those studies assume an environment in which the number of query points is much smaller than the number of moving objects and the query points are fixed rather than moving. In gaming environments, this problem arises when computing the next movement while considering neighbors, as in flocking, crowd, and robot simulations. In this case, every moving object becomes a query point, so the number of query points equals the number of moving objects, and the query points are also moving. In this paper, we analyze how the existing algorithms designed for location-based services perform under gaming environments.
Abstract: This paper describes the identification of specific shapes within binary images using the morphological Hit-or-Miss Transform (HMT). The Hit-or-Miss transform is a general binary morphological operation that can be used to search for particular patterns of foreground and background pixels in an image. It is in fact a basic operation of binary morphology, since almost all other binary morphological operators are derived from it. The input of this method is a binary image and a structuring element (a template to be searched for in the binary image), while the output is another binary image. In this paper a modification of the Hit-or-Miss transform is proposed in which the accuracy of the algorithm is adjusted according to the similarity between the template and the sought pattern. The method has been implemented in the C language. The algorithm has been tested on several images, and the results show that this new method can be used for similar-shape detection.
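A minimal sketch of the exact (unmodified) Hit-or-Miss transform, written in Python rather than the paper's C: the structuring element uses 1 for required foreground, 0 for required background, and -1 for don't-care, and the usage example locates an isolated foreground pixel. The similarity-tolerant variant proposed in the paper would relax the exact-match test below.

```python
import numpy as np

def hit_or_miss(img, se):
    """Binary HMT: se entries 1 = foreground required, 0 = background
    required, -1 = don't care. Returns a 0/1 image of hit locations."""
    H, W = img.shape
    h, w = se.shape
    care = se != -1                          # positions that must match
    out = np.zeros_like(img)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            win = img[r:r + h, c:c + w]
            if np.array_equal(win[care], se[care]):
                out[r + h // 2, c + w // 2] = 1   # mark at window center
    return out

img = np.array([[0, 0, 0, 0, 0],
                [0, 1, 0, 0, 0],
                [0, 0, 0, 1, 1],
                [0, 0, 0, 1, 1],
                [0, 0, 0, 0, 0]])
se = np.array([[0, 0, 0],
               [0, 1, 0],
               [0, 0, 0]])                   # template: isolated pixel
hits = hit_or_miss(img, se)
```

Only the lone pixel matches; the 2x2 block fails the required-background positions of the template.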
Abstract: One of the main goals of a computer forensic analyst is to determine the cause and effect of the acquisition of digital evidence in order to obtain information relevant to the case being handled. To obtain fast and accurate results, this paper discusses an approach known as the Ontology Framework. This model uses a structured hierarchy of layers that creates connectivity between the variants and the search activities of an investigation so that computer forensic analysis activities can be carried out automatically. Two main layers are used, namely Analysis Tools and Operating System. By using the concept of ontology, the second layer is designed automatically to help the investigator perform the acquisition of digital evidence. The automation methodology of this research utilizes Forward Chaining, where the system performs a search over investigative steps, automatically structured in accordance with the rules of the ontology.
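The forward-chaining loop can be sketched generically: repeatedly fire every rule whose premises are all satisfied until no new facts are derived. The rule and fact names below (OS identification, tool selection, imaging, hash verification) are hypothetical placeholders, not the framework's actual ontology rules.

```python
def forward_chain(facts, rules):
    """Naive forward chaining: fire rules whose premises are satisfied
    until the fact set stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)        # derive a new fact
                changed = True
    return facts

# Toy rule base sequencing acquisition steps (names are illustrative)
rules = [
    (["os_identified"], "select_analysis_tool"),
    (["select_analysis_tool", "media_attached"], "acquire_image"),
    (["acquire_image"], "verify_hash"),
]
derived = forward_chain({"os_identified", "media_attached"}, rules)
```

Starting from the two initial facts, the chain derives tool selection, imaging, and hash verification in order, which mirrors how the framework would sequence investigative steps automatically.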