Abstract: In this study, we present a new and fast algorithm for lung segmentation in CTA images. This step is particularly important for lung vessel segmentation, detection of pulmonary emboli, nodule finding, and airway segmentation. The method is carried out in four steps. First, an optimal threshold is applied to the images. Second, the small subsegmental vessels located in the lung region are removed. Third, the lungs and the airway edges are identified and segmented. Finally, the airway is discarded, yielding the lung segmentation.
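The abstract does not specify which "optimal threshold" is used in the first step; a common choice for separating dark lung parenchyma from bright tissue in CT is Otsu's method. The sketch below is a hedged stand-in for that step, not necessarily the authors' exact scheme:

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level maximizing between-class variance.

    A minimal stand-in for an 'optimal threshold' step; the
    authors' exact thresholding scheme is not given in the abstract.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic data: dark "lung" voxels (~30) and bright "tissue" voxels (~200).
demo = np.concatenate([np.full(500, 30), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(demo)
mask = demo < t  # voxels below the threshold form the candidate lung region
```

On real CTA slices the resulting binary mask would then feed the vessel-removal and airway-separation steps described in the abstract.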
Abstract: Although Knowledge Sharing (KS) is widely recognized as important, there is little discussion of why people are willing to share knowledge on such platforms even when contributing brings them no immediate benefit. The aim of this study is to develop an integrative understanding of the factors that support or inhibit individuals' knowledge-sharing intentions in virtual communities, and to determine whether habit fosters people's willingness to participate. We apply Social Capital Theory (SCT) and add two dimensions to the discussion: member incentive and habitual domain (HD). Questionnaires were collected from individuals with knowledge-sharing experience in virtual communities, and the responses were analyzed using a survey and a Structural Equation Model (SEM). The results confirm that individuals are willing to share knowledge in virtual communities (1) if they value reciprocity and centrality, have longer tenure in their field, and enjoy helping, and (2) if they have the habit of sharing knowledge. This study offers developers of virtual communities insight into knowledge sharing in cyberspace.
Abstract: The many feasible alternatives and conflicting objectives make equipment selection in materials handling a complicated task. This paper presents a hybrid of Monte Carlo (MC) simulation and the Analytic Hierarchy Process (AHP) for evaluating and selecting the most appropriate Material Handling Equipment (MHE). The proposed model is built on the material handling equation to identify the main criteria and sub-criteria critical to MHE selection. The criteria describe the properties of the material to be moved, the characteristics of the move, and the means by which the materials will be moved. Combining MC simulation with the AHP is powerful because it allows decision makers to represent their preference judgments as random variables. This reduces the uncertainty of the single-point judgments of conventional AHP and provides more confidence in the results of the decision problem. A small pharmaceutical business is used as an example to illustrate the development and application of the proposed model.
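The core idea above, treating each pairwise AHP judgment as a random variable and aggregating priority weights over many draws, can be sketched as follows. The lognormal perturbation, the 10% spread, and the three example criteria are illustrative assumptions; the paper does not fix them:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector via the row geometric-mean approximation."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

def mc_ahp(base, n_runs=2000, spread=0.1, seed=0):
    """Monte Carlo AHP: each upper-triangular judgment a_ij is drawn as a
    lognormal perturbation of the stated value (an assumed distribution),
    and the reciprocal a_ji = 1/a_ij is preserved on every draw.
    Returns the mean weight per criterion over all draws."""
    rng = np.random.default_rng(seed)
    n = base.shape[0]
    acc = np.zeros(n)
    for _ in range(n_runs):
        m = np.ones((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                a = base[i, j] * rng.lognormal(0.0, spread)
                m[i, j], m[j, i] = a, 1.0 / a
        acc += ahp_weights(m)
    return acc / n_runs

# Hypothetical 3-criterion judgment matrix (e.g. load weight,
# fragility, move distance); the values are made up for illustration.
base = np.array([[1.0, 3.0, 5.0],
                 [1 / 3.0, 1.0, 2.0],
                 [1 / 5.0, 0.5, 1.0]])
w = mc_ahp(base)
```

The spread of weights across runs (not shown here) is what gives the decision maker the extra confidence the abstract refers to.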
Abstract: In this paper, a new methodology to automatically detect the optic disc (OD) in retinal images from patients at risk of Diabetic Retinopathy (DR) and Macular Edema (ME) is presented. The detection procedure comprises two independent methodologies. On one hand, a location methodology obtains a pixel that belongs to the OD using image contrast analysis and structure filtering techniques; on the other hand, a boundary segmentation methodology estimates a circular approximation of the OD boundary by applying mathematical morphology, edge detection techniques, and the Circular Hough Transform. The methodologies were tested on a set of 1200 images composed of 229 retinographies from patients affected by DR with risk of ME, 431 with DR and no risk of ME, and 540 images of healthy retinas. The location methodology achieved a 98.83% success rate, whereas the boundary segmentation methodology obtained a good circular approximation of the OD boundary in 94.58% of cases. The average computational time measured over the whole set was 1.67 seconds for OD location and 5.78 seconds for OD boundary segmentation.
Abstract: The primary aims of e-government applications are fast citizen service and the accomplishment of governmental functions. This paper discusses the need for and role of knowledge management in e-government development. It focuses on analyzing the advantages of using knowledge management together with existing IT technologies to maximize the efficiency of government functions. The proposed approach provides government services by using knowledge management as part of the e-government system.
Abstract: The novelty proposed in this study is twofold: a new color similarity metric based on the human visual system, and a new color indexing scheme based on a textual approach. The proposed similarity metric is grounded in the color perception of the human visual system, so the results returned by the indexing system can satisfy user expectations as far as possible. We developed a web application to collect users' judgments of the similarity between colors, and these judgments are used to estimate the metric proposed in this study. To index an image's colors, we use a text indexing engine, which eases the integration of visual features into a database of text documents. The textual signature is built by weighting the image's colors according to their occurrence in the image. Using a textual indexing engine provides a simple, fast, and robust way to index images. A typical use of the proposed system is in applications whose data are both visual and textual. To evaluate the proposed method, we chose a price comparison engine as a case study, collecting a series of commercial offers, each consisting of a textual description and an image representing a specific offer.
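The occurrence-weighted textual signature described above can be sketched like this. The token format (`c312` style) and the 2-bit quantization are hypothetical choices for illustration; the abstract does not specify them:

```python
import numpy as np
from collections import Counter

def color_signature(img, bits=2):
    """Build a textual color signature for an RGB image.

    Each pixel is quantized to `bits` bits per channel and rendered as a
    token such as 'c312'; the token's relative occurrence acts as the
    term weight a text engine would index.  This mirrors the
    occurrence-based weighting in the abstract; the token format itself
    is an assumption.
    """
    shift = 8 - bits
    q = img >> shift                       # quantize each channel
    tokens = ["c%d%d%d" % tuple(px) for px in q.reshape(-1, 3)]
    counts = Counter(tokens)
    total = len(tokens)
    # weight = relative occurrence of the color in the image
    return {tok: n / total for tok, n in counts.items()}

# A 2x2 image: three red pixels and one blue pixel.
img = np.array([[[255, 0, 0], [255, 0, 0]],
                [[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
sig = color_signature(img)
```

Feeding such weighted tokens to an ordinary text index is what lets visual and textual features live in the same document store.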
Abstract: The midpoint filter is quite effective at recovering images corrupted by short-tailed (uniform) noise. It performs poorly, however, in the presence of additive long-tailed (impulse) noise, and it does not preserve the edge structures of image signals. The median smoother discards outliers (impulses) effectively, but it fails to provide adequate smoothing for images corrupted with non-impulse noise. In this paper, two nonlinear image filtering techniques, New Filter I and New Filter II, are proposed based on a nonlinear high-pass filter algorithm. New Filter I is constructed from a midpoint filter, a high-pass filter, and a combiner; it suppresses uniform noise quite well. New Filter II is configured from an alpha-trimmed midpoint filter, a median smoother of window size 3x3, the high-pass filter, and the combiner; it is robust against impulse noise and attenuates uniform noise satisfactorily. Both filters are shown to exhibit good response at image boundaries (edges). The proposed filters are evaluated on a test image, and the results obtained are included.
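The two building blocks contrasted in the abstract, the midpoint filter and the 3x3 median smoother, can be sketched directly (this is the textbook form of each filter, not the authors' combined New Filter designs):

```python
import numpy as np

def _windows(img, k=3):
    """All k x k windows of a 2-D array (valid region only)."""
    h, w = img.shape
    out = np.empty((h - k + 1, w - k + 1, k * k))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            out[i, j] = img[i:i + k, j:j + k].ravel()
    return out

def midpoint_filter(img, k=3):
    """Midpoint filter: (min + max) / 2 over each window, effective
    against short-tailed (uniform) noise."""
    w = _windows(img, k)
    return (w.min(axis=2) + w.max(axis=2)) / 2.0

def median_filter(img, k=3):
    """Median smoother: discards outliers (impulses)."""
    return np.median(_windows(img, k), axis=2)

# Flat region of 10s with one impulse of 255 at the center.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0
med = median_filter(img)   # the impulse is discarded
mid = midpoint_filter(img) # the impulse is smeared into every window
```

The demo reproduces the abstract's point: the median output is flat at 10, while the midpoint output is pulled to (10 + 255) / 2 wherever the impulse falls in the window.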
Abstract: The prediction of meteorological parameters at a meteorological station is an interesting and open problem. The first-order linear dynamic model GM(1,1) is the main component of grey system theory. The grey model requires only a few previous data points to make a real-time forecast. In this paper, we treat the daily average ambient temperature as a time series and apply the grey model GM(1,1) to local (short-term) prediction of the temperature. In the same case study, we use a fuzzy predictive model for global prediction. We conclude with a comparison between the local and global prediction schemes.
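The GM(1,1) model mentioned above has a standard closed form: fit dx1/dt + a*x1 = b on the accumulated series x1 = cumsum(x0) by least squares, then extrapolate and difference back. A minimal sketch (the temperature values are made up for illustration):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey model forecast.

    Fits dx1/dt + a*x1 = b on the accumulated series x1 = cumsum(x0)
    by least squares, then extrapolates `steps` values ahead.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]
    def x1_hat(k):                            # k = 0-based time index
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

# Hypothetical daily average temperatures (deg C); illustrative only.
temps = [20.0, 20.8, 21.5, 22.3, 23.1]
pred = gm11_forecast(temps, steps=1)[0]
```

As the abstract notes, only a handful of recent observations are needed, which is what makes the model attractive for real-time short-term forecasting.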
Abstract: The impact of rain attenuation on wireless communication signals is predominant at the high frequencies used (above 10 GHz). Knowledge of attenuation statistics is very important for planning point-to-point microwave links operating in high-frequency bands. The statistics of attenuation can be described, for instance, by fade duration or level crossing rate. In our examination, we determine these statistics from one year of measured data for a given microwave link, and we attempt to transform the level crossing rate statistic into a fade duration statistic.
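The two statistics named above can be computed from a sampled attenuation trace as follows. This is an illustrative sketch on synthetic data, not the paper's measured link data:

```python
import numpy as np

def fade_statistics(att_db, threshold_db):
    """Level crossing rate inputs and fade durations for a sampled
    attenuation series.

    A fade is a maximal run of samples with attenuation above the
    threshold; the crossing count is the number of upward crossings.
    """
    above = np.asarray(att_db) > threshold_db
    # upward crossings: below -> above transitions
    up = np.flatnonzero(~above[:-1] & above[1:]) + 1
    durations = []
    run = 0
    for a in above:
        if a:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return len(up), durations

# Synthetic attenuation trace (dB), one sample per second.
trace = [1, 2, 12, 13, 11, 2, 1, 15, 16, 1]
n_cross, fades = fade_statistics(trace, threshold_db=10)
```

Dividing the crossing count by the observation time gives the level crossing rate at that threshold, and the fade-duration histogram follows from the `fades` list; relating the two is the transformation the abstract attempts.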
Abstract: In this paper the authors present the framework of a
system for assisting users through counseling on personal health, the
Personal Health Assistance Service Expert System (PHASES).
Personal health assistance systems need Personal Health Records
(PHR), which support wellness activities, improve the understanding
of personal health issues, enable access to data from providers of
health services, strengthen health promotion, and in the end improve
the health of the population. This is especially important in societies
where the health costs increase at a higher rate than the overall
economy. The most important elements of a healthy lifestyle are
related to food (such as balanced nutrition and diets), activities for
body fitness (such as walking, sports, fitness programs), and other
medical treatments (such as massage, prescriptions of drugs). The
PHASES framework uses an ontology of food, which includes
nutritional facts, an expert system keeping track of personal health
data that are matched with medical treatments, and a comprehensive
data transfer between patients and the system.
Abstract: Reducing the energy consumption of embedded systems requires careful memory management. It has been shown that Scratch-Pad Memories (SPMs) are small, low-cost, energy-efficient memories managed directly at the software level. This paper focuses on heuristic methods for SPM management. A method is efficient if the number of accesses to the SPM is as large as possible and if all available space (i.e., bits) is used. A Tabu Search (TS) approach to memory management is proposed, which is, to the best of our knowledge, a new alternative to the best known existing heuristic (BEH). Experiments on benchmarks show that the Tabu Search method is as efficient as BEH in terms of energy consumption, but BEH requires a sort that can be computationally expensive for large amounts of data. TS is easy to implement, and since no sorting is necessary, the corresponding sorting time is saved. Moreover, in a dynamic setting where the maximum capacity of the SPM is not known in advance, the TS heuristic will perform better than BEH.
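A Tabu Search over SPM placements can be sketched as below: flip one data object in or out of the SPM per move, keep each flipped object tabu for a few iterations, and track the best feasible placement. This is a generic illustration of the idea, not the authors' exact procedure, and the object sizes and access counts are made up:

```python
import random

def tabu_spm(objs, capacity, iters=200, tenure=5, seed=1):
    """Tabu Search for SPM allocation (sketch).

    objs: list of (size_bits, access_count).  Goal: maximize total
    accesses of the objects placed in the SPM under its capacity.
    """
    rng = random.Random(seed)
    n = len(objs)
    sol = [False] * n

    def value(s):
        size = sum(objs[i][0] for i in range(n) if s[i])
        if size > capacity:
            return -1                      # infeasible placement
        return sum(objs[i][1] for i in range(n) if s[i])

    best, best_val = sol[:], value(sol)
    tabu = {}
    for it in range(iters):
        cand = [i for i in range(n) if tabu.get(i, -1) < it]
        if not cand:
            continue
        scored = []
        for i in cand:                     # evaluate every non-tabu flip
            s = sol[:]
            s[i] = not s[i]
            scored.append((value(s), rng.random(), i, s))
        v, _, i, s = max(scored)           # best neighbour, random tiebreak
        sol = s
        tabu[i] = it + tenure              # forbid re-flipping for `tenure` moves
        if v > best_val:
            best, best_val = s[:], v
    return best, best_val

# Hypothetical data objects: (size in bits, number of accesses).
objs = [(4, 10), (3, 9), (2, 4), (5, 6)]
placement, accesses = tabu_spm(objs, capacity=7)
```

No sorting of the objects is needed at any point, which is the practical advantage over BEH that the abstract highlights.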
Abstract: The image segmentation method described in this paper has been developed as a pre-processing stage for methodologies and tools for content-based video/image indexing and retrieval. The method solves the problem of extracting whole objects from the background, producing images of single complete objects from videos or photos. The extracted images are used to calculate the object visual features needed by both the indexing and retrieval processes.
The segmentation algorithm is based on the cooperation of an optical flow evaluation method, edge detection, and region growing procedures. The optical flow estimator belongs to the class of differential methods. It can detect motions ranging from a fraction of a pixel to a few pixels per frame, achieves good results in the presence of noise without a filtering pre-processing stage, and includes a specialized model for moving object detection.
The first task of the presented method exploits cues from motion analysis to detect moving areas. Objects and background are then refined using edge detection and seeded region growing procedures, respectively. All tasks are performed iteratively until objects and background are completely resolved.
The method has been applied to a variety of indoor and outdoor scenes in which objects of different types and shapes appear against variously textured backgrounds.
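The seeded region growing step mentioned above admits a compact sketch: starting from a seed pixel, absorb 4-connected neighbours whose intensity is close to the seed's. This is only the growing primitive; the paper couples it with optical-flow and edge cues:

```python
from collections import deque

def region_grow(img, seed, tol):
    """Seeded region growing: from `seed`, absorb 4-connected
    neighbours whose intensity differs from the seed value by
    at most `tol`.  Returns the set of (row, col) pixels grown."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = {seed}
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                q.append((nr, nc))
    return region

# A bright 2x2 object on a dark background; the seed sits on the object.
img = [[10, 10, 10, 10],
       [10, 90, 95, 10],
       [10, 92, 91, 10],
       [10, 10, 10, 10]]
obj = region_grow(img, seed=(1, 1), tol=20)
```

In the full method the seeds would come from the motion-analysis stage rather than being chosen by hand.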
Abstract: Ever since the industrial revolution began, our ecosystem has changed, and the negatives outweigh the positives. Industrial waste is usually released into bodies of water such as rivers or the sea. Tempeh waste is one example of waste that carries many hazardous and unwanted substances that affect the surrounding environment. Tempeh is a popular fermented food in Asia that is rich in nutrients and active substances. Tempeh liquid waste in particular can cause air pollution, and if it penetrates the soil it will contaminate the groundwater, making the water unfit for consumption. Moreover, bacteria thrive in the polluted water and are often responsible for many kinds of disease. The treatments used for this chemical waste are biological, such as constructed wetlands and activated sludge. These treatments can reduce both physical and chemical parameters, such as temperature, TSS, pH, BOD, COD, NH3-N, NO3-N, and PO4-P, and are applied before the waste is released into the water. The result is a comparison between constructed wetland and activated sludge treatment, along with a determination of which method is better suited to reducing the physical and chemical substances in the waste.
Abstract: Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a backward program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. Existing algorithms for computing program slices are designed to compute a slice at a single program point: the program, or the model that represents it, is traversed completely or partially once, and to compute more than one slice the same algorithm is applied at every point of interest, so the same program or program representation is traversed several times.
In this paper, an algorithm is introduced that computes all forward static slices of a program by traversing its program representation graph once. The introduced algorithm is therefore useful for software engineering applications that require computing program slices at different points of a program. The program representation graph used in this paper is the Program Dependence Graph (PDG).
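The conventional single-point approach the paper improves on is a reachability traversal over the PDG. A minimal sketch, with a hypothetical four-statement program as the dependence graph:

```python
def backward_slice(pdg, node):
    """Backward slice at `node`: all nodes that reach it along
    dependence edges.  `pdg` maps each node to the nodes it depends on.
    This is the standard per-point traversal; the paper's contribution
    is computing all slices in a single pass instead."""
    seen = {node}
    stack = [node]
    while stack:
        n = stack.pop()
        for dep in pdg.get(n, ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical PDG: statement -> statements it depends on, for
#   s1: x = 1;  s2: y = 2;  s3: z = x + y;  s4: print(z)
pdg = {"s3": {"s1", "s2"}, "s4": {"s3"}}
slice_s4 = backward_slice(pdg, "s4")
```

Calling this once per point of interest re-traverses the same graph repeatedly, which is exactly the redundancy the single-traversal algorithm removes.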
Abstract: In this study, an inland metropolitan area in Korea, Gwangju, was selected to assess the amplification potential of earthquake motion and to provide information for regional seismic countermeasures. A geographic information system (GIS)-based expert system was implemented to reliably predict the spatial geotechnical layers over the entire region of interest by building a geo-knowledge database. The database consists of existing boring data gathered from prior geotechnical projects and surface geo-knowledge data acquired from site visits. For practical application of the geo-knowledge database to estimating the earthquake hazard potential related to site amplification effects in the study area, seismic zoning maps of geotechnical parameters, such as bedrock depth and site period, were created within the GIS framework. In addition, seismic zonation of site classification was performed to determine the site amplification coefficients for seismic design at any site in the study area.
Keywords: Earthquake hazard, geo-knowledge, geographic information system, seismic zonation, site period.
Abstract: Problem-based learning (PBL) is a student-centered approach that has been adopted as a delivery method by a number of higher education institutions in many parts of the world. This paper presents a creative thinking approach to implementing problem-based learning in Mechanics of Structure within a Malaysian polytechnic environment. In the learning process, students learn how to analyze a given problem together and to put shared classroom knowledge into practice. Further, through the course's emphasis on problem-based learning, students acquire creative thinking and professional skills as they tackle complex, interdisciplinary, real-situation problems. Once creative ideas are generated, additional techniques help nurture tender ideas into productive concepts or solutions. The combination of creative skills and technical abilities will enable students to "hit the ground running" and be productive in industry when they graduate.
Abstract: Three-dimensional geometric models have been used to present architectural and engineering works, showing their final configuration. When the clarification of a detail or the constitution of a construction step is needed, these models are not appropriate: they do not allow observation of the construction progress of a building. Models that can dynamically present changes in the building geometry are a good support for project development. Techniques of geometric modeling and virtual reality were used to obtain models that visually simulate construction activity. The applications illustrate the construction of a cavity wall and of a bridge. These models allow visualization of the physical progression of the work following a planned construction sequence, observation of details of the form of every component of the works, and study of the type and method of operation of the equipment applied in the construction. The models have shown distinct advantages as educational aids in first-degree courses in Civil Engineering. The use of virtual reality techniques in the development of educational applications brings new perspectives to the teaching of subjects related to civil construction.
Abstract: In this study, we developed an algorithm for detecting seam cracks in steel plates. Seam cracks are generated in the edge region of a steel plate. We used a Gabor filter and an adaptive double-threshold method to detect them, and features based on the shape of seam cracks to reduce the number of pseudo-defects. To evaluate the performance of the proposed algorithm, we tested 989 images with seam cracks and 9470 defect-free images. Experimental results show that the proposed algorithm is suitable for detecting seam cracks, although its true positive rate should be improved further.
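A Gabor filter like the one used above is a Gaussian envelope multiplied by a sinusoidal carrier; its real-part kernel can be generated as follows. The parameter values here are illustrative, as the abstract does not report the settings used:

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine
    carrier.  theta sets the orientation, lam the carrier wavelength,
    gamma the envelope's aspect ratio.  Parameter values are assumptions
    for illustration, not the paper's settings."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam)

k = gabor_kernel()
# Convolving an edge region with kernels at a few orientations would
# highlight the elongated seam-crack texture for the threshold stage.
self_response = (k * k).sum()  # a filter responds strongly to its own pattern
```

In a crack detector, a bank of such kernels at several orientations would be convolved with the edge region of the plate before the adaptive double-threshold step.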
Abstract: The aim of this paper is to explore the prospects of a new approach to mobile phone banking in Libya. The study evaluates customer knowledge of commercial mobile banking in Libya. To examine the relationship between age, occupation, and intention to use mobile banking for commercial purposes, a survey was conducted to gather information from one hundred Libyan bank clients. The results indicate that Libyan customers have accepted the new technology and are ready to use it. No significant joint relationship between age and occupation was found with the intention to use mobile banking in Libya; on the other hand, customers' knowledge about mobile banking has a stronger relationship with that intention. This study has implications for demographic research and the consumer behaviour discipline. It also has practical implications for banks and managers in Libya, as it will assist them in better understanding Libyan consumers and their activities when developing market strategies and new services.
Abstract: This paper presents a framework for organizational knowledge management that seeks to deploy a standardized structure for the integrated management of knowledge: a common language based on domains, processes, and global indicators inspired by the COBIT 5 framework (ISACA, 2012), which supports the integration of three technologies: enterprise information architecture (EIA), business process modeling (BPM), and service-oriented architecture (SOA). The Gomak Framework is a management platform that seeks to integrate the information technology infrastructure, the application structure, the information infrastructure, and the business logic and business model, in order to support a sound strategy of organizational knowledge management following a process-based approach and concurrent engineering. Concurrent engineering (CE) is a systematic approach to integrated product development that responds to customer expectations by involving all perspectives in parallel from the beginning of the product life cycle (European Space Agency, 2000).