Abstract: Steganography is the process of hiding one file inside another such that others can neither identify the meaning of the embedded object nor even recognize its existence. Current trends favor using digital image files as the cover file to hide another digital file containing the secret message or information. One of the most common implementation methods is Least Significant Bit (LSB) insertion, in which the least significant bit of every byte is altered to form the bit string representing the embedded file. Altering the LSB causes only minor changes in color and is thus usually not noticeable to the human eye. While this technique works well for 24-bit color image files, steganography has not been as successful with 8-bit color image files, due to limitations in color variation and the use of a colormap. This paper presents the results of research investigating the combination of image compression and steganography. The technique developed starts with a 24-bit color bitmap file, then compresses the file by organizing and optimizing an 8-bit colormap. After compression, a text message is hidden in the final, compressed image. Results indicate that the final technique has the potential to be useful in the steganographic world.
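As a rough illustration of plain LSB insertion (not the paper's compression-aware, colormap-based variant), the following sketch hides the bits of a message in the least significant bits of a raw byte sequence such as 24-bit RGB samples; the function names are hypothetical:

```python
def embed_lsb(pixels, message):
    """Hide message bytes in the LSBs of a byte sequence.

    pixels: list of ints 0-255 (e.g. raw 24-bit RGB samples).
    Requires len(pixels) >= 8 * len(message).
    """
    # Flatten the message into bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the LSB
    return out

def extract_lsb(pixels, n_bytes):
    """Recover n_bytes hidden bytes from the LSBs."""
    message = []
    for b in range(n_bytes):
        value = 0
        for i in range(8):
            value = (value << 1) | (pixels[b * 8 + i] & 1)
        message.append(value)
    return bytes(message)
```

Since each cover byte changes by at most 1, the color shift per channel is at most one intensity level, which is the "minor change" the abstract refers to.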
Abstract: This paper describes the analysis and design of a business directory for micro, small, and medium enterprises (SMEs). If implemented, the business directory will facilitate and optimize SMEs' access to suppliers and to marketing. The directory will be equipped with geocoding capability, so the location of each SME can be easily viewed on a map. The map is constructed using the functionality of the web-based Google Maps API, and the information is presented in multimedia form, making it more interesting and interactive. The methods used to achieve this goal are observation, interviews, and the modeling and classification of a business directory for SMEs.
Abstract: A structural interpretation of a part of eastern Potwar (Missa Keswal) has been carried out with the available seismological, seismic, and well data. The seismological data contain both source parameters and fault plane solution (FPS) parameters, while the seismic data comprise ten seismic lines that were re-interpreted using well data. The structural interpretation depicts two broad fault sets, namely thrust and back-thrust faults. Together, these faults give rise to pop-up structures in the study area and are also responsible for many structural traps and much of the seismicity. The seismic interpretation includes time and depth contour maps of the Chorgali Formation, while the seismological interpretation includes focal mechanism solutions (FMS); depth, frequency, and magnitude bar graphs; and a renewed seismotectonic map. The focal mechanism solutions surrounding the study area are correlated with different geological and structural maps of the area to determine the nature of the subsurface faults. The results of the structural interpretation from the seismic and seismological data show good agreement. It is hoped that the present work will help in better understanding the variations in the subsurface structure and can serve as a useful tool for earthquake prediction, oil field planning, and reservoir monitoring.
Abstract: This paper demonstrates a model of an e-Learning system based on current learning theory and distance education practice. The relationships in the model are designed to be simple and functional and do not necessarily represent any particular e-Learning environment. It is meant to be a generic e-Learning system model with implications for the instructional design of any distance education course. It allows online instructors to move away from the discrepancy between courses and the body of knowledge. The interrelationships of the four primary sectors of the e-Learning system are presented in this paper. This integrated model includes (1) pedagogy, (2) technology, (3) teaching, and (4) learning. The interactions within each of these sectors are depicted by a system loop map.
Abstract: This paper proposes a new procedure for analyzing means-end chain data in marketing research. Most commonly, the collected data are summarized in a Hierarchical Value Map (HVM) illustrating the main attribute-consequence-value linkages. This paper argues that a traditionally constructed HVM may give an erroneous impression of the results of a means-end study. To justify this argument, an alternative procedure is presented to (1) determine the dominant attribute-consequence-value linkages and (2) construct the HVM in a precise manner. The approach contributes to means-end analysis by allowing marketers to address a set of marketing problems, such as advertising strategy.
Abstract: Continuous measurements and multivariate methods are applied to study the effects of energy consumption on indoor air quality (IAQ) in a Finnish one-family house. The data used in this study were collected continuously in a house in Kuopio, Eastern Finland, over a fourteen-month period. The consumption parameters measured were district heat, electricity, and water consumption. The indoor parameters gathered were temperature, relative humidity (RH), the concentrations of carbon dioxide (CO2) and carbon monoxide (CO), and differential air pressure. The self-organizing map (SOM) and Sammon's mapping were applied to resolve the effects of energy consumption on indoor air quality. The SOM proved a suitable method, having the property of summarizing multivariable dependencies into an easily observable two-dimensional map; in addition, Sammon's mapping was used to cluster the pre-processed data to find similarities among the variables, expressing distances and groups in the data. The methods distinguished seven different clusters characterizing indoor air quality and energy efficiency in the study house. The results indicate that the cost, in euros, of heating and electricity varies with the differential pressure, carbon dioxide concentration, temperature, and season.
Abstract: In this paper, a one-dimensional Self-Organizing Map (SOM) algorithm for feature selection is presented. The algorithm is based on a first classification of the input dataset in a similarity space. From this classification, a set of positive and negative features is computed for each class, and this set of features is the result of the procedure. The procedure is evaluated on an in-house dataset from a Knowledge Discovery from Text (KDT) application and on a set of publicly available datasets used in international feature selection competitions; these datasets come from KDT, drug discovery, and other applications. The correct classifications available for the training and validation datasets are used to optimize the parameters for positive and negative feature extraction. The process becomes feasible for large, sparse datasets, such as those obtained in KDT applications, by using compression techniques to store the similarity matrix and speed-up techniques for the Kohonen algorithm that exploit the sparsity of the input matrix. These improvements, combined with grid computing, make the methodology applicable to massive datasets.
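As a minimal illustration of the Kohonen training loop behind such a one-dimensional SOM (not the authors' compressed, sparsity-aware implementation), the following sketch trains a 1-D map on scalar data; the function name, decay schedules, and defaults are all assumptions:

```python
import math
import random

def train_som_1d(data, n_units, epochs=50, lr0=0.5, radius0=None, seed=0):
    """Train a one-dimensional Kohonen map on scalar data.

    Returns the prototype weights; each input can later be assigned to
    its best-matching unit (BMU), giving a first-stage classification.
    """
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    weights = [rng.uniform(lo, hi) for _ in range(n_units)]
    radius0 = radius0 or n_units / 2.0
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1 - frac)                     # learning rate decays linearly
        radius = max(radius0 * (1 - frac), 0.5)   # neighbourhood shrinks
        for x in data:
            # BMU = unit whose weight is closest to the input.
            bmu = min(range(n_units), key=lambda j: abs(weights[j] - x))
            for j in range(n_units):
                # Gaussian neighbourhood pulls nearby units toward x too.
                h = math.exp(-((j - bmu) ** 2) / (2 * radius ** 2))
                weights[j] += lr * h * (x - weights[j])
    return weights
```

Because every update is a convex step toward a data point, the prototypes stay within the range of the data and settle near its dense regions.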
Abstract: This paper presents a new technique for the detection of human faces within color images. The approach relies on image segmentation based on skin color, features extracted from the two-dimensional discrete cosine transform (DCT), and self-organizing maps (SOM). After candidate skin regions are extracted, feature vectors are constructed using DCT coefficients computed from those regions. A supervised SOM training session is used to cluster feature vectors into groups and to assign "face" or "non-face" labels to those clusters. Evaluation was performed on a new database of 286 images containing 1027 faces. After training, the detection technique achieved a detection rate of 77.94% in subsequent tests, with a false positive rate of 5.14%. To our knowledge, the proposed technique is the first to combine DCT-based feature extraction with a SOM for detecting human faces in color images. It is also one of the few attempts to combine a feature-invariant approach, such as color-based skin segmentation, with appearance-based face detection. The main advantage of the new technique is its low computational requirements, in terms of both processing speed and memory utilization.
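To make the feature-extraction step concrete, the following is a naive reference implementation of the orthonormal 2-D DCT-II; in a scheme like the one described, the low-frequency coefficients in the top-left corner of the output would form a compact feature vector. This is an illustrative O(N^4) sketch, not the authors' optimized transform:

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block (naive reference)."""
    n = len(block)

    def alpha(k):
        # Normalization factors making the transform orthonormal.
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out
```

For a constant block, all energy lands in the DC coefficient `out[0][0]`, which is why smooth skin regions are well summarized by a few low-frequency terms.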
Abstract: The applications of VLSI circuits in high-performance computing, telecommunications, and consumer electronics have been expanding progressively, and at a very rapid pace. This paper describes a new model for partitioning a circuit using DBSCAN and a fuzzy ARTMAP neural network. The first step is feature extraction, for which we make use of the DBSCAN algorithm. The second step is classification, performed by a fuzzy ARTMAP neural network. The performance of both approaches is compared using the MCNC standard cell placement benchmark netlists. Analysis of the experimental results shows that the combined fuzzy ARTMAP and DBSCAN model outperforms fuzzy ARTMAP alone in recognizing sub-circuits with the lowest number of interconnections between them. The recognition rate using fuzzy ARTMAP with DBSCAN is 97.7%, higher than that of fuzzy ARTMAP alone.
Abstract: A new data fusion method called the joint probability density matrix (JPDM) is proposed, which can associate and fuse measurements from spatially distributed heterogeneous sensors to identify the real target in a surveillance region. Using a probabilistic grid representation, we numerically combine the uncertainty regions of all the measurements in a general framework. The NP-hard multisensor data fusion problem is thereby converted into a peak-picking problem on the grid map. Unlike most existing data fusion methods, the JPDM method does not need association processing and does not lead to combinatorial explosion. Its convergence to the CRLB with a diminishing grid size is proved. Simulation results are presented to illustrate the effectiveness of the proposed technique.
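The grid-based fusion idea can be sketched as a cellwise product of per-sensor likelihood grids followed by peak picking. This is only a toy sketch in the spirit of the described approach; the normalization and peak-picking details here follow common practice and are not taken from the paper:

```python
def fuse_grids(likelihoods):
    """Fuse per-sensor likelihood grids over a common surveillance grid.

    likelihoods: list of equal-sized 2-D grids (lists of lists of floats),
    one per sensor.  Returns the normalized joint grid and the peak cell.
    """
    rows, cols = len(likelihoods[0]), len(likelihoods[0][0])
    joint = [[1.0] * cols for _ in range(rows)]
    for grid in likelihoods:
        for r in range(rows):
            for c in range(cols):
                joint[r][c] *= grid[r][c]  # cellwise product of sensor maps
    total = sum(sum(row) for row in joint) or 1.0
    joint = [[v / total for v in row] for row in joint]  # normalize to sum 1
    # Peak picking: the cell with the highest joint probability.
    peak = max(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: joint[rc[0]][rc[1]])
    return joint, peak
```

Because the combination is a per-cell product, no explicit measurement-to-measurement association is enumerated, which is the property the abstract highlights.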
Abstract: This paper proposes a novel architecture for developing decision support systems. Unlike conventional decision support systems, the proposed architecture endeavors to reveal the decision-making process, so that human subjectivity can be incorporated into a computerized system while the system's capability to process information objectively is preserved. A number of techniques used in developing the decision support system are elaborated to make the decision-making process transparent. These include procedures for high-dimensional data visualization, pattern classification, prediction, and evolutionary computational search. An artificial data set is first employed to compare the proposed approach with other methods. A simulated handwritten data set and a real data set on liver disease diagnosis are then employed to evaluate its efficacy. The results are analyzed and discussed, and the potential of the proposed architecture as a useful decision support system is demonstrated.
Abstract: Real-time hand tracking is a challenging task in many computer vision applications, such as gesture recognition. This paper proposes a robust method for hand tracking in a complex environment using mean-shift analysis and a Kalman filter in conjunction with a 3D depth map. The depth information, obtained by passive stereo measurement based on cross-correlation and the known calibration data of the cameras, solves the overlap problem between the hands and the face. Mean-shift analysis uses the gradient of the Bhattacharyya coefficient as a similarity function to derive the candidate region most similar to a given hand target model. A Kalman filter is then used to estimate the position of the hand target. The hand tracking results, tested on various video sequences, are robust to changes in shape as well as to partial occlusion.
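The similarity measure at the core of such mean-shift tracking is the Bhattacharyya coefficient between the target-model histogram and the candidate histogram. A minimal sketch (the function name is an assumption, and the tracker itself is not shown):

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms.

    Histograms are normalized internally; the result lies in [0, 1],
    with 1.0 for identical distributions and 0.0 for disjoint ones.
    """
    sp, sq = sum(p), sum(q)
    return sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))
```

In a mean-shift tracker, the candidate window is shifted iteratively in the direction that increases this coefficient, which is what "uses the gradient of the Bhattacharyya coefficient" refers to.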
Abstract: Climate change, a topic of high knowledge complexity, has become an essential issue because of its significant impact on human existence. Specific national policies, some addressing educational aspects, have therefore been published to confront this imperative problem. Accordingly, this study aims to analyze and integrate the relationship between climate change and environmental education and to apply the concept-map perspective to represent the knowledge contents and structures of climate change; in this way, the knowledge contents of climate change can be represented more comprehensively and used as a tool for environmental education. The method adopted for this study is a knowledge conversion model providing a platform for the participating experts and teachers to cooperate and combine each participant's standpoint into a complete knowledge framework, which is the foundation for structuring the concept map. The result of this research contains the important concepts, the precise propositions, and the entire concept map representing the robust concepts of climate change.
Abstract: Modeling product configurations requires large amounts of knowledge about the technical and marketing restrictions on a product. Previous attempts to automate product configuration concentrated on representing and managing this knowledge for specific domains in fixed, isolated computing environments. Since the knowledge about product configurations is subject to continuous change and hard to express, these attempts often failed to efficiently manage and exchange the knowledge in collaborative product development. In this paper, the XML Topic Map (XTM) is introduced to represent and exchange product configuration knowledge in collaborative product development. A product configuration model based on XTM, along with its merging and inference facilities, enables configuration engineers in collaborative product development to manage and exchange their knowledge efficiently. A prototype implementation is also presented to demonstrate that the proposed model can be applied to engineering information systems to exchange product configuration knowledge.
Abstract: The evaporator is an important and widely used heat exchanger in the air conditioning and refrigeration industries. Different methods have been used by investigators to increase the heat transfer rates in evaporators. One passive technique for enhancing the heat transfer coefficient is the application of microfin tubes. The mechanism of heat transfer augmentation in microfin tubes depends on the flow regime of the two-phase flow, and many investigations of flow patterns for in-tube evaporation have therefore been reported in the literature. The gravitational force, the surface tension, and the vapor-liquid interfacial shear stress are known as the three dominant factors controlling the vapor and liquid distribution inside the tube. A review of the existing literature reveals that previous investigations were concerned with the two-phase flow pattern for flow boiling in horizontal tubes [12], [9]. The objective of the present investigation is therefore to obtain information about the two-phase flow patterns for evaporation of R-134a inside horizontal smooth and microfin tubes. Heat transfer during flow boiling of R-134a inside horizontal microfin and smooth tubes has also been investigated experimentally. The heat transfer coefficients for annular flow in the smooth tube are shown to agree well with Gungor and Winterton's correlation [4]. All the flow patterns observed in the tests can be divided into three dominant regimes: stratified-wavy flow, wavy-annular flow, and annular flow. The experimental data are plotted in two kinds of flow maps: a vapor Weber number versus liquid Weber number map, and a mass flux versus vapor quality map. The transition from wavy-annular flow to annular or stratified-wavy flow is identified in these flow maps.
Abstract: Through the 1980s, management accounting researchers described the increasing irrelevance of traditional control and performance measurement systems. The Balanced Scorecard (BSC) is a critical business tool for many organizations; it is a performance measurement system which translates mission and strategy into objectives. The strategy map approach is a development of the BSC in which certain necessary causal relations must be established. To recognize these relations, experts usually rely on experience; it is also possible to use regression for the same purpose. Structural Equation Modeling (SEM), one of the most powerful methods of multivariate data analysis, obtains more appropriate results than traditional methods such as regression. In the present paper, we propose SEM for the first time to identify the relations between the objectives in the strategy map, together with a test to measure the importance of those relations. In SEM, factor analysis and hypothesis testing are performed within the same analysis, and SEM is known to support analysis and reporting better than other techniques. Our approach provides a framework which permits experts to design the strategy map by applying a comprehensive and scientific method together with their experience, making this scheme more reliable than previously established methods.
Abstract: In this paper, a model for an information retrieval system is proposed which takes into account that knowledge about documents and the information needs of users are dynamic. Two methods are combined, one qualitative or symbolic and the other quantitative or numeric, which are deemed suitable for many clustering contexts, data analysis, concept exploration, and knowledge discovery. Both may be classified as inductive learning techniques. In this model, they are introduced to build "long-term" knowledge about past queries and about the concepts in a collection of documents. This "long-term" knowledge can guide and assist the user in formulating an initial query and can be exploited in the process of retrieving relevant information. The different kinds of knowledge are organized into different points of view, which may be considered an enrichment of the exploration level that is coherent with the concept of document/query structure.
Abstract: This paper presents a robust method for detecting obstacles in stereo images using a shadow removal technique and color information. Stereo-vision-based obstacle detection aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, shadows are removed using color information in HSI space. For matching, we use a Normalized Cross-Correlation (NCC) function with a 5 × 5 window: we prepare an empty matching table τ and grow disparity components by drawing a seed s from a seed set S, computed using the Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2], [17]. The proposed fast stereo matching algorithm visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. An obstacle identified in phase one that appears in the disparity map of phase two then enters the third phase, depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
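The NCC score used for such window matching can be sketched as follows; this is a generic reference implementation over flattened grayscale patches (the function name is an assumption), not the paper's seed-growing matcher:

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized gray patches.

    Patches are flattened lists of intensities (e.g. a 5x5 window as 25
    values).  Returns a score in [-1, 1], with 1.0 for a perfect match.
    """
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    # Covariance of the two mean-centered windows.
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    da = math.sqrt(sum((a - ma) ** 2 for a in patch_a))
    db = math.sqrt(sum((b - mb) ** 2 for b in patch_b))
    return num / (da * db) if da and db else 0.0
```

Because the windows are mean-centered and scale-normalized, the score is invariant to affine brightness changes between the left and right images, which is why NCC is a common choice for stereo matching costs.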
Abstract: Previous algorithms for generating and mapping 3D model textures from multi-view images have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They may also suffer from problems caused by occluded areas, such as the inner parts of the thighs. In this paper, we propose a texture mapping technique for 3D models that uses multi-view images on the GPU. We perform texture mapping per pixel directly in the GPU fragment shader, without generating a texture map, and we resolve the occluded areas using the 3D model's depth information. Our method requires more computation on the GPU than previous work, but it runs in real time and the previously mentioned problems do not occur.
Abstract: This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extracting suitable features, we apply matching constraints such as the epipolar line, the disparity limit, ordering, and a limit on the directional derivative of disparity. A coarse-to-fine multiresolution strategy is also used to decrease the search space and thereby increase accuracy and processing speed. The proposed method links the detected feature points into chains and compares some of the feature points from different chains to increase the matching speed. We also employ color stereo matching to increase the accuracy of the algorithm. After feature matching, dynamic programming is used to obtain the dense disparity map. The method differs from classical DP approaches in stereo vision in that it employs the sparse disparity map obtained from the feature-based matching stage: the DP is performed along each scan line, between any two matched feature points on that line. Our algorithm is thus truly an optimization method, and it offers a good trade-off between accuracy and computational efficiency. In our experiments, the proposed algorithm increases the accuracy from 20 to 70% and reduces the running time by almost 70%.