Abstract: The rapid development of multimedia and the Internet
allows for wide distribution of digital media data. It has become
much easier to edit, modify and duplicate digital information.
Digital documents are also easy to copy and distribute, and are
therefore exposed to many threats. Security and privacy have become
major issues with the large flood of information and the development
of digital formats, and appropriate protection has become necessary
because of the significance, accuracy and sensitivity of the
information. Protection systems can be classified into information
hiding, information encryption, and combinations of hiding and
encryption that increase information security. The strength of
information hiding lies in the absence of standard algorithms for
hiding secret messages and in the randomness of hiding methods, such
as combining several media (covers) with different methods to pass a
secret message; in addition, there are no formal methods for
discovering hidden data. For these reasons, the task of this
research is difficult. In this paper, a new information hiding
system is presented. The proposed system aims to hide information (a
data file) in an executable (EXE) file and to detect the hidden
file; we present the implementation of a steganography system that
embeds information in executable files, which have been
investigated. The system tries to make the result independent of the
size of the cover file and undetectable by anti-virus software. The
system includes two main functions: the first is hiding the
information in a Portable Executable (PE) file through four
processes (specifying the cover file, specifying the information
file, encrypting the information, and hiding the information), and
the second is extracting the hidden information through three
processes (specifying the stego file, extracting the information,
and decrypting the information). The system achieves its main goals:
the size of the cover file is independent of the size of the hidden
information, and the result file does not conflict with anti-virus
software.
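The hide/extract pipeline described above can be sketched as an append-based scheme, a minimal illustration rather than the paper's actual algorithm: the payload is encrypted and appended after the PE image together with a marker and a length field, since loaders ignore bytes past the declared sections. The `MAGIC` marker, the XOR stream cipher standing in for the unspecified encryption step, and all names are assumptions for illustration.

```python
import struct

MAGIC = b"STEG"  # illustrative marker, not from the paper

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR stream cipher stands in for the unspecified encryption step
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def hide(cover: bytes, secret: bytes, key: bytes) -> bytes:
    # Append marker + length + encrypted payload after the PE image;
    # the executable still runs because appended bytes are never loaded.
    payload = xor_crypt(secret, key)
    return cover + MAGIC + struct.pack("<I", len(payload)) + payload

def extract(stego: bytes, key: bytes) -> bytes:
    pos = stego.rfind(MAGIC)
    if pos < 0:
        raise ValueError("no hidden data")
    (length,) = struct.unpack_from("<I", stego, pos + len(MAGIC))
    start = pos + len(MAGIC) + 4
    return xor_crypt(stego[start:start + length], key)

cover = b"MZ" + bytes(64)                 # stand-in for a real EXE image
stego = hide(cover, b"top secret", b"k3y")
recovered = extract(stego, b"k3y")
```

Because the payload rides after the image rather than inside it, the cover size and payload size are independent, matching the stated goal.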
Abstract: The aim of this article is to demonstrate the utility of a novel simulation approach, the convolution method, to predict blood drug concentrations from dissolution data of salbutamol sulphate microparticulate formulations with different release patterns (1:1, 1:2 and 1:3, drug:polymer). Dissolution analysis was performed with USP (2007) apparatus II in 900 ml of double-distilled water stirred at 50 rpm. From the dissolution data, blood drug concentrations were predicted, and the predicted concentration data were in turn used to calculate the pharmacokinetic parameters Cmax, Tmax, and AUC. Convolution is a good biowaiver technique; however, its full utility requires its application under conditions where biorelevant dissolution media are used.
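The convolution principle used here can be sketched as follows: each increment of dissolved (and absorbed) drug launches its own unit impulse response, and the blood concentration is the superposition of all of them. The one-compartment impulse response, the release rate, and all parameter values below are invented for illustration; they are not the study's salbutamol data.

```python
import math

# Illustrative one-compartment parameters (not from the study)
ke, V = 0.3, 10.0          # elimination rate constant (1/h), volume (L)
dt = 0.5                   # sampling interval (h)
times = [i * dt for i in range(25)]

# Cumulative fraction released (illustrative first-order release), dose in mg
dose = 100.0
frel = [1.0 - math.exp(-0.8 * t) for t in times]
dX = [dose * (frel[i] - (frel[i - 1] if i else 0.0)) for i in range(len(times))]

# Unit impulse response: concentration after an instantaneous unit dose
uir = [math.exp(-ke * t) / V for t in times]

# Convolution: superpose the impulse responses of all dissolved increments
C = [sum(dX[j] * uir[k - j] for j in range(k + 1)) for k in range(len(times))]

# Pharmacokinetic parameters read off the predicted profile
Cmax = max(C)
Tmax = times[C.index(Cmax)]
AUC = sum((C[i] + C[i + 1]) / 2 * dt for i in range(len(C) - 1))  # trapezoidal
```

Faster release patterns shift the predicted profile toward a higher Cmax and earlier Tmax, which is how the three drug:polymer ratios would be compared.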
Abstract: The need for high frame-rate imaging has been triggered by new applications of ultrasound imaging to transient elastography and real-time 3D ultrasound. Plane wave excitation (PWE) is one way to achieve very high frame-rate imaging, since an image can be formed from a single insonification. However, due to the lack of transmit focusing, image quality with PWE is lower than with conventional focused transmission. To solve this problem, we propose a filter-retrieved transmit focusing (FRF) technique combined with cross-correlation weighting (FRF+CC weighting) for high frame-rate imaging with PWE. A retrospective focusing filter is designed to simultaneously minimize the predefined sidelobe energy associated with single PWE and the filter energy related to the signal-to-noise ratio (SNR). This filter attempts to maintain the mainlobe signals and reduce the sidelobe ones, which yields similar mainlobe signals but different sidelobes between the original PWE and the FRF baseband data. The normalized cross-correlation coefficient at zero lag is calculated to quantify the degree of similarity at each imaging point and is used as a weighting matrix on the FRF baseband data to further suppress sidelobes, thus improving the filter-retrieved focusing quality.
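The cross-correlation weighting step can be illustrated on per-pixel signal vectors: the normalized cross-correlation at zero lag between the original PWE data and the FRF data is close to 1 where the filter preserved the signal (mainlobe) and lower where it altered it (sidelobes), so multiplying the FRF data by this coefficient suppresses sidelobes. The signal vectors below are synthetic stand-ins, not real beamformed data.

```python
import math

def ncc_zero_lag(x, y):
    # Normalized cross-correlation coefficient at zero lag
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den if den else 0.0

# Synthetic per-pixel signal vectors (illustrative only)
mainlobe_pwe = [1.0, 0.9, 1.1, 1.0]
mainlobe_frf = [1.0, 0.92, 1.08, 0.99]   # filter preserves mainlobe signals
sidelobe_pwe = [0.5, -0.4, 0.3, -0.5]
sidelobe_frf = [0.2, 0.1, 0.3, -0.1]     # filter alters sidelobe signals

w_main = ncc_zero_lag(mainlobe_pwe, mainlobe_frf)
w_side = ncc_zero_lag(sidelobe_pwe, sidelobe_frf)

# Weighting the FRF data keeps the mainlobe and attenuates the sidelobes
weighted_side = [w_side * v for v in sidelobe_frf]
```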
Abstract: Knowledge capabilities are increasingly important for
innovative technology enterprises seeking to enhance business
performance in terms of product competitiveness, innovation and
sales. Recognizing a company's capabilities through auditing allows
it to pursue further advancement and strategic planning, and hence
gain competitive advantages. This paper develops an Organizations'
Knowledge Capabilities Assessment (OKCA) method to assess the
knowledge capabilities of technology companies. The OKCA is a
questionnaire-based assessment tool developed to uncover the impact
of various knowledge capabilities on different aspects of
organizational performance. The collected data are then analyzed to
identify the crucial elements for different technology companies.
Based on the results, innovative technology enterprises can
recognize directions for further improvement of business performance
and future development plans. External environmental factors
affecting organizational performance can be identified through
further analysis of selected reference companies.
Abstract: This paper describes a new approach to classification
using genetic programming. The proposed technique consists of
genetically coevolving a population of non-linear transformations of
the input data to be classified, mapping the data to a new space of
reduced dimension in order to obtain maximum inter-class
discrimination. The classification of new samples is then performed
on the transformed data, and so becomes much easier. Contrary to
existing GP classification techniques, the proposed one uses a
dynamic partition of the transformed data into separate intervals;
the efficacy of a given interval partition is handled by the fitness
criterion, which rewards maximum class discrimination. Experiments
were first performed on Fisher's Iris dataset, and then the KDD-99
Cup dataset was used to study the intrusion detection and
classification problem. The results show that the proposed genetic
approach outperforms the existing GP classification methods
[1],[2],[3] and gives very acceptable results compared with the
other techniques proposed in [4],[5],[6],[7],[8].
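The interval-repartition idea can be sketched with a fixed transformation (in the real system the expression is evolved by GP): the transformed 1-D values are split into intervals, each interval takes the majority class of the training samples that land in it, and the fitness criterion scores how well this interval assignment discriminates the classes. The transformation, data, and bin count below are invented for illustration.

```python
def transform(x):
    # Stand-in for an evolved GP expression, e.g. x0*x1 - x2
    return x[0] * x[1] - x[2]

def build_intervals(samples, labels, n_bins=3):
    # Dynamic repartition: equal-width intervals over the transformed axis,
    # each labelled with the majority class of the samples it contains
    t = [transform(s) for s in samples]
    lo, hi = min(t), max(t)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for v, y in zip(t, labels):
        bins[min(int((v - lo) / width), n_bins - 1)].append(y)
    majors = [max(set(b), key=b.count) if b else None for b in bins]
    return lo, width, majors

def classify(x, lo, width, majors):
    i = min(max(int((transform(x) - lo) / width), 0), len(majors) - 1)
    return majors[i]

def fitness(samples, labels, model):
    # Fitness criterion: fraction of samples whose interval class matches
    return sum(classify(s, *model) == y
               for s, y in zip(samples, labels)) / len(labels)

samples = [(1, 1, 0), (2, 1, 0), (1, 2, 0), (5, 4, 1), (4, 5, 2), (6, 5, 3)]
labels = ["A", "A", "A", "B", "B", "B"]
model = build_intervals(samples, labels)
```

A GP run would evolve many candidate `transform` expressions and keep those whose induced interval partition maximizes this fitness.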
Abstract: This study analyzes the effect of discretization on the
classification of datasets containing continuous-valued features.
Six datasets from the UCI repository containing continuous-valued
features are discretized with an entropy-based discretization
method. The performance difference between the datasets with the
original features and the datasets with discretized features is
compared using the k-nearest neighbors, Naive Bayes, C4.5 and CN2
data mining classification algorithms. As a result, the
classification accuracies on the six datasets are improved by 1.71%
to 12.31% on average.
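The entropy-based criterion can be sketched as a single-cut selection: candidate boundaries between adjacent sorted values are scored by the class-weighted entropy of the two resulting partitions, and the cut with the lowest entropy wins. The full method (e.g. Fayyad-Irani style) applies this recursively with a stopping rule; the toy data below are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    # Try each boundary between distinct adjacent values; keep the cut
    # that minimizes the size-weighted entropy of the two partitions.
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (float("inf"), None)
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        e = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        if e < best[0]:
            best = (e, cut)
    return best[1]

# Toy feature: low values belong to class 'a', high values to class 'b'
cut = best_cut([1.0, 1.2, 1.4, 3.0, 3.3, 3.6], ["a", "a", "a", "b", "b", "b"])
```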
Abstract: This paper focuses on the use of project work as a
pretext for applying the conventions of writing, or the correctness
of mechanics, usage, and sentence formation, in a content-based
class at a Rajabhat University. Its aim was to explore the extent to
which the student teachers' achievement in basic writing features
met the 70% attainment target after the use of project work. The
organization of work around an agreed theme, in which the students
reproduce language provided by texts and instructors, is expected to
enhance students' correct use of writing conventions. The sample of
the study comprised 38 fourth-year English major students. The data
were collected by means of an achievement test and the students'
written work. The scores on the summative achievement test were
analyzed by mean score, standard deviation, and percentage. It was
found that the student teachers achieved more in mechanics and
usage, and less in sentence formation. The students benefited from
the exposure to texts while conducting the project; however, their
automaticity in how and when to form phrases and clauses into
simple and complex sentences had room for improvement.
Abstract: On-line handwritten scripts are usually represented as pen-tip traces from pen-down to pen-up positions, and the time evolution of the pen coordinates is considered along with the trajectory information. However, the data obtained need a great deal of preprocessing, including filtering, smoothing, slant removal and size normalization, before the recognition process. Instead of such lengthy preprocessing, this paper presents a simple approach to extracting the useful character information. This work evaluates the use of the counter-propagation neural network (CPN) and presents the feature extraction mechanism in full detail for on-line handwriting recognition. The recognition rates obtained with the CPN were 60% to 94% for different sets of character samples. This paper also describes a performance study in which a recognition mechanism with multiple thresholds is evaluated for the counter-propagation architecture. The results indicate that the application of multiple thresholds has a significant effect on the recognition mechanism. The method is applicable to off-line character recognition as well. The technique was tested on upper-case English alphabets in a number of different styles from different people.
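The multiple-threshold recall idea can be sketched on the CPN's winner-take-all layer: the nearest trained prototype wins, and a per-class distance threshold decides whether the match is accepted or the sample is rejected as unknown. The prototypes, feature vectors, and thresholds below are invented for illustration; a real CPN would learn the prototypes from training data.

```python
import math

# Stand-ins for trained Kohonen-layer prototypes, one per character class
prototypes = {"A": [0.9, 0.1, 0.8], "B": [0.2, 0.9, 0.3]}

def recognize(features, thresholds):
    # Winner-take-all recall: the nearest prototype wins; a per-class
    # threshold on the distance accepts or rejects the match
    dists = {c: math.dist(features, p) for c, p in prototypes.items()}
    winner = min(dists, key=dists.get)
    return winner if dists[winner] <= thresholds[winner] else None

label = recognize([0.85, 0.15, 0.75], {"A": 0.5, "B": 0.5})
```

Tightening the thresholds trades recognition rate for fewer false matches, which is the effect the performance study measures.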
Abstract: Due to the legacy of apartheid segregation South Africa remains a divided society where most voters live in politically homogenous social environments. This paper argues that political discussion within one's social context plays a primary role in shaping political attitudes and vote choice. Using data from the Comparative National Elections Project 2004 and 2009 South African post-election surveys, the paper explores the extent of social context partisan homogeneity in South Africa and finds that voters are not overly embedded in homogenous social contexts. It then demonstrates the consequences of partisan homogeneity for voting behavior. Homogenous social contexts tend to encourage stronger partisan loyalties and fewer defections in vote choice, while voters in more heterogeneous contexts show less consistency in their attitudes and behavior. Finally, the analysis shows how momentous sociopolitical events at the time of a particular election can change the social context, with important consequences for electoral outcomes.
Abstract: Predicting the political risk level of a country has
become a critical issue for investors who seek accurate information
about the stability of business environments. Since investors are
most often laymen rather than professional IT personnel, this paper
proposes a framework named GECR to help non-experts discover
political risk stability over time based on political news and
events. To achieve this goal, the Bayesian network approach was
applied to a sample dataset of 186 political news items from
Pakistan. Bayesian networks, an artificial intelligence approach,
were employed in the presented framework because they are a powerful
technique for modeling uncertain domains. The results showed that
our framework, with Bayesian networks as the decision support tool,
predicted the political risk level with a high degree of accuracy.
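The Bayesian reasoning over news items can be illustrated with a naive Bayes classifier, the simplest Bayesian network structure, scoring word evidence for risk levels. The labelled snippets, vocabulary, and class names below are invented for illustration; they are not the Pakistani news dataset.

```python
import math
from collections import Counter

# Toy labelled news snippets (invented for illustration)
train = [
    ("protest strike violence", "high_risk"),
    ("bomb attack unrest", "high_risk"),
    ("election peaceful agreement", "low_risk"),
    ("trade growth stability", "low_risk"),
]

classes = {c for _, c in train}
class_counts = Counter(c for _, c in train)
word_counts = {c: Counter() for c in classes}
for text, c in train:
    word_counts[c].update(text.split())
vocab = {w for text, _ in train for w in text.split()}

def predict(text):
    # Log-posterior with Laplace smoothing: P(c) * prod_w P(w | c)
    scores = {}
    for c in classes:
        total = sum(word_counts[c].values())
        s = math.log(class_counts[c] / len(train))
        for w in text.split():
            s += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = s
    return max(scores, key=scores.get)

label = predict("violence and unrest after strike")
```

A full Bayesian network would additionally encode dependencies among event variables rather than assuming word independence.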
Abstract: A realistic 3D face model represents the pose,
illumination, and expression of a face more precisely than a 2D
model, so it can be usefully applied in areas such as face
recognition, games, avatars, and animation. In this paper, we
propose a 3D face modeling method based on a 3D dense morphable
shape model. The proposed method first constructs a 3D dense
morphable shape model from 3D face scan data obtained with a 3D
scanner. Next, it extracts and matches facial landmarks from a 2D
image sequence containing the face to be modeled, and then
reconstructs the 3D vertex coordinates of the landmarks using a
factorization-based structure-from-motion (SfM) technique. The
method then obtains a 3D dense shape model of the face by fitting
the constructed 3D dense morphable shape model to the reconstructed
3D vertices. It also builds a cylindrical texture map from the 2D
face image sequence. Finally, it generates a 3D face model by
rendering the 3D dense face shape model with the cylindrical texture
map. The model-building process shows that the proposed method is
relatively easy, fast and precise.
Abstract: In this paper, an analysis of a target location
estimation system using the best linear unbiased estimator (BLUE)
for high-performance radar systems is presented. In synthetic
environments, we are concerned with three key elements of radar
system modeling that make radar systems operate accurately in
tactical situations on virtual terrain. Radar Cross Section (RCS)
modeling is used to determine the actual amount of electromagnetic
energy reflected from a tactical object. The Pattern Propagation
Factor (PPF) is an attenuation coefficient of the radar equation
that accounts for reflection from the surface of the earth and for
diffraction, refraction and scattering by the atmospheric
environment. Clutter comprises the unwanted echoes received by
electronic systems. For the fusion of the radar detection results in
the synthetic environment, BLUE is used and compared with the mean
values of the individual simulation results. Simulation results
demonstrate the performance of the radar system.
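For independent unbiased scalar measurements with known error variances, BLUE reduces to inverse-variance weighting, which is the standard way such detections are fused and why it beats the plain mean when accuracies differ. The estimates and variances below are illustrative numbers, not the paper's simulation output.

```python
def blue_fuse(estimates, variances):
    # Best linear unbiased estimate of a scalar from independent unbiased
    # measurements: weights proportional to the inverse of each variance
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    fused = sum(w * x for w, x in zip(inv, estimates)) / total
    fused_var = 1.0 / total   # always below the smallest input variance
    return fused, fused_var

# Three radar range estimates of the same target (illustrative values, km)
estimates = [100.2, 99.8, 100.5]
variances = [0.04, 0.01, 0.09]

fused, fused_var = blue_fuse(estimates, variances)
naive_mean = sum(estimates) / len(estimates)
```

The fused estimate leans toward the most accurate sensor, whereas the plain mean weights all three equally regardless of their error variances.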
Abstract: In this paper we introduce a new data-oriented model of
the uniform random variable that is well matched to computing
systems. Owing to this conformity with the structure of current
computers, the model can be used efficiently in statistical
inference.
Abstract: Facing public concern about the environment and climate change, city planners now consider the urban climate in their planning choices. The urban climate across the different urban morphologies of the central Bangkok metropolitan area (BMA) is used to investigate the effects of both the composition and the configuration variables of urban morphology indicators on the summer diurnal range of the urban climate, using correlation analyses and multiple linear regressions. Results first indicate that approximately 92.6% of the variation in the average maximum daytime near-surface air temperature (Ta) was explained jointly by two composition variables of the urban morphology indicators: the open space ratio (OSR) and the floor area ratio (FAR). It was possible to determine the membership of sample areas in the local climate zones (LCZs) using these urban morphology descriptors, automatically computed from GIS and remotely sensed data. Finally, the temperature differences among widely separated zones show that the city center is warmer than the outskirts of Bangkok, from 35.48±1.04 °C (mean±S.D.) on average for the maximum daytime near-surface temperature to 28.27±0.21 °C for extreme events, and the difference can exceed 8 °C. A spatially disaggregated map of urban thermal responsiveness would be helpful for several reasons. First, it would localize the urban areas concerned by different climate behavior over summer daytime and be a good indicator of urban climate variability. Second, when overlaid with a land cover map, it may help identify urban management strategies to reduce heat wave effects in the BMA.
Abstract: Plackett-Burman statistical screening of media
constituents and operational conditions for extracellular lipase
production by the isolate Trichoderma viride was carried out in
submerged fermentation. This statistical design is used in the early
stages of experimentation to screen out unimportant factors from a
large number of possible factors, and allows the screening of up to
n-1 variables in just n experiments. Regression coefficients and
t-values were calculated by subjecting the experimental data to
statistical analysis using Minitab version 15. The effects of nine
process variables were studied in twelve experimental trials. A
maximum lipase activity of 7.83 μmol/ml/min was obtained in the 6th
trial. A Pareto chart illustrates the order of significance of the
variables affecting lipase production. The most significant
variables affecting lipase production were found to be palm oil,
yeast extract, K2HPO4, MgSO4 and CaCl2.
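The "n-1 variables in n experiments" construction can be sketched for the twelve-run design used in this study: the first eleven rows are cyclic shifts of the standard 12-run Plackett-Burman generator row, and the final row sets every factor to its low level, giving twelve balanced, orthogonal trial settings for up to eleven factors (nine are used here).

```python
# Standard 12-run Plackett-Burman generator (+1 = high level, -1 = low level)
generator = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

# Rows 1..11 are cyclic shifts of the generator; row 12 is all low levels
design = [generator[-i:] + generator[:-i] for i in range(11)]
design.append([-1] * 11)

runs, factors = len(design), len(design[0])

# Each factor column is balanced: six high and six low settings
col_sums = [sum(row[j] for row in design) for j in range(factors)]
```

The balance and orthogonality of the columns are what let main effects (and hence the regression coefficients and t-values) be estimated independently from only twelve trials.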
Abstract: This paper presents a multimodal approach to biometric authentication based on multiple classifiers. The proposed solution uses a post-classification biometric fusion method in which the outputs of the biometric data classifiers are combined in order to improve overall biometric system performance by decreasing the classification error rates. The paper also shows that the biometric recognition task can be improved by means of careful feature selection, since not all components of the feature vectors contribute to the accuracy improvement.
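One common post-classification fusion scheme can be sketched as weighted-sum score fusion: each matcher's raw score is min-max normalized to [0, 1], and a weighted sum drives the accept/reject decision. The score ranges, weights, and threshold below are illustrative assumptions, not the paper's tuned values.

```python
def minmax_norm(score, lo, hi):
    # Map a raw matcher score onto [0, 1] using its known score range
    return (score - lo) / (hi - lo)

def fuse(scores, ranges, weights):
    # Weighted-sum fusion of the normalized per-classifier match scores
    normed = [minmax_norm(s, lo, hi) for s, (lo, hi) in zip(scores, ranges)]
    return sum(w * s for w, s in zip(weights, normed)) / sum(weights)

# Illustrative two-matcher setup (e.g. face and voice classifiers)
ranges = [(0.0, 100.0), (-1.0, 1.0)]   # raw score range of each matcher
weights = [0.7, 0.3]                   # favor the historically stronger matcher
THRESHOLD = 0.5                        # illustrative decision threshold

genuine = fuse([85.0, 0.6], ranges, weights)    # both matchers agree
impostor = fuse([30.0, -0.4], ranges, weights)  # both scores are low
```

Because the matchers' errors are partly independent, the fused score separates genuine users from impostors better than either score alone, lowering the error rates.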
Abstract: The need for efficient information retrieval has
increased more than ever in recent years because of the frequent use
of digital information in our lives. Much work has been done on
textual information, but in multimedia information we cannot find as
much progress. For text-based information, technologies such as data
mining and data marts, which grew out of the basic concept of the
database around 1960, are now in operation. In image search, and
especially in image identification, computerized systems are still
at a very early stage. Even in image search we cannot see as much
progress as in text-based search techniques. One main reason for
this is the widespread roots of image search, where many areas such
as artificial intelligence, statistics, image processing, and
pattern recognition play their roles. Human psychology, perception,
and cultural diversity also have their share in the design of a good
and efficient image recognition and retrieval system. A new
object-based search technique is presented in this paper, in which
objects in the image are identified on the basis of their
geometrical shapes and other features such as color and texture,
with object correlation augmenting the search process. To focus on
object identification, simple images are selected for this work to
reduce the role of segmentation in the overall process; however, the
same technique can also be applied to other images.
Abstract: Because of its global reach, reduction of time
constraints, and ability to reduce costs and increase sales, use of
the Internet, the World Wide Web (WWW), and related technologies
can be a competitive tool in the arsenal of small and medium-sized
enterprises (SMEs). Countries the world over are interested in the
successful adoption of the Internet by SMEs. Because a vast majority
of jobs come from that sector, greater financial success of SMEs
translates into greater job growth and, subsequently, higher tax
revenue to the government. This research investigated the level of
Internet usage for business solutions by small and medium
enterprises in Jordan. Through a survey of a random sample of 100
firms with fewer than 500 employees, which formed the basis of our
study, we found that although a majority of respondents use the
Internet in business activities, the adoption of the Internet as a
business tool is largely limited to brochure-style Web sites that
primarily provide one-way communication; there is little interactive
information about the companies and their products and services.
Abstract: The Tropical Data Hub (TDH) is a virtual research environment that provides researchers with an e-research infrastructure to congregate significant tropical data sets for data reuse, integration, searching, and correlation. However, researchers often require data and metadata synthesis across disciplines for cross-domain analyses and knowledge discovery. A triplestore offers a semantic layer that supports these synthesis requirements with a more intelligent method of search that automates latent linkages in the data and metadata. At present, the benchmarks available to aid the decision of which triplestore is best suited for an application environment like the TDH are limited to performance. This paper describes a new evaluation tool developed to analyze both features and performance. The tool comprises a weighted decision matrix that evaluates the interoperability, functionality, performance, and support availability of a range of integrated and native triplestores, ranking them according to the requirements of the TDH.
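The weighted decision matrix can be sketched as follows: each candidate triplestore receives a rating per criterion, ratings are multiplied by criterion weights reflecting the TDH requirements, and the weighted sums produce the ranking. The store names, ratings, and weights below are invented for illustration; they are not the paper's evaluation results.

```python
# Criteria and weights reflecting the TDH requirements (weights illustrative)
weights = {"interoperability": 0.3, "functionality": 0.25,
           "performance": 0.3, "support": 0.15}

# Ratings on a 0-10 scale per candidate triplestore (invented values)
ratings = {
    "StoreA": {"interoperability": 8, "functionality": 7,
               "performance": 6, "support": 9},
    "StoreB": {"interoperability": 6, "functionality": 8,
               "performance": 9, "support": 5},
}

def weighted_score(r):
    # Weighted sum of the per-criterion ratings
    return sum(weights[c] * r[c] for c in weights)

ranking = sorted(ratings, key=lambda s: weighted_score(ratings[s]),
                 reverse=True)
```

Adjusting the weights to a different application environment reorders the ranking, which is what makes the matrix a requirements-driven tool rather than a pure performance benchmark.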
Abstract: We assert that certain factors influence professional
identity construction at the university/higher education stage.
Accordingly, we propose a conceptual framework of the intervening
factors in professional identity construction at university, drawn
from a literature review and preliminary data from a qualitative
pilot study using focus groups. This model identifies several
factors that might influence university students' professional
identity construction and groups them into categories. In turn, we
describe how these factors might contribute to strengthening or
weakening professional identity. Finally, we discuss the
implications of strengthening students' professional identity (PI)
for universities, individuals and organizations, and we provide a
roadmap for future empirical work in this area.