Abstract: This paper presents a formalisation of the different existing code mutation techniques (polymorphism and metamorphism) by means of formal grammars. While very few theoretical results are known about the detection complexity of viral mutation techniques, we address this critical issue exhaustively by considering the Chomsky classification of formal grammars. This enables us to determine which families of code mutation techniques are likely to be detected and which, on the contrary, are bound to remain undetected. As an illustration, we then present, on a formal basis, a proof-of-concept metamorphic mutation engine, denoted PB MOT, whose detection has been proven to be undecidable.
Abstract: The daily increase in organic waste resulting from various activities across the country is one of the main contributors to environmental pollution. Given the low output of traditional methods, the high cost of waste disposal and the associated environmental pollution, modern methods such as anaerobic digestion for biogas production have become prevalent. Biogas collected from the anaerobic digestion process is usable as a renewable energy source similar to natural gas, though with a lower methane content and heating value. Today, with the help of filtration and upgrading technologies, biogas with characteristics fully comparable to natural gas has become attainable. At present, biogas is one of the main sources of electrical and thermal energy supply and is also a suitable fuel for four-stroke engines, diesel engines, Stirling engines, gas turbines, gas microturbines and fuel cells for electricity production. Owing to its socio-economic and environmental advantages, the use of biogas in combined heat and power (CHP) plants has attracted worldwide attention for energy production. Producing biogas through anaerobic digestion and applying it in CHP power plants in Iran can not only supply part of the country's energy demand but also constitutes a step toward sustainable development. This article addresses the necessity of developing biogas-fuelled CHP plants in the country, based on studies of the economic, environmental and social aspects. In addition, to demonstrate the economic importance of establishing such power plants, the necessary calculations have been carried out as a case study for a biogas-fuelled CHP power plant.
Abstract: In recent years a new numerical method, the extended finite element method (X-FEM), has been developed. The objective of this work is to exploit X-FEM for the treatment of fracture mechanics problems on 3D geometries, where we show the ability of this method to simulate fatigue crack growth in two cases: an edge crack and a central crack. In the results, we compare the first six natural frequencies and mode shapes of the uncracked structure with those after crack initiation, and we show the evolution of the stress intensity factor (SIF) as a function of crack size as the crack propagates through the structure; an analytical validation of the SIF is presented. To demonstrate the merits of this method, all results are compared between conventional FEA and X-FEM.
Abstract: This paper presents a method that decreases power consumption in networks-on-chip (NoC). The method encodes data for transfer in order to reduce power consumption, and uses data compression to reduce data size. Power consumption in the NoC is calculated from established models and from the switching activity at the input ports. The goal of the simulation is to measure the power cost of encoding, decoding and compression in Baseline networks, and the resulting reduction in switching activity in this type of network.
Keywords: Networks-on-chip, Compression, Encoding, Baseline networks, Banyan networks.
Abstract: Knowledge sharing enables information or knowledge to be transmitted from one source to another. This paper demonstrates the need for an online book catalogue that can be used to facilitate the dissemination of information on the textbooks used in the university. The project aims to give students and lecturers access to the list of books in the bookstore and, at the same time, to allow book reviewing without having to visit the bookstore physically. The research was carried out within boundaries that cover the current process of new book purchasing, the current system used by the bookstore, and the current process lecturers go through when reviewing textbooks. A questionnaire was used to gather the requirements and was distributed to 100 students and 40 lecturers. The project has enabled a previously manual process to be carried out automatically through a web-based platform. A user acceptance survey showed that the target groups found the web service feasible to implement at Universiti Teknologi PETRONAS (UTP), and they showed positive interest in utilizing it in the future.
Abstract: In this paper, we propose a texture-feature-based language identification method using wavelet-domain BDIP (block difference of inverse probabilities) and BVLC (block variance of local correlation coefficients) features together with an FFT (fast Fourier transform) feature. In the proposed method, wavelet subbands are first obtained by applying the wavelet transform to a test image and are denoised by Donoho's soft thresholding. The BDIP and BVLC operators are next applied to the wavelet subbands. FFT blocks are also obtained by applying the 2D (two-dimensional) FFT to the blocks into which the test image is partitioned. Some significant FFT coefficients in each block are selected, and the magnitude operator is applied to them. Moments for each subband of BDIP and BVLC and for each magnitude of the significant FFT coefficients are then computed and fused into a feature vector. In classification, a stabilized Bayesian classifier, which adopts variance thresholding, searches for the training feature vector most similar to the test feature vector. Experimental results show that the proposed method with the three operations yields excellent language identification even with a rather low feature dimension.
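The variance-thresholded matching step can be illustrated with a short sketch. The code below is a hedged illustration, not the authors' implementation: it floors tiny per-feature variances before computing a variance-normalized distance, so a near-zero variance cannot make a single feature dominate the match, and classification then picks the training vector with the smallest distance. The function names and the floor value are assumptions.

```python
import numpy as np

def stabilized_distance(x, mean, var, var_floor=1e-3):
    # Floor the per-feature variances (variance thresholding) so a
    # near-zero variance cannot blow up one term of the distance.
    v = np.maximum(np.asarray(var, dtype=float), var_floor)
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    # Variance-normalized squared distance, i.e. a Gaussian
    # log-likelihood with the constant terms dropped.
    return float(np.sum(d * d / v))

def classify(x, train_means, train_vars, labels):
    # Pick the training feature vector most similar to the test vector.
    dists = [stabilized_distance(x, m, v)
             for m, v in zip(train_means, train_vars)]
    return labels[int(np.argmin(dists))]
```

Here the distance plays the role of the similarity search described above; the floor `var_floor` is what "stabilizes" the classifier against degenerate training variances.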
Abstract: Today's Voltage Regulator Modules (VRMs) face increasing design challenges as the number of transistors in microprocessors increases per Moore's Law. These challenges have recently become even more demanding as microprocessors operate in a sub-volt range at significantly high current. This paper presents a new multiphase topology with a cell configuration for improved performance in low-voltage, high-current applications. A lab-scale hardware prototype of the new topology was designed and constructed. Laboratory tests were performed on the proposed converter and compared with a commercially available VRM. Results from the proposed topology exhibit improved performance compared to the commercially available counterpart.
Abstract: This research investigated, determined and analyzed changes in the climate characteristics of Udon Thani province over the surrounding 60-year period from 1951 to 2010 A.D., with the climatological data applied to the assessment of global warming effects. Statistically significant trends were not found for the 60 years of data (R2
Abstract: This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW Photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage, guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate RBFNN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single-neural-network approach, is its superior generalization ability with respect to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural-network-based multi-model machine learning approach that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations showed that the proposed MPPT method achieved the highest efficiency compared to a conventional single neural network.
Abstract: The main issue in sweetening natural gas is H2S dissociation. The present study is concerned with simulating the thermal dissociation of H2S in an industrial natural gas carbon black furnace. A comparison of the calculated results against experimental measurements shows good agreement. The results show that the sulfur derived from H2S thermal dissociation peaks at an equivalence ratio of φ = 0.95. H2S thermal dissociation is enhanced at equivalence ratios above 1, while H2S oxidation increases at equivalence ratios below 1. The H2 concentration from H2S thermal dissociation increases as the equivalence ratio rises toward 1. The outlet H2S concentration also decreases as the equivalence ratio increases. Thermal dissociation of H2S into hydrogen and sulfur reduces its toxicity and provides economic benefits.
Abstract: The majority of existing predictors for time series are
model-dependent and therefore require some prior knowledge for the
identification of complex systems, usually involving system
identification, extensive training, or online adaptation in the case of
time-varying systems. Additionally, since a time series is usually
generated by complex processes such as the stock market or other
chaotic systems, identification, modeling or the online updating of
parameters can be problematic. In this paper a model-free predictor
(MFP) for a time series produced by an unknown nonlinear system or
process is derived using tracking theory. An identical derivation of the
MFP using the property of the Newton form of the interpolating
polynomial is also presented. The MFP is able to accurately predict
future values of a time series, is stable, has few tuning parameters and
is desirable for engineering applications due to its simplicity, fast
prediction speed and extremely low computational load. The
performance of the proposed MFP is demonstrated using the
prediction of the Dow Jones Industrial Average stock index.
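The Newton-form derivation mentioned above can be sketched in a few lines. This is a hedged illustration of the general polynomial-extrapolation idea, not the authors' exact MFP: extrapolating the Newton forward-difference interpolating polynomial one step past a window of recent samples reduces to summing the last entry of each row of the forward-difference table. The function name and window convention are assumptions.

```python
import numpy as np

def mfp_predict(window):
    """Predict the next value of a series by extrapolating the
    Newton-form interpolating polynomial through the most recent
    samples (model-free: no system identification or training)."""
    w = np.asarray(window, dtype=float)
    # Build the forward-difference table row by row.
    diffs = [w]
    while len(diffs[-1]) > 1:
        diffs.append(np.diff(diffs[-1]))
    # One-step extrapolation of the Newton-form polynomial is the
    # sum of the last element of each difference row.
    return float(sum(d[-1] for d in diffs))
```

For a window sampled from a polynomial of degree below the window length the prediction is exact, e.g. `mfp_predict([1, 4, 9, 16])` returns `25.0`; the computational load is a handful of subtractions per prediction, consistent with the low cost claimed above.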
Abstract: Nanomaterials have attracted considerable attention during the last two decades, due to their unusual electrical, mechanical and other physical properties compared with their bulk counterparts. The mechanical properties of nanostructured materials show strong size dependency, which has been explained within the framework of continuum mechanics by including the effects of surface stress. In this paper, the size-dependent deformations of two-dimensional nanosized structures with surface effects are investigated by the finite element method. A truss element is used to evaluate the contribution of surface stress to the total potential energy, and the Gurtin and Murdoch surface stress model is implemented in ANSYS through its user-programmable features. The proposed approach is used to investigate the size-dependent stress concentration around a nanosized circular hole and the size-dependent effective moduli of nanoporous materials. Numerical results are compared with available analytical results to validate the proposed modeling approach.
Abstract: The main goal of this work is to propose a way of using two non-traditional algorithms in combination to solve topological problems on telecommunications concentrator networks. The suggested algorithms are the Simulated Annealing algorithm and the Genetic Algorithm. The Simulated Annealing algorithm generalizes the well-known local search algorithms. In addition, Simulated Annealing allows the acceptance of moves in the search space which lead to solutions with higher cost, in an attempt to escape any local minima encountered. The Genetic Algorithm is a heuristic approach that is used in a wide range of optimization tasks. In recent years this approach has also been widely applied to telecommunications network planning. In order to solve more or less complex planning problems, it is important to find the most appropriate parameters for initializing the algorithm.
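The acceptance of cost-increasing moves described above is the Metropolis criterion at the heart of Simulated Annealing. The sketch below illustrates that rule in isolation (the function name and the injectable random source are conveniences for testing, not part of any specific planner): an improving move is always taken, while a worsening move is taken with probability exp(-Δcost/T), so early (hot) iterations can climb out of local minima and late (cold) iterations behave like pure local search.

```python
import math
import random

def sa_accept(delta_cost, temperature, rng=random.random):
    """Metropolis acceptance rule for Simulated Annealing.

    delta_cost  -- cost(candidate) - cost(current)
    temperature -- current annealing temperature (> 0)
    rng         -- uniform [0, 1) source, injectable for testing
    """
    if delta_cost <= 0:
        # Improving (or equal-cost) moves are always accepted.
        return True
    # Worsening moves are accepted with probability exp(-dC / T):
    # likely when T is high, vanishingly rare as T cools.
    return rng() < math.exp(-delta_cost / temperature)
```

A cooling schedule (e.g. geometric, T ← αT with α just below 1) then ties this rule into the full search loop; choosing the initial temperature and α is exactly the parameter-initialization problem the abstract points out.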
Abstract: Nowadays, the rapid development of multimedia and the internet allows for the wide distribution of digital media data. It has become much easier to edit, modify and duplicate digital information. Digital documents are also easy to copy and distribute, and are therefore exposed to many threats. With the large flood of information and the development of digital formats, security and privacy have become major issues, and it is necessary to find appropriate protection given the significance, accuracy and sensitivity of the information. Current protection systems can be classified more specifically as information hiding, information encryption, and combinations of hiding and encryption to increase information security. The strength of the science of information hiding lies in the non-existence of standard algorithms for hiding secret messages. There is also randomness in hiding methods, such as combining several media (covers) with different methods to pass a secret message. In addition, there are no formal methods for discovering hidden data, which makes the task of this research difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide information (a data file) in any executable (EXE) file and to detect the hidden file; we present an implementation of a steganography system which embeds information in an executable file, and (EXE) files have been investigated for this purpose. The system seeks a solution in which the size of the cover file is unconstrained and the result remains undetectable by anti-virus software. The system includes two main functions. The first is hiding the information in a Portable Executable (EXE) file, through the execution of four processes: specifying the cover file, specifying the information file, encrypting the information, and hiding the information. The second function is extracting the hidden information through three processes: specifying the stego file, extracting the information, and decrypting the information. The system has achieved its main goals: the size of the cover file and the size of the information are made independent of each other, and the resulting file does not cause any conflict with anti-virus software.
Abstract: Based on the conjugate gradient algorithm, a new consensus protocol for discrete-time multi-agent systems is presented, which can achieve finite-time consensus. Finally, a numerical example is given to illustrate the theoretical result.
Abstract: The aim of this article is to describe the utility of a novel simulation approach, the convolution method, for predicting the blood concentration of a drug from the dissolution data of salbutamol sulphate microparticulate formulations with different release patterns (1:1, 1:2 and 1:3 drug:polymer). USP dissolution apparatus II (2007) with 900 ml of double-distilled water stirred at 50 rpm was employed for the dissolution analysis. From the dissolution data, the blood drug concentration was determined, and in turn the predicted blood drug concentration data were used to calculate the pharmacokinetic parameters Cmax, Tmax and AUC. Convolution is a good biowaiver technique; however, realizing its full utility requires its application under conditions where biorelevant dissolution media are used.
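The convolution step above can be sketched numerically. This is a hedged illustration of the general in-vitro/in-vivo convolution principle, not the authors' fitted model: the input rate (the derivative of the cumulative dissolution curve) is convolved with a unit-impulse concentration-time profile by superposition. The first-order dissolution curve and the one-compartment bolus response with rate constant `ke` used below are assumed example inputs.

```python
import numpy as np

def predict_blood_concentration(diss_fraction, impulse_response, dt):
    """Predict a blood concentration-time curve by convolving the
    in-vitro input rate with a unit-dose impulse response.

    diss_fraction    -- cumulative fraction dissolved per time step
    impulse_response -- concentration per unit dose after an IV
                        bolus, sampled on the same time grid
    dt               -- sampling interval
    """
    # Input rate = numerical derivative of the cumulative curve.
    rate = np.diff(diss_fraction, prepend=0.0) / dt
    # Superposition principle: discrete convolution, truncated to
    # the original time grid and scaled by dt.
    return np.convolve(rate, impulse_response)[: len(diss_fraction)] * dt

# Hypothetical example inputs (not the study's data):
t = np.arange(0.0, 12.0, 0.25)          # hours
dissolved = 1.0 - np.exp(-0.8 * t)      # first-order dissolution
ir = np.exp(-0.3 * t)                   # one-compartment bolus response
c = predict_blood_concentration(dissolved, ir, dt=0.25)
cmax, tmax = float(c.max()), float(t[int(c.argmax())])
```

Cmax and Tmax are then read directly off the predicted curve, and the AUC follows from trapezoidal integration of `c` over `t`, mirroring the parameter calculation described in the abstract.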
Abstract: The need for high-frame-rate imaging has been triggered by new applications of ultrasound imaging in transient elastography and real-time 3D ultrasound. Plane wave excitation (PWE) is one method for achieving very high frame-rate imaging, since an image can be formed from a single insonification. However, due to the lack of transmit focusing, the image quality with PWE is lower than that of conventional focused transmission. To solve this problem, we propose a filter-retrieved transmit focusing (FRF) technique combined with cross-correlation weighting (FRF+CC weighting) for high-frame-rate imaging with PWE. A retrospective focusing filter is designed to simultaneously minimize the predefined sidelobe energy associated with a single PWE and the filter energy related to the signal-to-noise ratio (SNR). This filter attempts to preserve the mainlobe signals and reduce the sidelobe ones, which yields similar mainlobe signals but different sidelobes between the original PWE and the FRF baseband data. The normalized cross-correlation coefficient at zero lag is calculated to quantify the degree of similarity at each imaging point and is used as a weighting matrix on the FRF baseband data to further suppress the sidelobes, thus improving the filter-retrieved focusing quality.
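The zero-lag similarity measure used for the weighting can be written down directly. The sketch below is a generic normalized cross-correlation coefficient at zero lag on real-valued samples (the actual method operates on complex baseband data per imaging point; the function name is an assumption): values near 1 indicate similar mainlobe content between the PWE and FRF data, values near 0 indicate dissimilar sidelobe content, so multiplying the FRF data by this coefficient suppresses sidelobes.

```python
import numpy as np

def ncc_zero_lag(a, b):
    """Normalized cross-correlation coefficient at zero lag.

    Returns a value in [-1, 1]; 1 means the two signal segments are
    identical up to a positive scale factor, 0 means uncorrelated.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    # Guard against all-zero segments to keep the weight well defined.
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0
```

In a weighting scheme of this kind, the coefficient computed per imaging point forms the weighting matrix that multiplies the FRF baseband data before envelope detection.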
Abstract: Knowledge capabilities are increasingly important for innovative technology enterprises seeking to enhance business performance in terms of product competitiveness, innovation and sales. Recognizing a company's capabilities through auditing allows it to further pursue advancement and strategic planning, and hence gain competitive advantages. This paper attempts to develop an Organizations' Knowledge Capabilities Assessment (OKCA) method to assess the knowledge capabilities of technology companies. The OKCA is a questionnaire-based assessment tool developed to uncover the impact of various knowledge capabilities on different aspects of organizational performance. The collected data are then analyzed to find the crucial elements for different technology companies. Based on the results, innovative technology enterprises are able to identify directions for further improvement of business performance and for future development plans. External environmental factors affecting organizational performance can be identified through further analysis of selected reference companies.
Abstract: The last 20 years of dentistry coincided with a period of transition from a communist to a market economy, yet Romanian doctors have insufficient management knowledge. Recently, the need for modern management has increased due to the appearance of superior technologies and materials, as well as patients' demands.
The research goal is to increase efficiency by evaluating dental medical office cost categories within real pricing procedures.
The empirical research is based on a guided study that gathers information on the association between categories of cost perception and the therapeutic procedures commonly used in dental offices.
Based on the results obtained, the cost of each settled procedure was determined by identifying all the labour that makes it up.
Financial evaluation software was created with the following main functions: entering and maintaining patient records, treatments and appointments performed, procedure costs, and monitoring office productivity.
We believe that the study results can significantly improve the financial management of dental offices, increasing the effectiveness and quality of services.
Abstract: The components of a software system may be related in a wide variety of ways. These relationships need to be represented in the software architecture in order to develop quality software. In practice, software architecture is immensely challenging, strikingly multifaceted, extravagantly domain-based, perpetually changing, rarely cost-effective, and deceptively ambiguous. This paper analyses relations among the major components of software systems and argues for using several broad categories of software architecture for assessment purposes: strongly adequate, weakly adequate and functionally adequate software architectures, among other categories. These categories are intended for formative assessments of architectural designs.