Abstract: Basel III (or the Third Basel Accord) is a global
regulatory standard on bank capital adequacy, stress testing, and
market liquidity risk agreed upon by the members of the Basel
Committee on Banking Supervision in 2010-2011, and scheduled to
be introduced from 2013 until 2018. Basel III is a comprehensive set
of reform measures. These measures aim to: (1) improve the banking
sector's ability to absorb shocks arising from financial and economic
stress, whatever the source; (2) improve risk management and
governance; and (3) strengthen banks' transparency and disclosures.
Similarly, the reforms target: (1) bank-level, or micro-prudential,
regulation, which will help raise the resilience of individual banking
institutions to periods of stress; and (2) macro-prudential regulation
of system-wide risks that can build up across the banking sector, as
well as the pro-cyclical amplification of these risks over time. These
two approaches to supervision are complementary, as greater resilience
at the individual bank level reduces the risk of system-wide shocks.
Macroeconomic impact of Basel III: the OECD estimates that the
medium-term impact of Basel III implementation on GDP growth is
in the range of -0.05 to -0.15 percent per year. Economic output is
mainly affected by an increase in bank lending spreads, as banks pass
the rise in bank funding costs, due to higher capital requirements, on
to their customers. The estimated effects on GDP growth assume no
active response from monetary policy; the impact of Basel III on
economic output could be offset by a reduction (or delayed increase)
in monetary policy rates of about 30 to 80 basis points. The aim of
this paper is to create a framework based on the recent regulations in
order to prevent financial crises. The lessons drawn from overcoming
the global financial crisis can thus contribute to addressing financial
crises that may occur in future periods. In the first part of the paper,
the effects of the global crisis on the banking system and the concept
of financial regulation are examined. In the second part, financial
regulations, and Basel III in particular, are analyzed. The last section
explores the possible macroeconomic impacts of Basel III.
Abstract: Tacit knowledge has been one of the most discussed
and controversial concepts in the field of knowledge management
since the mid-1990s. The concept is used rather vaguely to refer
to any type of information that is difficult to articulate, which has led
to discussions about the original meaning of the concept (adopted
from Polanyi's philosophy) and the nature of tacit knowing. It is
proposed that the subject should be approached from the perspective
of cognitive science in order to connect tacit knowledge to
empirically studied cognitive phenomena. Some of the most
important examples of tacit knowing presented by Polanyi are
analyzed in order to trace the cognitive mechanisms of tacit knowing
and to promote better understanding of the nature of tacit knowledge.
The cognitive approach to Polanyi's theory reveals that the
tacit/explicit typology of knowledge often presented in the
knowledge management literature is not only artificial but represents
an approach opposite to Polanyi's thinking.
Abstract: The present investigation is concerned with the
sub-impacts that take place when a rigid hemispherical-head block
transversely impacts a beam at different locations. A dynamic
substructure technique for elastic-plastic impact is applied to solve
this problem numerically. The time histories of the impact force and
of the energy exchange between the block and the beam are obtained.
The process of sub-impacts is analyzed from the energy exchange
point of view. The results reveal the influence of the impact location
on the impact duration, the first sub-impact, and the energy exchange
between the beam and the block.
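As a brief illustration of this energy-exchange viewpoint, the sketch
below reconstructs the block's velocity and kinetic energy from a
sampled impact-force history via the impulse-momentum relation; the
mass, initial velocity, and force samples are illustrative assumptions,
not the paper's data.

```python
# Hedged sketch: energy bookkeeping for the impacting block from a
# sampled impact-force history F(t). Values are illustrative only.
import numpy as np

def block_energy_history(t, F, m, v0):
    # Impulse-momentum: v(t) = v0 - (1/m) * integral of F dt
    # (trapezoidal cumulative integral of the sampled force).
    impulse = np.concatenate(
        ([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(t))))
    v = v0 - impulse / m
    ke = 0.5 * m * v**2
    # Energy given up by the block to the beam up to each instant.
    return v, ke, ke[0] - ke

# Sub-impacts appear as separate pulses in F(t); intervals where F == 0
# between pulses correspond to separations of the block from the beam.
```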
Abstract: The rapid process of urban development has increased
the demand for infrastructure such as potable water supply,
electricity networks, and transportation facilities. The inefficiency
of the existing system, with its parallel urban traffic management
authorities, has widened the gap between the supply of and demand
for traffic facilities. A sustainable transport system requires measures
beyond air pollution control and the reduction of traffic or fuel
consumption, and studies show that there is no unique solution to
complicated transportation problems; solving such problems requires
a comprehensive, dynamic, and reliable mechanism. Sustainable
transport management considers the effects of transportation
development on economic efficiency, environmental issues, resource
consumption, land use, and social justice. It helps reduce
environmental effects, increases the efficiency of the transportation
system, and improves social life; it aims to enhance efficiency and to
move goods and provide services with minimum access problems,
which cannot be realized without a reorganization of strategies,
policies, and plans.
Abstract: This paper presents the use of a newly created network
structure known as a Self-Delaying Dynamic Network (SDN) to
create a high resolution image from a set of time-stepped input
frames. These SDNs are non-recurrent temporal neural networks
which can process time-sampled data. SDNs can store input data
for a lifecycle and feature dynamic, logic-based connections between
layers. Several low resolution images and one high resolution image
of a scene were presented to the SDN during training by a Genetic
Algorithm. The SDN was trained to process the input frames in order
to recreate the high resolution image. The trained SDN was then used
to enhance a number of unseen noisy image sets. The quality of high
resolution images produced by the SDN is compared to that of high
resolution images generated using bi-cubic interpolation. The
SDN-produced images are superior in several ways to the images
produced using bi-cubic interpolation.
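For context, the bi-cubic baseline can be produced with standard
image resampling; a minimal sketch, assuming Pillow and an
illustrative scale factor and file name:

```python
# Hedged sketch of the bi-cubic comparison baseline: upscale a low
# resolution frame with standard bi-cubic interpolation.
from PIL import Image

def bicubic_upscale(path, scale=4):
    # 'scale' and the file name below are illustrative assumptions.
    img = Image.open(path)
    return img.resize((img.width * scale, img.height * scale),
                      resample=Image.BICUBIC)

# hi = bicubic_upscale("low_res_frame.png")  # baseline vs. SDN output
```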
Abstract: Money laundering has been described by many as the lifeblood of crime and is a major threat to the economic and social well-being of societies. It has been recognized that the banking system has long been the central element of money laundering. This is in part due to the complexity and confidentiality of the banking system itself. It is generally accepted that effective anti-money laundering (AML) measures adopted by banks will make it tougher for criminals to get their "dirty money" into the financial system. In fact, for law enforcement agencies, banks are considered to be an important source of valuable information for the detection of money laundering. However, from the banks' perspective, the main reason for their existence is to make as much profit as possible. Hence their cultural and commercial interests are quite distinct from those of the law enforcement authorities. Undoubtedly, AML laws create a major dilemma for banks as they produce a significant shift in the way banks interact with their customers. Furthermore, the implementation of the laws not only creates significant compliance problems for banks, but also has the potential to adversely affect the operations of banks. As such, it is legitimate to ask whether these laws are effective in preventing money launderers from using banks, or whether they simply put an unreasonable burden on banks and their customers. This paper attempts to address these issues and analyze them against the background of the Malaysian AML laws. It must be said that effective coordination between the AML regulator and the banking industry is vital to minimize the problems faced by the banks and thereby to ensure effective implementation of the laws in combating money laundering.
Abstract: Comparison of two approaches for the simulation of
the dynamic behaviour of a permanent magnet linear actuator is
presented. These are a fully coupled model, where the electromagnetic
field, electric circuit, and mechanical motion problems are solved
simultaneously, and a decoupled model, where first a set of static
magnetic field analyses is carried out and then the electric circuit and
mechanical motion equations are solved employing bi-cubic spline
approximations of the field analysis results. The results show that the
proposed decoupled model is of satisfactory accuracy and gives more
flexibility when the actuator response is required to be estimated for
different external conditions, e.g. external circuit parameters or
mechanical loads.
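A minimal sketch of the decoupled approach, assuming placeholder
force and flux-linkage maps on a (position, current) grid in place of
real static field analyses; the parameter values and names are
illustrative, not the paper's model:

```python
# Hedged sketch: spline-interpolated static field results feeding the
# circuit/motion ODEs. All grid data and parameters are placeholders.
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.integrate import solve_ivp

x_grid = np.linspace(0.0, 0.02, 11)      # armature position [m]
i_grid = np.linspace(-5.0, 5.0, 11)      # coil current [A]
F_map = np.outer(np.sin(np.pi * x_grid / 0.02), i_grid)  # placeholder force
psi_map = 0.1 * np.outer(np.ones_like(x_grid), i_grid)   # placeholder flux

F_s = RectBivariateSpline(x_grid, i_grid, F_map)    # bi-cubic by default
psi_s = RectBivariateSpline(x_grid, i_grid, psi_map)

R, m, u = 2.0, 0.05, 12.0  # resistance [ohm], mass [kg], supply voltage [V]

def rhs(t, y):
    x, v, i = y
    dpsi_dx = psi_s.ev(x, i, dx=1)  # partial derivatives of flux linkage
    dpsi_di = psi_s.ev(x, i, dy=1)
    di = (u - R * i - dpsi_dx * v) / dpsi_di  # circuit equation
    dv = F_s.ev(x, i) / m                     # mechanical motion equation
    return [v, dv, di]

sol = solve_ivp(rhs, (0.0, 0.05), [0.005, 0.0, 0.0], max_step=1e-4)
```

Swapping the external circuit parameters or mechanical load only
changes `rhs`, which is precisely the flexibility the decoupled model
offers: the field maps need not be recomputed.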
Abstract: In this empirical research, how marketing managers evaluate their firms' performance and decide to innovate is examined. They use certain standards, namely the firm's past performance, the firm's target performance, competitor performance, and the average performance of the industry, to compare and evaluate their firms' performance. It is hypothesized that marketing managers and firm owners compare the firm's current performance with these four standards simultaneously to decide when to innovate in any aspect of the firm, whether management style or products. The relationship between the comparison of the firm's performance with these standards and innovation is investigated in a single regression model. The results of the regression analysis are discussed, and some recommendations are made for future studies and practitioners.
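A minimal sketch of such a regression, with synthetic data and
illustrative variable names standing in for the survey measures (the
paper's operationalization is not reproduced here):

```python
# Hedged sketch: innovation regressed on the gaps between current firm
# performance and the four comparison standards, in one model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
perf = rng.normal(size=n)                        # current performance
past, target, comp, ind = (rng.normal(size=n) for _ in range(4))

# The four performance gaps entered simultaneously as predictors.
X = sm.add_constant(np.column_stack(
    [perf - past, perf - target, perf - comp, perf - ind]))
innovation = rng.normal(size=n)                  # placeholder outcome
print(sm.OLS(innovation, X).fit().summary())
```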
Abstract: The objectives of this research are to produce
prototype coconut oil based solvent offset printing inks and to
analyze the basic quality of printing work derived from these inks,
by means of using coconut oil to produce varnish and using that
varnish to produce black offset printing ink. The qualities, i.e.,
CIELAB value, density value, and dot gain value, of printing work
from the coconut oil based solvent offset printing inks printed on
130 g/m² gloss-coated woodfree paper were then analyzed. The
results indicated that a suitable varnish formulation uses 51%
coconut oil, 36% phenolic resin, and 14% solvent oil, while the
result of producing the black offset ink showed that a suitable
printing ink formula uses the varnish mixed with 20% coconut oil.
For the printing work of the coconut oil based solvent offset printing
inks printed on paper, the results were as follows: the CIELAB value
of the black offset printing ink is L* = 31.90, a* = 0.27, and b* =
1.86; the density value is 1.27; and the dot gain value was high in
the midtone area of the image.
Abstract: Requirements are critical to system validation as they guide all subsequent stages of systems development. Inadequately specified requirements generate systems that require major revisions or cause system failure entirely. Use Cases have become the main vehicle for requirements capture in many current Object Oriented (OO) development methodologies, and a means for developers to communicate with different stakeholders. In this paper we present the results of a laboratory experiment that explored whether different types of use case format are equally effective in facilitating the understanding of high-knowledge users. Results showed that the provision of diagrams along with the textual use case descriptions significantly improved user comprehension of system requirements in both familiar and unfamiliar application domains. However, when comparing groups that received textual descriptions accompanied by diagrams of different levels of detail (simple and detailed), we found no significant difference in performance.
Abstract: In this note, we investigate the blind source separability of linear FIR-MIMO systems. The concept of semi-reversibility of a system is presented. It is shown that for a semi-reversible system, if the input signals belong to a binary alphabet, then the source data can be blindly separated. One sufficient condition for a system to be semi-reversible is obtained. It is also shown that the proposed criterion is weaker than those in the literature, which require that the channel matrix be irreducible/invertible or reversible.
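For reference, the linear FIR-MIMO model commonly considered in
this setting (a standard formulation, not quoted from the note) is

$$\mathbf{y}(n) \;=\; \sum_{k=0}^{L} \mathbf{H}_k\,\mathbf{s}(n-k) \;+\; \mathbf{v}(n),$$

where the entries of $\mathbf{s}(n)$ are drawn from a binary alphabet
as in the abstract, $\{\mathbf{H}_k\}$ are the channel matrix taps, and
$\mathbf{v}(n)$ is additive noise.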
Abstract: Falling has been one of the major concerns and threats
to the independence of the elderly in their daily lives. With the
significant worldwide growth of the aging population, it is essential
to have a fall detection solution that operates at high accuracy in
real time and supports large-scale implementation using multiple
cameras. The Field Programmable Gate Array (FPGA) is a highly
promising tool to be used as a hardware accelerator in many
emerging embedded vision-based systems. Thus, the main objective
of this paper is to present an FPGA-based solution for visual-based
fall detection that meets stringent real-time requirements with high
accuracy. A hardware architecture for visual-based fall detection
which utilizes pixel locality to reduce memory accesses is proposed.
By exploiting the parallel and pipeline architecture of the FPGA, our
hardware implementation of visual-based fall detection is able to
achieve a performance of 60 fps for a series of video analytical
functions at VGA resolution (640x480). The results of this work
show that FPGAs have great potential and impact in enabling
large-scale vision systems in the future healthcare industry due to
their flexibility and scalability.
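As a sanity check on the stated real-time target, the minimum
sustained pixel throughput per camera is

$$640 \times 480 \times 60\ \mathrm{fps} \;=\; 18{,}432{,}000 \;\approx\; 18.4\ \mathrm{Mpixels/s},$$

which the pixel-locality and pipelining choices described above are
meant to sustain.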
Abstract: With the development of the Internet and database application techniques, the demand for authorized users to remotely query and access many databases over the Internet has become common, and the problem of how to protect the copyright of relational databases arises. This paper first briefly introduces the cloud model, including cloud generators and similar clouds. Then, combining the properties of the cloud with the idea of digital watermarking and the properties of relational databases, a method of protecting relational database copyright with a cloud watermark is proposed. The corresponding watermark algorithms, namely the cloud watermark embedding algorithm and the detection algorithm, are also presented. Some experiments are then run and the results analyzed to validate the correctness and feasibility of the watermark scheme. In the end, the prospects of watermarking relational databases and its research directions are discussed.
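For background, a minimal sketch of a forward normal cloud
generator, the basic building block of the cloud model mentioned
above; the (Ex, En, He) values are illustrative, and the paper's
embedding algorithm itself is not reproduced:

```python
# Hedged sketch: forward normal cloud generator producing cloud drops
# and their certainty degrees from (Ex, En, He). Values illustrative.
import numpy as np

def normal_cloud(Ex, En, He, n):
    rng = np.random.default_rng(0)
    En_prime = rng.normal(En, He, size=n)      # per-drop entropy sample
    x = rng.normal(Ex, np.abs(En_prime))       # cloud drop positions
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2))  # certainty degree
    return x, mu

drops, certainty = normal_cloud(Ex=0.0, En=1.0, He=0.1, n=1000)
```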
Abstract: In this manuscript, a wavelet-based blind
watermarking scheme is proposed as a means to secure the
authenticity of a fingerprint. The information used for identification
or verification of a fingerprint mainly lies in its minutiae. By
robustly watermarking the minutiae in the fingerprint image itself,
the useful information can be extracted accurately even if the
fingerprint is severely degraded. The minutiae are converted into a
binary watermark, and embedding this watermark in the detail
regions increases the robustness of the watermarking at little to no
additional impact on image quality. It has been experimentally shown
that when the minutiae are embedded into the wavelet detail
coefficients of a fingerprint image in a spread spectrum fashion using
a pseudorandom sequence, the robustness responds proportionally,
while the perceptual invisibility responds inversely, to the
amplification factor K. The DWT-based technique has been found to
be very robust against noise, geometrical distortions, filtering, and
JPEG compression attacks, and is also found to give remarkably
better performance than the DCT-based technique in terms of
correlation coefficient and number of erroneous minutiae.
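A minimal sketch of spread spectrum embedding in DWT detail
coefficients of the kind described; the wavelet, key, and amplification
factor K here are illustrative assumptions, not the paper's parameters:

```python
# Hedged sketch: additive spread-spectrum watermarking of binary bits
# (e.g., encoded minutiae) in DWT horizontal detail coefficients.
import numpy as np
import pywt

def embed(image, bits, K=2.0, key=42):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    rng = np.random.default_rng(key)
    flat = cH.ravel()
    w = 2.0 * np.asarray(bits) - 1.0             # map {0,1} -> {-1,+1}
    p = rng.choice([-1.0, 1.0], size=flat.size)  # pseudorandom sequence
    chips = flat.size // len(bits)               # coefficients per bit
    for i, b in enumerate(w):
        s = slice(i * chips, (i + 1) * chips)
        flat[s] += K * b * p[s]                  # additive mark, scaled by K
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), 'haar')

def detect(marked, n_bits, key=42):
    _, (cH, _, _) = pywt.dwt2(marked.astype(float), 'haar')
    rng = np.random.default_rng(key)
    flat = cH.ravel()
    p = rng.choice([-1.0, 1.0], size=flat.size)
    chips = flat.size // n_bits
    # Correlate each chip block with the spreading sequence; the sign
    # of the correlation recovers the bit.
    return [int(np.dot(flat[i*chips:(i+1)*chips],
                       p[i*chips:(i+1)*chips]) > 0)
            for i in range(n_bits)]
```

Raising K strengthens the correlation (robustness) but enlarges the
coefficient perturbation (visibility), matching the trade-off the
abstract reports.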
Abstract: In this paper, we propose a modified version of the
Constant Modulus Algorithm (CMA) tailored for the blind Decision
Feedback Equalizer (DFE) of first-order Markovian time-varying
channels. The proposed NonStationary CMA (NSCMA) is designed
so that it explicitly takes into account the Markovian structure of
the channel nonstationarity. Hence, unlike the classical CMA, the
NSCMA is not blind with respect to the channel time variations.
This greatly helps the equalizer in the case of realistic channels and
avoids frequent transmissions of training sequences.
This paper develops a theoretical analysis of the steady state
performance of the CMA and the NSCMA for DFEs within a
time-varying context. To this end, approximate expressions of the
mean square errors are derived. We prove that in the steady state, the
NSCMA exhibits better performance than the classical CMA. These
new results are confirmed by simulation.
Through an experimental study, we demonstrate that the Bit Error
Rate (BER) is reduced by the NSCMA-DFE, and that the BER
improvement achieved by the NSCMA-DFE becomes more
significant as the channel time variations become more severe.
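For reference, the classical CMA baseline uses the standard Godard
cost and stochastic gradient update (the NSCMA modification itself
is detailed in the paper):

$$J_{\mathrm{CMA}} = \mathbb{E}\big[(|y(n)|^{2} - R_{2})^{2}\big], \qquad R_{2} = \frac{\mathbb{E}[|s(n)|^{4}]}{\mathbb{E}[|s(n)|^{2}]},$$

$$\mathbf{w}(n+1) = \mathbf{w}(n) - \mu\, y(n)\big(|y(n)|^{2} - R_{2}\big)\mathbf{x}^{*}(n), \qquad y(n) = \mathbf{w}^{H}(n)\,\mathbf{x}(n).$$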
Abstract: There is a real threat to VIPs' personal pages on
Social Network Sites (SNS). The main threats to these pages are
violation of privacy and identity theft through the creation of fake
pages that exploit their names and pictures to attract victims and
spread lies. In this paper, we propose a new secure architecture that
improves trust and provides an effective solution to reduce fake
pages and make genuine VIP pages on SNS recognizable. The
proposed architecture works as a third party added to Facebook to
provide a trust service for the personal pages of VIPs. Through this
mechanism, it verifies the real identity of the applicant through
electronic authentication of personal information, by storing this
information within the content of the applicant's own website. As a
result, the significance of the proposed architecture is that it secures
and provides trust to the personal pages of VIPs. Furthermore, it can
help discover fake pages, protect privacy, reduce identity-theft
crimes, and increase the sense of trust and satisfaction of friends and
admirers interacting with SNS.
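A minimal sketch of the website-based identity check described
above: the service issues a token, the applicant publishes it on a
website known to be theirs, and the service confirms its presence.
The token scheme and function names are illustrative assumptions,
not the paper's exact protocol:

```python
# Hedged sketch: tie SNS page ownership to website ownership by
# checking for a service-issued token in the site's content.
import secrets
import urllib.request

def issue_token():
    # Random token the applicant must publish on their official site.
    return secrets.token_urlsafe(32)

def verify_site(url, token, timeout=10):
    # Fetch the applicant's page and check that the issued token
    # appears in its content.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        page = resp.read().decode('utf-8', errors='replace')
    return token in page
```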
Abstract: This paper presents an application of level sets for the
segmentation of abdominal and thoracic aortic aneurysms in CTA
datasets. An important challenge in reliably detecting aortic
aneurysms is the need to overcome problems associated with
intensity inhomogeneities. Level sets are part of an important class
of methods that utilize partial differential equations (PDEs) and have
been extensively applied in image segmentation. A kernel function in
the level set formulation aids the suppression of noise in the extracted
regions of interest and then guides the motion of the evolving contour
for the detection of weak boundaries. The speed of curve evolution
has been significantly improved, with a resulting decrease in
segmentation time compared with previous implementations of level
sets, and the method is shown to be more effective than other
approaches in coping with intensity inhomogeneities. We have
applied the Courant-Friedrichs-Lewy (CFL) condition as the stability
criterion for our algorithm.
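For reference, the level set evolution and a common form of the CFL
time-step bound (the exact constant used in the paper may differ) are

$$\frac{\partial \phi}{\partial t} + F\,\lvert\nabla\phi\rvert = 0, \qquad \Delta t \;\le\; \frac{\Delta x}{\max_{\Omega}\lvert F\rvert},$$

where $\phi$ is the level set function and $F$ the speed of the
evolving contour.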
Abstract: In this paper, a mathematical model of the human
immunodeficiency virus (HIV) is utilized and an optimization
problem is proposed, with the final goal of implementing an optimal
900-day structured treatment interruption (STI) protocol. Two types
of drugs commonly used in highly active antiretroviral therapy
(HAART), reverse transcriptase inhibitors (RTI) and protease
inhibitors (PI), are considered. In order to solve the proposed
optimization problem, an adaptive memetic algorithm with
population management (AMAPM) is proposed. The AMAPM uses
a distance measure to control the diversity of the population in
genotype space, thus preventing stagnation and premature
convergence. Moreover, the AMAPM uses a diversity parameter in
phenotype space to dynamically set the population size and the
number of crossovers during the search process. Three crossover
operators diversify the population simultaneously, and the progress
of each crossover operator is used to set the number of times it is
applied per generation. In order to escape local optima and introduce
new search directions toward the global optimum, two local
searchers assist the evolutionary process. In contrast to traditional
memetic algorithms, the activation of these local searchers is not
random but depends on the diversity parameters in both genotype
space and phenotype space. The capability of the AMAPM to find
optimal solutions is compared with that of three popular
metaheuristics.
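A heavily simplified skeleton of the diversity-gated local search idea
(minimization), with illustrative distance measure, threshold, and
operator interfaces; the AMAPM's actual mechanisms differ in detail:

```python
# Hedged skeleton: local search is activated based on a genotype-space
# diversity measure rather than at random. All details illustrative.
import random

def genotype_diversity(pop):
    # Mean pairwise normalized Hamming-style distance.
    n = len(pop)
    d = sum(sum(a != b for a, b in zip(p, q))
            for i, p in enumerate(pop) for q in pop[i+1:])
    return 2.0 * d / (n * (n - 1) * len(pop[0]))

def memetic_step(pop, fitness, crossover, local_searchers, d_min=0.1):
    # Recombine, then refine only when diversity is adequate, so local
    # search does not collapse an already-converging population.
    children = [crossover(random.choice(pop), random.choice(pop))
                for _ in pop]
    if genotype_diversity(children) > d_min:
        children = [random.choice(local_searchers)(c, fitness)
                    for c in children]
    return sorted(pop + children, key=fitness)[:len(pop)]  # keep best
```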
Abstract: This paper presents an approach to biometric
authentication through Keystroke Dynamics, which aims to identify
a person by his or her habitual typing rhythm on a conventional
keyboard. Seven experiments were carried out, examining the
number of prototypes, the threshold, the features, and the variation
in the choice of timing measures for the feature vector. The results
show that the use of Keystroke Dynamics is simple and efficient for
personal authentication, with optimal results obtained using 90% of
the features, yielding a 4.44% FRR and a 0% FAR.
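A minimal sketch of prototype-plus-threshold verification of the kind
described; the distance metric and threshold are illustrative
assumptions:

```python
# Hedged sketch: accept a typing sample if it is close enough to one
# of the user's enrolled timing prototypes.
import numpy as np

def verify(sample, prototypes, threshold=0.15):
    # sample: timing feature vector (e.g., hold and inter-key latencies)
    # prototypes: enrolled feature vectors for the claimed user
    dists = [np.linalg.norm(np.asarray(sample) - np.asarray(p))
             / len(sample) for p in prototypes]
    # The threshold trades FRR (false rejections of genuine users)
    # against FAR (false accepts of impostors).
    return min(dists) <= threshold
```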
Abstract: In the present work, an investigation of the effects of
the air frontal velocity, relative humidity, and dry air temperature on
the heat transfer characteristics of a plain finned tube evaporator has
been conducted. Using an appropriate correlation for the air-side heat
transfer coefficient, the temperature distribution along the fin surface
was calculated in terms of a dimensionless temperature distribution.
For constant relative humidity and bulb temperature, it is found that
the temperature distribution decreases with increasing air frontal
velocity. This is attributed to the condensate water film flowing over
the fin surface. When the dry air temperature and face velocity are
kept constant, the temperature distribution decreases with increasing
inlet relative humidity. An increase in the inlet relative humidity is
accompanied by a higher amount of moisture on the fin surface. This
results in a higher amount of latent heat transfer, which entails a
higher fin surface temperature. Regarding the influence of the dry air
temperature, the results show an increase in the dimensionless
temperature parameter with a decrease in bulb temperature.
Increasing the bulb temperature leads to a higher amount of sensible
and latent heat transfer when other conditions remain constant.
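For context only, the classical dry-fin dimensionless temperature
distribution (straight fin, adiabatic tip) is

$$\theta(x) = \frac{T(x) - T_{\infty}}{T_{b} - T_{\infty}} = \frac{\cosh m(L-x)}{\cosh mL}, \qquad m = \sqrt{\frac{hP}{kA_{c}}},$$

where $h$ is the air-side heat transfer coefficient, $P$ and $A_{c}$
are the fin perimeter and cross-section, and $k$ is the fin
conductivity; the wet-fin analysis in the paper additionally accounts
for the latent heat of the condensing moisture.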