Abstract: The biodiversity crisis is one of the many crises that
emerged at the turn of the millennium. Its concrete form of expression is
still disputed, but there is a relatively high consensus regarding the
high rate of degradation and the urgent need for action. The strategy
of action outlines a strong economic component, together with the
recognition of market mechanisms as the most effective policies to
protect biodiversity. In this context, biodiversity and ecosystem
services are natural assets that play a key role in economic strategies
and technological development to promote development and
prosperity. Developing and strengthening policies for transition to an
economy based on efficient use of resources is the way forward.
To emphasize the co-viability specific to the economy-ecosystem
services connection, the scientific approach addresses, on the one hand,
how to implement policies for nature conservation and, on the other hand,
the concepts underlying the economic expression of ecosystem services'
value in the context of current technology. Following the analysis of
business opportunities associated with changes in ecosystem services,
it was concluded that the development of market mechanisms for nature
conservation is a trend that has become increasingly distinct
in recent years. Although there are still many controversial issues
that have already given rise to an obvious bias, international
organizations and national governments have initiated and
implemented in cooperation or independently such mechanisms.
Consequently, they have created the conditions for convergence between
private interests and the social interest in nature conservation, so there
are opportunities for ongoing business development that leads,
among other things, to positive effects on biodiversity. Finally,
it is pointed out that markets fail to quantify the value of most ecosystem
services. Existing price signals reflect, at best, only a proportion of the
total value corresponding to the provision of food, water or fuel.
Abstract: Aggressive scaling of MOS devices requires the use of ultra-thin gate oxides to keep short-channel effects reasonable and to take advantage of higher density, higher speed, lower cost, etc. Such thin oxides give rise to high electric fields, resulting in considerable gate tunneling current through the gate oxide in the nano regime. Consequently, accurate analysis of gate tunneling current is very important, especially in the context of low-power applications. In this paper, a simple and efficient analytical model is developed for the channel and source/drain overlap region gate tunneling current through an ultra-thin gate oxide n-channel MOSFET with the inevitable deep sub-micron effect (DSME). The results obtained have been verified against simulated and reported experimental results for the purpose of validation. It is shown that the calculated tunnel current fits the measured one well over the entire oxide thickness range. The proposed model is simple enough to be used in circuit simulators. It is observed that neglecting the deep sub-micron effect may lead to large errors in the calculated gate tunneling current. It is found that temperature has an almost negligible effect on gate tunneling current. It is also reported that gate tunneling current decreases as gate oxide thickness increases. The impact of source/drain overlap length on gate tunneling current is also assessed.
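For orientation only, since the abstract does not reproduce the authors' analytical model: the classic Fowler-Nordheim expression for gate tunneling current density illustrates the exponential dependence on the oxide field $E_{ox} = V_{ox}/t_{ox}$ that makes ultra-thin oxides leaky,
\[
J_{\mathrm{FN}} = \frac{q^{3} E_{ox}^{2}}{8\pi h \phi_{b}} \exp\!\left(-\frac{8\pi\sqrt{2m^{*}}\,\phi_{b}^{3/2}}{3 h q E_{ox}}\right),
\]
where $\phi_{b}$ is the Si/SiO$_2$ barrier height and $m^{*}$ the carrier effective mass in the oxide. In the direct-tunneling regime relevant to ultra-thin oxides the exponent takes a modified form, but the exponential sensitivity to oxide thickness persists, consistent with the reported decrease of gate tunneling current with increasing $t_{ox}$.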
Abstract: In the context of large volume Big Divisor (nearly)
SLagy D3/D7 μ-Split SUSY [1], after an explicit identification
of the first generation of SM leptons and quarks with fermionic superpartners
of four Wilson line moduli, we discuss the identification of
gravitino as a potential dark matter candidate by explicitly calculating
the decay lifetimes of the gravitino (LSP) to be greater than the age of
the universe, and the lifetimes of decays of the co-NLSPs (the first-generation
squark/slepton and a neutralino) to the LSP (the gravitino) to be
short enough to respect BBN constraints. Interested in the non-thermal
production mechanism of the gravitino, we evaluate the relic abundance
of the gravitino LSP in terms of that of the co-NLSPs by evaluating
their (co-)annihilation cross sections, and hence show that the former
satisfies the requirement for a potential Dark Matter candidate. We
also show that it is possible to obtain a 125 GeV light Higgs in our
setup.
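As a point of reference for the relic-abundance argument (the abstract does not spell out the authors' formulas), the standard relation for non-thermal gravitino production from NLSP decays is
\[
\Omega_{3/2} h^{2} \simeq \frac{m_{3/2}}{m_{\mathrm{NLSP}}}\, \Omega_{\mathrm{NLSP}} h^{2},
\]
since each NLSP decays to one gravitino, and $\Omega_{\mathrm{NLSP}} h^{2}$ is fixed at freeze-out by the (co-)annihilation cross section, roughly $\Omega_{\mathrm{NLSP}} h^{2} \propto 1/\langle \sigma_{\mathrm{eff}} v \rangle$. Whether this is exactly the treatment used in the setup above is an assumption, but it captures why evaluating the co-NLSP (co-)annihilation cross sections determines the gravitino relic abundance.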
Abstract: This work aims to explore the factors that have an impact on the reading comprehension process with different types of texts. In a recent study with 2nd, 3rd and 4th grade children, it was observed that reading comprehension of narrative texts was better than comprehension of expository texts. Nevertheless, it seems that not only the type of text but also other textual factors would account for comprehension, depending on the cognitive processing demands posed by the text. In order to explore this assumption, three narrative and three expository texts were elaborated with different degrees of complexity. A group of 40 fourth-grade Spanish-speaking children took part in the study. Children were asked to read the texts and answer orally three literal and three inferential questions for each text. The quantitative and qualitative analysis of children's responses showed that children had difficulties with both narrative and expository texts. The problem was in answering those questions that involved establishing complex relationships among information units that were present in the text or that had to be activated from children's previous knowledge to make an inference. Considering the data analysis, it can be concluded that there is some interaction between the type of text and the cognitive processing load of a specific text.
Abstract: Purpose: To explore the use of the Curvelet transform to
extract texture features of pulmonary nodules in CT images and of a support
vector machine to establish a prediction model for small solitary
pulmonary nodules, in order to improve the rate of detection and
diagnosis of early-stage lung cancer. Methods: 2461 benign or
malignant small solitary pulmonary nodules in CT images from 129
patients were collected. Fourteen Curvelet transform textural features
were used as parameters to establish the support vector machine prediction
model. Results: Compared with other methods, using 252 texture
features as parameters to establish the prediction model is more appropriate.
The classification consistency, sensitivity and specificity for the
model are 81.5%, 93.8% and 38.0%, respectively. Conclusion: Based
on texture features extracted with the Curvelet transform, the support
vector machine prediction model is sensitive to lung cancer, and it can
improve the rate of diagnosis of early-stage lung cancer to some
extent.
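A minimal sketch of the classification stage described above, assuming the Curvelet texture features have already been extracted per nodule (the Curvelet extraction itself is not shown; the random `features`/`labels` arrays are placeholders for the real data):

```python
# Minimal sketch of the SVM stage (placeholder data; the Curvelet feature
# extraction itself is not shown, and `features`/`labels` are hypothetical).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
features = rng.normal(size=(2461, 14))  # 14 Curvelet texture features per nodule
labels = rng.integers(0, 2, size=2461)  # 1 = malignant, 0 = benign (placeholder)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
model = SVC(kernel="rbf", C=1.0, gamma="scale")  # RBF-kernel SVM classifier
model.fit(scaler.transform(X_train), y_train)

tn, fp, fn, tp = confusion_matrix(
    y_test, model.predict(scaler.transform(X_test))).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```

Sensitivity and specificity are computed from the confusion matrix exactly as reported in the abstract (TP/(TP+FN) and TN/(TN+FP)).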
Abstract: The hybridization of artificial immune system with
cellular automata (CA-AIS) is a novel method. In this hybrid model,
each cell of the cellular automaton deploys the artificial immune
system algorithm in an optimization context in order to increase its
fitness by using its neighbors' efforts. The hybrid model CA-AIS is
introduced to fix the weaknesses of the standard artificial immune system.
The credibility of the proposed approach is evaluated by simulations,
which show that it achieves better results than the standard
artificial immune system.
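An illustrative sketch of the hybrid idea, not the authors' exact algorithm: each cell of a two-dimensional cellular automaton holds one antibody (candidate solution) and improves it by cloning and mutating the best antibody found in its Moore neighborhood, so a cell's fitness grows through its neighbors' efforts (the toy objective and all parameters are assumptions).

```python
# Illustrative CA-AIS sketch (assumed parameters, toy objective): each
# CA cell holds one antibody and clones/mutates its neighborhood's best.
import numpy as np

def fitness(x):
    return -np.sum(x**2)  # toy objective: maximize -||x||^2 (optimum at 0)

def ca_ais(grid=10, dim=5, clones=5, steps=50, rng=np.random.default_rng(0)):
    pop = rng.uniform(-5, 5, size=(grid, grid, dim))
    for _ in range(steps):
        new = pop.copy()  # synchronous CA update
        for i in range(grid):
            for j in range(grid):
                # Moore neighborhood with wrap-around, including the cell
                nbrs = [pop[(i + di) % grid, (j + dj) % grid]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)]
                best = max(nbrs, key=fitness)
                # clonal selection: mutate clones of the neighborhood best
                cand = best + rng.normal(0, 0.1, size=(clones, dim))
                new[i, j] = max(list(cand) + [pop[i, j]], key=fitness)
        pop = new
    return max(pop.reshape(-1, dim), key=fitness)

print(ca_ais())
```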
Abstract: An efficient knowledge management system (KMS)
is one of the important strategies to help firms achieve sustainable
competitive advantage, but little research has been conducted to
understand what contributes to KMS success. This study thus set
out to investigate the determinants of KMS success in the context of the Thai
banking industry. A questionnaire survey was conducted in four
major Thai Banks to test the proposed KMS Success model.
The results of this study show that KMS use and user satisfaction
relate significantly to the success of the KMS; knowledge quality,
service quality and trust lead to system use, while knowledge quality,
system quality and trust lead to user satisfaction. However, this
research focuses only on system and user-related factors. Future
research thus can extend to study factors such as management support
and organization readiness.
Abstract: This paper describes a 3D modeling system in
Augmented Reality environment, named 3DARModeler. It can be
considered a simple version of 3D Studio Max with necessary
functions for a modeling system such as creating objects, applying
texture, adding animation, estimating real light sources and casting
shadows. The 3DARModeler introduces convenient and effective
human-computer interaction to build 3D models by combining both
the traditional input method (mouse/keyboard) and the tangible input
method (markers). It has the ability to align a new virtual object with
the existing parts of a model. The 3DARModeler targets non-technical
users; as such, they do not need much knowledge of
computer graphics and modeling techniques. All they have to do is
select basic objects, customize their attributes, and put them together
to build a 3D model in a simple and intuitive way, as if they were
doing so in the real world. Using the hierarchical modeling technique,
the users are able to group several basic objects to manage them as a
unified, complex object. The system can also interoperate with other 3D
systems by importing and exporting VRML/3ds Max files. A
module of speech recognition is included in the system to provide
flexible user interfaces.
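As a generic illustration of the hierarchical modeling technique mentioned above (not 3DARModeler's actual code, and using translation only in place of full transforms), grouping nodes lets one operation move a complex object as a unit:

```python
# Generic scene-graph illustration of hierarchical modeling (hypothetical
# structure, not 3DARModeler's implementation): moving a group node moves
# all of its children, because child positions are relative to the parent.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    translation: tuple = (0.0, 0.0, 0.0)      # local offset from parent
    children: list = field(default_factory=list)

    def world_positions(self, parent_pos=(0.0, 0.0, 0.0)):
        pos = tuple(p + t for p, t in zip(parent_pos, self.translation))
        yield self.name, pos
        for child in self.children:
            yield from child.world_positions(pos)

# A "table" group: moving the group moves the top and the leg together.
table = Node("table", (1.0, 0.0, 0.0), [
    Node("top", (0.0, 0.7, 0.0)),
    Node("leg", (0.4, 0.0, 0.4)),
])
for name, pos in table.world_positions():
    print(name, pos)
```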
Abstract: Encryption protects communication partners from
disclosure of their secret messages but cannot prevent traffic analysis
and the leakage of information about “who communicates with
whom”. In the presence of collaborating adversaries, this linkability
of actions can endanger anonymity. However, reliably providing
anonymity is crucial in many applications. Especially in context-aware
mobile business, where mobile users equipped with PDAs
request and receive services from service providers, providing
anonymous communication is mission-critical and challenging at the
same time. Firstly, the limited performance of mobile devices does
not allow for heavy use of expensive public-key operations which are
commonly used in anonymity protocols. Moreover, the demands for
security depend on the application (e.g., mobile dating vs. pizza
delivery service), but different users (e.g., a celebrity vs. a normal
person) may even require different security levels for the same
application. Considering both the hardware limitations of mobile devices
and the different sensitivities of users, we propose an anonymity
framework that is dynamically configurable according to user and
application preferences. Our framework is based on Chaum's mix-net.
We explain the proposed framework, its configuration
parameters for the dynamic behavior and the algorithm to enforce
dynamic anonymity.
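A toy sketch of the layered ("onion") encryption at the heart of Chaum's mix-net design: the sender wraps the message once per mix, and each mix peels exactly one layer. Real mixes use their public keys and also batch and reorder messages; the symmetric Fernet keys here are a simplifying assumption to keep the sketch short and runnable.

```python
# Toy mix-net layering sketch (Fernet symmetric keys stand in for the
# mixes' public keys; batching/reordering by each mix is omitted).
from cryptography.fernet import Fernet

mix_keys = [Fernet.generate_key() for _ in range(3)]  # one key per mix

def wrap(message: bytes, keys) -> bytes:
    # Encrypt for the last mix first, so the first mix peels the outer layer.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

onion = wrap(b"who-talks-to-whom stays hidden", mix_keys)
for key in mix_keys:              # each mix in turn removes one layer
    onion = Fernet(key).decrypt(onion)
print(onion)
```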
Abstract: After the Terengganu state government decided to boost teaching and learning through the allocation of free e-books to all Primary Five and Six students, it was time to examine the presence of e-books in the classrooms. A survey was conducted on 101 students to determine how they felt about using the e-book and their experiences with it. It was discovered that a majority of these students liked using the e-book. However, although they had few problems using the e-book and it helped lighten their schoolbags, these new-age textbooks were not fully utilized. This implies that school administrators, teachers and students may not yet have overcome the unfamiliar characteristics of the e-book and its limitations.
Abstract: Compression algorithms reduce the redundancy in
data representation to decrease the storage required for that data.
Lossless compression researchers have developed highly
sophisticated approaches, such as Huffman encoding, arithmetic
encoding, the Lempel-Ziv (LZ) family, Dynamic Markov
Compression (DMC), Prediction by Partial Matching (PPM), and
Burrows-Wheeler Transform (BWT) based algorithms.
Decompression is also required to retrieve the original data by
lossless means. A compression scheme for text files is proposed, coupled with
the principle of dynamic decompression, which decompresses only
the section of the compressed text file required by the user instead of
decompressing the entire file. Dynamically decompressed files offer
better disk space utilization due to higher compression ratios
compared to most of the currently available text file formats.
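A minimal sketch of the dynamic-decompression principle, not the paper's exact scheme: if the text is compressed in independent fixed-size chunks, any requested section can be recovered by decompressing only the chunks it overlaps (the chunk size and the in-memory list standing in for a file format are assumptions).

```python
# Sketch of dynamic decompression (assumed chunked layout, not the
# paper's format): compress fixed-size chunks independently so any
# requested section can be decompressed without touching the rest.
import zlib

CHUNK = 4096  # uncompressed bytes per independently-compressed chunk

def compress_chunked(text: bytes):
    return [zlib.compress(text[i:i + CHUNK])
            for i in range(0, len(text), CHUNK)]

def read_section(chunks, offset: int, size: int) -> bytes:
    # Decompress only the chunks overlapping [offset, offset + size).
    first, last = offset // CHUNK, (offset + size - 1) // CHUNK
    data = b"".join(zlib.decompress(c) for c in chunks[first:last + 1])
    start = offset - first * CHUNK
    return data[start:start + size]

chunks = compress_chunked(b"hello world " * 10000)
print(read_section(chunks, 50000, 24))
```

A real file format would also store an index of compressed-chunk offsets so a reader can seek directly to the chunks it needs.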
Abstract: In H.264/AVC video encoding, rate-distortion
optimization for mode selection plays a significant role in achieving
outstanding performance in compression efficiency and video quality.
However, this mode selection process also makes the encoding
process extremely complex, especially in the computation of the rate-distortion
cost function, which includes the computation of the sum
of squared differences (SSD) between the original and reconstructed
image blocks and the context-based entropy coding of the block. In this
paper, a transform-domain rate-distortion optimization accelerator
based on fast SSD (FSSD) and VLC-based rate estimation algorithm
is proposed. This algorithm significantly simplifies the hardware
architecture for the rate-distortion cost computation with only
negligible performance degradation. An efficient hardware structure
for implementing the proposed transform-domain rate-distortion
optimization accelerator is also proposed. Simulation results
demonstrate that the proposed algorithm reduces total
encoding time by about 47% with negligible degradation of coding performance.
The proposed method can be easily applied to many mobile video
application areas such as digital cameras and DMB (Digital
Multimedia Broadcasting) phones.
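A sketch of the transform-domain idea (illustrative only; the paper's FSSD and VLC-based rate estimator are not reproduced): for an orthonormal transform, Parseval's relation makes the SSD between original and reconstructed blocks computable directly on the quantized coefficients, so no inverse transform or pixel-domain reconstruction is needed per candidate mode; a nonzero-coefficient count stands in here for the rate term.

```python
# Transform-domain RD cost sketch (assumed quantizer and rate proxy):
# Parseval's relation for the orthonormal DCT lets SSD be computed on
# coefficients, avoiding per-mode inverse transform and reconstruction.
import numpy as np
from scipy.fft import dctn

def rd_cost(block, q_step, lam):
    coeffs = dctn(block, norm="ortho")            # orthonormal 2-D DCT
    quant = np.round(coeffs / q_step)             # uniform quantization
    ssd = np.sum((coeffs - quant * q_step) ** 2)  # SSD via Parseval
    rate = np.count_nonzero(quant)                # crude rate proxy
    return ssd + lam * rate                       # J = D + lambda * R

block = np.random.default_rng(0).integers(0, 256, (4, 4)).astype(float)
print(rd_cost(block, q_step=8.0, lam=10.0))
```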
Abstract: The ever-increasing product diversity and competition on the market of goods and services have dictated the pace of growth in the number of advertisements. Despite their admittedly diminished effectiveness over recent years, advertisements remain the favored method of sales promotion. Consequently, the challenge for an advertiser is to explore every possible avenue for making an advertisement more noticeable, attractive and compelling for consumers. One way to achieve this is through celebrity endorsements. On the one hand, the use of a celebrity to endorse a product involves substantial costs; on the other hand, it does not by itself guarantee the success of an advertisement. The question of how celebrities can be used in advertising to the best advantage is therefore of the utmost importance. Celebrity endorsements have become commonplace: empirical evidence indicates that approximately 20 to 25 per cent of advertisements feature some famous person as a product endorser. The popularity of celebrity endorsements demonstrates the relevance of the topic, especially in the context of the current global economic downturn, when companies are forced to save in order to survive, yet simultaneously to invest heavily in advertising and sales promotion. The issue of the effective use of celebrity endorsements also figures prominently in academic discourse. The study presented below is thus aimed at exploring what qualities (characteristics) of a celebrity endorser have an impact on the effectiveness of the advertisement in which he/she appears, and how.
Abstract: The visualization of geographic information on mobile devices has become popular with the widespread use of the mobile Internet. The mobility of these devices brings much convenience to people's lives. Through the devices' add-on location-based services, people can access timely information relevant to their tasks. However, visual analysis of geographic data on mobile devices presents several challenges due to the small display and restricted computing resources. These limitations on screen size and resources may impair the usability of visualization applications. In this paper, a variable-scale visualization method is proposed to handle the challenge of the small mobile display. By merging multiple scales of information into a single image, the viewer is able to focus on the region of interest while retaining a good grasp of the surrounding context. This is essentially visualizing the map through a fisheye lens. However, the fisheye lens induces undesirable geometric distortion in the periphery, which renders the information there meaningless. The proposed solution is to apply map generalization, which removes excessive information around the periphery, and an automatic smoothing process to correct the distortion while keeping the local topology consistent. The proposed method is applied to both artificial and real geographic data for evaluation.
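The variable-scale view can be sketched with a standard graphical-fisheye radial mapping (one common formulation, the Sarkar-Brown magnification function; whether the authors use this exact mapping is an assumption): points near the focus are magnified while points toward the lens boundary are compressed, which is exactly the peripheral distortion that the generalization and smoothing steps then have to correct.

```python
# Graphical fisheye sketch (Sarkar-Brown magnification, assumed here as
# an illustration of the variable-scale view, not the authors' mapping).
import numpy as np

def fisheye(points, focus, radius, d=3.0):
    """Distortion factor d > 0; d = 0 reduces to the identity mapping."""
    offsets = points - focus
    r = np.linalg.norm(offsets, axis=1, keepdims=True)
    rn = np.clip(r / radius, 1e-9, 1.0)   # normalized distance to focus
    g = (d + 1) * rn / (d * rn + 1)       # magnification: expands small rn
    return focus + offsets * (g * radius / (r + 1e-9))

pts = np.array([[0.1, 0.0], [0.5, 0.0], [0.9, 0.0]])
print(fisheye(pts, focus=np.array([0.0, 0.0]), radius=1.0))
```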
Abstract: In the online context, the design and implementation of
effective remote laboratory environments is highly challenging on
account of hardware and software needs. This paper presents a
remote laboratory software framework modified from the iLab Shared
Architecture (ISA). The ISA is a framework which enables students to
remotely access and control experimental hardware using Internet
infrastructure. The need for remote laboratories came after
experiencing problems imposed by traditional laboratories: the high
cost of laboratory equipment, scarcity of space, and scarcity of
technical personnel, which, along with restricted university
budgets, create a significant bottleneck in building required
laboratory experiments. The solution to these problems is to build
web-accessible laboratories. Remote laboratories allow students and
educators to interact with real laboratory equipment located
anywhere in the world at any time. Recently, many universities and
other educational institutions, especially in third-world countries,
have relied on simulations because they cannot afford the experimental
equipment their students require. Remote laboratories enable
users to get real data from real-time, hands-on experiments. To
implement many remote laboratories, the system architecture should
be flexible, understandable and easy to implement, so that different
laboratories with different hardware can be deployed easily. The
modifications were made to enable developers to add more
equipment to the ISA framework and to attract new developers to
build more online laboratories.
Abstract: Transportation is one of the most fundamental
challenges of urban development in the contemporary world. On the
other hand, sustainable urban development has received tremendous
public attention in the last few years. This trend in addition to other
factors such as energy cost, environmental concerns, traffic
congestion and the feeling of lack of belonging have contributed to
the development of pedestrian areas. The purpose of this paper is to
study the role of walkable streets in sustainable development of
cities. Accordingly, a documentary research through valid sources
has been utilized to substantiate this study. The findings demonstrate
that walking can lead to sustainable urban development from
physical, social, environmental, cultural, economic and political
aspects. Also, pedestrian areas, which are the main context of
walking, act as focal points of development in cities and have a great
effect on modifying and stimulating their adjacent urban spaces.
Abstract: Soursop (Annona muricata) is one of the underutilized tropical fruits containing nutrients, particularly dietary fibre, and antioxidant properties that are beneficial to human health. The objective of this study is to investigate the feasibility of substituting matured soursop pulp flour (SPF) for high-protein wheat flour in bread. The bread formulation was substituted with different levels of SPF (0%, 5%, 10% and 15%), and the effects on physicochemical properties and sensory attributes were evaluated. A higher substitution level of SPF resulted in significantly higher (p
Abstract: Software maintenance is an extremely important activity in the software development life cycle. It involves a lot of human effort, cost and time. Software maintenance may be further subdivided into different activities such as fault prediction, fault detection, fault prevention and fault correction. This topic has gained substantial attention due to sophisticated and complex applications, commercial hardware, clustered architectures and artificial intelligence. In this paper we survey the work done in the field of software maintenance. Software fault prediction has been studied in the context of fault-prone modules, self-healing systems, developer information, maintenance models, etc. Still, many aspects, such as modeling and weighting the impact of different kinds of faults in various types of software systems, remain to be explored in the field of fault severity.
Abstract: To reduce accidents in industry, sensor data from WSNs (Wireless
Sensor Networks) is used. WSN sensor data has persistence and
continuity; therefore, we design and exploit a buffer management system
with the same persistence and continuity to avoid data delivery conflicts. To
develop the modules, we use multiple buffers and design buffer management
modules that transfer sensor data through context-aware methods.
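A generic double-buffering sketch for a continuous sensor stream (the abstract does not specify the modules' design, so this only illustrates how multiple buffers keep writers and readers from conflicting over the same data):

```python
# Generic double-buffering sketch for a continuous sensor stream: writers
# fill one buffer while the reader drains the other, so delivery never
# conflicts with arrival (the actual module design is not specified above).
import threading

class DoubleBuffer:
    def __init__(self):
        self._write, self._read = [], []
        self._lock = threading.Lock()

    def push(self, sample):
        with self._lock:
            self._write.append(sample)   # sensor threads append here

    def swap_and_drain(self):
        # Atomically swap the buffer roles, then hand the filled buffer
        # to the consumer; no sample is lost, writers block only briefly.
        with self._lock:
            self._write, self._read = self._read, self._write
            out, self._read = self._read, []
        return out

buf = DoubleBuffer()
for v in (1, 2, 3):
    buf.push(v)
print(buf.swap_and_drain())  # [1, 2, 3]
```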
Abstract: Applying knowledge discovery
techniques to unstructured text is termed knowledge discovery in text
(KDT), text data mining, or text mining. The decision tree
approach is most useful for classification problems. With this
technique, a tree is constructed to model the classification process.
There are two basic steps in the technique: building the tree and
applying the tree to the database. This paper describes a proposed
C5.0 classifier that applies rulesets, cross-validation and boosting
to the original C5.0 in order to reduce the error rate.
The feasibility and the benefits of the proposed approach are
demonstrated by means of a medical data set such as hypothyroid. It is
shown that the performance of a classifier on the training cases from
which it was constructed gives a poor estimate of its accuracy; by
sampling or by using a separate test file, the classifier is instead
evaluated on cases that were not used to build it. If
the cases in hypothyroid.data and hypothyroid.test were to be
shuffled and divided into a new 2772 case training set and a 1000
case test set, C5.0 might construct a different classifier with a lower
or higher error rate on the test cases. An important feature of See5 is
its ability to generate classifiers called rulesets. The ruleset has an error rate
of 0.5% on the test cases. The standard errors of the means provide an
estimate of the variability of the results. One way to get a more reliable
estimate of predictive accuracy is f-fold cross-validation. The error rate of
a classifier produced from all the cases is estimated as the ratio of the
total number of errors on the hold-out cases to the total number of
cases. The Boost option with x trials instructs See5 to construct up to
x classifiers in this manner. Trials over numerous datasets, large and
small, show that on average 10-classifier boosting reduces the error
rate for test cases by about 25%.
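C5.0/See5 itself is a proprietary tool, so the following sketch reproduces the pipeline the abstract describes (a decision tree, 10-fold cross-validation, and boosting with 10 trials) using scikit-learn stand-ins on a placeholder dataset the size of the shuffled hypothyroid split (2772 + 1000 cases):

```python
# C5.0/See5 is proprietary; scikit-learn stands in for the same pipeline:
# a decision tree, 10-fold cross-validation, and boosting with 10 trials.
# The random data is a placeholder for the 2772 + 1000 hypothyroid cases.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3772, 20))   # placeholder for hypothyroid.data/.test
y = rng.integers(0, 2, size=3772)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
print("10-fold CV error, single tree: %.3f"
      % (1 - cross_val_score(tree, X, y, cv=10).mean()))

# "Boost option with x trials": construct up to x classifiers sequentially.
boosted = AdaBoostClassifier(estimator=tree, n_estimators=10, random_state=0)
print("10-fold CV error, 10 boosted trials: %.3f"
      % (1 - cross_val_score(boosted, X, y, cv=10).mean()))
```

Cross-validation here mirrors the abstract's f-fold scheme: the error rate is the mean fraction of held-out cases misclassified across the f folds.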