Abstract: This study examines consumers' motivations for sharing viral
advertisements and the impact of these advertisements on brand
perception. Three fundamental questions are addressed: individuals'
motivations for watching and sharing advertisements, the criteria by
which viewers judge a viral advertisement worth liking, and the impact
of individual attitudes toward viral advertising on brand perception.
The study is carried out using a viral advertising campaign run in
Turkey. Data will be collected through an online survey of individuals
who encountered the sample campaign and analyzed using the SPSS
statistical package.
The traditional understanding of advertising has been changing in
recent years, and new advertising approaches with significant impact
on consumers have been debated. Viral advertising is a modern
approach that offers brands significant advantages beyond traditional
channels such as television, radio, and magazines. Also known as
electronic word-of-mouth (eWOM), viral advertising relies on the free
spread of persuasive brand messages through interpersonal
communication. Compared with traditional advertising, it typically
adopts a more provocative thematic approach.
The foundation of this approach is to create advertisements that
consumers consider worth sharing with others. Viewed in this light,
viral advertising can, in a manner of speaking, be described as media
engineering.
Content worth sharing turns people into volunteer spokespeople for a
brand and strengthens the emotional bond between brand and
consumer. Viral advertising creates vital advantages, especially for
certain sectors in countries where traditional advertising channels are
restricted.
Abstract: This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without errors all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it makes use of graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for the segmentation of a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words, but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and provide better performance than letter-based rule trees.
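The grapheme segmentation step can be sketched as a greedy longest-match scan (a hypothetical illustration only; the paper's actual grapheme inventory and algorithm are not reproduced here, and the inventory below is invented for the example):

```python
# Hypothetical grapheme inventory; the matcher tries the longest
# candidate at each position before falling back to a single letter.
GRAPHEMES = {"tch", "sh", "ch", "th", "ph", "ee", "oo", "ng"}
MAX_LEN = max(len(g) for g in GRAPHEMES)

def segment(word):
    """Greedy left-to-right segmentation of a word into graphemes.
    With a bounded maximum grapheme length, each position is inspected
    a constant number of times, so the scan is linear in len(word)."""
    out, i = [], 0
    while i < len(word):
        for k in range(min(MAX_LEN, len(word) - i), 1, -1):
            if word[i:i + k] in GRAPHEMES:
                out.append(word[i:i + k])
                i += k
                break
        else:
            out.append(word[i])  # no multi-letter grapheme matches here
            i += 1
    return out
```

For instance, `segment("match")` keeps "tch" as one unit rather than three letters, which is the distinction the abstract draws between grapheme-based and letter-based rules.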
Abstract: This article explores sociological perspectives on social
problems and the role of the media, which has a delicate line to tread
in balancing its duty to the public with its duty to the victim. Whilst
social problems have objective conditions, it is the subjective
definition of such problems that determines which social problem
comes to the fore and which does not. The article further explores the
roles and functions of policymakers when addressing social problems,
the impact of the inception of media profiling, and the advantages and
disadvantages of media profiling with respect to social problems. For
reasons of length, it focuses on the inception of media profiling; a
follow-up article will explore how media profiling of social problems
has evolved since its inception.
Abstract: This paper explores the university course timetabling
problem. Several characteristics make scheduling and timetabling
problems particularly difficult to solve: they have huge search spaces,
they are often highly constrained, they require sophisticated solution
representation schemes, and they usually require very time-consuming
fitness evaluation routines. Standard evolutionary algorithms therefore
lack the efficiency to deal with them. In this paper we propose a
memetic algorithm that incorporates problem-specific knowledge so
that most of the chromosomes generated are decoded into feasible
solutions. Generating a large number of feasible chromosomes allows
the search process to progress in a time-efficient manner.
Experimental results exhibit the advantages of the developed Hybrid
Genetic Algorithm over the standard Genetic Algorithm.
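The kind of feasibility-preserving decoding the abstract describes can be sketched on a toy instance (a hypothetical illustration assuming a permutation encoding and a greedy first-fit placement; the paper's actual problem-specific operators are not reproduced):

```python
import random

def decode(chromosome, clashes, n_slots):
    """Greedy decoder: place events in chromosome order, each into the
    first timeslot that causes no clash with already-placed events.
    This repair-style step stands in for the problem-specific knowledge
    that keeps most generated chromosomes feasible."""
    slot_of = {}
    for event in chromosome:
        used = {slot_of[e] for e in clashes.get(event, ()) if e in slot_of}
        for s in range(n_slots):
            if s not in used:
                slot_of[event] = s
                break
        else:
            return None  # no clash-free slot under this ordering
    return slot_of

# Toy instance: five events; clashing pairs must not share a timeslot.
clashes = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2], 4: []}
random.seed(1)
order = random.sample(range(5), 5)   # one permutation chromosome
timetable = decode(order, clashes, n_slots=3)
```

Because every chromosome decodes to a clash-free assignment (or is rejected outright), fitness evaluation never wastes time on infeasible timetables, which is the efficiency argument the abstract makes.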
Abstract: This paper deals with the helical flow of a Newtonian
fluid in an infinite circular cylinder, due to both longitudinal and
rotational shear stress. The velocity field and the resulting shear
stress are determined by means of the Laplace and finite Hankel
transforms and satisfy all imposed initial and boundary conditions.
For large times, these solutions reduce to the well-known steady-state
solutions.
Abstract: Decrease in hardware costs and advances in computer
networking technologies have led to increased interest in the use of
large-scale parallel and distributed computing systems. One of the
biggest issues in such systems is the development of effective
techniques/algorithms for the distribution of the processes/load of a
parallel program on multiple hosts to achieve goal(s) such as
minimizing execution time, minimizing communication delays,
maximizing resource utilization and maximizing throughput.
Substantive research using queuing analysis, assuming job arrivals
follow a Poisson pattern, has shown that in a multi-host system the
probability of one host being idle while another host has multiple jobs
queued up can be very high. Such imbalances in system load suggest
that performance can be improved either by transferring jobs from the
heavily loaded hosts to the lightly loaded ones or by distributing the
load evenly among the hosts. The algorithms that achieve this, known
as load balancing algorithms, help to accomplish the above goal(s).
These algorithms fall into two basic categories: static and dynamic.
Whereas static load balancing (SLB) algorithms decide the assignment
of tasks to processors at compile time, based on average estimated
process execution times and communication delays, dynamic load
balancing (DLB) algorithms adapt to changing situations and take
decisions at run time.
The objective of this paper is to identify qualitative parameters for the
comparison of the above algorithms. In future, this work can be
extended to develop an experimental environment in which to study
these load balancing algorithms quantitatively against the comparative
parameters.
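The run-time decision a DLB algorithm makes can be sketched minimally (a hypothetical least-loaded policy for illustration, not any specific algorithm compared in the paper):

```python
def dynamic_assign(job, hosts):
    """Run-time decision of a simple dynamic load balancer: send the
    incoming job to the host with the shortest queue, so no host sits
    idle while another has jobs waiting."""
    target = min(hosts, key=lambda h: len(h["queue"]))
    target["queue"].append(job)
    return target["name"]

# Nine jobs arriving one by one end up spread evenly across three hosts.
hosts = [{"name": f"host{i}", "queue": []} for i in range(3)]
for job in range(9):
    dynamic_assign(job, hosts)
queue_lengths = [len(h["queue"]) for h in hosts]
```

A static (SLB) scheme would instead fix the job-to-host mapping before execution from estimated costs; the contrast is exactly the run-time adaptivity shown above.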
Abstract: This paper simulates ad-hoc mesh networks in rural areas, where such networks receive great attention because of their low cost, since installing the infrastructure of regular networks in these areas is prohibitively expensive. The distance between communicating nodes is the main obstacle an ad-hoc mesh network faces. For example, in Terranet technology, two nodes can communicate directly only if they are at most one kilometer from each other; if the distance between them is greater, each node in the ad-hoc mesh network has to act as a router that forwards the data it receives to other nodes. In this paper, we try to find the critical number of nodes that makes the network fully connected in a particular area, and then propose a method to encourage intermediate nodes to accept the role of router, forwarding data from the sender to the receiver. Much work has been done on technological changes to peer-to-peer networks, but the focus of this paper is on another aspect: finding the minimum number of nodes needed for a particular area to be fully connected, and then encouraging users to switch on their phones and accept working as routers for other nodes. Our method raises the rate of successful calls to 81.5% of attempted calls.
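The full-connectivity test behind the critical node count can be sketched as follows (a hypothetical illustration assuming a 1 km radio range and uniformly random node placement in a square area; these settings are examples, not the paper's simulation parameters):

```python
import math
import random
from collections import deque

def fully_connected(nodes, radio_range=1.0):
    """True if every node can reach every other node, directly or via
    multi-hop forwarding, when links exist only within radio_range (km)."""
    if not nodes:
        return True
    n = len(nodes)
    adj = [[j for j in range(n)
            if j != i and math.dist(nodes[i], nodes[j]) <= radio_range]
           for i in range(n)]
    seen, queue = {0}, deque([0])   # BFS from node 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n

def connect_rate(n, trials=200, side=3.0):
    """Fraction of random placements of n nodes in a side x side km area
    that form a fully connected network."""
    hits = sum(fully_connected([(random.uniform(0, side),
                                 random.uniform(0, side))
                                for _ in range(n)])
               for _ in range(trials))
    return hits / trials

random.seed(42)
# Sweeping n upward and recording the smallest n whose connect_rate
# exceeds a chosen threshold estimates the critical node count.
```

Nodes within range form edges; the BFS checks that the multi-hop routing the abstract describes can reach every node from any sender.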
Abstract: Australian government agencies have a natural desire
to provide migrants a wide range of opportunities. Consequently,
government online services should be equally available to migrants
with a non-English speaking background (NESB). Despite the
commendable efforts of governments and local agencies in Australia
to provide such services, in reality, many NESB communities are not
taking advantage of these services. This article, based on an
extensive case study of the use of online government services by the
Arabic NESB community in Australia, reports on the possible
reasons for this issue, as well as suggestions for
improvement. The conclusion is that Australia should implement
ICT-based or e-government policies, programmes, and services that
more accurately reflect migrant cultures and languages so that
migrant integration can be more fully accomplished. Specifically, this
article presents an NESB Model that adopts the value of
user-centricity, a more individual-focused approach to government
online services in Australia.
Abstract: This paper proposes a new optimization technique
for a gas processing plant with uncertain feed and product flows. The
problem is first formulated using a continuous linear deterministic
approach. Subsequently, single and joint chance-constrained models
for a steady-state process with time-dependent uncertainties are
developed. The solution approach is based on converting the
probabilistic problems into their equivalent deterministic form, which
is solved at different confidence levels. A case study of a real plant
operation has been used to implement the proposed model effectively.
The optimization results indicate that, for a plant operating under
uncertain feed and product flows, decisions have to be made in
advance that satisfy all constraints at a 95% confidence level for the
single chance-constrained case and at an 85% confidence level for
the joint chance-constrained case.
Abstract: Cooktop burners are widely used nowadays. In
cooktop burner design, nozzle efficiency and greenhouse
gas (GHG) emissions mainly depend on heat transfer from the
premixed flame to the impinging surface. This is a complicated
issue that depends on the individual and combined effects of various
input combustion variables. Optimal operating conditions for
sustainable burner design have rarely been addressed, especially in
the case of multiple slot-jet burners. By evaluating the optimal
combination of combustion conditions for a premixed slot-jet
array, this paper develops a practical approach to the sustainable
design of gas cooktop burners. Efficiency and CO and NOx emissions
for an array of slot jets using premixed flames were analysed. A
response surface experimental design was applied to three
controllable factors of the combustion process, viz. Reynolds
number, equivalence ratio, and jet-to-vessel distance. The
Desirability Function Approach (DFA) is the analytic technique
used for the simultaneous optimization of the efficiency and
emission responses.
Abstract: Chemical detection remains a continuing challenge when
it comes to designing single-walled carbon nanotube (SWCNT)
sensors with high selectivity, especially in complex chemical
environments. A perfect example of such an environment is
thermally oxidized soybean oil. At elevated temperatures, oil oxidizes
through a series of chemical reactions that result in the formation of
monoacylglycerols, diacylglycerols, oxidized triacylglycerols, dimers,
trimers, polymers, free fatty acids, ketones, aldehydes, alcohols,
esters, and other minor products. In order to detect the rancidity of
oxidized soybean oil, carbon nanotube chemiresistor sensors have
been coated with polyethylenimine (PEI) to enhance the sensitivity
and selectivity. PEI functionalized SWCNTs are known to have a high
selectivity towards strong electron withdrawing molecules. The
sensors were very responsive to different oil oxidation levels and
furthermore, displayed a rapid recovery in ambient air without the
need of heating or UV exposure.
Abstract: Serious games have proven to be a useful instrument
to engage learners and increase motivation. Nevertheless, a broadly
accepted, practical instructional design approach to serious games
does not exist. In this paper, we introduce the use of an instructional
design model that has not been applied to serious games yet, and has
some advantages compared to other design approaches. We present
the case of mechanics/mechatronics education to illustrate the close
match between the timing and role of knowledge and information
that the instructional design model prescribes and how this has been
translated into a rigidly structured game design. The structured
approach answers the target group's need for applicable knowledge.
It combines the advantages of simulations with the strengths of
entertainment games to foster learners' motivation in the best
possible way. A prototype of the game will be evaluated using a
well-respected evaluation method within an advanced test setting,
including test and control groups.
Abstract: In this study, an OCR system for the segmentation,
feature extraction, and recognition of Ottoman script has been
developed using handwritten characters. Recognizing handwritten
characters is a difficult process. Segmentation and
feature extraction stages are based on geometrical feature analysis,
followed by the chain code transformation of the main strokes of
each character. The output of segmentation is well-defined segments
that can be fed into any classification approach. The classes of main
strokes are identified through left-right Hidden Markov Model
(HMM).
Abstract: Sorting has received the most attention among all computational tasks over the past years because sorted data is at the heart of many computations. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. Many parallel sorting algorithms have been investigated for a variety of parallel computer architectures. In this paper, three parallel sorting algorithms have been implemented and compared in terms of their overall execution time. The algorithms implemented are odd-even transposition sort, parallel merge sort, and parallel rank sort. A cluster of workstations (Windows Compute Cluster) has been used to compare the algorithms implemented. The C# programming language is used to develop the sorting algorithms. The MPI (Message Passing Interface) library has been selected to establish the communication and synchronization between processors. The time complexity of each parallel sorting algorithm is also presented and analyzed.
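The first of the three algorithms can be sketched sequentially (the paper's implementations use C# with MPI; this Python model shows only the compare-exchange pattern that the parallel version distributes across processes):

```python
def odd_even_transposition_sort(a):
    """Sequential model of odd-even transposition sort. The algorithm
    runs n phases, alternating compare-exchanges on (even, odd) and
    (odd, even) index pairs; in the MPI version every pair within a
    phase is handled by a different pair of processes, so each phase
    takes O(1) parallel steps and the whole sort takes O(n) phases."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = 0 if phase % 2 == 0 else 1
        for i in range(start, n - 1, 2):    # independent pairs: parallelizable
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

Because the pairs touched in one phase are disjoint, the inner loop is exactly the work that MPI processes perform concurrently, exchanging boundary elements with neighbors.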
Abstract: In this paper, based on the coupled-mode and carrier rate equations, a dynamic model of an MQW chirped DFB-SOA all-optical flip-flop is derived and analyzed numerically. We analyze the effects of QW and MQW strain and of cross-phase modulation (XPM) on the dynamic response and on the rise and fall times of the DFB-SOA all-optical flip-flop. We show that, under optimized conditions, a strained MQW active region in a DFB-SOA with a chirped grating can significantly improve the switching-on speed limitation of the device, although the fall time is increased. Under these optimized conditions, the rise time obtained for such an all-optical flip-flop is tr = 255 ps.
Abstract: Over the past years, the EMCCD has had a profound
influence on photon starved imaging applications relying on its unique
multiplication register based on the impact ionization effect in the
silicon. High signal-to-noise ratio (SNR) means high image quality.
Thus, SNR improvement is important for the EMCCD. This work
analyzes the SNR performance of an EMCCD with gain off and on. In
each mode, simplified SNR models are established for different
integration times. The SNR curves are divided into readout noise (or
CIC) region and shot noise region by integration time. Theoretical
SNR values comparing long frame integration and frame adding in
each region are presented and discussed to figure out which method is
more effective. In order to further improve the SNR performance,
pixel binning is introduced into the EMCCD. The results show that
pixel binning clearly improves the SNR performance, but at the
expense of spatial resolution.
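The binning trade-off can be made concrete with a simplified SNR model (a generic shot-noise-plus-readout-noise sketch, not the paper's formulas or measured parameters):

```python
import math

def snr(signal_e, read_noise_e, binning=1):
    """SNR of a (possibly binned) pixel under a simplified model:
    on-chip binning of `binning` pixels sums the signal before the
    single readout, so shot noise grows with the summed signal while
    readout noise is counted only once per binned read.
    signal_e: mean signal per original pixel, in electrons."""
    s = binning * signal_e               # summed signal in electrons
    return s / math.sqrt(s + read_noise_e ** 2)

# In the readout-noise-limited regime, 4x binning raises the SNR by
# more than the factor of 2 that shot noise alone would allow.
low_light = snr(5, 10)
binned = snr(5, 10, binning=4)
```

The same model also shows why binning costs resolution: the four pixels are read as one, halving the sampling in each binned dimension.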
Abstract: In wireless sensor networks (WSNs), the use of mobile
sinks has been attracting increasing attention in recent times. Mobile
sinks are a more effective means of balancing load, reducing the
hotspot problem, and prolonging network lifetime. The sensor nodes
in a WSN have limited power supply, computational capability, and
storage; therefore, reliability of continuous data delivery becomes a
high priority in these networks. In this paper, we propose a Reliable
Energy-efficient Data Dissemination (REDD) scheme for WSNs with
multiple mobile sinks. In this strategy, the sink first determines the
location of the source and then communicates with the source directly
using geographical forwarding. Every forwarding node (FN) creates a
local zone comprising some sensor nodes that can act as its
representatives when it fails. Analytical and simulation studies
reveal significant improvements in energy conservation and reliable
data delivery in comparison to existing schemes.
Abstract: This paper presents a new spread-spectrum
watermarking algorithm for digital images in discrete wavelet
transform (DWT) domain. The algorithm is applied to embed
watermarks, such as patient identification/source identification or a
doctor's signature in binary image format, into a host digital
radiological image for potential telemedicine applications. The
performance of the algorithm is analysed by varying the gain factor,
subband decomposition levels, and size of watermark. Simulation
results show that the proposed method achieves higher watermarking
capacity.
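The embedding step can be sketched as follows (a minimal illustration with a hand-rolled one-level Haar DWT and an assumed additive spread-spectrum rule; the paper's gain settings, decomposition levels, and watermark format are not reproduced):

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT; x needs even height and width."""
    a = (x[0::2] + x[1::2]) / 2            # row-pair averages (low-pass)
    d = (x[0::2] - x[1::2]) / 2            # row-pair differences (high-pass)
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    HL = (a[:, 0::2] - a[:, 1::2]) / 2
    LH = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, HL, LH, HH

def embed(subband, bits, gain, seed=7):
    """Additive spread-spectrum: each bit adds gain * (+/-1) times its
    own pseudo-random pattern spread over the whole subband."""
    rng = np.random.default_rng(seed)
    marked = subband.copy()
    for bit in bits:
        pattern = rng.choice([-1.0, 1.0], size=subband.shape)
        marked += gain * (1.0 if bit else -1.0) * pattern
    return marked

def extract(marked, original, n_bits, seed=7):
    """Informed detection: correlate the difference with each pattern."""
    rng = np.random.default_rng(seed)
    diff = marked - original
    return [int(np.sum(diff * rng.choice([-1.0, 1.0],
                                         size=original.shape)) > 0)
            for _ in range(n_bits)]

# Demo: hide four watermark bits in the HL subband of a random host image.
host = np.random.default_rng(0).random((64, 64))
LL, HL, LH, HH = haar_dwt2(host)
bits = [1, 0, 1, 1]
marked_HL = embed(HL, bits, gain=0.5)
recovered = extract(marked_HL, HL, n_bits=len(bits))
```

Raising the gain factor strengthens the correlation peaks (robustness) at the cost of visible distortion, which is the trade-off the abstract's gain-factor analysis explores.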
Abstract: Real-time 3D applications have to guarantee
interactive rendering speed, yet the number of polygons that can be
rendered is restricted by the performance of the graphics hardware
and the graphics algorithms. Generally, rendering performance
increases drastically when handling only the dynamic 3D models,
which are much fewer than the static ones. Since the shapes and
colors of static objects do not change while the viewing direction is
fixed, their information can be reused. We render huge numbers of
polygons that cannot be handled by conventional rendering
techniques in real time by using a static-object image and merging it
with the rendering result of the dynamic objects. Performance
necessarily drops when the static-object image is updated, which
includes removing a static object that starts to move and re-rendering
the other static objects overlapped by the moving one. Based on the
visibility of the object beginning to move, we can skip this updating
process. As a result, we enhance rendering performance and reduce
differences in rendering speed between frames. The proposed method
renders a total of 200,000,000 polygons, of which 500,000 are
dynamic and the rest static, at about 100 frames per second.
Abstract: In this paper, we propose a novel adaptive voltage control strategy for a boost converter via Inverse LQ (ILQ) servo-control. Our strategy is based on an analytical formula from the Inverse Linear Quadratic design method, which does not require solving the Riccati equation directly. The optimal and adaptive controller of the voltage control system is designed, and its stability and robustness are analyzed. The analytical solution for optimal and robust voltage control is obtained through the natural angular velocity as a single design parameter, so the responses can be adjusted easily via ILQ control theory. Our method provides effective results: the responses remain stable and the response times do not drift even when the operating conditions change widely.