Abstract: Technology has developed dramatically in most educational disciplines. For instance, the digital rendering subject, taught in both Interior Design and Architecture programs, sees almost annual software updates. Many students and educators have argued that manual rendering techniques will no longer need to be learned. The Interior Design Visual Presentation 1 course (ID133) was therefore chosen from the first level of the Interior Design (ID) undergraduate program, as it has been taught continuously for six years. This time frame facilitates sound observation and critical analysis of the use of appropriate teaching methodologies. Furthermore, the researcher believes in the high value of manual rendering techniques. The course objectives are: to define the basic visual rendering principles, to recall theories and uses of various types of colours and hatches, to raise the learners’ awareness of the value of studying manual rendering techniques, and to prepare them to present their work professionally. The students are female Arab learners aged between 17 and 20. At the outset of the course, the majority of them demonstrated a negative attitude, lacking both motivation and confidence in manual rendering skills. This paper is a reflective appraisal of deploying two student-centred teaching pedagogies, Project-based Learning (PBL) and Outcome-based Education (OBE), with ID133 students. The research aims to develop teaching strategies that enhance the quality of teaching in this course over an academic semester. The outcome emphasized the positive influence of applying such educational methods on the quality of students’ manual rendering skills in terms of materials, textiles, textures, lighting, and shade and shadow. It also greatly motivated the students and raised their awareness of the importance of learning manual rendering techniques.
Abstract: As the world-wide Internet develops non-stop, making a profit by lending registered domain names has emerged as a new business in recent years. Unfortunately, the larger the market for domain-lending services becomes, the greater the risk that malicious behaviors or malware hide behind parked domains. Moreover, previous work on identifying parked domains suffers from two main defects: 1) too much data-collection effort and CPU latency for feature engineering, and 2) ineffectiveness in detecting parked domains containing external links, which are commonly abused by hackers, e.g., for drive-by download attacks. Aiming to alleviate these defects without sacrificing practical usability, this paper proposes ParkedGuard, an efficient and accurate parked-domain detector. Several scripting behavioral features were analyzed, and those with particular statistical significance were adopted in ParkedGuard to make feature engineering far more cost-efficient. In addition, finding memberships between external links and parked domains was modeled as a graph-mining problem, and a coarse-to-fine strategy was carefully designed to leverage graph locality, so that ParkedGuard outperforms the state of the art in terms of both recall and precision.
Abstract: The motivation of our work is to detect the different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficients and Gammatone frequency cepstral coefficients for feature extraction, and Gaussian mixture models and feed-forward neural networks for classification. We analyze the system’s performance by comparing our proposed techniques with other features surveyed from related work. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
Abstract: Big Data has attracted a lot of attention in many fields for analyzing research issues based on large volumes of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance, and travel time of vehicles on the national freeway. Taking advantage of ETC big data combined with urban planning theory, this study attempts to explore various phenomena of inter-city transportation activities. ETC data, part of the government’s open data, are numerous, complete, and quickly updated. One may recall that living areas have been delimited by location, population, area, and subjective consciousness. However, these factors cannot appropriately reflect people’s movement paths in daily life. In this study, the concept of the "Living Area" is replaced by the "Influence Range" to capture dynamics and variation with time and the purposes of activities. The study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan City and the purposes of trips, and to discuss how living areas are currently delimited. It creates a dialogue between the concepts of "Central Place Theory" and the "Living Area", presents a new point of view, and integrates the application of big data, urban planning, and transportation. The findings will be valuable for resource allocation and the land apportionment of spatial planning.
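The trip-counting idea behind an "Influence Range" analysis can be sketched in a few lines of Python; the record fields and city names below are illustrative stand-ins, not the actual ETC schema:

```python
from collections import Counter

# Hypothetical ETC-style records: (start_point, end_point, travel_minutes).
# Field layout and values are invented for illustration only.
trips = [
    ("Tainan", "Kaohsiung", 35),
    ("Tainan", "Chiayi", 40),
    ("Tainan", "Kaohsiung", 33),
    ("Chiayi", "Tainan", 41),
]

# Count trips per origin-destination pair, then read off the set of
# destinations actually reached from one city as its influence range.
od_counts = Counter((o, d) for o, d, _ in trips)
influence_range = {d for (o, d), _ in od_counts.items() if o == "Tainan"}

print(od_counts[("Tainan", "Kaohsiung")])  # 2
print(sorted(influence_range))             # ['Chiayi', 'Kaohsiung']
```

In the study itself these counts would be aggregated over millions of records and mapped with GIS rather than printed.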
Abstract: Recently, the amount of collectable manufacturing data has been increasing rapidly. At the same time, mega recalls are becoming a serious social problem. Under such circumstances, there is a growing need to prevent mega recalls through defect analysis, such as root cause analysis and anomaly detection, utilizing manufacturing data. However, the time needed to classify strings in manufacturing data with traditional methods is too long to meet the requirement of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to classify strings correctly in a short time. The method learns character features, especially the string length distribution, from Product IDs and Machine IDs in BOMs and asset lists. By applying the proposed method to strings in actual manufacturing data, we verified that the classification time can be reduced by 80%. As a result, we estimate that the requirement of quick defect analysis can be fulfilled.
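A minimal sketch of classifying strings by their length distribution, the core idea named in the abstract above; the ID formats are invented for illustration, and the actual SLDC method presumably learns richer character features on top of this:

```python
from collections import Counter

def length_distribution(samples):
    """Empirical probability distribution of string lengths for one field type."""
    counts = Counter(len(s) for s in samples)
    total = sum(counts.values())
    return {length: c / total for length, c in counts.items()}

def classify(s, dists, floor=1e-6):
    """Assign s to the field type whose length distribution best explains len(s)."""
    return max(dists, key=lambda field: dists[field].get(len(s), floor))

# Hypothetical training strings; real Product/Machine IDs would come from
# BOM and asset-list data.
train = {
    "product_id": ["P-10023", "P-20441", "P-33007"],   # length 7
    "machine_id": ["M12", "M07", "M55"],               # length 3
}
dists = {field: length_distribution(vals) for field, vals in train.items()}

print(classify("P-88xyz", dists))  # length 7 -> product_id
print(classify("M99", dists))      # length 3 -> machine_id
```

Because the decision depends only on a small length histogram rather than per-character comparison, classification like this is cheap, which is consistent with the reported 80% reduction in classification time.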
Abstract: This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions, and different flower sizes. The algorithm is designed to be deployed on a drone that flies through greenhouses to accomplish tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers pollinated since the last visit to the row. The algorithm is designed to handle the real-world difficulties of a greenhouse, including varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the drone’s simple processor. It identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter ones; segmentation on hue, saturation, and value is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various times of day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle, time of day, camera, and thresholding type were performed. Precision, recall, and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon yielded the best precision and recall.
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best precision and recall, and the best F1 score. The average precision and recall over all images at these values were 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
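The HSV segmentation step can be sketched with the standard-library `colorsys` module; the 0.12-0.18 hue band follows the abstract, while the saturation and value floors are illustrative assumptions rather than the authors' parameters:

```python
import colorsys

def is_yellow_flower_pixel(r, g, b, hue_range=(0.12, 0.18),
                           min_sat=0.4, min_val=0.5):
    """True if an RGB pixel (floats in [0, 1]) falls in the yellow-flower HSV
    window. The hue band is the one reported in the abstract; the saturation
    and value floors are assumed here, not taken from the paper."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return hue_range[0] <= h <= hue_range[1] and s >= min_sat and v >= min_val

def segment(pixels):
    """Binary mask over an iterable of RGB pixels."""
    return [is_yellow_flower_pixel(*p) for p in pixels]

# A saturated yellow pixel vs. a dark green leaf pixel.
print(segment([(1.0, 0.85, 0.1), (0.1, 0.35, 0.1)]))  # [True, False]
```

The full pipeline would first apply the adaptive global threshold to pick darker- or lighter-image parameters, then group masked pixels into candidate flowers by size and location.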
Abstract: Navigational ability requires spatial representation, planning, and memory. It covers three interdependent domains: cognitive and perceptual factors, neural information processing, and variability in brain microstructure. Many attempts have been made to examine the role of spatial representation in navigational ability, and individual differences have been identified in the neural substrate. However, the influence of planning and memory on navigational ability also needs to be addressed. The present study aims to evaluate the relations of the aforementioned factors to navigational ability. A total of 30 participants volunteered in a study of a virtual shopping complex and were subsequently classified into good and bad navigators based on their performance. The results showed that planning ability was the factor most correlated with navigational ability and also the discriminating factor between good and bad navigators. Correlations were also found between spatial memory recall and navigational ability. In addition, non-verbal episodic memory and spatial memory recall were found to be correlated with the learning variable. This study attempts to identify differences between people with greater and lesser navigational ability on the basis of planning and memory.
Abstract: There has been a renewal of interest in the relation between Green IT and cloud computing in recent years. Cloud computing has to be a highly elastic environment that provides stable services to users. The growing use of cloud computing facilities has caused marked energy consumption, putting negative pressure on the electricity costs of computing centers and data centers. Each year more and more network devices, storage systems, and computers are purchased and put to use, but it is not just the number of computers that is driving energy consumption upward. We can foresee that the power consumption of cloud computing facilities will double, triple, or grow even more in the next decade. This paper focuses on resource allocation and scheduling technologies, still lacking or not yet well developed, that reduce energy utilization in cloud computing platforms. In particular, our approach relies on dynamically recalling services onto an appropriate number of machines according to users’ requirements and temporarily shutting the machines down after they finish, in order to conserve energy. We present initial work on an integrated resource and power management system focused on reducing power consumption while still meeting the minimum quality of service required by the cloud computing platform.
Abstract: As we know, the number of Internet users is increasing drastically. People now use different online services provided by banks, colleges/schools, hospitals, online utility and bill payment sites, and online shopping sites. To access online services, text-based authentication systems are in use. The text-based authentication scheme faces drawbacks in usability and security that bring trouble to users. The core element of computational trust is identity. The aim of this paper is to make the system harder for impostors and more reliable for legitimate users by using a graphical authentication approach. We use the powerful tool of encoding the options in a graphical QR format, together with an acknowledgment sent to the user’s mobile phone for final verification. The main methodology depends upon the encryption of options and final verification by confirming a set of pass phrases with the legitimate users; the scheme only yields a result once the whole process completes successfully. All processes are linked serially, so the output of the first process is the input of the second, and so on. The system is a combination of recognition-based and pure recall-based techniques. The presented scheme is useful for devices like PDAs, iPods, and phones, which are handier and more convenient to use than traditional desktop computer systems.
Abstract: Emotion classification of text documents is applied to reveal whether a document expresses a particular emotion of its writer. While various supervised methods have previously been used for emotion classification of documents, in this research we present a novel model that supports the classification algorithms with the TF-IDF measure for more accurate results. Different experiments were run to demonstrate the applicability of the proposed model; it succeeds in raising accuracy, according to the chosen metrics (precision, recall, and F-measure), by refining the lexicon, integrating lexicons from different perspectives, and applying TF-IDF weighting over the classification features. The proposed model has also been compared with other research to demonstrate its competence in raising the accuracy of the results.
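The TF-IDF weighting the model relies on can be sketched as follows; this is the textbook formulation (term frequency times log inverse document frequency), not necessarily the authors' exact variant:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF weights; docs is a list of token lists."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

# Toy emotion-flavored documents.
docs = [["happy", "joy", "joy"], ["sad", "cry"], ["happy", "calm"]]
w = tf_idf(docs)
# "joy" appears only in doc 0, so it outweighs "happy", which also occurs in doc 2.
print(w[0]["joy"] > w[0]["happy"])  # True
```

Feeding such weights, instead of raw term counts, into a classifier is what lets rare, emotion-bearing lexicon terms dominate common words.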
Abstract: Gestures play a major role in comprehension and memory recall because they aid the efficient conveyance of meaning and support listeners’ comprehension and memory. In
the present study, the assistance of two kinds of gestures (iconic
and beat gestures) is tested in regards to memory and recall. The
hypothesis investigated here is whether or not iconic and beat gestures
provide assistance in memory and recall in Greek and in Greek
speakers’ second language. Two groups of participants were formed,
one comprising Greeks that reside in Athens and one with Greeks
that reside in Copenhagen. Three kinds of stimuli were used: a video with words accompanied by iconic gestures, a video with words accompanied by beat gestures, and a video with words alone. The
languages used are Greek and English. The words in the English
videos were spoken by a native English speaker and by a Greek speaker speaking English. The reason is that, when it comes to
beat gestures that serve a meta-cognitive function and are generated
according to the intonation of a language, prosody plays a major
role. Thus, participants with different prosodic backgrounds may respond differently to rhythmic gestures. Memory recall was
assessed by asking the participants to try to remember as many
words as they could after viewing each video. Results show that
iconic gestures provide significant assistance in memory and recall
in Greek and in English whether they are produced by a native or
a second language speaker. In the case of beat gestures, though, the findings indicate that beat gestures may not play such a significant role in the Greek language. As far as intonation is concerned, a significant
difference was not found in the case of beat gestures produced by a
native English speaker and by a Greek speaker speaking English.
Abstract: Recently a new type of very general relational structures, the so-called (L-)complete propelattices, was introduced. These significantly generalize complete lattices and completely lattice L-ordered sets, because they do not assume the technically very strong property of transitivity. For these structures, the main part of the original Tarski fixed point theorem also holds for (L-fuzzy) isotone maps, i.e., the part concerning the existence of fixed points and the structure of their set. In this paper, fundamental properties of (L-)complete propelattices are recalled and the so-called L-fuzzy relatively isotone maps are introduced. For these maps it is proved that they also have fixed points in L-complete propelattices, even though their fixed point set need not carry the expected analogous structure of a complete propelattice.
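For orientation, the classical theorem whose "main part" the abstract generalizes can be stated as follows (the propelattice setting then drops the transitivity assumption on the underlying order):

```latex
\textbf{Theorem (Tarski).} Let $(L, \le)$ be a complete lattice and
$f : L \to L$ an isotone (order-preserving) map, i.e.
$x \le y \implies f(x) \le f(y)$. Then the set of fixed points
$\mathrm{Fix}(f) = \{\, x \in L : f(x) = x \,\}$ is nonempty and is itself
a complete lattice under $\le$; in particular,
$\sup \{\, x \in L : x \le f(x) \,\}$ is the greatest fixed point of $f$.
```

The paper's result keeps the existence part of this statement for L-fuzzy relatively isotone maps, while the lattice structure of the fixed point set may fail.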
Abstract: As far as incidental vocabulary learning is concerned, the basic contention of the Involvement Load Hypothesis (ILH) is that retention of unfamiliar words is, generally, conditional upon the degree of involvement in processing them. This study examined input modality and incidental vocabulary uptake in a task-induced setting whereby three variously loaded task types (marginal glosses, fill-in task, and sentence writing) were alternately assigned to one group of students at Allameh Tabataba’i University (n=21) during six classroom sessions. One round of exposure consisted of the audiovisual medium (TV talk shows), while the second round consisted of textual materials with approximately similar subject matter (reading texts). In both conditions, however, the tasks were equivalent to one another. Taken together, the study pursued the dual objectives of, first, establishing a litmus test for the ILH and its proposed values of ‘need’, ‘search’ and ‘evaluation’, and, second, bringing to light whether exposure to audiovisual input is superior to written input as far as the incorporation of tasks is concerned. At the end of each treatment session, a vocabulary active recall test was administered to measure incidental gains. A one-way analysis of variance revealed that the audiovisual intervention yielded higher gains than the written version even when differing tasks were included. Meanwhile, task three (sentence writing) turned out to be the most efficient in tapping learners' active recall of the target vocabulary items. In addition to shedding light on the superiority of audiovisual input over written input when circumstances are held relatively constant, this study, for the most part, supported the underlying tenets of the ILH.
Abstract: Digital technologies offer many opportunities in the
design and implementation of brand communication and advertising.
Augmented reality (AR) is an innovative technology in marketing
communication that focuses on the fact that virtual interaction with a
product ad offers additional value to consumers. AR enables
consumers to obtain (almost) real product experiences by way of virtual information even before the purchase of a certain product.
The aim of AR applications in advertising is the in-depth examination of product characteristics to enhance product knowledge
as well as brand knowledge. Interactive design of advertising
provides observers with an intense examination of a specific
advertising message and therefore leads to better brand knowledge.
The elaboration likelihood model and the central route to persuasion
strongly support this argumentation. Nevertheless, AR in brand
communication is still in an initial stage and therefore scientific
findings about the impact of AR on information processing and brand
attitude are rare. The aim of this paper is to empirically investigate
the potential of AR applications in combination with traditional print
advertising. To that end, an experimental design with different levels of interactivity is built to measure the impact of an ad's interactivity on different variables of advertising effectiveness.
Abstract: Recently, traffic monitoring has attracted the attention
of computer vision researchers. Many algorithms have been
developed to detect and track moving vehicles. In fact, vehicle
tracking in daytime and nighttime cannot be approached with the same techniques, due to the extremely different illumination conditions.
Consequently, traffic-monitoring systems are in need of having a
component to differentiate between daytime and nighttime scenes. In
this paper, an HSV-based day/night detector is proposed for traffic
monitoring scenes. The detector employs the hue-histogram and the
value-histogram on the top half of the image frame. Experimental
results show that the extraction of the brightness features along with
the color features within the top region of the image is effective for
classifying traffic scenes. In addition, the detector achieves high
precision and recall rates and is feasible for real-time applications.
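A deliberately simplified sketch of the day/night decision on the top half of the frame; the detector described above uses both hue and value histograms, whereas this toy version thresholds only mean brightness, and the threshold value is an assumption:

```python
def is_daytime(frame, value_threshold=0.45):
    """Classify a traffic scene as day (True) or night (False).
    `frame` is a 2-D list of HSV pixels as (h, s, v) floats in [0, 1];
    the mean V (brightness) over the top half of the frame is compared
    against an illustrative threshold."""
    top = frame[: len(frame) // 2]
    values = [v for row in top for (_, _, v) in row]
    return sum(values) / len(values) >= value_threshold

# 4-row toy frames: top half is sky, bottom half is road.
day_sky = [[(0.55, 0.3, 0.9)] * 4] * 2   # bright top rows
night_sky = [[(0.6, 0.5, 0.08)] * 4] * 2  # dark top rows
road = [[(0.1, 0.2, 0.4)] * 4] * 2

print(is_daytime(day_sky + road))    # True
print(is_daytime(night_sky + road))  # False
```

Restricting the statistics to the top half of the frame, as in the paper, avoids being fooled by headlights and street lamps on the road surface.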
Abstract: In order to retrieve images efficiently from a large database, a unique method integrating color and texture features using genetic programming has been proposed. The opponent color histogram, which offers invariance to shadow, shade, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A current issue in image retrieval is reducing the semantic gap between the user’s preference and low-level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve images matching the user’s preference. Extensive comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, COIL-100 and Corel-1000. The experimental results clearly show that the proposed system surpasses other existing systems in terms of precision and recall, achieving its highest performance with an average precision of 88.2% on COIL-100 and 76.3% on Corel, and an average recall of 69.9% on COIL and 76.3% on Corel. The results thus confirm that the proposed content-based image retrieval architecture attains a better solution for image retrieval.
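The opponent color transform underlying the color features can be sketched as below. The O1 and O2 channels are unchanged by a uniform additive shift of the RGB channels, which is the kind of intensity robustness the abstract appeals to; the bin count and histogram layout are illustrative choices, not the paper's:

```python
import math

def opponent_channels(r, g, b):
    """Standard opponent color transform of an RGB pixel."""
    o1 = (r - g) / math.sqrt(2)
    o2 = (r + g - 2 * b) / math.sqrt(6)
    o3 = (r + g + b) / math.sqrt(3)       # pure intensity axis
    return o1, o2, o3

def opponent_histogram(pixels, bins=4):
    """bins x bins histogram over quantized (O1, O2); O3 is dropped so a
    uniform intensity shift of the RGB channels leaves the histogram intact."""
    hist = [[0] * bins for _ in range(bins)]
    for r, g, b in pixels:
        o1, o2, _ = opponent_channels(r, g, b)
        # For RGB in [0, 1]: o1/sqrt(2) and o2*sqrt(6)/4 both lie in [-0.5, 0.5].
        i = min(int((o1 / math.sqrt(2) + 0.5) * bins), bins - 1)
        j = min(int((o2 * math.sqrt(6) / 4 + 0.5) * bins), bins - 1)
        hist[i][j] += 1
    return hist

pixels = [(0.8, 0.2, 0.1), (0.3, 0.6, 0.2)]
shifted = [(r + 0.1, g + 0.1, b + 0.1) for r, g, b in pixels]
print(opponent_histogram(pixels) == opponent_histogram(shifted))  # True
```

The equality holds because adding the same constant to r, g, and b cancels exactly in O1 and O2, so a brighter copy of the same scene lands in the same bins.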
Abstract: Cancer is still one of the serious diseases threatening
the lives of human beings. How to have an early diagnosis and
effective treatment for tumors is a very important issue. The animal
carcinoma model can provide a simulation tool for the studies of
pathogenesis, biological characteristics, and therapeutic effects.
Recently, drug delivery systems have been rapidly developed to
effectively improve the therapeutic effects. Liposomes play an increasingly important role in clinical diagnosis and therapy by delivering a pharmaceutic or contrast agent to the targeted sites. Liposomes can be absorbed and excreted by the human body and are well known to cause it no harm. This study aimed to
compare the therapeutic effects between encapsulated (doxorubicin
liposomal, Lipodox) and un-encapsulated (doxorubicin, Dox)
anti-tumor drugs using magnetic resonance imaging (MRI).
Twenty-four New Zealand rabbits implanted with VX2 carcinoma in the left thigh were divided into three groups: a control group (untreated), a Dox-treated group, and a LipoDox-treated group, with 8 rabbits in each group. MRI scans were performed three days after tumor implantation.
A 1.5T GE Signa HDxt whole body MRI scanner with a high
resolution knee coil was used in this study. After a 3-plane localizer
scan was performed, three-dimensional (3D) fast spin echo (FSE)
T2-weighted images (T2WI) were used for tumor volumetric
quantification. Afterwards, two-dimensional (2D) spoiled gradient
recalled echo (SPGR) dynamic contrast-enhanced (DCE) MRI was
used for tumor perfusion evaluation. DCE-MRI was designed to
acquire four baseline images, followed by contrast agent Gd-DOTA
injection through the ear vein of rabbit. A series of 32 images were
acquired to observe the signals change over time in the tumor and
muscle. The MRI scanning was scheduled on a weekly basis for a
period of four weeks to observe the tumor progression longitudinally.
The Dox and LipoDox treatments were prescribed 3 times in the first
week immediately after the first MRI scan; i.e. 3 days after VX2 tumor
implantation. ImageJ was used to quantitate tumor volume and time
course signal enhancement on DCE images. The changes in tumor size
showed that the growth of VX2 tumors was effectively inhibited for
both LipoDox-treated and Dox-treated groups. Furthermore, the tumor
volume of the LipoDox-treated group was significantly lower than that of the Dox-treated group, which implies that LipoDox has a better therapeutic effect than Dox. The signal intensity of the LipoDox-treated group was significantly lower than that of the other two groups, which implies that the targeted therapeutic drug remained in the tumor tissue. This study
provides a radiation-free and non-invasive MRI method for
therapeutic monitoring of targeted liposome on an animal tumor
model.
Abstract: Objective: Sharing devastating news with patients is
often considered the most difficult task of doctors. This study aimed
to explore patients’ perceptions of receiving bad news including
which features improve the experience and which areas need refining. Methods: A questionnaire was written based on the steps of the
SPIKES model for breaking bad news. Twenty patients receiving treatment
for a hematological malignancy completed the questionnaire. Results: Overall, the results are promising as most patients praised
their consultation. ‘Poor’ was more commonly rated by women and
participants aged 45-64. The main differences between the ‘excellent’
and ‘poor’ consultations include the doctor’s sensitivity and checking
the patients’ understanding. Only 35% of patients were asked about their existing knowledge, and 85% of consultations failed to discuss the
impact of the diagnosis on daily life. Conclusion: This study agreed with the consensus of existing
literature. The commended aspects include consultation set-up and
information given. Areas patients felt needed improvement include
doctors determining the patient’s existing knowledge and checking
new information has been understood. Doctors should also explore
how the diagnosis will affect the patient’s life. With a poorer
prognosis, doctors should work on conveying appropriate hope. The
study was limited by a small sample size and potential recall bias.
Abstract: This paper presents the local mesh co-occurrence patterns (LMCoP) using the HSV color space for an image retrieval system. The HSV color space is used in this method to utilize the color, intensity, and brightness of images. Local mesh patterns are applied to define the local information of the image, and gray-level co-occurrence is used to obtain the co-occurrence of LMeP pixels. The local mesh co-occurrence pattern extracts the local directional information from the local mesh pattern and converts it into a well-formed feature vector using the gray-level co-occurrence matrix. The proposed method is tested on three different databases, MIT VisTex, Corel, and STex, and compared with existing methods; results in terms of precision and recall are reported in this paper.
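The gray-level co-occurrence step can be sketched as follows; the toy image and single offset are illustrative, and the method described above applies this counting to local mesh pattern codes rather than raw gray levels:

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix: m[i][j] counts how often level j
    occurs at offset (dx, dy) from a pixel of level i within the image."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y][x]][image[y2][x2]] += 1
    return m

# A tiny 3-level "image" (these could be LMeP codes instead of gray levels).
img = [[0, 0, 1],
       [1, 2, 2],
       [2, 2, 2]]
co = glcm(img, levels=3)
print(co[0][0], co[2][2])  # 1 3 -- one horizontal (0,0) pair, three (2,2) pairs
```

Flattening such a matrix (often averaged over several offsets) gives the feature vector that is then compared across the database to produce the precision and recall figures.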
Abstract: Due to today’s globalization and companies’ outsourcing practices, Supply Chain (SC) performance has become more dependent on the efficient movement of material among geographically dispersed places, where there is more chance of disruption. One such disruption is the quality and delivery uncertainty of outsourcing. These uncertainties can render products unsafe and, as a number of recent examples show, companies may end up recalling their products.
As a result of these problems, there is a need to develop a
methodology for selecting suppliers globally in view of risks
associated with low quality and late delivery. Accordingly, we
developed a two-stage stochastic model that captures the risks
associated with uncertainty in quality and delivery as well as a
solution procedure for the model. The stochastic model developed
simultaneously optimizes supplier selection and purchase quantities
under price discounts over a time horizon. In particular, our target is
the study of global organizations with multiple sites and multiple
overseas suppliers, where the pricing is offered in suppliers’ local
currencies. Our proposed methodology is applied to a case study for a
US automotive company having two assembly plants and four
potential global suppliers to illustrate how the proposed model works
in practice.