Abstract: The 20th century brought much development to the practice of architecture worldwide, and technology has bridged the limits of habitation in many regions of the world with high levels of comfort and convenience, often at high cost to the environment. Across the globe, tropical countries are being urbanized at an unprecedented rate, and housing has become a major issue worldwide in light of increased demand and a lack of appropriate infrastructure and planning. Buildings and urban spaces designed in tropical cities have mainly adopted external concepts that in most cases do not fit the needs of inhabitants living in such a harsh climatic environment, and when they do, they do so at high financial, environmental and cultural cost. Traditional architectural practices can provide valuable understanding of how self-reliance and autonomy of construction can be reinforced in rural-urban tropical environments. From traditional housing knowledge, it is possible to derive lessons for the development of new construction materials that are affordable, environmentally friendly, culturally acceptable and accessible to all. In the urban context specifically, such solutions are of utmost importance, given the need for a more democratic society in which access to housing ranks high on the development agenda. Traditional or rural constructions are also undergoing extensive changes, even though they have mostly adopted climate-responsive building practices relying on local resources (with minimum embodied energy) and local energy (for comfort and quality of life). It is important to note that many of these buildings can actually be called zero-energy, and they hold potential answers for enabling the transition from high-energy, high-cost, low-comfort urban habitations to zero/low-energy habitations with a high quality of urban livelihood. Increasing access to modern urban lifestyles also affects people's aspirations regarding the performance, comfort and convenience of their housing and the way it is produced and used. These aspirations are resulting in transitions from local-resource-dependent habitations to non-local-resource-based, high-energy, urban-style habitations. Such transitions leave habitations increasingly unsuited to local climatic conditions, with growing discomfort, ill health, increased CO2 emissions and local environmental disruption. This research studies one specific transition group in the context of 'water communities' in tropical-equatorial regions: the Ribeirinhos housing typology (Amazonas, Brazil). The paper presents the results of a qualitative sustainability assessment of the housing typologies under transition found in the Ribeirinhos communities.
Abstract: Image coding based on clustering provides immediate
access to targeted features of interest in a high quality decoded
image. This approach is useful for intelligent devices, as well as for
multimedia content-based description standards. The result of image clustering cannot be precise at some positions, especially at pixels carrying edge information, which produces ambiguity among the clusters. Even with a good enhancement operator based on PDEs, the quality of the decoded image will depend heavily on the clustering process. In this paper, we introduce an ambiguity cluster in image coding to represent pixels with vagueness properties. The presence of such a cluster allows preserving details inherent to edges as well as to uncertain pixels. It is also very useful during the decoding phase, in which an anisotropic diffusion operator, such as Perona-Malik, enhances the quality of the restored image. This work also offers a comparative study demonstrating the effectiveness of a fuzzy clustering technique in detecting the ambiguity cluster without losing much of the essential image information. Several experiments have been carried out to demonstrate the usefulness of the ambiguity concept in image compression. The coding results and the performance of the proposed algorithms are discussed in terms of the peak signal-to-noise ratio and the quantity of ambiguous pixels.
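As a minimal sketch of the decoding-side enhancement named above, the classic Perona-Malik diffusion scheme can be written as follows; the iteration count, conduction parameter and step size are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
    """Classic Perona-Malik anisotropic diffusion (4-neighbour scheme).

    The edge-stopping function g(s) = exp(-(s/kappa)^2) smooths flat
    regions while preserving strong gradients, which is why it suits
    post-decoding enhancement of a cluster-coded image."""
    u = img.astype(np.float64)
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Finite differences towards the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Conduction is near 1 in flat areas, near 0 across edges.
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```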
Abstract: In developing a text-to-speech system, it is well
known that the accuracy of information extracted from a text is
crucial to producing high-quality synthesized speech. In this paper, a new scheme for converting text into its equivalent phonetic spelling is introduced and developed. This method is applicable to many text-to-speech conversion systems and has several advantages over other methods; it can also complement other methods with the purpose of improving their performance. The proposed method is a probabilistic model based on a Smooth Ergodic Hidden Markov Model, which can be considered an extension of the HMM. The proposed method is applied to the Persian language, and its accuracy in converting text to phonetic spelling is evaluated through simulations.
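The Smooth Ergodic HMM itself is not specified in the abstract; the sketch below shows only the standard Viterbi decoding step on which any HMM-based letter-to-phoneme converter rests, with the state set, observation encoding and probability tables all assumed.

```python
import numpy as np

def viterbi(obs, states, log_pi, log_A, log_B):
    """Standard Viterbi decoding: most likely phoneme-state sequence for
    an observed letter sequence. log_pi: initial state log-probs;
    log_A[i, j]: transition log-probs; log_B[i, o]: emission log-probs."""
    T, N = len(obs), len(states)
    delta = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] + log_A[:, j]
            back[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[back[t, j]] + log_B[j, obs[t]]
    # Backtrack from the best final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]
```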
Abstract: Early diagnostic decision making in industrial processes is absolutely necessary to produce high-quality final products. It helps to provide an early warning for a special event in a process and to find its assignable cause. This work presents hybrid diagnostic schemes for batch processes, in which a nonlinear representation of raw process data is combined with classification tree techniques. The nonlinear kernel-based dimension reduction is executed to obtain nonlinear classification decision boundaries for the fault classes. To enhance diagnosis performance for batch processes, the data are filtered to remove irrelevant information. Four diagnostic schemes are evaluated to compare the diagnosis performance of several representation, filtering, and future observation estimation methods. In this work, the performance of the presented diagnosis schemes is demonstrated using batch process data.
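As a minimal sketch of the hybrid scheme (kernel-based dimension reduction feeding a classification tree), assuming scikit-learn and placeholder data; the paper's filtering and future-observation estimation steps are omitted.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# X: unfolded batch trajectories (batches x variables*time);
# y: fault-class labels. Both are placeholders here.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))
y = rng.integers(0, 3, size=60)

# Nonlinear representation (kernel PCA) feeding a classification tree,
# mirroring the hybrid structure described above.
clf = make_pipeline(KernelPCA(n_components=5, kernel="rbf", gamma=0.05),
                    DecisionTreeClassifier(max_depth=4, random_state=0))
clf.fit(X, y)
print(clf.predict(X[:5]))
```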
Abstract: Meshing is the process of discretizing a problem domain into many subdomains before numerical calculation can be performed. Among the many types of meshes, one of the most popular is the tetrahedral mesh, owing to its flexibility to fit almost any domain shape. In both 2D and 3D domains, triangular and tetrahedral meshes can be generated using Delaunay triangulation. Mesh quality is an important factor in performing any Computational Fluid Dynamics (CFD) simulation, as the results are highly affected by it, and much effort has been devoted to improving it. This paper describes a newly developed mesh generation routine capable of generating high-quality tetrahedral cells in arbitrarily complex geometry. A few test cases in CFD problems are used to test the mesh generator, and the resulting mesh is compared with one generated by commercial software. The results show that no slivers exist in the generated meshes, and the overall quality is acceptable, since the percentage of bad tetrahedra is relatively small. Boundary recovery was also carried out successfully, with all missing faces rebuilt.
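One standard way to quantify the "bad tetrahedra" and slivers mentioned above is a normalized volume-based quality measure; the choice below (6√2·V / L_rms³, equal to 1 for a regular tetrahedron) is our assumption, not necessarily the metric the routine itself reports.

```python
import numpy as np
from itertools import combinations

def tet_quality(p0, p1, p2, p3):
    """Normalized volume-based quality in [0, 1]: 6*sqrt(2)*V / L_rms^3,
    equal to 1 for a regular tetrahedron and near 0 for a sliver
    (large faces, almost no volume)."""
    pts = np.array([p0, p1, p2, p3], dtype=float)
    vol = abs(np.linalg.det(pts[1:] - pts[0])) / 6.0
    edges = [np.linalg.norm(pts[i] - pts[j])
             for i, j in combinations(range(4), 2)]
    l_rms = np.sqrt(np.mean(np.square(edges)))
    return 6.0 * np.sqrt(2.0) * vol / l_rms ** 3

# A regular tetrahedron scores 1.0; flattening a vertex drives it to 0.
print(tet_quality((0, 0, 0), (1, 0, 0), (0.5, np.sqrt(3) / 2, 0),
                  (0.5, np.sqrt(3) / 6, np.sqrt(2 / 3))))
```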
Abstract: The right to housing is a basic need while good
quality and affordable housing is a reflection of a high quality of life.
However, housing remains a major problem for most, especially for
the bottom billions. Satisfaction with housing and neighbourhood conditions is one of the important indicators reflecting quality of life. These indicators are also important in evaluating housing policy with the objective of increasing the quality of housing and neighbourhoods. The research is based purely on a quantitative method, using a survey. The findings show that the housing
purchasing trend in urban Malaysia is determined by demographic
profiles, mainly by education level, age, gender and income. The
period of housing ownership also influenced the socio-cultural
interactions and satisfaction of house owners with their
neighbourhoods. The findings also show that the main concerns for
house buyers in urban areas are price and location of the house.
Respondents feel that houses in urban Malaysia are too expensive and beyond their affordability. The location of houses and the distance from the workplace are also regarded as main concerns. However, respondents are fairly satisfied with the religious and socio-cultural facilities in their housing areas and, most importantly, few regard ethnicity as an issue in their decision-making when buying a house.
Abstract: Perth will run out of available sustainable natural
water resources by 2015 if nothing is done to slow usage rates,
according to a Western Australian study [1]. Alternative water
technology options need to be considered for the long-term
guaranteed supply of water for agricultural, commercial, domestic
and industrial purposes. Seawater is an alternative source of water for
human consumption, because seawater can be desalinated and
supplied in large quantities to a very high quality.
While seawater desalination is a promising option, the technology
requires a large amount of energy which is typically generated from
fossil fuels. The combustion of fossil fuels emits greenhouse gases
(GHG) and is implicated in climate change. In addition to
environmental emissions from electricity generation for desalination,
greenhouse gases are emitted in the production of chemicals and
membranes for water treatment. Since Australia is a signatory to the
Kyoto Protocol, it is important to quantify greenhouse gas emissions
from desalinated water production.
A life cycle assessment (LCA) has been carried out to determine
the greenhouse gas emissions from the production of 1 gigalitre (GL)
of water from the new plant. In this analysis, a new desalination plant to be installed in Bunbury, Western Australia, known as the Southern Seawater Desalination Plant (SSDP), was taken as a
case study. The system boundary of the LCA mainly consists of three
stages: seawater extraction, treatment and delivery. The analysis
found that the equivalent of 3,890 tonnes of CO2 could be emitted
from the production of 1 GL of desalinated water. The analysis has also identified that the reverse osmosis process would cause the most significant greenhouse emissions as a result of the electricity used, if this electricity is generated from fossil fuels.
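The headline figure can be sanity-checked with back-of-envelope arithmetic; the energy intensity and grid emission factor below are illustrative assumptions, not values taken from the study.

```python
# Illustrative back-of-envelope check (all inputs are assumptions,
# not figures from the LCA itself).
energy_intensity = 4.0   # kWh per m^3, typical for seawater reverse osmosis
grid_factor = 0.9        # kg CO2-e per kWh for a fossil-heavy grid
volume = 1.0e6           # m^3 of water in 1 GL

electricity_emissions = energy_intensity * grid_factor * volume / 1000.0
print(f"electricity share: {electricity_emissions:,.0f} t CO2-e")  # ~3,600 t
# The remainder up to the reported 3,890 t would come from chemicals,
# membranes and delivery, consistent with RO dominating the total.
```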
Abstract: Around the world, the pulp and paper industry is among the biggest production industries, with environmental pollution as the biggest challenge facing pulp manufacturing operations. The concern among these industries is to produce a high volume of paper to a high quality standard at low cost, without affecting the environment. The results obtained from this bleaching study show that the activation of peroxide is an effective method of reducing the total applied charge of chlorine dioxide, which is harmful to our environment, and that softwood and hardwood Kraft pulps responded linearly to the peroxide treatments. During the bleaching process, the production plant produces chlorine compounds. In the trial stages, chlorine dioxide was reduced by 3 kg/ton, reducing the brightness of the pulp from 65% ISO to 60% ISO, and the dosing point was returned to the E-stage charges by pre-treating Kraft pulps with hydrogen peroxide. The pulp and paper industry has developed elemental chlorine free (ECF) and totally chlorine free (TCF) bleaching; in their quest to be environmentally friendly, companies have been looking at ways to turn their ECF process into a TCF process while remaining competitive. This prompted the research to investigate the capability of hydrogen peroxide as a catalyst to reduce chlorine dioxide.
Abstract: A key to the success of high-quality software development is to define a valid and feasible requirements specification. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface mock-up from the UML requirements analysis model, so that we can confirm the validity of the input/output data for each page, and of the page transitions on the system, by directly operating the mock-up. This paper proposes a support method for checking the validity of a data life cycle by using the model checking tool “UPPAAL”, focusing on CRUD (Create, Read, Update and Delete) operations. Exhaustive checking improves the quality of the requirements analysis model, which is validated by the customers through the automatically generated mock-up. The effectiveness of our method is discussed through a case study of requirements modeling for two small projects: a library management system and a supportive sales system for textbooks in a university.
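The UPPAAL timed-automata model cannot be reconstructed from the abstract, but the CRUD life-cycle property being checked can be sketched as a small trace validator; the single-entity event-trace format is an assumption.

```python
def crud_lifecycle_ok(trace):
    """Check the data life-cycle property sketched above: an entity must
    be Created before any Read/Update, and never touched after Delete.
    `trace` is a list of 'C', 'R', 'U', 'D' events for one entity."""
    created, deleted = False, False
    for op in trace:
        if deleted:
            return False           # any access after Delete is invalid
        if op == "C":
            if created:
                return False       # double Create
            created = True
        elif op in ("R", "U", "D"):
            if not created:
                return False       # access before Create
            if op == "D":
                deleted = True
    return True

print(crud_lifecycle_ok(list("CRUD")))  # True: a valid life cycle
print(crud_lifecycle_ok(list("RC")))    # False: Read before Create
```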
Abstract: With the rapid popularization of internet services, it is apparent that the next generation of terrestrial communication systems must be capable of supporting various applications such as voice, video, and data. This paper presents a performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high-quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and a maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of performance parameters to achieve different service qualities. The results therefore suggest an adaptive framework for turbo-coded wireless multimedia communications, which incorporates a set of performance parameters that achieve an appropriate set of service qualities, depending on the application's requirements.
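One way to picture such an adaptive framework is a lookup from service class to turbo parameters; the classes, interleaver sizes and iteration counts below are illustrative assumptions, not the paper's measured operating points.

```python
# Illustrative mapping from service class to turbo-code parameters;
# all concrete values are assumptions for the sake of the example.
PROFILES = {
    # service: (interleaver_size_bits, decoding_iterations)
    "voice": (192, 2),    # delay-sensitive: short interleaver, few iterations
    "video": (1024, 4),   # moderate latency budget
    "data":  (4096, 8),   # delay-tolerant: long interleaver, many iterations
}

def turbo_params(service: str) -> tuple[int, int]:
    """Return (interleaver size, iterations) trading latency against BER:
    larger interleavers and more iterations lower the BER but add delay."""
    return PROFILES[service]

print(turbo_params("voice"))   # (192, 2)
```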
Abstract: In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme so that a tampered region of the color image can be recovered with high quality while the verification result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information, where the feature information for recovery is computed by the thresholding technique. In the verification process, we propose a dual-option parity check method to prove the validity of image blocks. In the recovery process, the feature information of each block embedded into the color image is rebuilt for high-quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered region with a high detection rate and can recover the tampered region with high quality.
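A toy block parity check may clarify the idea; the construction below is a simplification for illustration, not the paper's dual-option method.

```python
import numpy as np

def block_parity_bits(block):
    """Toy stand-in for dual authentication data: two parity bits per
    block, one over the MSB plane and one over the raw pixel sum.
    Tampering flips at least one parity with high probability."""
    msb_parity = int(np.sum(block >> 7) & 1)   # parity of MSBs
    sum_parity = int(np.sum(block) & 1)        # parity of raw sum
    return msb_parity, sum_parity

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
stored = block_parity_bits(block)
tampered = block.copy()
tampered[0, 0] ^= 0xFF                         # simulate tampering
print(stored == block_parity_bits(tampered))   # False: block is flagged
```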
Abstract: The overriding goal of software engineering is to
provide a high quality system, application or a product. To achieve
this goal, software engineers must apply effective methods coupled
with modern tools within the context of a mature software process
[2]. In addition, it is also essential to assure that high quality is realized. Although many quality measures can be collected at the project level, the most important measures are errors and defects. Deriving a quality measure for reusable components has proven to be a challenging task nowadays. The results obtained from this study are based on empirical evidence of reuse practices, as it emerged from the analysis of industrial projects. Both large and small companies, working in a variety of business domains and using object-oriented and procedural development approaches, contributed to this study. This paper proposes a quality metric that provides benefits at both the project and process level, namely defect removal efficiency (DRE).
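In its standard textbook form, which we assume the paper follows, defect removal efficiency relates problems caught before delivery to those that escape:

```latex
\[
\mathrm{DRE} = \frac{E}{E + D}
\]
% E: errors found before the software is delivered,
% D: defects found after delivery; DRE approaches 1 as the
% process removes more problems before release.
```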
Abstract: In this paper, we construct and implement a new steganography algorithm based on a learning system to hide a large amount of information in color BMP images. We have used adaptive image filtering and adaptive non-uniform image segmentation with bit replacement on the appropriate pixels. These pixels are selected randomly rather than sequentially, using a new concept defined by main cases with sub-cases for each byte in a pixel. Through the design steps, we derived 16 main cases with their sub-cases, which cover all aspects of embedding the input information in a color bitmap image. High security has been proposed through four layers of security, to make it difficult to break the encryption of the input information and to confuse steganalysis as well. A learning system has been introduced at the fourth layer of security through a neural network; this layer is used to increase the difficulty of statistical attacks. Our results against statistical and visual attacks are discussed before and after using the learning system, and we compare them with a previous steganography algorithm. We show that our algorithm can efficiently embed a large amount of information, reaching 75% of the image size (replacing up to 18 bits per pixel), with high quality of the output.
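A toy bit-replacement embedder may help fix the 18-bits-per-pixel ceiling quoted above (6 bits in each of the three colour channels); the adaptive segmentation, randomized pixel selection and security layers of the actual algorithm are omitted from this sketch.

```python
import numpy as np

def embed_lsb(pixels, payload_bits, k=6):
    """Toy bit-replacement embedder: write k payload bits into the k
    least significant bits of every colour channel (k=6 over RGB gives
    the 18 bits/pixel ceiling mentioned above)."""
    flat = pixels.reshape(-1).astype(np.uint16)
    mask = np.uint16(0xFF ^ ((1 << k) - 1))      # clears the k LSBs
    for i in range(min(len(flat), len(payload_bits) // k)):
        chunk = payload_bits[i * k:(i + 1) * k]
        value = int("".join(map(str, chunk)), 2)
        flat[i] = (flat[i] & mask) | value
    return flat.astype(np.uint8).reshape(pixels.shape)

rng = np.random.default_rng(2)
cover = rng.integers(0, 256, size=(2, 2, 3), dtype=np.uint8)
bits = rng.integers(0, 2, size=72).tolist()      # 18 bits x 4 pixels
stego = embed_lsb(cover, bits)
```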
Abstract: Magnetic and semiconductor nanomaterials exhibit novel magnetic and optical properties owing to their unique size- and shape-dependent effects. As the size shrinks into the nanoscale region, various anomalous properties that are normally not present in the bulk start to dominate. The ability to harness these anomalous properties for the design of advanced electronic devices is strictly dependent on synthetic strategies. Hence, current research has focused on developing rational synthetic control to produce high-quality nanocrystals via an organometallic approach that tunes both the size and the shape of the nanomaterials. To elucidate the growth mechanism, transmission electron microscopy was employed as a powerful tool for time-resolved morphological and structural characterization of magnetic (Fe3O4) and semiconductor (ZnO) nanocrystals. The current synthetic approach is found to produce nanostructures with well-defined shapes. We have found that oleic acid is an effective capping ligand for preparing oxide-based nanostructures without any agglomeration, even at high temperature. The oleate-based precursors and capping ligands are fatty acid compounds originating from natural palm oil, with low toxicity. Compared with other synthetic approaches to producing nanostructures, the current method offers an effective route to oxide-based nanomaterials with well-defined shapes and good monodispersity. The nanocrystals are well separated from each other without any stacking effect. In addition, the as-synthesized nanopellets are chemically and physically stable compared with previously reported nanomaterials. Further development and extension of the current synthetic strategy are being pursued to combine both of these materials into a nanocomposite that will be used as a “smart magnetic nanophotocatalyst” for industrial wastewater treatment.
Abstract: A Cs-type nanocomposite zeolite membrane was successfully synthesized on an alumina ceramic hollow fibre with a mean outer diameter of 1.7 mm, a mean wall thickness of 230 μm and an average crossing pore size smaller than 0.2 μm; a cesium cationic exchange test was carried out inside the test module. The n-butane/H2 separation factor obtained indicates a relatively high quality, close to 20. Maxwell-Stefan modeling provides an equivalent thickness lower than 1 µm. For comparison, an application to CO2/N2 separation was carried out, reaching separation factors close to 4 and 18 before and after cation exchange on the H-zeolite membrane formed within the pores of the ceramic alumina substrate.
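For reference, the separation factors quoted here follow the usual definition for a binary mixture A/B:

```latex
\[
\alpha_{A/B} \;=\; \frac{y_A / y_B}{x_A / x_B}
\]
% y_i: mole fractions on the permeate side,
% x_i: mole fractions on the feed (retentate) side.
```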
Abstract: This research aimed to study the market feasibility of a new coffee house brand, with Thailand as the case study. The study is a mixed-methods research project combining quantitative and qualitative research. For primary data, 350 sets of questionnaires were distributed, and 320 sets of completed, high-quality questionnaires were returned. The research samples are identified as customers of high-end department stores in Thailand. The sources of secondary data were critically selected from highly reliable sources in both the public and private sectors. The results were used to classify the customers into two main groups: those younger than 25 and those older than 25 years of age. The younger group gives priority to the coffee house and its services dimension above the others, followed by the branding dimension and the product dimension, respectively. The older group, on the other hand, gives a different result, rating branding as most important, then the coffee house and its services, and then the product. Coffee consumption is not just a trend; it has become part of people's lifestyle, and new cultures have also been created by astute businesses. Coffee has long been produced and consumed in Thailand, yet surprisingly the high-end coffee house brands in the Thai market are mostly imported. The feasibility of a Thai coffee house brand in the Thai market is discussed in the paper.
Abstract: The world has entered the 21st century. The technology of computer graphics and digital cameras is prevalent, and high-resolution displays and printers are available. High-resolution images are therefore needed in order to produce high-quality displayed images and high-quality prints. However, since high-resolution images are not usually provided, there is a need to magnify the original images. One common difficulty in previous magnification techniques is preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts; a definitive solution to this is still an open issue. In this paper, an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed, which tries to take into account information about the edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region, preserving the sharp luminance variations and the smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest-neighbour (NN), bilinear (BL) and bicubic (BC) interpolation; the quantitative results are competitive and consistent with NN, BL, BC and the others.
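A minimal sketch of threshold-driven, edge-preserving 2x magnification in the spirit described above; the threshold value and interpolation rules are illustrative assumptions, not the paper's geometrical shapes.

```python
import numpy as np

def magnify_2x(img, threshold=30.0):
    """Toy data-dependent 2x magnification: smooth regions get bilinear
    averages, while pixels straddling a sharp luminance variation copy
    the nearer neighbour to keep the edge crisp."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img
    # Horizontal in-between pixels.
    left, right = img[:, :-1].astype(float), img[:, 1:].astype(float)
    out[::2, 1::2] = np.where(np.abs(left - right) < threshold,
                              (left + right) / 2.0, left)
    # Vertical in-between pixels.
    top, bot = img[:-1, :].astype(float), img[1:, :].astype(float)
    out[1::2, ::2] = np.where(np.abs(top - bot) < threshold,
                              (top + bot) / 2.0, top)
    # Centre pixels from the four already-filled axial neighbours.
    out[1::2, 1::2] = (out[::2, 1::2][:-1] + out[::2, 1::2][1:] +
                       out[1::2, ::2][:, :-1] + out[1::2, ::2][:, 1:]) / 4.0
    return out
```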
Abstract: This paper presents a vocoder that obtains high-quality synthetic speech at 600 bps. To reduce the bit rate, the algorithm is based on a sinusoidally excited linear prediction model which extracts few coding parameters; three consecutive frames are grouped into a superframe and joint vector quantization is used to obtain high coding efficiency. The inter-frame redundancy is exploited with distinct quantization schemes for the different unvoiced/voiced frame combinations in the superframe. Experimental results show that the quality of the proposed coder is better than that of 2.4 kbps LPC10e, and approximately the same as that of 2.4 kbps MELP, with high robustness.
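The bit budget implied by the superframe grouping is easy to check; the 25 ms frame length below is an assumption, as the abstract does not state it.

```python
# Bit-budget sanity check for the 600 bps superframe scheme above.
bit_rate = 600             # bits per second
frame_ms = 25              # assumed frame length (not given in the abstract)
frames_per_superframe = 3  # as stated above

superframe_ms = frame_ms * frames_per_superframe
bits_per_superframe = bit_rate * superframe_ms / 1000.0
print(bits_per_superframe)  # 45 bits to jointly quantize all parameters
                            # (spectrum, pitch, gain, voicing) of 3 frames
```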
Abstract: Microwave energy is a superior alternative to several other thermal treatments. Extraction techniques are widely employed for the isolation of bioactive compounds and vegetable oils from oil seeds. Among the different and newly available techniques, microwave pretreatment of seeds is a simple and desirable method for the production of high-quality vegetable oils. Microwave pretreatment for oil extraction has many advantages, as follows: improved oil extraction yield and quality, direct extraction capability, lower energy consumption, faster processing time and reduced solvent levels compared with conventional methods. It also allows for better retention and availability of desirable nutraceuticals, such as phytosterols, tocopherols, canolol and phenolic compounds, in extracted oils such as rapeseed oil. This can be a new step towards producing nutritional vegetable oils with improved shelf life because of their high antioxidant content.
Abstract: Speed estimation is one of the important and practical tasks in machine vision, robotics and mechatronics. The availability of high-quality and inexpensive video cameras, and the increasing need for automated video analysis, has generated a great deal of interest in machine vision algorithms. Numerous approaches to speed estimation have been proposed, so a classification and survey of these methods can be very useful. The goal of this paper is first to review and verify these methods. We then propose a novel algorithm to estimate the speed of a moving object using fuzzy concepts. There is a direct relation between motion blur parameters and object speed. In our new approach, we use the Radon transform to find the direction of the blurred image, and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision on noisy images. Our method was tested on many images over a wide range of SNR, and the results are satisfactory.
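A minimal sketch of the direction-finding step, assuming scikit-image: the log power spectrum of a motion-blurred image shows parallel ripples, and the Radon projection with maximal variance reveals their orientation. The fuzzy estimation of blur length is not shown.

```python
import numpy as np
from skimage.transform import radon

def blur_direction(img):
    """Estimate motion-blur direction: the log power spectrum of a
    motion-blurred image shows parallel dark ripples, and the Radon
    projection with the largest variance reveals their angle (degrees)."""
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    angles = np.arange(0, 180)
    sinogram = radon(spectrum, theta=angles, circle=False)
    # The projection aligned with the spectral ripples has maximal variance.
    return angles[np.argmax(sinogram.var(axis=0))]
```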