Knowledge Transfer and the Translation of Technical Texts

This paper contributes to the ongoing debate on the relevance of translation studies to professional practitioners. It exposes the various misconceptions permeating the links between theory and practice in the translation landscape in the Arab World. It is a thesis of this paper that specialization in translation should be redefined, taking account of the fact that specialized knowledge alone is neither crucial nor sufficient in technical translation. It should be tested against the readability of the translated text, the appropriateness of its style, and the usability of its content by end-users to carry out their intended tasks. The paper also proposes a preliminary model to establish a working link between theory and practice from the perspective of professional trainers and practitioners, calling for the latter to participate in the production of knowledge in a systematic fashion. While this proposal is driven by a rather intuitive conviction, a research line is needed to specify the methodological moves that would establish the mediation strategies relating the components in the model of knowledge transfer proposed in this paper.

Applying Kinect on the Development of a Customized 3D Mannequin

In the field of fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. Ergonomic measurements of the target human body are acquired in real time with the Kinect depth camera, and mesh morphing is then implemented by transforming the locations of control points on the model according to these ergonomic data, yielding a customized 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are corrected and smoothed, a complete human body is reconstructed with the ICP algorithm and image-processing methods. The target body is then recognized and analyzed to obtain actual measurements. Furthermore, the ergonomic measurements are applied to shape morphing of the 3D mannequin segments reconstructed from feature curves. Because subdivision yields a standardized, customer-oriented 3D mannequin, the research can be applied to fashion design and to the presentation and display of 3D virtual clothes. To examine the practicality of the proposed framework, a 3D mannequin system is implemented in Java, and its practicability is verified through iterative experiments.
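The reconstruction step aligns Kinect scans with the ICP algorithm. As a rough, illustrative sketch only (not the authors' implementation), the following Python/NumPy code performs a basic point-to-point ICP alignment between two point clouds; the iteration cap and tolerance are assumed values.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, max_iter=50, tol=1e-6):
    """Basic point-to-point ICP: match nearest neighbours, re-estimate the
    rigid transform, and repeat until the mean error stops improving."""
    tree = cKDTree(dst)
    cur, prev_err = src.copy(), np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(cur)             # nearest-neighbour correspondences
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return cur

# Hypothetical use: align a partial Kinect scan to a reference scan
# aligned_scan = icp(partial_scan, reference_scan)
```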

Microfluidic Continuous Approaches to Produce Magnetic Nanoparticles with Homogeneous Size Distribution

We present a gas-liquid microfluidic system as a reactor to obtain magnetite nanoparticles with an excellent degree of control over their crystalline phase, shape and size. Several types of microflow approaches were selected to prevent nanomaterial aggregation and to promote a homogeneous size distribution. The selected reactor consists of a mixing stage aided by ultrasound waves and a reaction stage using an N2-liquid segmented flow to prevent oxidation of magnetite to non-magnetic phases. A millifluidic reactor was then developed to increase the production rate, and a magnetite throughput close to 450 mg/h was obtained in a continuous fashion.

Urban Citizenship in a Sensor Rich Society

Urban public spaces are sutured with a range of surveillance and sensor technologies that claim to enable new forms of ‘data based citizen participation’, but also increase the tendency for ‘function-creep’, whereby vast amounts of data are gathered, stored and analysed in a broad application of urban surveillance. This kind of monitoring and capacity for surveillance connects with attempts by civic authorities to regulate, restrict, rebrand and reframe urban public spaces. A direct consequence of the increasingly security driven, policed, privatised and surveilled nature of public space is the exclusion or ‘unfavourable inclusion’ of those considered flawed and unwelcome in the ‘spectacular’ consumption spaces of many major urban centres. In the name of urban regeneration, programs of securitisation, ‘gentrification’ and ‘creative’ and ‘smart’ city initiatives refashion public space as sites of selective inclusion and exclusion. In this context of monitoring and control procedures, in particular, children and young people’s use of space in parks, neighbourhoods, shopping malls and streets is often viewed as a threat to the social order, requiring various forms of remedial action. This paper suggests that cities, places and spaces and those who seek to use them, can be resilient in working to maintain and extend democratic freedoms and processes enshrined in Marshall’s concept of citizenship, calling sensor and surveillance systems to account. Such accountability could better inform the implementation of public policy around the design, build and governance of public space and also understandings of urban citizenship in the sensor saturated urban environment.

Distributed Self-Healing Protocol for Unattended Wireless Sensor Network

Wireless sensor networks are vulnerable to a wide range of attacks. To recover secrecy after compromise, we develop techniques that can detect intrusions and build resilient networks that isolate the point(s) of intrusion while maintaining network connectivity for other legitimate users. We also define new security metrics to evaluate a collaborative intrusion-resilience protocol that leverages sensor mobility to allow compromised sensors to recover a secure state after compromise. This is obtained with very low overhead and in a fully distributed fashion, and extensive simulations support our findings.

On Algebraic Structure of Improved Gauss-Seidel Iteration

Analysis of real-life problems often results in linear systems of equations for which solutions are sought. The method to employ depends, to some extent, on the properties of the coefficient matrix. It is not always feasible to solve linear systems of equations by direct methods, so the need to use an iterative method becomes imperative. Before an iterative method can be employed to solve a linear system of equations, there must be a guarantee that the process of solution will converge. This guarantee, which must be determined a priori, involves the use of some criterion expressible in terms of the entries of the coefficient matrix. It is, therefore, logical that the convergence criterion should depend implicitly on the algebraic structure of such a method. Contrary to this view, however, is the practice of conducting convergence analysis for the Gauss-Seidel iteration with a criterion formulated from the algebraic structure of the Jacobi iteration. To remedy this anomaly, the Gauss-Seidel iteration was studied for its algebraic structure and, contrary to the usual assumption, it was discovered that the iteration matrix of the Gauss-Seidel method is diagonally dominant only in its first row, while the other rows do not satisfy diagonal dominance. With the aid of this structure we herein fashion out an improved version of the Gauss-Seidel iteration with the prospect of enhancing the convergence and robustness of the method. A numerical section is included to demonstrate the validity of the theoretical results obtained for the improved Gauss-Seidel method.
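For reference, the classical Gauss-Seidel sweep that the paper sets out to improve can be sketched as follows in Python; the test system, tolerance and iteration cap are illustrative, and this is not the improved scheme proposed here.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Standard Gauss-Seidel iteration for A x = b.
    Each sweep uses the components already updated in that sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Illustrative diagonally dominant system
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(gauss_seidel(A, b))        # close to np.linalg.solve(A, b)
```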

Biometric Steganography Using Variable Length Embedding

Recent growth in digital multimedia technologies has provided many facilities for information transmission, reproduction and manipulation. Information security has therefore become one of the foremost concerns of the present day. Biometric information security is one such security mechanism, and it has advantages as well as disadvantages. Biometric systems are at risk from a range of attacks, which are designed to bypass the security system or to disrupt its normal functioning, and various hazards have been identified in the use of biometric systems. Proper use of steganography greatly reduces the risk to biometric systems from hackers. Steganography is one of the fashionable information-hiding techniques: its goal is to hide information inside a cover medium such as text, image, audio or video so that the existence of the secret information cannot be detected. In this paper a new security concept is established that makes the system more secure by combining steganography with biometric security. The biometric information is embedded in the skin-tone portion of an image with the help of the proposed steganographic technique.
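The abstract states that the biometric payload is hidden in the skin-tone portion of a cover image but does not detail the embedding rule. Purely as an illustration of the general idea, the Python sketch below performs least-significant-bit embedding restricted to a pre-computed skin mask; the cover, mask and payload are hypothetical.

```python
import numpy as np

def embed_lsb(cover, skin_mask, payload_bits):
    """Hide payload_bits in the LSB of cover pixels flagged by skin_mask.
    cover: 2-D uint8 array (one channel); skin_mask: boolean array, same shape."""
    stego = cover.copy()
    ys, xs = np.nonzero(skin_mask)
    if len(payload_bits) > len(ys):
        raise ValueError("payload larger than embedding capacity")
    for bit, y, x in zip(payload_bits, ys, xs):
        stego[y, x] = (stego[y, x] & 0xFE) | bit     # overwrite least significant bit
    return stego

def extract_lsb(stego, skin_mask, n_bits):
    ys, xs = np.nonzero(skin_mask)
    return [int(stego[y, x] & 1) for y, x in zip(ys[:n_bits], xs[:n_bits])]

# Hypothetical 4x4 cover and skin mask
cover = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
bits = [1, 0, 1, 1]
stego = embed_lsb(cover, mask, bits)
assert extract_lsb(stego, mask, len(bits)) == bits
```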

Rheological Behavior of Fresh Activated Sludge

Despite a few research works on municipal sludge, there is still a lack of actual data. This work therefore focused on the conditioning and rheology of fresh activated sludge. The effect of a cationic polyelectrolyte was investigated at different concentrations and pH values in a comparative fashion. A yield stress is present in all results, indicating the minimum stress necessary to reach flow conditions. Particle-particle connections are the reason for this yield stress; moreover, the addition of polyelectrolyte creates strong bonds between particles and water, resulting in aggregates that require a higher shear stress in order to flow. The experimental results indicate that cationic polyelectrolytes have a significant influence on the sludge characteristics and water quality, such as turbidity, SVI, zone settling rate and shear stress.
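The abstract reports yield-stress behaviour but does not state which rheological model was fitted. Purely as an illustration, the short Python sketch below fits a Bingham plastic model, tau = tau_y + mu_p * gamma_dot, to hypothetical flow-curve data to estimate a yield stress.

```python
import numpy as np

# Hypothetical flow-curve data: shear rate (1/s) and shear stress (Pa)
shear_rate = np.array([5, 10, 20, 40, 80, 160], dtype=float)
shear_stress = np.array([1.9, 2.3, 3.0, 4.5, 7.4, 13.2])

# Bingham plastic model is linear in the shear rate, so a degree-1 fit
# returns the plastic viscosity (slope) and the yield stress (intercept).
mu_p, tau_y = np.polyfit(shear_rate, shear_stress, 1)
print(f"plastic viscosity ~ {mu_p:.3f} Pa.s, yield stress ~ {tau_y:.2f} Pa")
```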

Influence of the Seat Arrangement in Public Reading Spaces on Individual Subjective Perceptions

This study involves a design proposal. The objective is to create a seat arrangement model for public reading spaces that enables free arrangement without disturbing the users. Through a subjective perception scale, this study explored whether the distance between seats and the direction of seats influence individual subjective perceptions in a public reading space. This study also analyzes user subjective perceptions when reading in settings with 3 seat directions and 5 distances between seats. The results may be applied to public chair design. This study investigated (a) whether different directions of seats and distances between seats influence individual subjective perceptions and (b) the acceptable personal space between 2 strangers in a public reading space. The results are as follows: (a) The directions of seats and distances between seats influenced individual subjective perceptions. (b) Subjective evaluation scores were higher for back-to-back seat directions with Distances A (10 cm) and B (62 cm) compared with face-to-face and side-by-side seat directions; however, when the seat distance exceeded 114 cm (Distance C), no difference existed among the directions of seats. (c) Regarding reading in public spaces, when the distance between seats is only 10 cm, we recommend arranging the seats in a back-to-back fashion to increase user comfort, and face-to-face and side-by-side seat directions should be avoided. When the seat arrangement is limited to a face-to-face design, the distance between seats should be increased to at least 62 cm. Moreover, the distance between seats should be increased to at least 114 cm for side-by-side seats to elevate user comfort.

Numerical Study on the Flow around a Steadily Rotating Spring: Understanding the Propulsion of a Bacterial Flagellum

The propulsion of a bacterial flagellum in a viscous fluid has attracted much interest in the field of biological hydrodynamics, but it is not yet fully understood and thus remains a challenging problem. In this study, therefore, we have numerically investigated the flow around a steadily rotating micro-sized spring to further understand bacterial flagellum propulsion. Note that a bacterium gains thrust (propulsive force) by rotating the flagellum connected to its body through a biological motor in order to move forward. For the investigation, we convert the spring model from the micro scale to the macro scale using a similitude law (scale law) and perform simulations on the converted macro-scale model using a commercial software package, CFX v13 (ANSYS). To scrutinize the propulsion characteristics of the flagellum, we carry out parametric studies by changing flow parameters expected to affect the thrust force experienced by the rotating spring, such as the pitch, helical radius and rotational speed of the spring and the Reynolds number (or fluid viscosity). Results show that the propulsion characteristics depend strongly on these parameters. The forward thrust is observed to increase in a linear fashion with either the rotational speed or the fluid viscosity. In addition, the thrust is directly proportional to the square of the helical radius, whereas with respect to the pitch the thrust first increases and then decreases after a peak value. Finally, we present visualizations of the flow and pressure fields to support these observations.
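The micro-to-macro conversion relies on a similitude (scale) law whose exact form is not given in the abstract. Assuming Reynolds-number matching with Re = rho * omega * R^2 / mu taken as the characteristic number, the short Python sketch below solves for the macro-scale rotation rate in a more viscous fluid; all numerical values are assumed.

```python
# Hypothetical Reynolds-number similitude: match Re = rho * omega * R**2 / mu
# between a micro-scale flagellum in water and a scaled-up spring in glycerin.
rho_w, mu_w = 1000.0, 1.0e-3        # water density (kg/m^3) and viscosity (Pa.s)
rho_g, mu_g = 1260.0, 1.4           # glycerin (assumed values)

R_micro = 0.2e-6                    # helical radius of the flagellum (m), assumed
omega_micro = 2 * 3.14159 * 100     # rotation rate (rad/s), assumed
scale = 1.0e4                       # geometric scale-up factor, assumed
R_macro = R_micro * scale

Re = rho_w * omega_micro * R_micro**2 / mu_w
omega_macro = Re * mu_g / (rho_g * R_macro**2)   # rotation rate preserving Re
print(f"Re = {Re:.2e}, macro-scale rotation rate = {omega_macro:.2e} rad/s")
```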

A Robust Method for Finding Nearest-Neighbor using Hexagon Cells

In pattern clustering, nearest-neighbor computation is a challenging issue for many applications in research areas such as remote sensing, computer vision, pattern recognition and statistical imaging. Nearest-neighbor computation is essential for providing sufficient classification among a volume of pixels (voxels) in order to localize the active region of interest (AROI). Furthermore, it is needed to compute spatial metric relationships over diverse imaging areas in pattern recognition applications. In this paper, we propose a new methodology for finding the nearest-neighbor point that depends on building a virtual grid of hexagonal cells and locating every point beneath them. An algorithm is suggested for minimizing the computation and improving the turnaround time of the process. The nearest neighbors of a query point Φ are fetched by searching the hexagons in a holistic fashion, and the search is repeated until the AROI of Φ is reached. If a point Υ is located, searching continues in the nearest hexagons in a circular way: the first hexagon is considered level 0 (L0) and the surrounding hexagons are level 1 (L1). If Υ is located in L1, the search continues in the next level (L2) to ensure that Υ is the nearest neighbor of Φ. Based on the experimental results, we found that the proposed method has an advantage over traditional methods in terms of minimizing the time complexity required for searching the neighbors, which in turn sufficiently improves the efficiency of classification.
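As a rough illustration of the idea (not the authors' exact algorithm), the Python sketch below buckets points into hexagonal cells using axial coordinates and then searches the query's cell (L0) and expanding rings (L1, L2, ...), scanning one extra level after the first candidate is found, as described above; the cell size is an assumed parameter.

```python
import math
from collections import defaultdict

SIZE = 1.0   # hexagon cell size (assumed)

def to_axial(x, y, size=SIZE):
    """Map a point to the axial (q, r) coordinates of its pointy-top hexagon."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 * y / 3) / size
    cx, cz = q, r
    cy = -cx - cz                                  # cube rounding
    rx, ry, rz = round(cx), round(cy), round(cz)
    dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

def ring(center, k):
    """All axial cells at hexagonal distance exactly k from center."""
    cq, cr = center
    return [(cq + dq, cr + dr)
            for dq in range(-k, k + 1)
            for dr in range(max(-k, -dq - k), min(k, -dq + k) + 1)
            if (abs(dq) + abs(dr) + abs(dq + dr)) // 2 == k]

def build_grid(points):
    grid = defaultdict(list)
    for p in points:
        grid[to_axial(*p)].append(p)
    return grid

def nearest(grid, query, max_level=50):
    """Scan the query's cell, then expanding rings; once a candidate is found
    at level k, also scan level k+1 before deciding (the L0/L1/L2 rule above)."""
    best, best_d, found_level = None, float("inf"), None
    centre = to_axial(*query)
    for k in range(max_level):
        for cell in ring(centre, k):
            for (px, py) in grid.get(cell, []):
                d = math.hypot(px - query[0], py - query[1])
                if d < best_d:
                    best, best_d = (px, py), d
        if best is not None:
            if found_level is None:
                found_level = k
            elif k > found_level:
                break
    return best

pts = [(0.2, 0.1), (2.5, 1.7), (5.1, 4.2), (0.9, 3.3)]   # hypothetical points
grid = build_grid(pts)
print(nearest(grid, (2.0, 2.0)))    # nearest stored point to the query
```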

Conditioning Process of Fresh Activated Sludge

The effects of polyelectrolytes with cationic and anionic charges, as well as coagulants, have been investigated for fresh activated sludge at different concentrations and pH values in a comparative fashion. The experimental results indicate that the cationic polyelectrolytes have a significant influence on the sludge characteristics, degree of flocculation and water quality, such as turbidity and SVI. The results show that the cationic CPAM-80 is the most effective polyelectrolyte used with respect to turbidity and SVI, despite the variations in the feed properties of the fresh activated sludge.

Off-Line Signature Recognition Based On Angle Features and GRNN Neural Networks

This research presents handwritten signature recognition based on angle feature vectors using an Artificial Neural Network (ANN). Each signature image is represented by an angle vector, and this feature vector constitutes the input to the ANN. The collection of signature images is divided into two sets: one set is used for training the ANN in a supervised fashion, while the other set, never seen by the ANN, is used for testing. After training, the ANN is tested for recognition of the signatures. When a signature is classified correctly, it is considered a correct recognition; otherwise it is a failure.
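The title names a GRNN, which the abstract does not detail. As a minimal sketch only, the Python/NumPy code below implements a general regression neural network in the Nadaraya-Watson style, applied to one-hot class targets so it can map angle feature vectors to signer classes; the spread parameter and training data are hypothetical.

```python
import numpy as np

class GRNN:
    """Minimal General Regression Neural Network (Specht): a kernel-weighted
    average of the training targets, here used with one-hot class targets."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y, n_classes):
        self.X = np.asarray(X, dtype=float)
        self.T = np.eye(n_classes)[np.asarray(y)]        # one-hot targets
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(Xq).astype(float)
        d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))          # RBF kernel weights
        scores = w @ self.T / w.sum(axis=1, keepdims=True)
        return scores.argmax(axis=1)

# Hypothetical angle feature vectors for two signers
X_train = [[0.10, 0.90], [0.15, 0.80], [0.85, 0.20], [0.90, 0.10]]
y_train = [0, 0, 1, 1]
model = GRNN(sigma=0.3).fit(X_train, y_train, n_classes=2)
print(model.predict([[0.12, 0.85], [0.88, 0.15]]))       # -> [0 1]
```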

A Fuzzy Dynamic Load Balancing Algorithm for Homogenous Distributed Systems

Load balancing in distributed computer systems is the process of redistributing the workload among processors in the system to improve system performance. Most previous research on using fuzzy logic for load balancing has concentrated only on utilizing fuzzy logic concepts to describe processor load and task execution length. The responsibility for the fuzzy-based load balancing process itself, however, has not been discussed, and in most reported work it is assumed to be performed in a distributed fashion by all nodes in the network. This paper proposes a new fuzzy dynamic load balancing algorithm for homogenous distributed systems. The proposed algorithm utilizes fuzzy logic in dealing with inaccurate load information, making load distribution decisions, and maintaining overall system stability. In terms of control, we propose a new approach that specifies how, when, and by which node the load balancing is implemented. Our approach is called Centralized-But-Distributed (CBD).
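The membership functions and rules are not given in the abstract. The Python sketch below is only a toy illustration of how triangular fuzzy sets over CPU utilization could classify a node's load and gate a migration decision; all breakpoints and the decision threshold are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_load(util):
    """Fuzzy degrees of membership for a node's CPU utilization in [0, 1]."""
    return {
        "light":    tri(util, -0.01, 0.0, 0.5),
        "moderate": tri(util, 0.2, 0.5, 0.8),
        "heavy":    tri(util, 0.5, 1.0, 1.01),
    }

def should_migrate(sender_util, receiver_util, threshold=0.5):
    """Toy rule: migrate work if the sender is 'heavy' and the receiver is
    'light' to at least the given degree (min acts as fuzzy AND)."""
    s, r = fuzzify_load(sender_util), fuzzify_load(receiver_util)
    return min(s["heavy"], r["light"]) >= threshold

print(fuzzify_load(0.75))
print(should_migrate(0.9, 0.2))    # -> True
```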

A Robust Al-Hawalees Gaming Automation using Minimax and BPNN Decision

Artificial-intelligence-based gaming is an interesting topic in state-of-the-art technology. This paper presents an automation of a traditional Omani game called Al-Hawalees; its related issues are resolved and implemented using an artificial intelligence approach. The minimax procedure is incorporated to generate diverse moves during on-line gaming. As the number of moves increases, the time complexity increases proportionally. To tackle the time and space complexities, we employ a back-propagation neural network (BPNN), trained off-line, to decide on the resources required to fulfill the automation of the game. We utilize Levenberg-Marquardt training in order to obtain a rapid response during gaming. A set of optimal moves is determined by the on-line back-propagation training fashioned with alpha-beta pruning. The results and analyses reveal that the proposed scheme can be easily incorporated into an on-line scenario with one player against the system.
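The abstract does not spell out the board evaluation used for Al-Hawalees, so the following Python sketch shows only a generic minimax search with alpha-beta pruning over an abstract game interface; the Game methods (is_terminal, evaluate, moves, apply) and the search depth are assumptions.

```python
import math

def alphabeta(game, state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Generic minimax search with alpha-beta pruning.
    `game` is assumed to expose is_terminal, evaluate, moves and apply."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state), None
    best_move = None
    if maximizing:
        value = -math.inf
        for move in game.moves(state):
            child, _ = alphabeta(game, game.apply(state, move),
                                 depth - 1, alpha, beta, False)
            if child > value:
                value, best_move = child, move
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cut-off
                break
        return value, best_move
    else:
        value = math.inf
        for move in game.moves(state):
            child, _ = alphabeta(game, game.apply(state, move),
                                 depth - 1, alpha, beta, True)
            if child < value:
                value, best_move = child, move
            beta = min(beta, value)
            if beta <= alpha:          # alpha cut-off
                break
        return value, best_move

# Hypothetical use: score, move = alphabeta(al_hawalees, start_state, depth=6)
```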

A Dynamic RGB Intensity Based Steganography Scheme

Steganography means 'covered writing'; it involves the concealment of information within computer files [1]. In other words, it is secret communication that hides the very existence of a message. In this paper, we use the term cover image for an image that does not yet contain a secret message and stego image for an image with an embedded secret message; the secret message itself is referred to as the stego-message or hidden message. We propose a technique called the RGB intensity-based steganography model, since the RGB model is the technique commonly used in this field to hide data. The methods used here are based on the manipulation of the least significant bits of pixel values [3][4] or the rearrangement of colors to create least-significant-bit or parity-bit patterns that correspond to the message being hidden. The proposed technique attempts to overcome the problem of embedding in a sequential fashion through the use of a stego-key to select the pixels.
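The exact pixel-selection scheme is not given in the abstract. As one possible illustration, the Python sketch below uses a stego-key to seed a pseudo-random permutation of pixel positions, so the LSB payload is not written in a sequential fashion; the key, cover image and payload are hypothetical.

```python
import random
import numpy as np

def keyed_pixel_order(shape, stego_key):
    """Key-dependent pseudo-random ordering of pixel positions, so the payload
    is not embedded in sequential (raster) order."""
    h, w, _ = shape
    idx = [(y, x) for y in range(h) for x in range(w)]
    random.Random(stego_key).shuffle(idx)      # same key -> same order at extraction
    return idx

def embed(cover, bits, stego_key, channel=2):
    """Write payload bits into the LSB of one colour channel of key-selected pixels."""
    stego = cover.copy()
    order = keyed_pixel_order(cover.shape, stego_key)
    for bit, (y, x) in zip(bits, order):
        stego[y, x, channel] = (stego[y, x, channel] & 0xFE) | bit
    return stego

def extract(stego, n_bits, stego_key, channel=2):
    order = keyed_pixel_order(stego.shape, stego_key)
    return [int(stego[y, x, channel] & 1) for (y, x) in order[:n_bits]]

cover = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)   # hypothetical cover
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits, stego_key="secret")
assert extract(stego, len(bits), stego_key="secret") == bits
```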

Removing Ocular Artifacts from EEG Signals using Adaptive Filtering and ARMAX Modeling

The EEG signal is one of the oldest measures of brain activity and has been used widely for clinical diagnosis and biomedical research. However, EEG signals are highly contaminated with various artifacts, both from the subject and from equipment interference. Among these artifacts, ocular noise is the most important one. Since many applications such as BCI require online, real-time processing of the EEG signal, it is ideal if the removal of artifacts is performed in an online fashion. Recently, some methods for online removal of ocular artifacts have been proposed. One of these methods is ARMAX modeling of the EEG signal, which assumes that the recorded EEG signal is a combination of EOG artifacts and the background EEG; the background EEG is then estimated via estimation of the ARMAX parameters. The other recently proposed method is based on adaptive filtering, which uses the EOG signal as the reference input and subtracts EOG artifacts from the recorded EEG signals. In this paper we investigate the efficiency of each method for removing EOG artifacts and make a comparison between the two. Our conclusion from this comparison is that the adaptive filtering method achieves better results than ARMAX modeling.
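The adaptive-filtering approach uses the EOG recording as the reference input; the abstract does not state which adaptation rule was used, so the Python sketch below shows a standard LMS adaptive canceller as one common choice. The filter length, step size and the synthetic signals are illustrative.

```python
import numpy as np

def lms_cancel(eeg, eog, n_taps=5, mu=0.01):
    """Standard LMS adaptive noise canceller: predict the ocular artifact in
    the EEG channel from the EOG reference and subtract it sample by sample."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(eeg)
    for n in range(len(eeg)):
        x = eog[max(0, n - n_taps + 1):n + 1][::-1]      # most recent EOG samples
        x = np.pad(x, (0, n_taps - len(x)))
        y = w @ x                                        # estimated artifact
        e = eeg[n] - y                                   # error = cleaned EEG sample
        w += 2 * mu * e * x                              # LMS weight update
        cleaned[n] = e
    return cleaned

# Illustrative signals: background EEG plus a scaled, smoothed EOG artifact
rng = np.random.default_rng(0)
eog = np.convolve(rng.standard_normal(2000), np.ones(50) / 50, mode="same")
eeg = 0.1 * rng.standard_normal(2000) + 0.8 * eog
print(np.std(eeg), np.std(lms_cancel(eeg, eog)))         # variance should drop
```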

Specification of Attributes of a Multimedia Presentation for Presentation Manager

A multimedia presentation system refers to the integration of a multimedia database with a presentation manager that provides the functionality of content selection, organization and playout of multimedia presentations. It requires high performance from the system components involved: from multimedia information capture to presentation delivery, high-performance tools are required for accessing, manipulating, storing and retrieving media segments, and for transferring and delivering them to a presentation terminal according to a playout order. The organization of presentations is a complex task in that the display order of the presentation contents (in time and space) must be specified. A multimedia presentation contains audio, video, image and text media types. The critical decisions for presentation construction include what the contents are and how they are organized; once the organization of the contents is decided, the presentation must be conveyed to the end user in the correct organizational order and in a timely fashion. This paper introduces a framework for the specification of multimedia presentations and describes the design of sample presentations from a multimedia database using this framework.
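The specification framework itself is not detailed in the abstract. Purely as an illustrative sketch, the Python dataclasses below show one way the temporal and spatial attributes of a presentation might be specified and sorted into a playout order; all field names are assumptions rather than the paper's schema.

```python
from dataclasses import dataclass, field
from typing import List, Literal, Optional

@dataclass
class Region:
    """Spatial placement of a media item on the presentation canvas."""
    x: int
    y: int
    width: int
    height: int

@dataclass
class MediaItem:
    """One presentation element with its temporal and spatial attributes."""
    media_id: str
    media_type: Literal["audio", "video", "image", "text"]
    start: float                    # seconds from the start of the presentation
    duration: float                 # seconds the item stays in the playout
    region: Optional[Region] = None # audio items need no spatial region

@dataclass
class Presentation:
    title: str
    items: List[MediaItem] = field(default_factory=list)

    def playout_order(self) -> List[MediaItem]:
        """Items sorted by start time, i.e. the order the manager plays them out."""
        return sorted(self.items, key=lambda m: m.start)

demo = Presentation("Campus tour", [
    MediaItem("intro_text", "text", 0.0, 5.0, Region(0, 0, 640, 80)),
    MediaItem("narration", "audio", 0.0, 30.0),
    MediaItem("clip1", "video", 5.0, 25.0, Region(0, 80, 640, 400)),
])
print([m.media_id for m in demo.playout_order()])
```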

Interference Reduction Technique in Multistage Multiuser Detector for DS-CDMA System

This paper presents results related to an interference reduction technique in a multistage multiuser detector for an asynchronous DS-CDMA system. To meet the real-time requirements of asynchronous multiuser detection, a bit-streaming, cascade architecture is used. Asynchronous multiuser detection involves block-based computations and matrix inversions. The paper covers iterative suboptimal schemes that have been studied to decrease the computational complexity, eliminate the need for matrix inversions, decrease the execution time and reduce the memory requirements, and that use a joint estimation and detection process giving better performance than independent parameter estimation. The iteration stages are cascaded and bits are processed in a streaming fashion. The simulation has been carried out for an asynchronous DS-CDMA system by varying one parameter, namely the number of users. The simulation results show that the system gives its optimum bit error rate (BER) at the third stage for 15 users.
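The abstract does not give the exact cancellation scheme. As a simplified illustration, the Python sketch below runs multistage parallel interference cancellation for a synchronous DS-CDMA model (the paper's asynchronous, bit-streaming implementation is more involved); the number of users, spreading length and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, spread_len, n_stages = 4, 16, 3

S = rng.choice([-1.0, 1.0], size=(spread_len, n_users)) / np.sqrt(spread_len)
b = rng.choice([-1.0, 1.0], size=n_users)            # transmitted bits
r = S @ b + 0.1 * rng.standard_normal(spread_len)    # received chips (synchronous model)

R = S.T @ S                                          # code cross-correlation matrix
y = S.T @ r                                          # matched-filter outputs
b_hat = np.sign(y)                                   # stage-0 (conventional) decisions

for _ in range(n_stages):
    # subtract the estimated multiple-access interference for all users in parallel
    mai = (R - np.diag(np.diag(R))) @ b_hat
    b_hat = np.sign(y - mai)

print("transmitted:", b.astype(int))
print("detected:   ", b_hat.astype(int))
```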

Exploring Dimensionality, Systematic Mutations and Number of Contacts in Simple HP ab-initio Protein Folding Using a Blackboard-based Agent Platform

A computational platform is presented in this contribution. It has been designed as a virtual laboratory for exploring optimization algorithms in biological problems and is built on a blackboard-based agent architecture. As a test case, the version of the platform presented here is devoted to the study of protein folding, initially with a bead-like description of the chain and with the widely used model of hydrophobic and polar residues (the HP model). Some details of the platform design are presented along with its capabilities, and explorations of the protein folding problem in different types of discrete space are reviewed. The capability of the platform to incorporate specific tools for the structural analysis of the runs, in order to understand and improve the optimization process, is also shown. Accordingly, the results obtained demonstrate that assembling these computational tools into a single platform is worthwhile in itself, since experiments developed on it can be designed to provide different levels of information in a self-consistent fashion. We are currently exploring how an experiment design can be used to create a computational agent to be included within the platform. Such inclusions of designed agents, or software pieces, help the platform better accomplish its tasks. Clearly, as the number of agents increases, this new version of the virtual laboratory is enhanced in robustness and functionality.
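In the standard HP lattice model the energy is the negative count of non-bonded H-H contacts; the abstract does not define the scoring actually used, so the Python sketch below merely evaluates a 2D square-lattice conformation under that common convention, with a hypothetical sequence and fold.

```python
def hp_energy(sequence, coords):
    """Energy of an HP conformation on a 2D square lattice: -1 for every pair
    of H residues that are lattice neighbours but not adjacent along the chain."""
    assert len(sequence) == len(coords)
    pos = {c: i for i, c in enumerate(coords)}          # occupied sites
    assert len(pos) == len(coords), "self-overlapping conformation"
    contacts = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != "H":
            continue
        for nb in ((x + 1, y), (x, y + 1)):             # count each lattice edge once
            j = pos.get(nb)
            if j is not None and sequence[j] == "H" and abs(i - j) > 1:
                contacts += 1
    return -contacts

# Hypothetical 8-residue chain folded into a 2x4 block
seq = "HPHPPHPH"
fold = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (2, 1), (1, 1), (0, 1)]
print(hp_energy(seq, fold))    # -> -2 (two non-bonded H-H contacts)
```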