Paper ref #: 5010

 

Implementing Quantization In JPEG Encoder Using Xilinx FPGA Technology

 

Abdul Hadi Abdul Razak, Mohd Zahid Idris, Adizul Ahmad and Mohd Faizul Md Idrus

 

Faculty of Electrical Engineering, Universiti Teknologi MARA,

40450 Shah Alam, Malaysia.

 

Abstract - Data compression is essential in today's digital systems; formats such as MPEG, JPEG and MP3 are widely used. Such compression codecs are usually developed in software, written in a high-level programming language such as C/C++, although some are implemented on RISC processors with instruction sets suited to data compression. In every compression format, quantization is one of the main processes, whether for audio, image or video data. The purpose of this project is to study the quantization process used in JPEG image compression and implement it on a Xilinx FPGA. The behavioral logic design of the quantizer is described in VHDL. The design is then targeted at two Xilinx FPGA families, Spartan and Virtex, to study the processing speed and resource utilization on several chips of each family.
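
For context, a minimal NumPy sketch of the quantization step the paper implements in VHDL is shown below: each 8x8 block of DCT coefficients is divided element-wise by a quantization table and rounded. The table shown is the standard JPEG luminance table from the JPEG specification, not necessarily the one used in the paper's hardware design.

import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG spec);
# the paper's VHDL design may use a different table.
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def quantize(dct_block: np.ndarray, q: np.ndarray = Q_LUMA) -> np.ndarray:
    """Divide an 8x8 block of DCT coefficients by the quantization
    table and round to the nearest integer (the lossy step in JPEG)."""
    return np.round(dct_block / q).astype(np.int32)

def dequantize(q_block: np.ndarray, q: np.ndarray = Q_LUMA) -> np.ndarray:
    """Approximate reconstruction performed by the decoder."""
    return q_block * q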

 

Paper ref #: 5011

 

ASHAC: An Intelligent System To Classify The Shape Of Aggregate

 

Ariffuddin Joret1, Ahmad Nazri Ali2, Nor Ashidi Mat Isa3, Muhammad Suhaimi Sulong4,

M. Subhi M. Al-Batah5

 

1 & 4 Faculty of Electrical and Electronic Engineering,

Kolej Universiti Teknologi Tun Hussein Onn, 86400 Batu Pahat, Johor, Malaysia.

Email: {1ariff, 4msuhaimi}@kuittho.edu.my

 

2, 3 & 5 School of Electrical and Electronic Engineering,

Universiti Sains Malaysia, Engineering Campus,

14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang, Malaysia.

Email: {2nazriali, 3ashidi}@eng.usm.my, 5abubatah@yahoo.com

 

Abstract - The quality of concrete depends on the shape characteristics of its aggregates: good concrete must contain a high ratio of well-shaped to poor-shaped aggregate. In this paper, we develop ASHAC (Aggregate SHApe Classification system), an intelligent system that classifies aggregate as well-shaped or poor-shaped using a neural network and digital image processing techniques. ASHAC has two main components: feature extraction and classification. In the feature extraction part, Zernike moments, Hu's moments, area and perimeter are considered; the moments are calculated from the object's mass and boundary. For the classification part, a Hybrid Multilayered Perceptron (HMLP) network has been developed and trained using the Modified Recursive Prediction Error (MRPE) algorithm. A classification performance of 85.53% shows that ASHAC successfully classifies the two categories of aggregate.

 

Index terms - HMLP network, MRPE, ASHAC, digital image processing, moments.
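
A minimal OpenCV sketch of the mass- and boundary-based features named in the abstract (Hu's moments, area and perimeter); the Zernike moments and the HMLP classifier used in ASHAC are not reproduced here.

import cv2
import numpy as np

def shape_features(binary_mask: np.ndarray) -> np.ndarray:
    """binary_mask: single-channel uint8 image of one aggregate object.
    Returns Hu's seven moment invariants plus area and perimeter."""
    m = cv2.moments(binary_mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()           # 7 invariants from image mass
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)    # largest object boundary
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, closed=True)
    return np.concatenate([hu, [area, perimeter]])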

 

Paper ref #: 5012

 

Image Based Intelligent Plaque Diagnosis With Inverse Coefficient Of Variance

 

Hadzli Hashim, Siti Sarah Saaiddutdin and Mohd Nasir Taib

 

ASP Research Group, Faculty of Electrical Engineering,

Universiti Teknologi MARA, 40450 Shah Alam, Malaysia.

Email: hadzli.6-6@ieee.org, hadzli120@salam.uitm.edu.my

 

Abstract - This paper presents the application of an Artificial Neural Network (ANN) in the development of an intelligent diagnosis system for selected psoriasis skin diseases. Three major types of psoriasis images were analyzed for color features extracted from the RGB Gaussian primary model. Each sampled image was represented by the combination of its histogram's central location and variance shape for each color component. Collections of such parameters, known as the inverse coefficient of variance (INCV), were used to train an optimized ANN model for plaque lesion classification. The proposed model implements a multilayer feedforward network with the backpropagation algorithm and was evaluated and validated by analyzing performance indicators regularly applied in medical research, such as the true and false positive rates displayed in a receiver operating characteristic (ROC) plot.

 

Index terms - ANN, RGB, Gaussian, INCV, ROC.
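
A brief sketch of one plausible reading of the INCV features: for each RGB color component, the histogram's central location (mean) divided by its spread (standard deviation). This mean/std definition is an assumption; the exact definition in the paper, derived from the RGB Gaussian primary model, may differ.

import numpy as np

def incv_features(rgb_image: np.ndarray) -> np.ndarray:
    """Per-channel inverse coefficient of variance (assumed mean/std)."""
    feats = []
    for ch in range(3):                          # R, G, B planes
        pixels = rgb_image[..., ch].astype(float).ravel()
        feats.append(pixels.mean() / pixels.std())
    return np.array(feats)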

 

Paper ref #: 5013

 

Robust Trajectory-Adaptive Zero Phase Error-Tracking Control

 

Ramli Adnan1 and Mohd Marzuki Mustafa2

 

1 Faculty of Electrical Engineering, Universiti Teknologi MARA,

40150 Shah Alam, Malaysia.

Email: adnanramli@yahoo.com

 

2 Department of Electrical, Electronic and System Engineering, Faculty of Engineering,

Universiti Kebangsaan Malaysia, 43600 UKM, Bangi, Malaysia.

Email: marzuki@vlsi.eng.ukm.my

 

Abstract - Modern mechanical systems such as machine tools and mechanical manipulators are an expanding field in industrial applications and need to be supported by motion control. Precision and accuracy requirements are becoming more stringent because of factors such as shrinking component sizes in modern mechanical devices and high-quality surface-finishing requirements. These demands call for advanced control strategies, especially in high-precision, high-speed and robust tracking control. Based on this scenario, a robust trajectory-adaptive zero phase error tracking controller (ZPETC) without factorization of the zeros polynomial is proposed. The proposed ZPETC has been successfully developed and tested. Simulation results show that it outperforms the existing algorithm in robustness to zeros outside the unit circle and to changes in the nature of the trajectory signal. Its effectiveness has been verified on an industry-standard hydraulic actuator.

 

Index terms - feedforward controller, adaptive control, zero phase error-tracking controller, motion controller.

 

Paper ref #: 5014

 

A Theoretical Study of Electromagnetic Transients in a Large Conducting Plate due to Current Impact Excitation

 

Saurabh Kumar Mukerji et al.

 

Faculty of Engineering and Technology, Multimedia University,

75450 Melaka, Malaysia.

 

Abstract - Maxwell's equations are solved to determine transient electromagnetic fields inside as well as outside a large conducting plate of arbitrary thickness. The plate carries a uniformly distributed excitation winding on its surfaces. Transient fields are produced by sudden interruption of the d.c. current in the excitation winding. On the basis of a linear treatment of this initial/boundary value problem, it is concluded that the transient fields may decay faster for plates with smaller relaxation times. It is also shown that the energy dissipated in eddy current losses may exceed the energy stored in the initial magnetic field.

 

Paper ref #: 5015

 

Design and Simulation of MOS-CML 1:2 Frequency Divider

 

I. Dzulkarnain and M. Awan

 

Electrical and Electronics Engineering Programme, Universiti Teknologi PETRONAS,

31750 Tronoh, Bandar Seri Iskandar, Perak Darul Ridzuan, Malaysia.

Email: iskandar_dzulkarnain@utp.edu.my, mohdawan@petronas.com.my

 

Abstract - This paper describes the design and simulation of a 1:2 static frequency divider using metal-oxide semiconductor (MOS) current mode logic (CML) based on 0.5-µm process parameters. A frequency divider is an inherent part of the frequency synthesizer block in communication systems. The input signal, obtained from a voltage controlled oscillator (VCO), is divided down to lower frequencies for processing by the remaining blocks of the system. The divider circuit topology is based on a master-slave flip-flop (MS-FF), essentially two level-triggered D latches with the second latch providing positive feedback to the first. This MS-FF constitutes the first stage and divides the frequency by two. The design methodology involves finding suitable design parameters and biasing for the circuit, from the basic differential pair up to the D latch and D flip-flop. To verify operation, the designed circuit was simulated in PSpice with a 1 GHz sine wave and a 500 MHz square wave as input signals. The simulation shows that the circuit divides the frequency by two and consumes 1.27 mW at 1 GHz operation.

 

Paper ref #: 5016

 

Data Entry System using Handwriting Recognition Techniques

 

Poo Hwei Nee

ViTrox Technologies Sdn. Bhd.

5, Lintang Bayan Lepas 2, Bayan Lepas Industrial Park, Phase 4,

11900 Bayan Lepas, Penang, Malaysia.

Email: poo_hwei_nee@yahoo.com

 

Patrick Sebastian , Yap Vooi Voon

Electrical and Electronic Engineering Department, Universiti Teknologi PETRONAS,

Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan, Malaysia.

Email: patrick_sebastian@petronas.com.my , vooiv@petronas.com.my

 

Abstract - The aim of this paper is to use a combination of handwriting recognition and neural network techniques to produce a student coursework database. The proposed method utilizes two cameras to capture the images. Captured images are processed to determine the region of interest (ROI) and to remove noise. Distinctive features of each character are extracted using a combination of five feature extraction modules. The extracted feature matrices are used as inputs to a neural network (NN). The scheme employs a multilayer feedforward network as the character classifier, trained with the back-propagation algorithm to identify similarities and patterns among different handwriting samples. The system is able to recognize handwriting of different sizes and styles written using any medium, achieving accuracy rates as high as 88.5% on untrained inputs and 93.83% on trained inputs.

 

Paper ref #: 5017

 

A Sugeno Fuzzy Logic Based Technique For Root Mean Square (RMS) Variations Categorization

 

Ahmad Faizal b. Kamal Bahrein and Ahmad Farid b. Abidin

 

Universiti Teknologi MARA, Kampus Dungun,

23000 Dungun, Terengganu, Malaysia.

Email: ahmad924@tganu.uitm.edu.my

 

Abstract - Modern electronic equipment is much more sensitive to disturbances than traditional loads such as lighting and motors. With the increasing use of sensitive electronic equipment, power quality has become a major concern. One critical aspect of power quality studies is the ability to perform automatic analysis and categorization of RMS variation data. Any variation in voltage, current or frequency that may lead to equipment failure or malfunction is potentially a root mean square (RMS) variation problem; the classification and identification of RMS voltage waveform variations are governed by established standards. A major cause of the problem is the growth of loads that distort current and voltage waveforms. RMS voltage variations are variations in the fundamental-frequency voltage. They are best characterized by plots of RMS voltage versus time, but it is often sufficient to describe them by a voltage magnitude and the duration for which the voltage is outside specified thresholds. The objective of this paper is to present a Sugeno fuzzy logic technique for categorizing RMS variation events from the RMS waveform. Voltage signal variations such as sag, swell, interruption and normal operation are classified by a Sugeno fuzzy decision system using the fuzzified maximum and minimum magnitudes of the voltage component. The relevant features are extracted from signals captured with a reliable power meter (RPM) and fed into the fuzzy system. The categorization has been implemented using the RMS variation voltage waveform and the fuzzy logic toolbox in MATLAB. The RMS voltage waveform is the quantity most broadly applied in power system monitoring and measurement; it can be calculated from the time sequence of the voltage signal. For this classification, the maximum and minimum waveform magnitudes are selected as the fuzzified inputs to the membership functions of the fuzzy system.

 

Index terms - power quality, RMS disturbances, fuzzy logic, sensitive electronic equipment.
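
The following sketch illustrates a zero-order Sugeno-style categorization from the fuzzified maximum and minimum per-unit RMS magnitudes, as the abstract describes. The membership breakpoints follow common IEEE 1159-style conventions (interruption below 0.1 pu, sag between 0.1 and 0.9 pu, swell above 1.1 pu) and are illustrative assumptions, not the paper's tuned membership functions; a full Sugeno system would take the weighted average of constant consequents rather than the argmax.

import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function on breakpoints a <= b <= c <= d."""
    return float(np.clip(min((x - a) / (b - a + 1e-12),
                             (d - x) / (d - c + 1e-12)), 0.0, 1.0))

def categorize_rms(v_min_pu: float, v_max_pu: float) -> str:
    """Categorize an RMS variation event from its min/max per-unit
    voltage magnitudes using rule firing strengths (assumed thresholds)."""
    rules = {
        "interruption": trapmf(v_min_pu, -1.0, -1.0, 0.05, 0.1),
        "sag":          trapmf(v_min_pu, 0.05, 0.1, 0.85, 0.9),
        "swell":        trapmf(v_max_pu, 1.1, 1.15, 2.0, 2.0),
        "normal":       min(trapmf(v_min_pu, 0.85, 0.9, 2.0, 2.0),
                            trapmf(v_max_pu, 0.0, 0.0, 1.1, 1.15)),
    }
    return max(rules, key=rules.get)   # strongest firing rule wins

print(categorize_rms(0.6, 1.0))   # sag
print(categorize_rms(1.0, 1.3))   # swell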

 

Paper ref #: 5018

 

Fuzzy Logic Controller with Variable Reference

 

Ismail Adam

Electronic Department, UniKL-BMI,

Bt 8, Jln Sg Pusu, Gombak, 53100 Selangor, Malaysia.

Email: ismail_adam@yahoo.com

 

Abstract - The work in this paper attempts to model and design a simple Variable Reference (VR) fuzzy logic controller. The adaptability stems from two known variables: the desired system input and the system feedback. A simple fuzzy rule-based technique gradually adjusts the system input to a value that gives better performance than conventional fuzzy logic controllers. The designed controller is implemented, and its output response is analyzed and compared with the performance of conventional fuzzy logic controllers. The novelty of this work is that it achieves better performance with fewer rules.

 

Paper ref #: 5019

 

Adaptive Real-Time Stepper Motor Speed Control

 

Ismail Adam

Electronic Department, UniKL-BMI,

Bt 8, Jln Sg Pusu, Gombak, 53100 Selangor, Malaysia.

Email: ismail_adam@yahoo.com

 

Abstract - Adaptive real-time stepper motor speed control is based on a slight modification of input-stimulated real-time stepper motor control. It takes advantage of the programmability of the microcontroller and its support modules to address problems in real-time implementation. The concept, developed through to real-time software and hardware implementations, is presented.

 

Paper ref #: 5020

 

Real-Time Monitoring And Controlling Using Petri Net Algorithm For Batch Process Plant

 

M. Shaiful1 and Yusof Md Salleh2

 

1 Department of Electrical Engineering

University Kuala Lumpur – British Malaysia Institute

Batu 8 Jalan Sungai Pusu, 53100 Gombak Selangor, Malaysia.

Email : shaiful@bmi.edu.my , mdshaiful@hotmail.com

 

2 Faculty of Electrical Engineering, Universiti Teknologi MARA,

40450 Shah Alam, Selangor, Malaysia.

Email : dekanfkje@salam.uitm.edu.my

 

Abstract - Most companies have recognized the major role of modelling in batch process plants and control systems. Improved strategies frequently follow from accurate process understanding gained through modelling. However, due to the complexity of batch processes and the unavailability of suitable models, modelling has often been left behind. In this paper, the Petri net is chosen as the modelling formalism. It is not only a powerful tool for the study of discrete-event systems but also an extremely flexible graphical modelling device amenable to control analysis. First, the technique offers a way to gain process understanding by observing plausible modes of behaviour. Secondly, it can be used to test the effect of proposed changes on models that have been found to respond like the real process. To show the merit of this method, the modelling technique is applied to a single-product batch plant, and test results from a plant simulation are presented, including a reduction in the time required to produce a product. Most process systems are continuous and have been in practical use for many years, and the advent of modern processing systems has significantly reduced the expenditure of two main resources: manpower and time. Recently, however, there has been interest in using Petri net algorithms for batch processing because of their high flexibility in producing multiple products in a single plant through shared process equipment, and because they make it easy to monitor and identify the process flow in the system.
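
A minimal sketch of the Petri net formalism the paper builds on: places hold tokens, and a transition fires when all of its input places are sufficiently marked, consuming input tokens and producing output tokens. The place and transition names are hypothetical, not taken from the paper's batch plant model.

# Minimal discrete-event Petri net: marking maps places to token counts;
# each transition has (input places, output places) with token weights.
marking = {"tank_empty": 1, "material_ready": 1, "mixing": 0, "product": 0}

transitions = {
    "start_mix": ({"tank_empty": 1, "material_ready": 1}, {"mixing": 1}),
    "finish_mix": ({"mixing": 1}, {"product": 1, "tank_empty": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    assert enabled(name), f"{name} is not enabled"
    for p, n in pre.items():
        marking[p] -= n                        # consume input tokens
    for p, n in post.items():
        marking[p] = marking.get(p, 0) + n     # produce output tokens

fire("start_mix")
fire("finish_mix")
print(marking)  # {'tank_empty': 1, 'material_ready': 0, 'mixing': 0, 'product': 1}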

 

Paper ref #: 5021

 

Fuzzy Classification of EEG Mental Task Signals for a Brain Machine Interface

 

Hema C.R., Paulraj M.P., Sazali Yaacob, Abd. Hamid and Nagarajan R.

 

School of Mechatronic Engineering,

Northern Malaysia University College of Engineering (KUKUM),

Kangar, Perlis, Malaysia.

Email: hema@kukum.edu.my

 

Abstract - Peripheral nerve injuries and disorders disrupt the motor nerves and communication channels of patients. Despite an active brain and full awareness of their surroundings, these patients are generally called locked-in, as they have lost all communication with the external world. Modern life support systems and medication have increased the life span of such patients, but their suffering is prolonged because they are entirely dependent on their caretakers. Studies show that a brain machine interface (BMI) can provide a digital channel for communication in the absence of the biological channels. Such BMIs are under active research as potential devices for rehabilitating patients with motor nerve disorders. A BMI can be designed using the electrical activity of the brain, detected by EEG electrodes, to control external devices. In this paper, a fuzzy classifier is proposed for the classification of EEG signals related to mental tasks.

 

Index terms - Brain Machine Interfaces, EEG Signal Processing, Fuzzy Classifier, Rehabilitation Robotics.

 

Paper ref #: 5022

 

Voice-Based Intelligent Access Control System for Building Security using Adaptive Network-based Fuzzy Inference System (ANFIS)

 

Wahyudi, Syazilawati Mohamed, R. Muhida, M.A.S Kamal and M.J.E. Salami

 

Intelligent Mechatronics System Research Group, Department of Mechatronics Engineering,

International Islamic University Malaysia, P.O. Box 10, 50728 Kuala Lumpur, Malaysia.

 

Abstract - Secure systems, data and buildings are currently protected from unauthorized access by a variety of devices. Alongside devices that guarantee system safety such as PIN pads, conventional and electronic keys, identity cards, and cryptographic and dual-control procedures [1], a person's voice can also be used. The ability to verify the identity of a speaker by analyzing speech, or speaker verification, is an attractive and relatively unobtrusive means of providing security for admission into an important or secured place. An individual's voice cannot be stolen, lost, forgotten, guessed, or impersonated with accuracy [2]. Because of these advantages, this paper describes the design and prototyping of a voice-based access control system for building security. In the proposed system, access may be authorized simply by an enrolled user speaking into a microphone attached to the system. The system analyzes the characteristics of that voice sample to determine whether there is a sufficient match to the set of characteristics extracted at the time of the user's enrolment. It then decides whether to accept or reject the user's identity claim, or possibly to report insufficient confidence and request additional input before making the decision. To identify authorized and unauthorized people, an Adaptive Network-based Fuzzy Inference System (ANFIS) is adopted as the pattern matching method.

 

Paper ref #: 5023

 

Computer Aided Fuzzy Logic Based System for the Identification of Malaria Parasites

 

Toha S.F.1 and Ngah U.K.2

 

1 Department of Mechatronics Engineering, Faculty of Engineering,

International Islamic University Malaysia, Jalan Gombak, 53100 Kuala Lumpur, Malaysia

Email: tsfauziah@iiu.edu.my

 

2 School of Electrical and Electronics Engineering,

Universiti Sains Malaysia, Seri Ampangan, 14300 Nibong Tebal, Pulau Pinang, Malaysia.

Email: umik@eng.usm.my

 

 

Abstract - This paper focuses on the application of computer-aided medical diagnosis to the identification of Malaria parasites in digitized microscopic images of human blood specimens. Currently in Malaysia, the traditional method for identifying Malaria parasites requires a trained technologist to manually examine the slides and scrutinize the Malaria parasite species. This is a very time-consuming process that causes eye fatigue and is prone to human error and inconsistency. An automated system is therefore needed to complete as much of the work as possible, and a fuzzy approach is used. Fuzzy logic theory and digital image processing are the main techniques used to detect the species and development phase of Malaria parasites, implemented in C++. The use of C++ as the main programming language permits portability of the program and allows easy future software expansion. The integration of fuzzy sets with other soft computing tools has been successfully designed with the capability to improve image quality, analyze and classify the image, count the number of Malaria parasites and, last but not least, identify the type and development stage of the Malaria parasites.

  

Keywords - Fuzzy Logic, Digital Image Processing, Fuzzy C-Mean Clustering, Grey Level Co-Occurrence Matrix.
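
Since the keywords list Fuzzy C-Mean Clustering, a compact NumPy sketch of the standard fuzzy c-means algorithm is given below: alternately update the fuzzy membership matrix and the cluster centers until the memberships stop changing. The cluster count and fuzzifier value are generic defaults, not the paper's settings.

import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5):
    """Standard fuzzy c-means. X is (n_samples, n_features); m > 1 is
    the fuzzifier. Returns cluster centers and membership matrix U."""
    n = X.shape[0]
    rng = np.random.default_rng(0)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # each row sums to 1
    for _ in range(max_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)              # avoid division by zero
        inv = dist ** (-2.0 / (m - 1.0))         # standard FCM update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U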

 

Paper ref #: 5024

 

Analytical Technique for Copper Traceability by Using VPD-TXRF Methodology

 

Hasmayati Abu Bakar1, Zaiki Awang1 and Wan Ab Aziz Wan Razali2

 

1 Microwave Technology Centre, Faculty of Electrical Engineering,

Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia.

 

2 Silterra (M) Sdn Bhd, 09000 Kulim, Kedah, Malaysia.

Email: hasmayati_bakar@silterra.com, wan_ab@silterra.com

 

Abstract - Much of the semiconductor industry is in the midst of a massive technology change, converting to mass-producing chips with copper (Cu), rather than aluminium (Al), as the interconnect element. Although Cu conducts electricity better than Al, it has a drawback, especially when processing in a mixed Al and Cu manufacturing environment: Cu diffuses quickly and can leave residues in numerous areas. Accordingly, preventing and monitoring Cu cross contamination has become critical for the semiconductor industry. Presently, Cu contamination is measured with inductively coupled plasma mass spectroscopy (ICPMS), which uses a plasma to generate ions. However, the analyte solution concentration must be kept low or instrument performance is adversely affected. This in turn requires improving the performance of the analytical tool and developing a new analytical measurement methodology for evaluating levels of Cu contamination. To solve this problem, a new assessment methodology using vapor phase decomposition and total X-ray fluorescence (VPD-TXRF) was introduced with the aim of establishing a trace elemental analysis. Currently, VPD-TXRF is mainly used to monitor Cu contamination on the wafer surface. In this research, however, the same methodology is shown to be useful for measuring traces of Cu induced from other sources of contamination, not necessarily from the wafer itself but from other sources throughout the fab. A standard procedure and comparable samples were also analyzed by ICPMS to establish a correlation with the data collected from VPD-TXRF measurements, and thereby to develop a valid procedure for Cu trace analysis that serves as a reference procedure to measure, monitor and control Cu contamination. Besides this, the methodology offers a cost saving, since it uses reclaimed wafers for sample preparation. Thus, this methodology is key and can be used as process knowledge in accelerating the development of advanced Cu interconnects.

 

Index Terms - Copper, cross contamination, vapor phase decomposition (VPD), total X-ray fluorescence (TXRF).

 

Paper ref #: 5025

 

 

 

Paper ref #: 5026

 

Design and Simulation of PID Type Fuzzy Logic Controller Using VHDL

 

Md. Shabiul Islam1, M.S. Bhuyan1, Md. Saukat Jahan1, Tan Boon Siang1,  Masuri Othman2

 

1 Faculty of Engineering, Multimedia University,

63100 Cyberjaya, Selangor, Malaysia.

Email: shabiul@mmu.edu.my

 

2 Department of Electrical, Electronics and System Engineering,

Universiti Kebangsaan Malaysia, 43600 UKM, Bangi, Selangor, Malaysia.

 

Abstract - This paper describes the hardware implementation of a PID-type (Proportional-Integral-Derivative) Fuzzy Logic Controller (FLC) in VHDL for use in a transportation cruising system. The fuzzy-algorithm-based cruising system has been developed to avoid collisions between vehicles on the road. The PID-type FLC provides a reference for a car to increase or decrease its speed depending on the distance to the preceding vehicle when it gets too close, or to alert the driver when necessary. The PID-type fuzzy controller algorithm is first developed on the Matlab platform. The Mamdani fuzzy inference method is then studied and applied to design the hardware modules of the FLC architecture. This architecture is then designed and simulated in VHDL using the Altera environment. A comparison of the Matlab and VHDL simulation results will be presented in the full paper. The motivation for this design is that a fuzzy-based PID cruising controller is cheaper, making it available to entry-level vehicles such as the national car. This can further reduce road accidents and improve the safety of road users in the future.

 

Paper ref #: 5027

 

Modeling of Mobile Robot Controller Using VHDL

 

Md. Shabiul Islam1, Md. Saukat Jahan1, Md. Anwarul Azim1 and Masuri Othman2

 

1 Faculty of Engineering, Multimedia University, 63100 Cyberjaya, Selangor, Malaysia.

Email: shabiul@mmu.edu.my

 

2 Department of Electrical, Electronics and System Engineering,

Universiti Kebangsaan Malaysia, 43600 UKM, Bangi, Selangor, Malaysia.

 

Abstract - This paper describes a Fuzzy Logic algorithm for designing an autonomous mobile robot controller (MRC). The controller enables the robot to navigate in an unstructured environment and avoid any encountered obstacles without human intervention. A behavioral model of the MRC was developed in Matlab and tested with numerous data to evaluate the functionality of its algorithm. Code was written to implement each of the separate modules of a Fuzzy Logic Controller (FLC): the fuzzifier, the fuzzy rule base, the inference mechanism and the defuzzifier. The fuzzifier translates crisp input values from the sensors into degrees of linguistic terms. Decision making is based on the fuzzy rule base, while the inference mechanism determines which rules are applicable. Centre of gravity (COG) defuzzification is applied to obtain the final output, which is the orientation of the robot. The autonomous mobile robot was found to react appropriately to its environment during navigation, avoiding crashes with obstacles by turning to the proper angle while moving. Fuzzy logic has proven a commendable solution for certain control problems where the situation is ambiguous. This work demonstrates the design of the FLC of an autonomous mobile robot with Matlab simulation. The MRC model was then used to design a hardware architecture in VHDL using the Altera environment. A comparison of the Matlab and VHDL simulation results will be given in the full paper.
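
A short sketch of the centre of gravity (COG) defuzzification step described in the abstract: the crisp output is the membership-weighted mean over the output universe. The steering-angle universe in the example is hypothetical.

import numpy as np

def cog_defuzzify(universe: np.ndarray, membership: np.ndarray) -> float:
    """COG defuzzification: membership-weighted mean of the universe."""
    return float(np.sum(universe * membership) / np.sum(membership))

# Example: aggregated output fuzzy set over steering angles -30..30 deg.
angles = np.linspace(-30, 30, 61)
mu = np.maximum(0, 1 - np.abs(angles - 10) / 15)   # triangle peaking at +10
print(cog_defuzzify(angles, mu))                   # ~10.0 (turn right)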

 

Paper ref #: 5028

 

On the Security of a Recent Chaotic Cipher

 

Muhammad Asim and Varun Jeoti

 

Electrical & Electronics Engineering Programme,

Universiti Teknologi PETRONAS

Email: varun_jeoti@petronas.com.my

 

 

Abstract - In recent years, chaotic cryptography has attracted significant attention from researchers for the secure transmission of data. A new chaotic cipher titled “Cryptography using multiple one-dimensional chaotic maps” was proposed in [1] for the encryption of images, video, text, etc. This paper shows that the cipher proposed in [1] is weak against the known-plaintext attack and also points out weaknesses in its initial-condition extraction step.

 

REFERENCES

[1] N.K. Pareek, V. Patidar and K.K. Sud, “Cryptography Using Multiple One-Dimensional Chaotic Maps”, Communications in Nonlinear Science and Numerical Simulation, Vol. 10, Issue 7, October 2005, pp. 715-723.

 

Index terms - chaos, cryptanalysis, security.

 

Paper ref #: 5029

 

Comments on Fei-Qui-Min Chaotic Cipher

 

Muhammad Asim and Varun Jeoti

 

Electrical & Electronics Engineering Programme,

Universiti Teknologi PETRONAS

Email: varun_jeoti@petronas.com.my

 

Abstract - In recent years, chaotic cryptography has attracted significant attention from researchers for the secure transmission of data. A new chaotic cipher titled “An image encryption algorithm based on mixed chaotic dynamic systems and external key” was proposed in [1] for the secure transmission of digital images. This paper points out weaknesses in the sub-algorithm of the proposed cipher used for extracting initial conditions from the external secret key.

 

REFERENCES

[1] P. Fei, S.S. Qui and Long Min, “An Image Encryption Algorithm Based on Mixed Chaotic Dynamic Systems and External Key”, Proc. of IEEE International Conference on Communications, Circuits and Systems, Vol. 2, 27-30 May 2005, pp. 1135-1139.

 

Index terms - chaos, cryptanalysis, security.

 

Paper ref #: 5030

 

A Comparison Study of Directional Medium Access Control Protocols with Smart Antennas for Mobile Ad hoc Network

 

Jackline Alphonse Guama Shulle and Mohamad Naufal Mohamad Saad

 

Department of Electrical and Electronics Engineering,

Universiti Teknologi PETRONAS, 31750, Tronoh, Perak, Malaysia.

 

Email: jackline_alphonse@utp.edu.my, naufal_saad@petronas.com

 

Abstract - A smart antenna uses a predetermined set of antenna elements in an array. The signals from these elements are combined to form a steerable beam pattern that can be directed, using either digital signal processing (DSP) or radio frequency (RF) hardware, toward a desired direction, following mobile units as they move. Smart antennas in mobile ad hoc networks offer many benefits over classical omni-directional antennas. Unfortunately, smart antennas also cause serious problems in the ad hoc environment: increased instances of hidden terminals, deafness, and the difficulty of determining neighbors' locations. To overcome these problems, many directional Medium Access Control (MAC) protocols have been designed. These protocols have been classified and compared, but none has been upgraded. In this paper, we conduct a comparison study of the existing directional MAC protocols that use smart antennas by contrasting their features. We find that most of these protocols are not applicable in mobile environments, where these problems remain unsolved. This investigation discusses the challenges in improving some of these directional-antenna-based MAC protocols. We propose to upgrade two protocols, the Multihop-RTS MAC protocol and the Angular-MAC protocol, by using adaptive array antennas and two omni-directional control packets: the Busy Node List (BNL) and the Overall Link State Table (OLST). We suggest that using these two packets and an adaptive array antenna in the above protocols will make them applicable in mobile environments and improve routing performance.

 

Paper ref #: 5031

 

Modeling of Shunt Active Power Filter with Improved Control Algorithm for Power Quality Improvement

 

Norani Atan, Shantini P.Raj, Zahrul F. Hussein and Izham Z. Abidin

 

Department of Electrical Engineering,

Universiti Tenaga Nasional, Malaysia.

 

Abstract - This paper presents the implementation of a new control algorithm for a three-phase shunt active power filter to regulate the load terminal voltage, eliminate harmonics, and improve the power factor in systems with an uncontrolled rectifier and an AC controller as the non-linear loads. Different methods are used to control active power filters. The reference current, detected from the load current and processed by the active power filter controller, is obtained from four control algorithms: the reactive power theory, the extension reactive power theory, the synchronous reference frame, and the extension synchronous reference frame. Two types of power supply are studied for each method: sinusoidal and balanced, and non-symmetrical. The system is modeled and simulated in the MATLAB/Simulink simulation package with a shunt active power filter compensating for the harmonic currents injected by the loads. It is then interfaced and verified using a DS1104 real-time digital signal processor with its accompanying application software (dSPACE), which is employed to view the simulation results and control the simulation parameters. The results will be presented in the form of spectrum waveforms and total harmonic distortion.

 

Paper ref #: 5032

 

Smooth Fractal Techniques for EEG Data Analysis

 

Rosniwati Ghafar, Fatimah Abdul Hamid, Aini Hussain and Salina Abdul Samad

 

Department of Electrical, Electronics & Systems Engineering,

Faculty of Engineering, Universiti Kebangsaan Malaysia, 43600 Bangi, Malaysia.

 

Email: rosni75@yahoo.com

 

Abstract - Electroencephalography (EEG) is an important tool for detecting abnormal activity in the brain. It is preferable because it is noninvasive and has no harmful side effects, unlike methods such as magnetic resonance imaging (MRI) or computed tomography (CT). Another advantage of EEG is that it can display the reaction of the brain down to milliseconds. However, this particular feature can be a disadvantage in epilepsy detection: an epilepsy patient sometimes goes through days of recording in order to capture the abnormal signal of a seizure attack. As a result, there is abundant and redundant data to be dealt with in producing the diagnosis. In this paper, we report our findings on the processing and analysis of EEG data from epilepsy patients using fractal methods. We provide evidence that the smooth fractal analysis technique combined with sliding window analysis can detect and determine the abnormal region, thus facilitating the analysis of lengthy EEG recordings. Several fractal algorithms, namely Katz, Higuchi and Petrosian, were considered in this study. We also performed statistical analysis on the recorded EEG data.

 

Index terms - electroencephalogram, fractal dimension, epilepsy and seizure detection.
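
Of the fractal dimension algorithms named in the abstract, Higuchi's is sketched below in NumPy: compute the average normalized curve length L(k) over coarse-grained copies of the signal, then take the fractal dimension as the slope of log L(k) versus log(1/k). The choice of k_max is a generic default, not taken from the paper.

import numpy as np

def higuchi_fd(x: np.ndarray, k_max: int = 8) -> float:
    """Higuchi fractal dimension of a 1-D signal (e.g. one EEG channel)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):                       # k coarse-grained copies
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k * k)  # Higuchi normalization
            Lk.append(length * norm)
        L.append(np.mean(Lk))
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(L), 1)
    return slope   # ~1 for a smooth line, ~2 for white noise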

 

Paper ref #: 5033

 

Support Vector Machine for Human Recognition

 

Nooritawati Md Tahir, Aini Hussain,  Salina Abdul Samad, Anuar Mikdad Muad and Hafizah Husain

 

Signal Processing Research Group, Dept. of Electrical, Electronics and Systems,

Faculty of Engineering, Universiti Kebangsaan Malaysia, 43600 Bangi, Malaysia.

 

Email: norita_tahir@yahoo.com, norita@vlsi.eng.ukm.my

 

Abstract - This paper investigates the application of a machine learning approach, the Support Vector Machine (SVM), to the recognition of humans versus non-humans. Much work has shown that the silhouette contour of a shape contains essential shape information. We therefore develop a scheme, which we name the centroidal gait profile, representing the Euclidean distance between the centroid of a shape and the shape's boundary pixels. The centroidal gait profiles of 100 human and non-human subjects, sampled at 10-degree intervals, were extracted and statistically analyzed to develop the gait models. Since the gait profile has 36 features, it is likely that some features are more significant for recognition than others. One method to test whether a set of features is significant for recognition is analysis of variance (ANOVA), a standard technique for measuring the statistical significance of a set of independent variables in predicting a dependent variable; the measure ANOVA produces is the p-value for the feature set and the class variable. In this work, we illustrate the intuition behind using the p-value to determine the optimal number of features with a concrete and simple example. The ANOVA test indicated 19 features as significant, which could be categorized into four regions; these regions serve as inputs to the SVM. The classification ability of the SVM was found to be unaffected across three kernel functions, namely linear, polynomial and Gaussian radial basis. A feature selection algorithm demonstrated that as few as four centroidal gait features can effectively distinguish humans with over 80% accuracy. These results demonstrate considerable potential for applying SVMs to human detection in many applications.

 

Index Terms - Support Vector Machine, gait, classification, ANOVA.
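
A plausible sketch of the centroidal gait profile described in the abstract: the Euclidean distance from the silhouette centroid to the boundary, sampled at 10-degree intervals to give 36 features. Using the farthest pixel in each angular sector as the boundary point is an assumption; the paper's boundary extraction may differ.

import numpy as np

def centroidal_profile(mask: np.ndarray, n_angles: int = 36) -> np.ndarray:
    """mask: binary silhouette image. Returns 36 centroid-to-boundary
    distances, one per 10-degree sector."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                  # silhouette centroid
    angles = np.arctan2(ys - cy, xs - cx)          # pixel angles in (-pi, pi]
    dists = np.hypot(ys - cy, xs - cx)
    bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    profile = np.zeros(n_angles)
    for b in range(n_angles):
        sector = dists[bins == b]
        if sector.size:
            profile[b] = sector.max()              # farthest pixel ~ boundary
    return profile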

 

Paper ref #: 5034

 

A DWT-Based Image Watermarking Using CDMA Technique

 

Wan Azizun Wan Adnan1 and S. Abdul-Kareem2

 

1 Department of Computer and Communication Engineering,

Faculty of Engineering, Universiti Putra Malaysia, Malaysia.

Email: wawa@eng.upm.edu.my

 

2 Faculty of Computer Science and Information Technology,

University of Malaya, Malaysia.

Email: sameem@um.edu.my

 

Abstract - Embedding a watermark in the DWT (Discrete Wavelet Transform) domain using CDMA (Code Division Multiple Access) techniques is proposed in this paper. The CDMA-encoded watermark is embedded by modulating selected DWT coefficients of the image. The experimental results demonstrate that a watermark embedded with the proposed algorithm satisfies the imperceptibility and robustness requirements. We also investigate the effect of different mother wavelets on the quality of the watermarked image.
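
A minimal sketch of CDMA-style watermarking in the DWT domain, using the PyWavelets package: each watermark bit is spread over a segment of the level-1 horizontal detail coefficients with a pseudo-noise sequence, and detection correlates each segment with the same sequence. The embedding strength, subband choice and chip length are illustrative assumptions, not the paper's parameters.

import numpy as np
import pywt

def embed_watermark(image, bits, alpha=2.0, seed=42):
    """Spread each bit (+/-1 modulated PN chips) into the cH subband."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    rng = np.random.default_rng(seed)
    flat = cH.ravel().copy()
    chip = len(flat) // len(bits)                  # chips per bit (assumed)
    for i, b in enumerate(bits):
        pn = rng.choice([-1.0, 1.0], size=chip)    # PN chip sequence
        flat[i * chip:(i + 1) * chip] += alpha * (2 * b - 1) * pn
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), "haar")

def extract_watermark(image, n_bits, seed=42):
    """Regenerate the same PN sequences; the correlation sign recovers bits."""
    _, (cH, _, _) = pywt.dwt2(image.astype(float), "haar")
    rng = np.random.default_rng(seed)
    flat = cH.ravel()
    chip = len(flat) // n_bits
    bits = []
    for i in range(n_bits):
        pn = rng.choice([-1.0, 1.0], size=chip)
        corr = flat[i * chip:(i + 1) * chip] @ pn
        bits.append(int(corr > 0))
    return bits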

 

Paper ref #: 5035

 

Skin Color Segmentation using Principal Component Analysis

 

Hermawan Nugroho and Ahmad Fadzil MH

 

Universiti Teknologi PETRONAS, Malaysia.

Email: hermawan_nugroho@utp.edu.my, fadzmo@petronas.com.my

 

Abstract - In this paper, we describe a color image segmentation method to identify areas of skin that have undergone repigmentation in Vitiligo cases. In Vitiligo, areas of skin become white due to a lack of melanin. Treatment of Vitiligo causes skin repigmentation (a return to normal skin color). It is difficult to measure repigmentation visually during treatment because its progress is slow and occurs in a distributed manner. In this work, we develop a technique based on Principal Component Analysis to extract color information that highlights the repigmentation process. RGB skin data is first transformed into the optical density RGB domain to account for the gamma effect of the CCD. The Principal Component Transformation (PCT) is then used to transform the optical density RGB data into its principal components, and thresholding is performed on the first principal component. The threshold value is optimal when the between-class variance is maximal (by the PCT process). Results show that segmentation of skin repigmentation and Vitiligo patches with our method gives better results than segmentation of monochrome skin images.

 

Paper ref #: 5036

 

Segmentation and Feature Extraction of Coronal Streamers for GDV-gram Analysis

 

Mohd Zulfaezal Che Azemin

 

Multimedia University, 63100 Cyberjaya, Selangor, Malaysia.

 

Abstract - The corona-like images generated by the Gas Discharge Visualization (GDV) technique, known as GDV-grams, have been shown to have medical diagnostic value. Unfortunately, the lack of research on feature extraction techniques constrains analysis of the images to parameters that were developed for commercial use; what does exist in the academic world is not based on the latest discoveries and new scientific findings. In this text, we outline methods to extract useful information by applying morphological image processing, whose inputs and outputs are digital images. We then introduce our own method, in line with the existing literature, for GDV-gram segmentation, whose inputs are images but whose results are numerical figures extracted from those images. We then apply those attributes to mathematical formulas acquired from separate researchers.

 

Index terms - Image segmentation, Corona, Feature extraction, Morphological operations, Gas Discharges.

 

Paper ref #: 5037

 

Segmentation Based on CIE L*a*b* Color Space

 

Dani Ihtatho and Ahmad Fadzil M.H.

 

Universiti Teknologi PETRONAS, Malaysia.

 

Email: dani_ihtatho@utp.edu.my, fadzmo@petronas.com.my

 

Abstract - Segmentation in image processing can be done in various color spaces. For some applications, segmentation based on monochrome images is sufficient to distinguish the region of interest (ROI) from the background; however, many applications require color information for accurate segmentation. Selecting a suitable color space that allows effective segmentation of the ROI from the background is an important task. An effective way to analyze the dissimilarity between ROI and background in a particular color space is to look at the distribution of pixel values in the histogram: high dissimilarity gives a bimodal histogram, which enables an easy and accurate choice of threshold values. In this work, the images analyzed are those in which the ROI and background can be distinguished by their redness. Segmentation of the ROI from the background is compared across three color spaces: RGB, HSI, and CIE L*a*b*. In each color space, segmentation is performed on the band that contains red color information. The red band of the RGB color space represents the red component of a color. In the HSI color space, hue represents the dominant wavelength of a color and saturation its purity. In the a* band of the CIE L*a*b* color space, negative values represent the degree of greenness and positive values the degree of redness. Results show that segmentation on the a* band of the CIE L*a*b* color space gives the most consistent results. The segmentation method has wide applications in dermatology, cosmetics and manufacturing.
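
A short sketch of segmentation on the a* band, using scikit-image's RGB to CIE L*a*b* conversion; the threshold value is an illustrative assumption, since the paper selects it from the histogram.

import numpy as np
from skimage import color

def segment_redness(rgb_image: np.ndarray, a_threshold: float = 15.0):
    """Threshold the a* channel, where positive values indicate redness."""
    rgb = (rgb_image.astype(float) / 255.0
           if rgb_image.dtype == np.uint8 else rgb_image)
    lab = color.rgb2lab(rgb)            # L* in [0,100], a*/b* signed
    return lab[..., 1] > a_threshold    # boolean ROI mask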

 

Paper ref #: 5038

 

Combinational Component Selection for Single Trial Analysis of Visual Evoked Potential Signals for Brain Computer Interface

 

S. Andrews1, Andrew Teoh1, Loo Chu Kiong2 and Ramasamy Palaniappan3

 

1 Faculty of Information Science and Technology, Multimedia University,

Jalan Ayer Keroh Lama, 75450 Bukit Beruang, Melaka, Malaysia.

Email: andrews.samraj@mmu.edu.my, bjteoh@mmu.edu.my

 

2 Faculty of Information Science and Technology, Multimedia University,

Jalan Ayer Keroh Lama, 75450 Bukit Beruang, Melaka, Malaysia.

Email: ckloo@mmu.edu.my

 

3 Dept. of Computer Science, University of Essex,

Colchester, C04 3SQ, United Kingdom.

Email: rpalan@essex.ac.uk, palani@iee.org

 

Abstract - In single trial analysis, when using Principal Component Analysis (PCA) and Independent Component Analysis (ICA) to extract Visual Evoked Potential (VEP) signals, the selection of principal components (PCs) is an important issue. We propose a new combinational method that selects only the appropriate PCs and enhances the extracted VEP signals; we denote the method Combinational Component Selection (CCS). In this method, the VEP is reconstructed after pre-processing by a special variant of PCA followed by ICA. When this technique is applied to emulated VEP signals with added background electroencephalogram (EEG), with a focus on extracting the evoked P3 parameter, it is found to be feasible. In this way we achieve an improvement in signal-to-noise ratio (SNR) that is superior to other existing signal extraction methods. The SNR is then tested against high noise levels (i.e. strong background EEG) to assess the robustness of the CCS method, and the results are even more impressive in such cases. Next, we apply the CCS method to real VEP signals to analyse the P3 responses to target and non-target stimuli. The P3 parameters extracted by the proposed CCS method show a higher P3 response for the target stimulus, which conforms to existing neuroscience knowledge.

 

Index terms - Electroencephalogram, P3, Single trial VEP, ICA, PCA.

 

Paper ref #: 5039

 

Performance Analysis of Output Shifted Coding Modulation for Decoding π/4 Shift DQPSK Signal

 

Roslina Mohamad, Nuzli Mohamad Anas and Kaharudin Dimyati

 

Department of Computer Engineering, Faculty of Electrical Engineering,

Universiti Teknologi MARA, 40450 Shah Alam, Malaysia.

 

Department of Electrical Engineering, University of Malaya,

50603 Kuala Lumpur, Malaysia.

Email: rosalina@perdana.um.edu.my, kahar@um.edu.my

 

Abstract - This paper presents a new method to decode the π/4 shift DQPSK signal, named Output Shifted Coding Modulation (OSCM). OSCM is a combination of channel coding and modulation, known as coded modulation. Currently, the received signal from the π/4 shift DQPSK demodulator is converted to a binary signal for hard decision decoding or a quantized signal for soft decision decoding, and the decoders usually used are the Viterbi decoder (VD) and turbo codes. Although turbo codes are powerful, their implementation is complicated compared to the VD. In the new method, the received signal from the demodulator is decoded directly by the OSCM decoder using a modified trellis diagram. For error correction, OSCM introduces two types of decision methods: multilevel and soft decision. The algorithm has been tested on a demodulated signal corrupted by AWGN noise. Simulation and calculation results show that the BER of OSCM improves on both hard and soft decision Viterbi decoding. From the analysis, we obtain performance differences of 3 dB and 0.8 dB at BER = 10⁻³ relative to hard and soft decision decoding respectively, for k = 3.

 

Index terms - π/4 shift DQPSK signal, convolutional codes, Viterbi algorithm, AWGN noise.

 

Paper ref #: 5040

 

Comparison and Analysis of Routing Protocols for Wireless Mesh Networks

 

Mohamed ELshaikh,  Nidal Kamel and Azlan Bin Awang

 

Dept. of Electrical & Electronics Engineering, Universiti Teknologi PETRONAS,

Seri Iskandar, 31750 Tronoh, Perak, Malaysia.

Email: mohamed_elshaikh@utp.edu.my, {nidalkamel, azlanawang}@petronas.com.my

 

Abstract - Wireless mesh networks (WMNs) are considered a special case of mobile ad hoc networks (MANETs). The routing protocols used for WMNs were initially developed for MANETs, where route stability under node mobility is the main issue. MANET routing protocols can be classified into two main groups: reactive routing protocols, where routes are established on demand, and proactive routing protocols, where nodes periodically exchange routing tables and maintain the entire topology of the network, each node knowing the shortest path to every other node. In this paper, we compare the performance of commonly used MANET reactive and proactive routing protocols in WMN environments. The Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR) and Optimized Link State Routing (OLSR) protocols are considered for comparison. The OPNET software is used for simulation, with network throughput as the performance metric. The simulation results show that the proactive OLSR routing protocol performs better than the reactive MANET protocols (DSR and AODV) in WMN environments.

 

Paper ref #: 5041

 

On-Line Identification Of Dynamic Systems

 

Farid Ghani, Fellow, IET and Lim Bee Wen

 

School of Electrical and Electronic Engineering,

Universiti Sains Malaysia, 14300 Nibong Tebal, Penang, Malaysia

Email: fghani@eng.usm.my, leejiawei83@yahoo.com

 

Abstract - In many applications of control and communication systems it is required to measure the process dynamics while the system is in operation. Correlation techniques using pseudo random binary sequences (PRBS) for measuring the discrete weighting sequence of in-operation systems have been widely studied and used in practice. However, in such measurements the measurement time is large, because measurements can only be taken once the system has reached its periodic steady state after the input is applied; moreover, the period of the PRBS must be large compared to the largest time constant in the system. This makes PRBS correlation techniques undesirable where the system dynamics change rapidly with time, and for adaptive control purposes. In this paper an alternative scheme is suggested that uses aperiodic test sequences of finite duration to measure the discrete weighting sequence of an in-operation dynamic system. The system is perturbed with a carefully chosen sequence of finite duration, and its response is processed by an appropriate digital filter to give the required weighting sequence. The use of finite-duration test sequences instead of PRBS considerably reduces the measurement time and makes the proposed scheme both effective and attractive. The scheme has been theoretically analyzed and simulated using MATLAB and SIMULINK for different systems. Several test sequences, including binary, polyphase and clipped sequences, have been proposed. The responses to these sequences are processed using digital matched and inverse filters, both in the presence and absence of system and measurement noise. Analysis of the results shows that under low-noise conditions polyphase codes give better performance, whereas at higher noise levels binary codes are to be preferred. Clipped sequences used with the inverse filter provide a better alternative to the polyphase and binary sequences.
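
A small NumPy sketch of the underlying correlation idea: for a zero-mean white ±1 test input, the input-output cross-correlation at lag k estimates the weighting sequence h(k). The example plant and finite binary sequence are illustrative; the paper's matched and inverse filter processing is more elaborate.

import numpy as np

rng = np.random.default_rng(1)
N = 4000
u = rng.choice([-1.0, 1.0], size=N)                 # finite binary test input
h_true = np.array([0.0, 0.5, 0.9, 0.6, 0.3, 0.1])   # hypothetical plant
y = np.convolve(u, h_true)[:N] + 0.01 * rng.standard_normal(N)

# For unit-variance white input, R_uy(k) ~ h(k).
M = len(h_true)
h_est = np.array([np.dot(u[:N - k], y[k:N]) / (N - k) for k in range(M)])
print(np.round(h_est, 2))   # close to h_true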

 

Paper ref #: 5042

 

Feature Selection Using Genetic Algorithm For Face Detection

 

Zalhan Mohd Zin

 

Department of Industrial Automation, Universiti Kuala Lumpur-MFI, Malaysia.

Email: zalhan@mfi.unikl.edu.my

 

Abstract - In 2001, Viola and Jones presented, in their paper entitled “Robust Real Time Object Detection”, a method for real-time object detection in images using boosted cascades of simple Haar-based features. In 2004, Treptow and Zell, in their paper “Combining Adaboost and Evolutionary Algorithm to Select Features for Real Time Object Detection”, introduced the combination of the Adaboost algorithm and an evolutionary algorithm in a single stage, which provides better-performing classifiers with less training time. In this paper I propose to use a Genetic Algorithm (GA) inside the Adaboost framework to select features, producing a similar or better cascade of classifiers for face detection with less training time. Eight feature types are used in the GA search, compared to only five basic feature types in the exhaustive search. The three additional feature types enrich the pool of candidate feature solutions, but training them with exhaustive search would require more computational time due to the larger search space. To reduce the training time, I use an evolutionary GA search instead of exhaustive search to select good features when training a 15-stage cascade of classifiers; the GA is applied inside the Adaboost framework, specifically in the feature searching and selection step. By implementing the GA, a cascade of classifiers consisting of selected feature sets can be built in less time. The proposed GA feature selection for the cascade of classifiers is implemented using the Intel OpenCV software. Experiments on a set of images from the BioID face database show that, by using the GA to search over a larger number of feature types and sets, we are able to find a cascade of classifiers for a face detector with similar or better detection and false positive rates, and less training time.

 

Paper ref #: 5043

 

Gender Classification Using Support Vector Machines

 

Winda Astuti1, Akhmad Unggul P.2, Momoh Jimoh E-Salami3 and Wahyudi4

 

1 & 2 Department of Electrical and Computer Engineering

3 & 4 Department of Mechatronics Engineering

International Islamic University Malaysia

P.O. Box 10, 50728 Kuala Lumpur, Malaysia.

Email: 1 winda1977@yahoo.com, 2 unggul@iiu.edu.my,

3 momoh@iiu.edu.my, 4 wahyudi@iiu.edu.my

 

Abstract - Voice-based speaker identification is becoming important in many critical security systems, such as access control in industry and financial transactions. A novel approach for identifying a speaker's gender from their voice using Support Vector Machines (SVM) is presented in this paper. This is a text-dependent procedure which requires the speaker to utter a specific word, from which Perceptual Linear Prediction (PLP) features are extracted and used as input to the SVM-based identifier. The voice parameters are compared and classified to determine the gender of the speaker. Simulation results show that this technique produces excellent performance with little training time. Some important properties of the SVM, such as the relation between the number of support vectors and classification accuracy, are discussed.

 

Paper ref #: 5044

 

Paper ref #: 5045

 

Fractal Analysis Applied on Reservoir Fluids for Hydrocarbon Detection

 

Maryam Mahsal Khan and Ahmad Fadzil M Hani

 

Universiti Teknologi PETRONAS, 31750 Tronoh, Perak, Malaysia.

 

Abstract - Oil and gas are at present the most important energy fuels in the world. Consumption of these resources has reached huge dimensions, forcing companies to explore more and more potential sources of hydrocarbon reservoirs. Exploring for hydrocarbons is also a risky and costly business. Various exploration tools are used (such as seismic imaging, magnetotellurics, TEM and well logs) that exploit the elastic, electromagnetic and electrical properties of geological formations to determine and quantify the various fluids within them. In addition, other features such as stratigraphy and structural formations are extracted from the seismic data.

These established approaches are typically used in combination for predicting fluids, which can be tedious and may not be consistently accurate. Newer, more direct ways of detecting reservoir fluids are being investigated, and new techniques have been proposed, such as the singularity spectrum for differentiating reservoir fluids and fractal analysis for predicting reservoir fluid flow.

This paper describes a direct hydrocarbon detection method that applies multifractal analysis to seismic traces for the delineation of reservoir fluids. Here, the singularity spectrum is determined directly using wavelets and their modulus maxima, extracting all relevant fractal properties from a seismic trace. In this preliminary work, seismic models with traces of different reservoir fluids, namely oil, gas, water, oil-water, gas-water and gas-oil, are generated. The seismic models are then analysed by computing the singularity spectrum and the generalized dimensions (Hausdorff, correlation and information).

Preliminary results obtained by applying the fractal analysis tool show that it is able to delineate the reservoir fluids effectively. The sets of Hölder exponents show that each reservoir exhibits a unique singularity strength. The fractal dimension also differentiates the reservoir fluids, having a distinct set of values for each fluid. Moreover, the singularity spectra of the different reservoir fluids differ, with the width of the spectrum indicating the complexity of the seismic trace.

The fractal technique is benchmarked against synthetic seismograms generated from well logs. The results show that the technique delineates reservoir fluids effectively. Effective delineation is an important step that will improve the performance of the oil and gas industry by reducing drilling uncertainties and enabling accurate 2D seismic volume development, and will help reservoir engineers make better decisions concerning production plans.

 

Paper ref #: 5046

 

Design Of A Real Time Low Bit Rate Single Bit Codec Using TMS320C6416 DSP Processor

 

Thong ChungMun

 

Motorola Malaysia

 

Abstract - This paper describes the design of a real-time, low bit rate, single-bit codec for generating toll-quality speech. A multirate DSP technique is applied to reduce the bit rate of the single-bit codec output, since the initial sampling rate must be kept high to achieve the desired correlation between consecutive samples. The performance of the single-bit codec is evaluated by building an embedded system around the Texas Instruments TMS320C6416 digital signal processor. The embedded design uses the 32-bit DSP operating at 600 MHz, a codec for transmitting and receiving analog signals, and the 256M SDRAM and 512 kByte flash memory. An embedded system for standard PCM was also built for comparison. The comparison shows that the proposed system is attractive in that it provides low bit rate, toll-quality speech using much simpler and more robust circuitry than the standard PCM system.

 

Paper ref #: 5047

 

Performance Comparison Of Higher Order Statistics (HOS) Based Blind Deconvolution Techniques With Phase-Blind Deconvolution Techniques

 

M. Shahzad Younis1 and Ahmad Fadzil M. Hani2

 

Department of Electrical & Electronics Engineering,

Universiti Teknologi PETRONAS, Malaysia.

Email: 1 Shahzad_engg@yahoo.com, 2 fadzmo@petronas.com.my

 

Abstract - In this paper, we compare blind deconvolution techniques that assume a non-minimum-phase system transfer function with deconvolution techniques based on second-order statistics, which assume minimum phase. Most deconvolution techniques are used to compress the system response (ideally to an impulse), removing noise effects from the data, thus improving the SNR and minimizing intersymbol interference (ISI). Techniques based on second-order statistics assume minimum-phase behavior of the system unless specified otherwise; they work well in retrieving only the magnitude of the system (they are phase blind). Preserving phase information is important for improving the SNR and for adjusting velocity-shift behavior, particularly in seismics. In this work we investigate phase-blind algorithms applied to non-minimum-phase systems with non-Gaussian input signals and compare their performance with algorithms that assume non-Gaussianity of the input and a non-minimum-phase system. The inability of phase-blind algorithms is shown by implementing different algorithms based on second-order statistics and comparing the results with algorithms based on higher-order statistics. Simulation results show that the use of statistics of order higher than two not only suppresses Gaussian and colored Gaussian noise to the desired level, owing to an inherent property of HOS, but is also very successful in overcoming ISI, thus improving the SNR of the signal. This work emphasizes the use of deconvolution techniques based on higher-order statistics over phase-blind techniques when a training signal is unavailable or expensive to transmit, the channel is quasi-stationary, the input data are non-Gaussian and the system is not minimum phase.
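
The property relied on above, namely that cumulants of order higher than two vanish for Gaussian processes, is easy to check numerically, as in this small demonstration.

    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(0)
    gauss = rng.normal(size=100_000)
    laplace = rng.laplace(size=100_000)
    # Excess kurtosis (a normalized fourth-order cumulant) is ~0 for
    # Gaussian noise but clearly nonzero for a Laplacian signal (~3)
    print(kurtosis(gauss), kurtosis(laplace))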

 

Paper ref #: 5048

 

HOS-Based Blind Deconvolution Technique For Equalization Of Constrained-Length Seismogram

 

M. Shahzad Younis1 and Ahmad Fadzil M. Hani2

 

Department of Electrical & Electronics Engineering,

Universiti Teknologi PETRONAS, Malaysia.

Email: 1 Shahzad_engg@yahoo.com, 2 fadzmo@petronas.com.my

 

Abstract - The focus of this paper is the equalization of seismic data by exploiting higher-order statistics. Seismic equalization consists in recovering the reflectivity from given seismic data, or at least boosting its high-frequency content attenuated by the bandpass wavelet, while suppressing high-frequency noise. Equalization of seismic data is performed to compress the system response (ideally to an impulse) and recover the reflectivity of the earth. Blind techniques based on higher-order statistics (HOS) have proven able to solve equalization problems because they suppress Gaussian, colored Gaussian and uniformly distributed noise while preserving the non-Gaussian or Laplacian character of the information. Some newly proposed algorithms are based on stochastic gradient methods, but these show quite slow convergence and require large sample counts compared with subspace-based algorithms, which converge faster even for constrained data lengths. In this work we apply a blind equalization technique based on slices of the HOS of the seismic data using a subspace eigenvector approach. Combining several HOS slices of the data avoids the possibility of a non-existent solution; eigenvalue decomposition constrained by the Shalvi-Weinstein maximum-kurtosis criterion then generates a unique solution by maximizing the cross-kurtosis of the output data with a reference system. This approach provides a closed-form solution to finite impulse response system equalization. It is shown that the equalizer coefficients can be uniquely derived from the eigenvectors of a specific fourth-order cumulant matrix of the received signal. Computer simulations show that the eigenvector solution is close to the ideal mean square error (MSE) solution.
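
One reading of the eigenvector step is sketched below: estimate a fourth-order cumulant slice matrix of the received signal and take its dominant eigenvector as the equalizer taps. The particular cumulant slice, the tap count and the use of the largest-magnitude eigenvalue are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def cum4_slice(y, L):
        # Sample estimate of C[i, j] = cum(y[n], y[n], y[n-i], y[n-j])
        # for a zero-mean real sequence (one slice of the 4th-order cumulant)
        y = y - y.mean()
        N = len(y)
        a = y[L:N]
        C = np.zeros((L, L))
        for i in range(L):
            b = y[L - i:N - i]
            for j in range(L):
                c = y[L - j:N - j]
                C[i, j] = (np.mean(a * a * b * c)
                           - np.mean(a * a) * np.mean(b * c)
                           - 2.0 * np.mean(a * b) * np.mean(a * c))
        return C

    def eigenvector_equalizer(y, L=12):
        # Equalizer taps from the eigenvector of largest |eigenvalue|
        vals, vecs = np.linalg.eigh(cum4_slice(y, L))
        return vecs[:, np.argmax(np.abs(vals))]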

 

Paper ref #: 5049

 

Iris-Based Verification System By Using Artificial Neural Network

 

Hasimah1, Wahyudi2 and Momoh J. E. Salami3

 

Department of Mechatronics Engineering, International Islamic University Malaysia,

Jalan Gombak, 53100. Kuala Lumpur, Malaysia.

Email: 1csimah2002@yahoo.com, 2wahyudi@iiu.edu.my,  3momoh@iiu.edu.my

 

Abstract - Nowadays, biometric verification for security systems such as building access control, computer access control and database access control is in high demand in the networked society. In comparison with conventional security systems, biometric-based systems have the advantage that the credential cannot be stolen, lost, forgotten or guessed. Owing to these advantages, this paper describes the use of the iris, one of the biometric modalities, for verifying and identifying people. In order to distinguish authorized from unauthorized people based on features extracted from the iris, an artificial neural network (ANN) is used as the pattern-matching method. The effectiveness of the proposed method is evaluated in terms of the False Rejection Rate (FRR) and False Acceptance Rate (FAR) using the CASIA iris database. The results confirm that the proposed method produces good performance.
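
For reference, the two error rates can be computed from genuine and impostor matching scores as below; the score polarity (higher means a better match) is an assumption.

    import numpy as np

    def far_frr(genuine, impostor, threshold):
        # FAR: impostor attempts wrongly accepted;
        # FRR: genuine attempts wrongly rejected
        far = np.mean(np.asarray(impostor) >= threshold)
        frr = np.mean(np.asarray(genuine) < threshold)
        return far, frr

    # Sweeping the threshold over the score range traces the trade-off
    # between the two rates (the ROC/DET curve).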

 

Paper ref #: 5050

 

Design of Lifting Based 2-D Discrete Wavelet Transform (DWT) in VHDL

 

M.S. Bhuyan1, Md. Shabiul Islam1, Md. Azrul Hasni Madesa1, Masuri Othman2

 

1 Faculty of Engineering, Multimedia University,

63100 Cyberjaya, Selangor, Malaysia.

 

2 Dept. of Electrical Engineering, Universiti Kebangsaan Malaysia,

43600 UKM, Bangi, Selangor, Malaysia.

Email: shakir_dhaka@yahoo.com

 

Abstract - This paper describes the design flow of a lifting-based 2-D DWT algorithm for the Joint Photographic Experts Group (JPEG) 2000 standard. In order to build a high-quality JPEG 2000 codec, an effective 2-D DWT algorithm is applied to the input image files to obtain the reconstructed image. The lifting scheme reduces the number of operations involved in computing a DWT to almost one-half of those needed with a conventional convolutional approach. In addition, the lifting scheme is amenable to "in-place" computation, so the DWT can be implemented in low-memory systems. The proposed lifting-based 2-D DWT algorithm was first developed on the Matlab platform, following wavelet transform concepts. The developed code was then translated into a behavioral-level VHDL description of the DWT algorithm using Quartus II from Altera and ModelSim from Mentor Graphics. Simulation results from Matlab and VHDL were compared to verify the functional correctness of the developed algorithm for designing the hardware modules of a 2-D DWT processor architecture. The motivation in designing the hardware modules of the DWT was to reduce complexity, enhance performance and make the design suitable for development on a reconfigurable FPGA-based platform for VLSI implementation.
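
A minimal sketch of the lifting idea, using the reversible LeGall 5/3 filter adopted for JPEG 2000 lossless coding, is given below; border handling is simplified to index clamping, and the 2-D transform applies the 1-D step to rows and then columns.

    import numpy as np

    def dwt53_1d(x):
        # One lifting level of the LeGall 5/3 DWT: the predict step
        # yields the detail d, the update step the approximation s
        x = np.asarray(x, dtype=np.int64)
        n = len(x)
        at = lambda i: x[min(max(i, 0), n - 1)]     # clamp at borders
        d = np.array([at(2*i + 1) - ((at(2*i) + at(2*i + 2)) >> 1)
                      for i in range(n // 2)])
        s = np.array([at(2*i) + ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
                      for i in range((n + 1) // 2)])
        return s, d

    def dwt53_2d(img):
        # Separable 2-D transform: rows first, then columns
        rows = np.array([np.concatenate(dwt53_1d(r)) for r in img])
        cols = np.array([np.concatenate(dwt53_1d(c)) for c in rows.T])
        return cols.T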

 

Paper ref #: 5051

 

Biometric Personal Authentication Based on Handwritten Signature

 

Shohel Sayeed1, Nidal S. Kamel2 and Rosli Besar3

 

1 Faculty of Information Science and Technology, Multimedia University,

Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia.

 

2 Department  of Electrical & Electronic Engineering, Universiti Teknologi PETRONAS,

Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia.

 

3 Faculty of Engineering and Technology, Multimedia University,

Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia.

   

Abstract - The paper presents a critical analysis of the measurement of accuracy and performance of signature identification and forgery detection. To evaluate the effectiveness of handwritten signature authentication and verification, we used two types of data: glove-based data acquired with a data glove, and the corresponding signature images. The data glove is a new dimension in the field of virtual-reality environments, initially designed to satisfy the stringent requirements of modern motion-capture and animation professionals. In this paper we shift the data glove from motion animation towards the signature verification problem, making use of the multiple degrees of freedom it offers for each finger and for the hand as a whole. The proposed technique is based on the Singular Value Decomposition (SVD), finding the r singular vectors sensing the maximal energy of the glove data matrix A, called the principal subspace; these account for most of the variation in the original data, so the effective dimensionality of the data can be reduced. Having identified a data glove signature through its r-th principal subspace, authenticity can then be established by computing Euclidean distances. The SVD signature verification technique using the data glove was tested with a large number of authentic and forged signatures and shows a remarkable level of accuracy in finding the similarities between genuine samples, as well as the differences between genuine and forged trials, compared to signature images.
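
One plausible reading of the verification step is sketched below: each signature's glove-data matrix is reduced to an r-dimensional principal subspace via the SVD, and two signatures are compared by the Frobenius (Euclidean) distance between the corresponding projectors. The use of right singular vectors and the value of r are assumptions.

    import numpy as np

    def principal_subspace(A, r=3):
        # r leading right singular vectors of the glove-data matrix
        # (rows = time samples, columns = glove sensor channels)
        return np.linalg.svd(A - A.mean(axis=0), full_matrices=False)[2][:r]

    def subspace_distance(A1, A2, r=3):
        # Distance between the two principal-subspace projectors:
        # small -> likely the same signer, large -> likely a forgery
        V1, V2 = principal_subspace(A1, r), principal_subspace(A2, r)
        return np.linalg.norm(V1.T @ V1 - V2.T @ V2)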

 

Index terms - real-time, signature verification, singular value decomposition, data glove

 

Paper ref #: 5052

 

Computer Aided Diagnosis in Detecting Cardiovascular Diseases by Using Heart Sounds

 

Panteha Eftekhar and Rosli Besar

 

Faculty of Engineering and Technology,

Multimedia University, Melaka, Malaysia.

 

Abstract - Cardiac auscultation has traditionally been taught as if it were an intellectual skill, with a didactic lecture followed by a brief demonstration of heart sounds. This approach has yielded disappointing results, with most clinicians able to recognize only about 40% of abnormal heart sounds. With the advent of electronic stethoscopes it is now possible to conveniently record heart sounds as phonocardiograms. In this work, a heart sound segmentation and feature extraction algorithm is presented which separates the heart sound signal into four types: normal, systolic murmurs, diastolic murmurs and continuous murmurs. The algorithm uses discrete wavelet decomposition and reconstruction to produce approximations and details of the original phonocardiography signal. Classification of the features is then performed using a backpropagation neural network with an adaptive learning rate. Results show that the proposed algorithm provides a robust classification of all kinds of murmurs, enabling accurate diagnosis.
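
The decomposition-and-reconstruction stage can be sketched as follows; the wavelet and decomposition depth are assumptions, not the authors' choices.

    import numpy as np
    import pywt

    def pcg_bands(pcg, wavelet="db6", level=5):
        # Decompose the phonocardiogram, then reconstruct each band on
        # its own: one approximation plus one signal per detail level
        coeffs = pywt.wavedec(pcg, wavelet, level=level)
        bands = []
        for k in range(len(coeffs)):
            keep = [c if i == k else np.zeros_like(c)
                    for i, c in enumerate(coeffs)]
            bands.append(pywt.waverec(keep, wavelet))
        return bands        # feature extraction operates on these bands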

 

Paper ref #: 5053

 

An Overview of Different Wavelet Transform Methods for ECG Signal Compression

 

Rizwan Javaid1 and Rosli Besar2

 

1 Faculty of Information Science and Technology

2 Faculty of Engineering and Technology

Multimedia University, 75450 Melaka, Malaysia.

Email: 1 rizwan.javaid@mmu.edu.my, 2 rosli@mmu.edu.my

 

Abstract - Electrocardiogram (ECG) compression plays an important role in biomedical applications. The purpose of data compression is to detect and remove redundant information from the ECG signal. There are many algorithms for ECG signal compression; here we focus on the wavelet transform, which is well suited to ECG signals, wavelet transforms being very powerful tools for signal and image compression. This paper presents a comparative study of different wavelet compression algorithms applied to the ECG signal, evaluating their performance and efficiency with respect to several parameters. The percent root-mean-square difference (PRD) and compression ratio (CR) were chosen to analyze the different wavelet algorithms for ECG signals. The comparative study shows that, within the wavelet family of algorithms, bior3.9 achieves the lowest PRD value.

 

Index terms - ECG compression, Wavelet Transform, SPIHT, percent root mean square difference.
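
The two figures of merit are computed as follows, in their common forms (the paper's exact PRD normalization is not stated).

    import numpy as np

    def prd(x, x_rec):
        # Percent root-mean-square difference, original vs reconstructed
        x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    def compression_ratio(bits_original, bits_compressed):
        return bits_original / bits_compressed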

 

Paper ref #: 5054

 

A DWT Approach to ECG Features Extraction for PVC Detection

 

Nur Asyiqin Amir Hamzah1 and Rosli Besar2

 

1 Centre of Affiliate and Diploma Program

Multimedia University, 75450 Melaka, Malaysia.

Email: asyiqin.hamzah@mmu.edu.my

 

2 Faculty of Engineering and Technology

Multimedia University, 75450 Melaka, Malaysia.

Email: rosli@mmu.edu.my

 

SIGNAL PROCESSING TECHNIQUES

 

Paper ref #: 5055

 

Intelligent Vehicle Fault Diagnosing System Using Neural Networks

 

Paulraj M. P., Sazali Yaacob and Nor Shaifudin Abd Hamid

 

School of Mechatronic Engineering,

Kolej Universiti Kejuruteraan Utara Malaysia (KUKUM),

02600 Jejawi, Perlis, Malaysia.

 

Abstract - Diagnosis has become a very complex and critical task in determining the condition of a vehicle engine. Sound emitted by the engine is usually considered an annoying noise, but detailed analysis of the sound signal shows that the noise varies with different fault conditions. To diagnose faults in the vehicle engine, real-time data were collected from various vehicle engines. Simple methods are proposed for recording the emitted engine sound using microphones, and a simple system is proposed to extract features from the noise. The features are then associated with expert opinion to formulate an embedded neural network model that can identify the faults automatically.

            The noise emanating from the engine is recorded (in WAV format) using a 01dB Symphony at a sampling frequency of 51200 Hz. The discrete samples are then segmented into frames of size 4096 with a 12.5% overlap. As the low-frequency components of the engine sound indicate the fault, the high-frequency components of each frame are filtered out using a Hamming window of size 4096. A Fast Fourier transform is then applied to each filtered frame and the magnitude variation with respect to frequency is recorded. This procedure is repeated for all frames and the mean magnitude variation with respect to frequency is obtained. Autoregressive model coefficients (of order 13) of this magnitude variation form the feature representing the fault present in the vehicle engine. A database of noise-related vehicle engine faults was compiled with a panel of experts; fifty Perodua Kancil (600) model cars were used to obtain the database. 

            The autoregressive coefficients and the faults present in the engine are used as training pairs for a neural network. Two simple neural network models are considered in this paper to diagnose the vehicle engine faults. Both models consist of 12 input neurons, two hidden layers each with 15 hidden neurons, and an output layer of 3 output neurons. The hidden neurons are activated by a bipolar sigmoid activation function and the output neurons by a sigmoid activation function. Both networks are trained by the Levenberg-Marquardt method. The first network identifies the filter condition, gasket condition and engine oil status; the second detects rpm faults, carburetor cleaning needs and timing problems.

            The neural networks are trained with 36 training pairs and tested with 50 data samples, and the results are tabulated. The network is able to classify the faults with an accuracy of 96%.
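
            The feature extraction pipeline described above might be sketched as follows; fitting the order-13 AR model to the mean magnitude spectrum via the Levinson-Durbin recursion is our reading of the procedure, not the authors' code.

    import numpy as np

    def engine_features(x, frame=4096, overlap=0.125, ar_order=13):
        # Frame with 12.5% overlap, apply a Hamming window and average
        # the FFT magnitude across frames
        hop = int(frame * (1.0 - overlap))
        win = np.hamming(frame)
        mags = [np.abs(np.fft.rfft(x[i:i + frame] * win))
                for i in range(0, len(x) - frame + 1, hop)]
        m = np.mean(mags, axis=0)
        # Levinson-Durbin recursion on the autocorrelation of the mean
        # magnitude variation yields the AR coefficients
        r = np.correlate(m, m, mode="full")[len(m) - 1:]
        a, e = np.array([1.0]), r[0]
        for k in range(1, ar_order + 1):
            lam = -(a @ r[k:0:-1]) / e
            a = np.append(a, 0.0) + lam * np.append(0.0, a[::-1])
            e *= (1.0 - lam ** 2)
        return a[1:]        # order-13 AR coefficient feature vector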

 

Paper ref #: 5056

 

Speech Recognition Application Based On Malaysian Spoken Vowels Using Autoregressive Model Of The Vocal Tract

 

Shahrul Azmi bin Mohd Yusof, Paul Raj M. P. and Sazali Yaacob

 

School of Mechatronics Engineering, Kolej Universiti Kejuruteraan Utara Malaysia

01000 Kangar, Perlis, Malaysia.

 

Abstract - Automatic speech recognition (ASR) has made great strides with the development of digital signal processing hardware and software, especially with English as the language of choice. Despite all these advances, machines cannot match the performance of their human counterparts in terms of accuracy and speed, especially for speaker-independent speech recognition. A speech recognition system can be used as a verbal command system, a speaker identification system or even a voice-synthesized warning system. Today, a significant portion of speech recognition research focuses on the speaker-independent recognition problem using English with a limited vocabulary. In this paper we present a feature extraction method to identify vowels recorded from Malaysian speakers, and its application to recognizing predetermined Malay words. The study used data obtained from Malay, Chinese and Indian speakers. Spoken vowels are recognized based on selected parameters identified from an autoregressive (AR) model of the vocal tract; using this method, words can be detected from their unique sequence of vowels. The paper also presents two vowel extraction methods, based on parameters of the original waveform itself and of the waveform energy. The start point and end point of the utterances are determined using the average noise level and two threshold values based on the duration and amplitude of the waveform. The AR model parameters of the extracted utterances are then calculated and classified into the respective vowel groups, and the sequence of these vowels determines the words that were spoken.
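
A simplified version of the endpoint rule (a single energy threshold derived from the average noise level, instead of the paper's two duration/amplitude thresholds) might look like this; the frame size and noise-estimation window are assumptions.

    import numpy as np

    def endpoints(x, frame=256, factor=3.0, noise_frames=5):
        # Short-time energy per frame; the first few frames are assumed
        # to contain background noise only
        energy = np.array([np.sum(x[i:i + frame] ** 2)
                           for i in range(0, len(x) - frame, frame)])
        threshold = factor * np.mean(energy[:noise_frames])
        active = np.flatnonzero(energy > threshold)
        if active.size == 0:
            return None                     # no utterance detected
        return active[0] * frame, (active[-1] + 1) * frame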

 

Paper ref #: 5057

 

Application Of Feedforward Neural Network To Classification Of Pathological Voices

 

Sazali Yaacob, Paulraj M. Pandian, M.Rizon and M.Hariharan

 

School of Mechatronic Engineering,

Kolej Universiti Kejuruteraan Utara Malaysia (KUKUM),

02600 Jejawi,Perlis Malaysia.

 

Abstract - This work presents the development of a Pathological Voice Detection System (PVDS) based on acoustic analysis and EGG features using an Artificial Neural Network (ANN). Acoustic analysis is a non-invasive technique based on digital processing of the speech signal. In the evaluation of speech quality, acoustic analyses of normal and pathological voices have become increasingly interesting to researchers in laryngology, since most vocal and voice diseases cause changes in the voice. Electroglottography is a method of obtaining a vibration signal related to laryngeal phonatory function; the electroglottograph (EGG) is an instrument that registers the contact between the vocal folds as a time-varying signal. The time-domain voice parameters are computed from the extracted pitch data. In this paper, a feedforward neural network is employed for the classification of pathological voices. The features extracted from the EGG signal can be examined to depict aspects of normal or abnormal vocal fold vibration. The acoustic parameters extracted from the speech signal and the features from electroglottography form the input to the neural network, which distinguishes the voice as pathological or non-pathological. Simple algorithms are also suggested for better classification accuracy.

 

Index terms - acoustic voice analysis, electroglottograph, feature extraction, artificial neural network.

 

Paper ref #: 5058

 

Neural Networks for Classroom Speech Intelligibility Modeling

 

Paulraj M. P., Sazali Yaacob, Ahmad Nazri and M. Thagirarani

 

Acoustic Application Research Group,

Kolej Universiti Kejuruteraan Utara Malaysia (KUKUM),

02600 Jejawi, Perlis, Malaysia.

 

Abstract - This paper investigates the effect of Reverberation Time (RT) and Signal to Noise Ratio (SNR) on the Speech Transmission Index (STI) in university classrooms. STI refers to the accuracy with which a normal listener can understand a spoken word or phrase. In a classroom, a teacher talks to a group of students who are intended to hear everything the teacher says, and the achievements and behavior of the students inside the classroom are strongly influenced by the STI. For achieving the highest possible STI, the acoustical design of classrooms should be based on all the listeners in the classroom. The STI of a room depends on the room volume, source-receiver distance, background noise level, RT, SNR, pitch, and the sharpness level of the speech signal. A set of Malay words in CVC format was compiled and a speech signal database created. In a classroom, the speech signal is presented at a level of 65 dB, and noise at levels of 71, 65, 59, 53, and 47 dB is electrically mixed in to yield signal-to-noise ratios of -6, 0, +6, +12, and +18 dB. The sound pressure levels are then measured at different classroom positions. From the measured sound pressure levels, the speech transmission index at the various listeners' positions is determined, and a simple neural network model is developed to predict the speech transmission index at various listeners' positions in a classroom for various speech levels.
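
Mixing the speech and noise electrically at a prescribed SNR amounts to scaling the noise power, for example:

    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        # Scale the noise so the mixture has the requested SNR in dB
        # (assumes the noise record is at least as long as the speech)
        noise = noise[:len(speech)]
        ps, pn = np.mean(speech ** 2), np.mean(noise ** 2)
        gain = np.sqrt(ps / (pn * 10.0 ** (snr_db / 10.0)))
        return speech + gain * noise

    # e.g. one mixture per experimental condition:
    # {s: mix_at_snr(speech, noise, s) for s in (-6, 0, 6, 12, 18)}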

 

Paper ref #: 5059

 

Stability Analysis of A Neural Network System

 

Paulraj M. P., Sazali Yaacob and Zaridah Mat Zain

 

School of Mechatronic Engineering,

Kolej Universiti Kejuruteraan Utara Malaysia,

02600 Jejawi, Perlis, Malaysia.

 

Abstract - Stability is the most important property of any system. To analyze the stability of a discrete system, numerous algebraic and graphical methods are available; Lyapunov's method has been extensively used to analyze the stability of nonlinear systems. There have been many studies on applying neural networks in adaptive control: Nguyen and Widrow designed a neural network controller for backing up a computer-simulated truck-trailer, and Tanaka, using linear differential inclusion, analyzed the stability of a certain class of neural network systems adopting the Nguyen-Widrow model. In this paper, simple schemes are developed to formulate the main matrices, sub-matrices and vertex matrices used to represent the neural network system. Extending Lyapunov's stability theorem, the stability of the neural network system is ascertained using the vertex matrices, and a heuristic approach is proposed for the construction of a positive definite matrix used in stability testing. A new interpretation is deduced for the choice of the slope parameter of the activation function, which simplifies the stability analysis of the network. Further, simple sufficient conditions for inferring instability are proposed; these conditions are easily applicable without computational overhead. Illustrations are provided to demonstrate the various proposed procedures.

 

Paper ref #: 5060

 

Design And Development Of A Visualization System For Object Motion Detection

 

Suliana Sulaiman, Nooritawati Md Tahir, Alex Wenda, Salina Abd Samad and Aini Hussain

 

Signal Processing Research Group, Dept. of Electrical, Electronic & Systems,

Faculty of Engineering, Universiti Kebangsaan Malaysia

43600 UKM Bangi, Selangor, Malaysia.

Email: suliana@vlsi.eng.ukm.my, suliana86602@yahoo.com

 

Abstract - A fundamental and critical task in any object tracking and monitoring system is identifying moving objects, so a reliable vision system for object motion detection and classification is essential. In this paper, we present a graphical user interface (GUI) comprising a visualization system that can be used to detect and classify moving objects in recorded video scenes. Several important software design aspects, such as usability, interactivity and simplicity, are taken into consideration. The system implements an object recognition algorithm based on background subtraction and modeling. Prior to that, several image pre-processing algorithms, such as morphological operations and edge detection, are used to extract foreground pixels from the background. Results obtained thus far show that the recognition capability of the developed system is satisfactory, with 85% accuracy. The paper is organized as follows: first, the software design issues are discussed, followed by an explanation of the algorithms involved in the development.
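
The detection stage (background modeling and subtraction followed by morphological clean-up) can be sketched with OpenCV; the input file name, subtractor choice and kernel size below are assumptions, not the authors' implementation.

    import cv2

    cap = cv2.VideoCapture("scene.avi")          # hypothetical input
    subtractor = cv2.createBackgroundSubtractorMOG2()
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)           # foreground mask
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # each sufficiently large contour is a moving-object candidate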

 

Index term - video processing, software design.

 

Paper ref #: 5061

 

A Development Of Fan-Beam Optical Tomography For Solid Flow Monitoring

 

Mazidah Tajjudin1, Wong Jenn Woei2 and Ruzairi Abdul Rahim3

 

1 Faculty of Electrical Engineering, Universiti Teknologi MARA,

40450 Shah Alam, Malaysia.

Email: mazidah@salam.uitm.edu.my

 

2 Faculty of Engineering, Universiti Industri Selangor,

70000 Berjuntai Bestari, Malaysia.

 

3 Faculty of Electrical Engineering, Universiti Teknologi Malaysia,

81300 Skudai, Malaysia.

 

Abstract - The project describes the development of a fan-beam optical tomography system using infrared sources as the sensing elements. Optical sources were chosen because they are the simplest method available to date. Identifying the sensor characteristics is crucial to ensure that the optimum instrument is applied to the system, and this project lays out some important criteria that need to be considered when applying an infrared tomography system. The system was developed using four pairs of infrared sensors to interrogate solid flow inside a pipe with a diameter of 50 mm. The image reconstruction algorithm processes the data using the Linear Back Projection algorithm. Several experiments were carried out to evaluate the results obtained; the results were analyzed individually and summarized at the end of the report, together with suggestions for improving the system performance.
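
Linear Back Projection reduces to a weighted sum of per-pair sensitivity maps; the sketch below assumes precomputed sensitivity maps for the sensor pairs, which the actual system derives from its sensor geometry.

    import numpy as np

    def linear_back_projection(measurements, sensitivity):
        # measurements: (n_pairs,) attenuation readings;
        # sensitivity:  (n_pairs, ny, nx) map per source-detector pair
        img = np.tensordot(measurements, sensitivity, axes=1)
        return img / np.maximum(sensitivity.sum(axis=0), 1e-12)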

 

Paper ref #: 5062

 

Investigation Of Horizontally Arranged Five-Element Array Of Half-Cylindrical DRAs And Horizontally Arranged Five-Element Array Of Bow-Tie Dipole Antennas

 

Hizamel Mohd Hizan1 and Aziati Husna Awang2

 

1 Telekom Malaysia Bhd., Malaysia.

 

2 Faculty of Electrical Engineering, Universiti Teknologi MARA,

40450 Shah Alam, Malaysia.

 

Abstract - The aim of this paper is to investigate the bandwidth and the effects of mutual coupling in a horizontally arranged five-element array of half-cylindrical DRAs and a horizontally arranged five-element array of bow-tie dipole antennas at a resonance frequency of 5.9099 GHz. The designs are investigated using the electromagnetic field simulation software CST MICROWAVE STUDIO™, which performs field simulation using the Finite Integration Method (FI-Method) with Perfect Boundary Approximation™ (PBA). A single half-cylindrical DRA was designed and simulated in Microwave Studio, and the results were verified against a Finite Difference Time Domain (FDTD) simulation; good agreement between the two sets of results was obtained. The performance comparison focused on the near- and far-field distributions and the bandwidth. From the single cylindrical DRA, the structure was extended into a five-element array. A single bow-tie antenna was likewise designed and simulated for performance comparison with the DRA, and extended into a horizontally arranged five-element array. The resonant frequencies and return loss values (in dB) of both structures were aligned, so comparisons could be performed. The results show that the horizontally arranged five-element array of half-cylindrical DRAs in the TE01δ mode has a broader -10 dB bandwidth of operation than the horizontally arranged five-element array of bow-tie dipole antennas: on average about 340.6 MHz (5.69%) for the DRA array, compared with 259.6 MHz (4.33%) for the bow-tie array. From the simulated results, we conclude that the mutual coupling between DRAs in the TE01δ mode, for horizontally arranged arrays, is by and large greater than the mutual coupling between bow-tie dipole antennas.

 

Paper ref #: 5063

 

Variations of Signal Strength in Wireless Indoor Environment

 

Md. Shahidul Islam and  Rosli Besar

 

Faculty of Engineering & Technology, Multimedia University,

75450 Ayer Keroh, Melaka, Malaysia.

Email: shahidul.islam@mmu.edu.my

 

Abstract - Propagation characteristics change significantly in wireless communication due to channel impairments. In this paper we investigate the variations of received signal strength in an indoor wireless environment, a typical laboratory of dimensions about 7 m × 15 m × 4 m located on the 4th floor of the Faculty of Engineering & Technology building, Multimedia University, Malacca campus, Malaysia. A GP300 is used as the transmitter, operating at 477.025 MHz, and a CSM2945A is used to receive the signal. Different obstacles are placed between the transmitter and receiver. Our investigation shows that the received signal strength changes with different channel conditions as well as with the locations of the transmitter and receiver in the laboratory. The orientation of the obstruction also produces some variation in the received signal.

 

Index terms - power measurement, fluctuations, wireless communication, reflection, diffraction, obstruction.

 

Paper ref #: 5064

 

Development Of Isolated Digits Recognition System For Northern Malay Peninsular Dialects

 

Hazizulden Abdul Aziz, Mohd Nasir Taib and Shah Rizam Mohd Shah Baki

 

Advance Signal Processing Research Group,

Faculty of Electrical Engineering, Universiti Teknologi Mara (UiTM)

40450 Shah Alam, Selangor, Malaysia.

 

Abstract - This paper describes the derivation of MFCC feature vectors for isolated Bahasa Melayu (Malay language) digits uttered in the Northern Malay Peninsular dialect. The set of feature vectors is used in the development of a neural-network-based ASR for dialect-dependent isolated Bahasa Melayu digit recognition. Initial results show encouraging performance for the isolated digit recognition system in a noisy environment.
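
A minimal MFCC extraction of the kind used for the digit recognizer is shown below; the file name, sampling rate and coefficient count are assumptions.

    import librosa

    y, sr = librosa.load("digit.wav", sr=16000)   # hypothetical recording
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # mfcc has shape (13, n_frames): one coefficient vector per frame,
    # the kind of feature sequence fed to the neural network recognizer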

 

Paper ref #: 5065

 

Enhancing Data Embedding By Using Smooth Images

 

Akram M. Zeki and Azizah A. Manaf

 

Faculty of Computer Science and Information System

University Technology Malaysia

Email: akramzeki@yahoo.com

 

Abstract - Data embedding is a technique that enables us to secretly embed extra data into digital cover images. Digital watermarking is an application of data embedding aimed at copyright protection. The most important requirement of a watermarking system is robustness with respect to image distortions: the watermark should remain readable from images that have undergone common image processing operations. While many publications consider the relation between smoothness and the quality of watermarked images, only a few studies concentrate on the relation between smoothness and watermarking robustness.

            In this study, the grey-scale cover image is partitioned into blocks (3 x 3 pixels), and each block is classified as a smooth or texture block by calculating the difference between the maximum and minimum pixel values.

            Lossy image compression schemes, which work by removing redundancy from the data, are considered among the most common watermarking attacks. Lossy compression was applied to the watermarked image after embedding the data, in order to study the robustness of watermarking in smooth and textured areas. Robustness was measured by comparing the extracted watermark data with the original. The initial results show that data extracted from smooth blocks is better preserved than data extracted from texture blocks. The smooth areas are therefore used in this paper for embedding important watermark data, while unimportant data is embedded within the texture blocks.
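
            The block classification step is simple to state in code; the decision threshold below is an assumption, as the paper does not quote one.

    import numpy as np

    def classify_blocks(img, block=3, threshold=16):
        # True = smooth block, False = texture block, decided by the
        # max-min pixel difference within each block x block region
        h, w = img.shape
        labels = np.zeros((h // block, w // block), dtype=bool)
        for i in range(0, h - h % block, block):
            for j in range(0, w - w % block, block):
                blk = img[i:i + block, j:j + block]
                labels[i // block, j // block] = \
                    int(blk.max()) - int(blk.min()) < threshold
        return labels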

 

Paper ref #: 5066

 

Paper ref #: 5067

 

Model order criteria and model order selection based on data from the essential oil extraction system

 

Mohd Hezri Fazalul Rahiman, Mohd Nasir Taib and Yusof Md Salleh

 

Faculty of Electrical Engineering, Universiti Teknologi MARA,

40450 Shah Alam, Malaysia.

Email: hezrif@ieee.org

 

Abstract - The paper discusses comparisons between model order criteria for the purpose of parameter estimation. Comparisons are based on steam temperature data collected from an essential oil extraction system. Altogether, nine datasets were collected from the process with different settings of pseudorandom binary sequence (PRBS) perturbation. Four model order criteria are evaluated: the normalized sum of squared errors (NSSE), Akaike's information criterion (AIC), Rissanen's minimum description length (MDL) and Akaike's final prediction error (FPE). The criterion with the best penalization of parameter complexity will be chosen as the model order criterion.

 

Index terms - model order criterion; parameter complexity penalization; NSSE; AIC; MDL; FPE; steam temperature; essential oil extraction system.
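
In their common textbook forms (which may differ in detail from the authors' exact definitions), the four criteria can be computed from the residuals of a model with d parameters as follows.

    import numpy as np

    def order_criteria(residuals, d):
        N = len(residuals)
        V = np.sum(residuals ** 2) / N          # loss function
        return {
            "NSSE": V,                          # one common normalization
            "AIC":  N * np.log(V) + 2 * d,
            "FPE":  V * (1 + d / N) / (1 - d / N),
            "MDL":  N * np.log(V) + d * np.log(N),
        }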

 

Paper ref #: 5068

 

ARX, ARMAX and NNARX modeling for essential oil extraction system

 

Mohd Hezri Fazalul Rahiman, Mohd Nasir Taib and Yusof Md Salleh

 

Faculty of Electrical Engineering, Universiti Teknologi MARA,

40450 Shah Alam, Malaysia.

Email: hezrif@ieee.org

 

Abstract - In this paper, an essential oil extraction system with a refilling line is modeled using the ARX model structure and its family, such as ARMAX and NNARX. ARX and ARMAX are modeled using a linear black-box technique, and NNARX using a non-linear MLP neural network, with the LMA as the training algorithm. Optimization of the model order, the number of hidden units and the number of iterations is also discussed. Each model structure is estimated and validated using a fresh set of data, and the models are then compared and conclusions drawn.

 

Index term - autoregressive with exogenous input (ARX); autoregressive moving average with exogenous input (ARMAX); neural network autoregressive with exogenous input (NNARX); multilayer perceptron (MLP); Levenberg-Marquardt algorithm (LMA); essential oil extraction system.
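
The linear black-box ARX fit reduces to least squares on a regressor matrix; a minimal sketch with assumed orders na, nb and delay nk:

    import numpy as np

    def fit_arx(y, u, na=2, nb=2, nk=1):
        # ARX: y[t] = -a1*y[t-1] - ... - a_na*y[t-na]
        #             + b1*u[t-nk] + ... + b_nb*u[t-nk-nb+1] + e[t]
        y, u = np.asarray(y, float), np.asarray(u, float)
        n0 = max(na, nb + nk - 1)
        phi = np.array([[-y[t - i] for i in range(1, na + 1)] +
                        [u[t - nk - j] for j in range(nb)]
                        for t in range(n0, len(y))])
        theta = np.linalg.lstsq(phi, y[n0:], rcond=None)[0]
        return theta[:na], theta[na:]           # a and b estimates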

 

Paper ref #: 5069

 

Paper ref #: 5070

 

Paper ref #: 8010

 

JPEG2000 implementation on Multimedia Platform

 

Kanike Nagaraju

 

Email: raju_kanike@yahoo.co.in

 

 

Abstract - This paper reviews the implementation of the JPEG2000 codec on existing multimedia platforms and suggests the improvements required. Due to the low-power and low-cost requirements of the market, present multimedia processors are characterized by low operating frequencies and limited internal memory, which is a major challenge for implementing the complex JPEG2000 standard. DWT and EBCOT (Embedded Block Coding with Optimized Truncation) are the two core algorithms in JPEG2000: the DWT provides quality scalability and superior low bit-rate performance, while EBCOT's independently coded embedded blocks yield a layered bit stream. The fundamental building blocks of a typical JPEG2000 codec are preprocessing, DWT, quantization, EBCOT and rate-distortion control. General multimedia platforms contain a RISC processor as the main controller and a DSP processor for the major algorithmic processing. This paper explains the implementation procedure of the JPEG2000 encoder and decoder on a reference platform with detailed data flow. Memory transfer issues and the processing time of each algorithm block of the JPEG2000 encoder and decoder are considered; the variation of processing time and PSNR with tile size, code-block size and number of decomposition levels is discussed, as is the maximum number of passes for each component in the bit-plane coding. The paper also suggests improvements to the present hardware platform architecture for efficient processing of the Discrete Wavelet Transform (DWT), bit-plane coding and rate allocation modules.

 

Paper ref #: 8011

 

FPGA Based System For Developing Image Processing Algorithms

 

Balkrishan Ahirwal1, Rakesh Mehta2 and Mahesh Khadtare3

 

1 & 2 MTE(I) Pvt. Ltd., India.

3 IIT, Guwahati, India.

Email: 1balkrishaniitg@hotmail.com, 2bitmapper@vsnl.net, 3maheshkha@gmail.com

 

Abstract - This paper explores the performance and architectural tradeoffs involved in the design of FPGA-based real-time image processing systems. The implementation includes image acquisition through an infrared or CCD camera, processing using image enhancement algorithms, and display of the processed image according to existing standards. In our work we have approached the design of the processing system with aspects such as real-time operation, area, speed and computational simplicity in mind. It is demonstrated that using a modular approach, such as the "VIRTEX-II PRO PCI VDO CARD" FPGA board, one can quickly find a solution for real-time image processing with FPGAs. All modules were written and synthesized as VHDL behavioral descriptions, and a simulator was used to test the logic operation and internal timing of the circuit. The complete system is implemented on an FPGA (Xilinx Virtex-II Pro XC2VP30), and the hardware implementation results have been verified in real time. Performance issues for the implementations are also discussed.

 

Paper ref #: 8012

 

About Denoising Resistivity Map Of Moroccan Phosphate “Disturbances” Using  Wavelet Transform

 

Saad Bakkali and Mahacine Amrani

 

Faculty of Sciences and Techniques, University Abdelmalek Essaâdi, Tangier, Morocco.

Email: saad.bakkali@menara.ma

 

Abstract - Resistivity surveys have been successfully used in the Oulad Abdoun phosphate basin. A Schlumberger resistivity survey over an area of 50 hectares was carried out, and a new field procedure based on the analytic signal response of the resistivity data was tested to deal with the presence of phosphate deposit disturbances. A resistivity map was expected to allow the electrical resistivity signal to be imaged in 2-D. The 2-D wavelet transform is a standard tool in the interpretation of geophysical potential field data and is particularly suitable for denoising, filtering and analyzing singularities in geophysical data. Here, wavelet transform tools are applied to the analysis of Moroccan phosphate deposit "disturbances"; the wavelet approach to modeling surface phosphate "disturbances" was found to be consistently useful.

 

Index terms - resistivity, Schlumberger, wavelet, denoising, phosphate, Morocco.
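
A generic wavelet denoising recipe of the kind applied to such maps is soft thresholding of the detail coefficients with a universal threshold; the wavelet and decomposition level below are assumptions.

    import numpy as np
    import pywt

    def wavelet_denoise2d(img, wavelet="db4", level=2):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Noise scale from the finest diagonal details (robust MAD)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(img.size))
        out = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode="soft")
                                   for d in lvl) for lvl in coeffs[1:]]
        return pywt.waverec2(out, wavelet)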

 

Paper ref #: 8013

 

Application Of Intelligent  Information Systems In Educational Processing

 

Seyed Kamal Vaezi

 

Ministry of Sciences, Research and Technology,

Islamic Republic of Iran.

Email: Vaezi_ka@yahoo.com

 

Abstract - Structurally, the use and sharing of information must be aligned with all of the factors in which the higher education sectors are involved. The development and application of information must be relevant to, and in the context of, the educational direction and the stakeholders' needs. Optimum information application, sharing and flow are only delivered as part of the training processes of the organization, and the objectives and measures of each must be common. Most importantly, knowledge growth and use must be specifically tied to individual and team objective setting and to the incentive and reward mechanisms of the organization. In this paper we discuss how information can be stored, disseminated and used in educational work. The issues concern determining the best way to approach and acquire intelligent information systems effectively, including motivating people to share knowledge and access it through the system; determining good metrics for evaluating efficiency; determining the best way to perform an information audit; determining how people create, communicate and use information; and determining how to integrate KMS software packages more inclusively with others.

 

Index terms - information intelligent systems, educational organization, innovation cycle.

 

Paper ref #: 8014

 

Image Processing  And Protection Method In Electronic Business

 

Seyed Kamal Vaezi

 

Ministry of Science, Research and Technology,

Islamic Republic of Iran.

Email: Vaezi_ka@yahoo.com

 

 

Abstract - The purpose of this paper is to provide copyright protection for intellectual property in digital format. We review digital watermarks, an application of steganography. Digital watermarking is a form of steganography that embeds usually imperceptible or invisible markings or labels, in the form of bits, in digital data. Interest in digital watermarks has grown out of an increasing interest in intellectual property and copyright protection. Watermarks provide a means of placing additional information within digital media so that, if copies are made, the rightful ownership may be determined. Digital watermarks are added to still images in a way that can be detected by a computer but is imperceptible to the human eye. A digital watermark carries a message containing information about the creator or distributor of the image, or even about the image itself. A watermark can also contain copyright and copying-restriction information indicating that a video may be copied once, copied an unlimited number of times, or never copied; for such a scheme to work, video recording equipment manufacturers have to agree on the code and build their equipment accordingly. In video applications, studios such as Universal Pictures insert digital watermarks in their movies, including theatrical releases, home video, video on demand and broadcast movies.

 

Index terms - image processing, electronic business, protection method.

 

Paper ref #: 8015

 

Extraction of Hidden Features in Poor Resolution Images for Medical Analysis

 

Bharathwaj Nandakumar

 

Dept. of Computer Science, University of Southern California.

Email: nandakum@usc.edu

 

Abstract - In modern clinical analysis, images play an important role in the diagnosis of various medical conditions. Diagnostic equipment may not present images in a manner comprehensible to the specialist; hence the innate need for image processing and analysis. The human visual system cannot pick up small differences between two shades if one of them covers too wide an area. In this paper we put forward a technique based on the relative difference between two shades, independent of the quantity of each shade. We approximate the image with a varying number of quantization levels. When the quantization level is the same for the two shades, the quantization error is with respect to the same level, so the contrast between the two shades increases; this is most pronounced at an optimum quantization in which the two different shades fall into the same level. This relative difference is almost exact. We then scale this difference to obtain an output image that is greatly enhanced not only in spatial clarity but also in edge definition.
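
One plausible reading of the described procedure is uniform re-quantization followed by scaling of the residual; the level count and gain below are illustrative only, not the authors' parameters.

    import numpy as np

    def quantize(img, levels):
        # Uniform quantization of an 8-bit image to `levels` grey levels
        step = 256 // levels
        return (img // step) * step + step // 2

    def enhance(img, levels=8, gain=4):
        # Scale the quantization residual so that small, widespread
        # shade differences become visible
        img = img.astype(np.int64)
        q = quantize(img, levels)
        return np.clip(q + gain * (img - q), 0, 255).astype(np.uint8)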

 

Paper ref #: 8016

 

Thresholding & Histogram For The Identification Of Original Diamonds From Cubic Zirconia

 

R. Anitha1, K.Duraiswamy, K.Rajendran2 and K.Balanagagurunathan3

 

K.S.Rangasamy College of Technology, Tiruchengode, Tamilnadu, India.

Email: 1aniraniraj@rediffmail.com, 2vinsat2005@sify.com, 3balanagagurunathan@yahoo.com

 

Abstract - Digital image processing has a very wide range of application areas. This paper brings out the significance of digital image processing in the identification of original diamonds from cubic zirconia (CZ). The most commonly used diamond simulant is CZ; however, the optical properties of diamond and CZ differ. One of the reasons for the beauty of diamonds is their remarkable power of reflection: a well-proportioned round brilliant cut diamond returns all the light that enters it back through the table facet, so no light at all "leaks" out of the back of the stone. Conversely, a round brilliant cut CZ, with its lesser power of reflection, loses light through the back, which diminishes its brilliance and beauty. This loss of brilliance initially sounds purely negative, but it is actually a powerful ally for a person without expensive space-age equipment. Diamonds are more brilliant than CZ; CZ has a plastic-like appearance and, because of its low refractive index compared to diamond, suffers from light leakage and the "read-through" effect. There are several ways to distinguish original diamonds from imitations using regular gem tools and everyday items. This paper deals with the application of simple image processing techniques, namely thresholding and histogram analysis, to the process of identifying and differentiating original diamonds from CZ.

 

Index terms - Thresholding, Diamond, Histogram, Cubic Zirconia, Segmentation, digital image processing.
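
The two techniques named in the title can be sketched in a few lines with OpenCV; the file name is an assumption, and the bright-pixel ratio is only one simple statistic that could be drawn from the thresholded image.

    import cv2
    import numpy as np

    img = cv2.imread("stone.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    hist = cv2.calcHist([img], [0], None, [256], [0, 256])
    _, bright = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Fraction of bright pixels as a crude proxy for returned light; a
    # stone with back leakage (CZ) should score lower than a diamond
    bright_ratio = np.count_nonzero(bright) / bright.size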

 

Paper ref #: 8017

 

An Optimization Method for Solving Mixed Integer Programming Problem

 

Vijaya K Srivastava and Atef Fahim

 

University of Ottawa, Canada.

 

Abstract - This paper presents a heuristic approach for minimizing nonlinear mixed discrete-continuous problems with nonlinear mixed discrete-continuous constraints. The approach is an extension of the boundary tracking optimization approach previously developed by the authors to find the minimum of nonlinear pure discrete programming problems with pure discrete constraints. The efficacy of the proposed approach is demonstrated by solving a number of test problems of the same class published in the recent literature, among them the complex problem of minimizing the cost of a series-parallel structure with redundancies subject to availability constraints. All tests carried out so far show that the proposed approach either reproduces the published minimum of the respective test problem or finds a better one. While it is not possible to compare computation times owing to the lack of data on the test problems, in all tests the minimum was found in a reasonable amount of time.

 

Paper ref #: 8018

 

Weed Classification Using Variance For Real-Time Selective Herbicide Applications

 

Irshad Ahmad, Abdul Muhamin Naeem, Muhammad Islam and Shahid Nawaz.

 

Abstract - Information on weed distribution within a field is necessary to implement spatially variable herbicide application. Since hand labor is costly, an automated weed control system could be feasible. This paper deals with the development of an algorithm for a real-time weed recognition system based on image variance, used for weed classification. The algorithm is specifically developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed system has been tested on weeds in the lab, showing it to be very effective in weed identification, and the results further show very reliable performance on images of weeds taken under varying field conditions. Analysis of the results shows over 95 percent classification accuracy over 140 sample images (broad and narrow), with 70 samples from each category of weeds.
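
The classifier itself is a one-line decision on the image variance; the threshold and the direction of the inequality below are illustrative only, since they must be calibrated on labelled samples.

    import numpy as np

    def classify_weed(gray_image, threshold):
        # Variance of grey levels separates the two leaf classes once a
        # suitable threshold has been calibrated
        variance = np.var(gray_image.astype(float))
        return "broad" if variance > threshold else "narrow"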

 

Paper ref #: 8019

 

Wavelet Transform Technique for Detection of Communication Disorder in the Speech using Artificial Neural Network

 

Mahesh T. Kolte1 and D. S. Chaudhari2

 

Government College of Engineering, Amravati, India.

Email: 1mtkolte@yahoo.com, 2ddsscc@yahoo.com

 

Abstract - The wavelet transform analyzes the temporal and spectral properties of the highly variable speech signal, and decomposes the speech signal to compress the data and reduce computation. An artificial neural network is used to classify communication disorders in the speech signal. Samples of speech signals with and without communication disorders were recorded, decomposed by the wavelet transform into coefficients, and stored as a databank. A neural network configured from these coefficients is trained using the backpropagation algorithm to detect communication disorders very effectively. This paper emphasizes the wavelet transform decomposition technique for detecting communication disorders in speech signals using an artificial neural network.

 

Paper ref #: 8020

 

Partitioning and Scheduling in HW/SW Codesign, Exploiting a Hybrid Heuristic Approach

 

Maryam Zomorodi Moghadam and Ahmad Kardan

 

Computer Engineering & IT Dept.,

Amirkabir University of Technology (Tehran Polytechnic), Iran.

Email: zomorodi@ce.aut.ac.ir

 

Abstract - HW/SW partitioning is one of the key problems in codesign systems. In this paper, we propose a new partitioning method that integrates simulated annealing, taboo search and genetic algorithms. The objective of this hybrid method is to overcome the drawbacks of each algorithm and to use the best features of each in the HW/SW partitioning problem. We also incorporate scheduling into the partitioning phase, in order to reduce the number of backward design cycles. Systems are represented as task graphs on which the partitioning algorithm operates, and application parameters are obtained by synthesizing and compiling the application program code for hardware and software respectively. Results are compared with the optimal ILP algorithm. We also compare the new method with the original simulated annealing, taboo search and genetic algorithms, showing the superiority of our approach in time complexity and solution quality. We use benchmarks from the real world and from random graphs to verify our work.
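
The simulated-annealing core of such a hybrid can be sketched as follows; cost() and neighbour() are problem-specific and hypothetical here, and the cooling schedule is an assumption.

    import math
    import random

    def anneal(start, cost, neighbour, t0=100.0, alpha=0.95, iters=2000):
        # Accept any improving move; accept a worsening move with a
        # probability that decays as the temperature t falls
        cur, best = start, start
        cur_c = best_c = cost(start)
        t = t0
        for _ in range(iters):
            cand = neighbour(cur)
            c = cost(cand)
            if c < cur_c or random.random() < math.exp((cur_c - c) / t):
                cur, cur_c = cand, c
                if c < best_c:
                    best, best_c = cand, c
            t *= alpha
        return best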

 

Index terms - Hardware/Software Codesign, Hardware/Software Partitioning, Simulated

Annealing, Taboo Search, Genetic Algorithm, Scheduling.

 

Paper ref #: 8021

 

An Unsupervised Anchorperson Shot Detection based on the Distribution Properties

 

Jian Gao and Mengqi Guo

 

Institute of Automation, Shanghai University,

Shanghai 200072, China

 

Abstract - Anchorperson shot detection is a vital step in parsing news video and indexing news information. However, most current methods of anchorperson detection depend mainly on template matching, which is limited when applied to diversified news programs. In this paper, we propose a model-free anchorperson detection algorithm based on the distribution properties of anchorperson shots. First, the news video is divided into shots, from which candidate shots are selected. Secondly, a clustering algorithm is applied to the candidate shots to gather similar shots. Finally, variance analysis is applied to each cluster to detect the anchorperson shots. Experimental results show that our method achieves excellent results on various kinds of news video.

 

Index terms – anchorperson, model-free, distributing properties, news video.

 

Paper ref #: 8022

 

A Dynamic-List Scheduling Algorithm for Temporal Partitioning onto Dynamically Reconfigurable Embedded Systems

 

Mohammad Sadegh Sadeghi, Ahmad Kardan and Hosein Pedram

 

Abstract - Dynamically reconfigurable embedded systems (DRESs) target an architecture consisting of general-purpose processors and field programmable gate arrays (FPGAs), in which the FPGAs can be reconfigured at run time to achieve cost savings. The partial-reconfiguration capability has made possible the concept of "virtual hardware", which promises to be an efficient way to save silicon area. In this paper we present a new temporal partitioning of data flow graphs for dynamically reconfigurable embedded systems. The algorithm extends traditional static-list scheduling to a dynamic version that considers a new cost function. The nodes to be mapped into a partition are selected based on a statically computed cost model; the cost for each node integrates its dependency on the current partition, the critical path length and communication effects. Mapping nodes to partitions based on their dependency on the current partition favors adding nodes with stronger dependencies, which minimizes the connection cost, increases the utilization of the target reconfigurable hardware and decreases the run time of each partition on the target hardware by reducing the data path delay. A comparison of the algorithm with other algorithms and with a static-list scheduling approach is presented. The algorithm has been implemented, and the results show that it is robust, effective and efficient, and that compared with other methods it finds very good results in a small amount of CPU time.

 

Paper ref #: 8023

 

A Contourlet-Based Edge Detection Method for Low Contrast Satellite Images Using Information Measurement

 

 

S. Mahmoudi1 and S. Kasaei2

 

1 Azad University-Science And Research Branch, Tehran, Iran.

2 Sharif University, Tehran, Iran.

 

Abstract - Feature extraction is one of the most important steps in image processing tasks. In this paper, we propose a new approach to extract dominant edges from satellite images. Using an automated threshold obtained from the information content of the image at each level of the contourlet decomposition, strong image edges are extracted and weak edges are discarded. As the contourlet transform is better than wavelets at detecting curves in more directions without being influenced by discontinuities, it extracts smooth curves from images very effectively. The results of applying this algorithm to images captured by Landsat satellites show the efficiency of the proposed method.