Volume-2, Issue-3

May-June, 2016

1. Survey of Image Forgery Detection Technique based on Color Illumination using Machine Learning Approach

In earlier times images were used rarely, and forgery was uncommon because creating a convincing forged image required considerable expertise. Nowadays images play a vital role in daily life, and powerful digital image editing software makes it easy to tamper with them without any expert knowledge, so creating a tampered image has become very easy. As a result, the authenticity of an image often has to be proven. In this paper we discuss image splicing, one of the most common forms of image forgery, along with other forms. We review various existing forgery detection methods and techniques based on color illumination and machine learning, which enable automatic decision making. We also discuss the drawbacks of existing work and the possibility of future improvements. Index Terms— Forgery, Tampered Image, Image Splicing, Color Illumination

Published by: K. Sharath Chandra Reddy, Tarun Dalal
Research Area: Computer Science Engineering

Organisation: CBS Group of Institution, Jhajjar, Haryana
Keywords:

2. Study of Noise Reduction in Four-Cylinder Common Rail Direct Injection Diesel Engine at Idle Speed

The design and development of modern internal combustion engines is marked by a reduction in exhaust gas emissions and an increase in specific power and torque. This paper studies noise reduction in a four-stroke common rail direct injection engine at idle speed. Idle speed is the engine speed when the vehicle is not running, i.e. not in motion; nowadays this situation often arises at red lights, in traffic, and while waiting parked outside a business or residence. The paper presents a study of the effects of fuel injection pressure on the combustion process.

Published by: Jaswinder Singh, Harvinder Lal
Research Area: Non-conventional Engines

Organisation: R.I.E.T, Phagwara
Keywords: CRDI, Idle Speed, Fuel Injection Pressure, ECU.

3. Prediction of Heart Disease using Data Mining Techniques

Data mining is the process of analysing large data sets and extracting meaning from them. It helps to predict patterns and future trends, supporting business decision making, and can answer business questions that would take much longer to resolve traditionally. The high volume of data generated for disease prediction is too complicated and voluminous to be analysed by traditional means; data mining provides methods and techniques for transforming this data into useful information for decision making, and these techniques can make the prediction of heart disease faster and more accurate. The healthcare sector assembles enormous quantities of healthcare data containing plenty of hidden information, which is largely untapped and not used appropriately for prediction. This matters most for heart disease, the predominant cause of death all over the world. In the medical field, data mining provides several methods widely used in medical and clinical decision support systems, which help in diagnosing and predicting various diseases in less time. In this paper we survey papers in which one or more data mining algorithms are used for the prediction of heart disease. By applying data mining techniques to heart disease data, we can get effective results and achieve reliable performance that supports decision making in the healthcare industry, helping medical practitioners diagnose the disease in less time and predict probable complications well in advance. We also identify the major risk factors of heart disease, categorized in order of the damage they cause to the heart, such as diabetes, high blood cholesterol, obesity, hypertension, smoking, poor diet, and stress. Data mining techniques and functions are used to identify the level of these risk factors, helping patients take precautions in advance to save their lives.
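The surveyed pipelines generally follow a load-data, train-classifier, cross-validate pattern. Below is a minimal scikit-learn sketch of that pattern; the bundled breast-cancer dataset is only a stand-in for heart-disease records (e.g. the UCI Cleveland data), and the three classifiers mirror the KNN, decision tree, and SVM keywords rather than any specific paper's settings.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in dataset; the surveyed papers train on heart-disease records
# with exactly this train/evaluate pattern.
X, y = load_breast_cancer(return_X_y=True)

classifiers = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Decision Tree": DecisionTreeClassifier(max_depth=5),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f}")
```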

Published by: Era Singh Kajal, Ms. Nishika
Research Area: Computer Science Engineering

Organisation: CBS Group of Institution, Jhajjar, Haryana
Keywords: Data Mining, Disease prediction, KNN, Decision Tree, SVM.

4. Heart Disease Prediction System Using PCA and SVM Classification

The heart is the most significant organ of the human body, and life depends completely on its proper working; if the heart does not function properly, other organs such as the brain and kidneys are also affected. In today's fast and busy life people eat what they want and diagnose themselves, and the result can be sickness and heart failure. Heart diseases are the major cause of death in the world, with risk factors including stress, cholesterol, high blood pressure, lack of physical exercise, smoking, and obesity. A heart disease prediction system helps physicians and healthcare professionals as a diagnostic tool. To protect patients, a quick and efficient prediction technique has to be followed. The main goal of this work is to develop an efficient heart disease prediction system using feature extraction and an SVM classifier that can be used to predict the occurrence of disease. Classification is one of the most important tasks in data mining, and it is essential to find the best-fitting classification algorithm, the one with the greatest accuracy, for heart disease classification. The cleaned data is classified by an SVM classifier, a technique widely used to validate the accuracy of medical data.
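A minimal sketch of the PCA-plus-SVM chain the abstract describes follows; the stand-in dataset, component count, and kernel are assumptions, not the paper's settings.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Stand-in dataset; the paper applies the same PCA + SVM chain to heart records.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Scale, project onto the top principal components, then classify with an SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```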

Published by: Kiranjeet Kaur, Lalit Mann Singh
Research Area: Computer Science Engineering

Organisation: S.G.G.S.W.U, Fatehgarh Sahib
Keywords: KDD, Heart Disease Prediction, Data Mining, Classifiers, PCA, Support Vector Machine.

5. Analysis of Temperature Variation in a Mild Steel Plate using LBM

The lattice Boltzmann method (LBM) is a method used for computations in fluid dynamics by fluid simulation. In the present study, a 10 mm × 10 mm square mild steel plate has been selected. The temperature variation and heat flow at different nodes through the plate, from one end to the other, are simulated by heating one end from room temperature to 100 °C, in MATLAB, C++, etc. After the simulation, the temperature variation at the different nodes is plotted in Tecplot as graphs of temperature against distance from one end of the plate to the other.
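A minimal sketch of a lattice Boltzmann temperature solver in the spirit described, written in Python rather than the MATLAB/C++ named above; the D2Q5 lattice, grid size, relaxation time, and crude boundary handling are all assumptions, not the authors' code.

```python
import numpy as np

# D2Q5 lattice: rest population plus 4 axis directions.
c = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])
w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])
tau = 1.0                       # relaxation time; D = cs^2 * (tau - 0.5), cs^2 = 1/3

nx, ny, steps = 50, 50, 2000
T_hot, T_cold = 100.0, 25.0     # heated edge and room temperature (deg C)

# Start all distributions at equilibrium for the room-temperature field.
T = np.full((nx, ny), T_cold)
f = w[:, None, None] * T[None, :, :]

for _ in range(steps):
    # Collision: relax each population toward local equilibrium f_eq = w_i * T.
    T = f.sum(axis=0)
    T[0, :] = T_hot             # boundary temperatures re-imposed each step
    T[-1, :] = T_cold           # (a crude Dirichlet treatment)
    feq = w[:, None, None] * T[None, :, :]
    f += (feq - f) / tau
    # Streaming: shift each population along its lattice velocity (periodic).
    for i, (cx, cy) in enumerate(c):
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)

# Temperature profile along the plate from the hot end to the cold end.
profile = f.sum(axis=0).mean(axis=1)
print(profile)
```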

Published by: Neeraj Kumar, Abhishek Dadwal, Ajay Kumar, Gagan Bhatt, Amit
Research Area: Fluid Mechanics

Organisation: PIT, Hoshiarpur
Keywords: LBM, Modelling, Simulation, Analytical approach.

6. A Review Study of Thermal Spray Coatings for Corrosive Wear

Thermal spray coating is a surface modification technique in which a coating material such as a cermet, metal, ceramic, or other material in powder form is fed into a torch or gun, where the high temperature developed melts it. Coating thickness is built up by applying multiple layers of the melted coating material. This paper reviews the various coating techniques used for corrosive wear applications. Thermally sprayed thick coatings (from 50 to 3000 μm), including cold spray coatings, are increasingly used in industry for the following reasons: (i) they provide substrates with specific properties very different from those of the substrate itself; (ii) they can be applied with rather low or no heat input to the substrate (allowing, for example, ceramics to be sprayed onto polymer substrates); (iii) virtually any material that melts without decomposing or vaporizing can be sprayed, including cermets and very complex metal or ceramic mixtures, allowing coatings to be tailored to the desired service property; (iv) sprayed coatings can be stripped off and worn or damaged coatings re-coated without changing part properties and dimensions; (v) some spray processes can be moved on site, allowing big parts to be sprayed rapidly when their displacement would otherwise be long and expensive.

Published by: Prajapati Amit Kumar, Vaibhav Khurana
Research Area: Surface Engineering

Organisation: LPU, Phagwara
Keywords: Corrosive wear, Corrosion, Thermal spray, Wear.

7. Mechanical Characterization of Thermal Spray Coating on Stainless Steel 316 L

Thermal spray coating is a surface modification technique in which a coating material such as a cermet, metal, ceramic, or other material in powder form is fed into a torch or gun, where the high temperature developed melts it. Coating thickness is built up by applying multiple layers of the melted coating material. This paper studies the mechanical characterization of single-layer and multi-layer thermal spray coatings. Coating of SS 316L was followed by a wear test. It was found that the wear rate of the base metal, SS 316L, is higher than that of the single-layer and multi-layer coatings, and the multilayer coating showed the maximum resistance to wear.

Published by: Prajapati Amit Kumar, Vaibhav Khurana
Research Area: Surface Engineering

Organisation: LPU, Phagwara
Keywords: Wear, Single layer, Multi-layer, Wear rate.

8. Manufacturing of Cement from Egg Shell

Effective deployment of bio-waste is important to our society for environmental and economic reasons. Reclaiming eggshell from hatcheries, homes, bakeries, and industries is an efficient and cost-effective way to reduce waste disposal and prevent serious environmental pollution. Eggshell waste contains essential organic and inorganic materials that can be composted with other materials to enhance their pre-existing properties. The major concern in the civil sector is efficient construction with minimal cost, and cement is one of the pivotal components of construction, the backbone of infrastructure development. Rapid infrastructure development has created high worldwide demand for raw materials, resulting in a huge imbalance between demand and supply. However, cement plants emit harmful compounds such as nitrogen oxides (NOx), sulphur dioxide (SO2), and carbon monoxide (CO), which can cause serious health defects and also affect our environment; the cement manufacturing sector is the third largest contributor to total environmental pollution. In spite of all this, there is a huge demand for cement in a developing country, and this demand has led to a search for alternative raw materials from abundant waste products that are both efficient and cost-effective. In this work, calcination of chicken eggshells with different ingredients was carried out and the chemical composition of the resultant product was analyzed.

Published by: Samarth Bhardwaj
Research Area: Civil Engineering

Organisation: Doon Public School, Dehradun, India
Keywords:

9. Micro Structural Characterization of Thermal Spray Coating on Stainless Steel AISI 316 L

Thermal spray coating is a surface modification technique in which a coating material such as a cermet, metal, ceramic, or other material in powder form is fed into a torch or gun, where the high temperature developed melts it. Coating thickness is built up by applying multiple layers of the melted coating material. This paper studies the microstructural characterization of single-layer and multi-layer thermal spray coatings. Coatings on the substrate were examined by scanning electron microscopy (SEM) and X-ray diffraction (XRD) to identify the phases present in coated and uncoated SS 316L. The SEM results show that the single-layer coating was not deposited properly; compared with the single-layer coated AISI 316L, the multilayer SEM results are more satisfactory, with no cracks on the coating surface and much less porosity in the multilayer coated sample. AISI 316L contains nickel (Ni) and chromium (Cr) as major phases.

Published by: Prajapati Amit Kumar, Vaibhav Khurana
Research Area: Surface Engineering

Organisation: LPU, Phagwara
Keywords: SEM, XRD, Phases, Cracks.

10. Enhancement of MANET Routing by Optimized Centrality with Ant Colony Approach

Vehicular Ad Hoc Network (VANET) is an emerging technology that integrates ad hoc networking and improves road traffic safety. The main challenge in VANET is finding and maintaining an effective route for transporting data, so an analysis of routing protocols against a number of VANET parameters is a necessary communication issue. The AODV routing protocol is also used in VANET, but it suffers poor performance when applied directly. In this paper, a metaheuristic (Shapley value) approach is used to reduce delay and packet drops and to increase throughput.
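Since the keywords name an ant colony approach, here is a hedged sketch of pheromone-based next-hop selection; the neighbor table, weights, and evaporation rate are illustrative assumptions, not the paper's parameters.

```python
import random

# Hypothetical per-neighbor state: pheromone tau and heuristic eta (e.g. 1/delay).
neighbors = {"B": {"tau": 1.0, "eta": 0.8},
             "C": {"tau": 1.0, "eta": 0.5}}
alpha, beta, rho = 1.0, 2.0, 0.1   # pheromone weight, heuristic weight, evaporation

def choose_next_hop(neighbors):
    """Probabilistic next-hop choice used by ant-colony routing."""
    weights = {n: (v["tau"] ** alpha) * (v["eta"] ** beta)
               for n, v in neighbors.items()}
    r, acc = random.random() * sum(weights.values()), 0.0
    for n, wgt in weights.items():
        acc += wgt
        if r <= acc:
            return n

def reinforce(neighbors, hop, delivered, delay):
    """Evaporate all pheromone, then reward the hop that delivered quickly."""
    for v in neighbors.values():
        v["tau"] *= (1.0 - rho)
    if delivered:
        neighbors[hop]["tau"] += 1.0 / delay

hop = choose_next_hop(neighbors)
reinforce(neighbors, hop, delivered=True, delay=0.05)
```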

Published by: Monika Thakur, Saurabh Sharma
Research Area: Computer Science Engineering

Organisation: Sri Sai University, Palampur
Keywords: VANET, Routing Protocols, ACO, ONE Simulator.

11. Five Point Key Amalgamation For Secure Authentication (FPKA-SA) In 4G Networks

4G networks are fourth-generation cellular networks, the result of the Long Term Evolution (LTE) of the cellular networking area. Fourth-generation networks can transfer user data at high speeds, encouraging users to make video calls and transfer massive amounts of personal data, which remain at risk while being exchanged between two nodes. 4G is also not limited to smartphone users: it serves personal computers (desktops, laptops, etc.) through Mi-Fi interfaces, which adds a high risk to data security. In this paper, a novel authentication mechanism is proposed that uses a five-column key architecture to build and transmit stronger keys between two cellular nodes. The proposed model has been evaluated on various network and authentication performance parameters and compared with existing 4G authentication models; it has been found better than the existing model, proving it an efficient authentication scheme for 4G networks.
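The abstract does not publish the five-column key construction, so the following is only a speculative sketch of amalgamating five independent components into one session key; the component names and the HMAC-SHA256 construction are assumptions (a real 4G/LTE AKA derivation follows the 3GPP-specified key hierarchy, not this code).

```python
import hmac, hashlib, os

def derive_session_key(imsi: bytes, nonce_ue: bytes, nonce_net: bytes,
                       shared_secret: bytes, cell_id: bytes) -> bytes:
    """Fold five key components into one 256-bit session key.

    Illustrative only: component names and construction are hypothetical.
    """
    material = b"|".join([imsi, nonce_ue, nonce_net, cell_id])
    # HMAC keyed with the pre-shared secret binds all five inputs together.
    return hmac.new(shared_secret, material, hashlib.sha256).digest()

key = derive_session_key(b"001010123456789", os.urandom(16), os.urandom(16),
                         os.urandom(32), b"eNB-4401")
print(key.hex())
```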

Published by: Divyanshu Malhotra, Dr. Paramjeet Singh, Dr. Shaveta Rani
Research Area: Computer Science Engineering

Organisation: GZSCCET, Bathinda
Keywords: 4G authentication, Complex Key, Five Point Key Authentication, FPKA, Authentication & Authorization.

12. Blur Detection using Hybrid Classifier

Multimedia content such as images, audio, and video is a popular entertainment and communication service of internet and mobile applications, but it may suffer from low quality. Blur is one of the factors that degrades the quality of an image or of frames in a video, and enhancement or restoration of a blurred image requires detection of the blurred region or the blur kernel. Blur detection is therefore the initial and main step, followed by blur classification and restoration. In this paper we present an overview of several defocus and motion blur detection methods with their applications. Some of these methods are based on features of the blur kernel while others are not, and they can be either direct or indirect: direct methods only identify the blurred region and segment it from the unblurred one, while indirect methods first detect and then restore the blurred region. We discuss both types of blur detection methods.
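As one concrete direct measure (not necessarily among the surveyed methods; assumes OpenCV is available and `photo.jpg` is a hypothetical input), the variance of the Laplacian is a common defocus-blur score:

```python
import cv2

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
score = cv2.Laplacian(img, cv2.CV_64F).var()  # low variance -> few sharp edges

# The threshold is data-dependent and chosen here only for illustration.
print("blurred" if score < 100.0 else "sharp", score)
```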

Published by: Indu Kharb, Abhishek Sharma
Research Area: Electronics and Communication Engineering

Organisation: Maharishi Markandeshwar University, Mullana
Keywords: Blur detection, Feature vector, Image enhancement, Restoration and Segmentation.

13. Characterization of Stainless Steel 316L Coated by Thermal Spray Coating

An experimental investigation was carried out to find the characteristics of coated stainless steel 316L; coating was done by the D-gun method. The surface corrosion properties of both coated and uncoated base metal were examined. Two coated specimens were prepared, with coating thicknesses of 150 microns (single layer) and 300 microns (multilayer). A pitting corrosion test was performed to analyse the corrosion properties of the base metal and the coated metals; the test was conducted for 96 hours in a solution of stagnant seawater composition held at 42 °C. Characterization of Inconel 718 and stainless steel 316L was done using XRD. The corrosion test showed that the weight loss of the base metal SS 316L was 0.0002 gram, twice the 0.0001 gram lost by the multilayer coated specimen.

Published by: Prajapati Amit Kumar, Sudhanshu Pandey, Ram Mishra, Ramdoot Yadav
Research Area: Surface Engineering

Organisation: LPU, Phagwara
Keywords: Corrosive Wear, XRD, D-Gun Spray.

14. Extracting News from the Web Pages by using Concept of Clustering with Neural Genetic Approach

Web news extraction is a research area that has been widely explored, resulting in systems with good extraction abilities and little or no human involvement. Existing systems consider web news from a single web site, which follows a uniform format, and the idea is generally not as effective when multiple web news pages belonging to different sites are considered. This work proposes a web extraction layout that suits most web news pages. The purpose of web news extraction is to enhance information retrieval by storing news articles related to a particular event, for example for competitive business analysis. Research in this area has produced many approaches, each differing from the others; the extractor should be chosen based on the need.

Published by: Nishan Singh Saklani, Saurabh Sharma
Research Area: Computer Science Engineering

Organisation: Sri Sai University, Palampur (H.P.)
Keywords: Clustering, Machine Learning, Genetic Algorithm, Web content mining, Web news extraction, Data pre-processing, Packaged information.

Research Paper

15. Smart Routing for WSN for the Energy Balanced Routing over Hierarchical Deployments

Data aggregation is the method of combining data streams coming from multiple nodes; it reduces the routing-decision cost imposed on the routers or nodes along the path and can be efficient in terms of both energy and transmission. The proposed model is based entirely on data aggregation and data forwarding. In this research we develop a new aggregation model for wireless sensor networks, intended to enhance energy efficiency by improving the aggregation model and enabling an efficient data delivery mechanism. WSNs are directed-graph networks in which data flows in one direction, toward the base transceiver station (BTS), and because the data flows in a particular direction, aggregation is efficient. The proposed aggregation model is a smart amalgamation of heuristic- and greedy-algorithm-based aggregation: the greedy component allows the algorithm to cover any number of data streams, while the heuristic component is responsible for forming groups before aggregation, producing multiple aggregation groups. We have therefore proposed and implemented a smart hybrid heuristic-and-greedy aggregation and data transmission algorithm. The proposed model has shown its effectiveness in aggregation and data transmission and has proved more efficient than previous approaches in terms of network load, route persistence, energy consumption, and latency.

Published by: Namrata Chopra, Manmeet Kaur
Research Area: Electronics and Communication Engineering

Organisation: CEC, Landran, Mohali
Keywords: Data aggregation, energy-efficient WSN, wireless sensor network, minimum latency, route adequacy.

16. Hybrid Call Security System using Encryption & Steganography

Most users use the internet for various voice or video calling applications, and many companies use these applications for their corporate (inbound/outbound business) calls with users outside their network. To achieve voice communication security, a number of audio security and audio processing algorithms are used, individually or in combination. Hacking attacks on these applications can cause great losses to user security, lowering the number of active users and hence business popularity. In the proposed voice call security model, we use a hybrid approach combining compression, encryption, and steganography to enable the highest level of security in voice calling while adding the minimum possible delay to voice packet delivery. Our framework focuses on the security of voice communications over a wired phone, a cellular connection, or the internet, and consists of three major components: a band-pass filter, cryptography, and steganography. The voice signal is decomposed using band-pass filters, cryptography is applied to all of the bands, and the decomposed and encrypted signal components are then combined and placed in another voice signal using a steganography method. The framework is designed to provide multi-layer security for the sensitive voice calling channels of VVIPs, VIPs, and other important persons of the nation. The system was tested for its security level, possibilities of breaching attacks, accuracy, white Gaussian noise (WGN) reduction, compression levels, encryption levels, elapsed time, and many other aspects, and the results proved it effective in all situations related to voice communication security.
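A minimal sketch of the final hiding step follows; the XOR mask and 1-bit LSB embedding into 16-bit PCM samples are simplifications standing in for the paper's unspecified cipher and steganography method.

```python
import numpy as np

def embed(carrier: np.ndarray, payload: bytes, key: int = 0x5A) -> np.ndarray:
    """Hide XOR-masked payload bits in the LSBs of 16-bit PCM samples."""
    masked = bytes(b ^ key for b in payload)               # toy "encryption"
    bits = np.unpackbits(np.frombuffer(masked, np.uint8))
    out = carrier.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | bits        # overwrite LSBs
    return out

def extract(stego: np.ndarray, nbytes: int, key: int = 0x5A) -> bytes:
    bits = (stego[:nbytes * 8] & 1).astype(np.uint8)
    return bytes(b ^ key for b in np.packbits(bits).tobytes())

carrier = (np.random.randn(8000) * 3000).astype(np.int16)  # 1 s of fake speech
stego = embed(carrier, b"secret voice frame")
assert extract(stego, 18) == b"secret voice frame"
```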

Published by: Manmeet Kaur, Samreen Sekhon Brar, Namrata Chopra
Research Area: Electronics and Communication Engineering

Organisation: CEC, Landran, Mohali
Keywords: Voice call security, encryption, steganography, cellular security, robust cryptography.

17. Statistical Analysis of Part of Speech (POS) Tagging Algorithms for English Corpus

Part-of-speech (POS) tagging is the procedure of assigning the part-of-speech tag, or other linguistic class marker, to each and every word in a sentence. In many Natural Language Processing applications such as word sense disambiguation, information retrieval, information extraction, parsing, question answering, and machine translation, POS tagging is regarded as one of the basic required tools. Resolving the ambiguities among linguistic categories is the central goal in developing an effective and accurate POS tagger. In this paper we compare the performance of a few POS tagging methods for the Bangla language, e.g. statistical approaches (n-gram, HMM) and a perceptron-based approach. A supervised POS tagging approach needs a large annotated training corpus to tag properly. At this early period of POS tagging for English, we create a ground-truth set that contains tagged words from a sampled corpus, and we also investigate the performance of POS taggers for different kinds of words.
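A minimal illustration of the statistical (unigram) baseline using NLTK's bundled Treebank sample, which is not the authors' corpus or tagger:

```python
import nltk
from nltk.corpus import treebank
from nltk.tag import UnigramTagger

nltk.download("treebank", quiet=True)

sents = treebank.tagged_sents()
train, test = sents[:3000], sents[3000:]

# A unigram tagger assigns each word its most frequent tag in the training data.
tagger = UnigramTagger(train)
print("accuracy:", tagger.evaluate(test))  # .accuracy(test) on newer NLTK
print(tagger.tag("the cat sat on the mat".split()))
```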

Published by: Swati Tyagi, Gouri Shankar Mishra
Research Area: Computer Science Engineering

Organisation: Sharda University, Greater Noida, UP
Keywords: Part-of-speech tagging, HMM, Unigram, Perceptron.

18. Secure Authentication Based Trust Evaluation in VANETs

Trust and its assessment are thriving fields of research, and the rich body of work on trust gives a strong indication that this is a vital research area. Trust as a concept has a wide collection of adaptations and applications, which causes divergence in trust management terminology. The aim of this paper is to furnish VANET designers with several perspectives on the concept of trust, an understanding of the properties that ought to be considered in developing a trust metric, and insight into how trust can be computed. We begin by giving assorted definitions of trust and the metrics used for assessing it. We then give a comprehensive survey of trust computing approaches, comparing them with respect to various attack models and computational requirements, and we analyse the literature on trust dynamics such as trust propagation, aggregation, and prediction. We end with a section detailing the application of trust mechanisms to security. The trust schemes covered in this survey span a wide scope of applications and are based on many different kinds of mechanisms; there is no single solution that will be suitable in all contexts and applications. When designing a new trust system, it is vital to consider the constraints and the kind of data that can be used as input by the network. A general observation is that, so far, the ongoing research work and proposals lack completeness, and important subjects are yet to be addressed.

Published by: Meenu Setia, Mrs. Parul Dua
Research Area: Computer Science Engineering

Organisation: DIET, Karnal
Keywords: Vehicular Ad hoc Networks, Sybil Attacks, Position Verification.

19. Enhanced Integrity Preserving Homomorphic scheme for Cloud Storage

As cloud computing becomes prevalent, more and more sensitive data is being centralized into the cloud: emails, personal health records, private videos and photos, company finance data, government documents, etc. By storing their data in the cloud, data owners can be relieved of the burden of data storage and maintenance and enjoy on-demand, high-quality storage service. However, the fact that data owners and the cloud server are not in the same trusted domain puts the outsourced data at risk, as the cloud server can no longer be fully trusted in such a cloud environment: it could leak data to unauthorized entities or be hacked. It follows that sensitive data normally ought to be encrypted prior to outsourcing, for data privacy and to combat unsolicited access. In cloud computing, cloud users and cloud service providers are almost certain to be from different trust domains, so data protection and privacy are critical issues for remote data storage. A secure, user-enforced data access control mechanism must be provided before cloud users can freely outsource sensitive data to the cloud for storage. With the rise of sharing confidential company data on cloud servers, it is imperative to adopt an effective encryption scheme with fine-grained access control for outsourced data. Attribute-based encryption is a public-key encryption that enables access control over encrypted data using access policies and ascribed attributes. In this work, we survey homomorphic encryption schemes and possible resolutions of their limitations.
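As a concrete example of the additive homomorphism such schemes provide, a toy Paillier sketch follows (tiny hard-coded primes, for illustration only and never secure; needs Python 3.8+ for the three-argument modular inverse):

```python
from math import gcd
import random

# Toy Paillier keypair (tiny primes; a real deployment uses ~2048-bit moduli).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: E(a) * E(b) mod n^2 decrypts to a + b.
a, b = 42, 99
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
```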

Published by: Mashkoor Ahmad Kichloo, Mr. Parikshit Singla
Research Area: Computer Science Engineering

Organisation: DVIET, Karnal
Keywords:

20. Optimal 4G and LTE Cellular Tower Placement Strategy

The number of cell phone subscribers is increasing, as is the use of cell phones in remote areas, and service providers seek to expand their networks to increase coverage everywhere. The cost of placing a cell tower depends on its height and location and can be very expensive, so towers have to be placed strategically to reduce cost. The number of service providers has increased manifold in the last decade, and with the competition between them, an efficient algorithm for placing towers strategically is important: it can ensure brilliant connectivity, even in remote areas, at a cost affordable to the service provider and its customers. In addition, an optimum tower height needs to be calculated, since the height of a tower affects not only its coverage but also the cost of its placement. In this context various complications arise; for example, signals may fail to reach some areas because coverage is distorted by geographic barriers. Thereafter, the potential tower locations in an area must be determined, and only the best and most essential ones chosen, with their respective optimum heights, so as to cover as many customers as possible. Frequency division multiple access (FDMA) technology can help here, though with a limited transmission bandwidth, long-distance roaming is impossible in what is merely a conventional mobile communication system.
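A hedged sketch of the greedy set-cover heuristic commonly used for such placement problems follows; the candidate sites, coverage sets, and costs are invented for illustration, not taken from the paper.

```python
# Each candidate tower site covers a set of demand points at some cost.
candidates = {
    "site_a": ({"p1", "p2", "p3"}, 3.0),
    "site_b": ({"p3", "p4"}, 1.5),
    "site_c": ({"p4", "p5", "p6"}, 2.5),
}
uncovered = {"p1", "p2", "p3", "p4", "p5", "p6"}

chosen = []
while uncovered and candidates:
    # Pick the site with the best cost per newly covered demand point.
    site, (cover, cost) = min(
        candidates.items(),
        key=lambda kv: kv[1][1] / max(len(kv[1][0] & uncovered), 1e-9))
    if not cover & uncovered:
        break                      # remaining sites add no new coverage
    chosen.append(site)
    uncovered -= cover
    del candidates[site]

print(chosen)                      # greedy order of tower placements
```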

Published by: Rishi Sharma, Er. Anupma Dhamija
Research Area: Computer Science Engineering

Organisation: DVIET, Karnal
Keywords:

21. An Analytical Study on Forecasting Model with Special Attention to Gold Price

Gold is a commodity with resonance; it has been used as currency over the centuries. Gold is unique in its capability of being a hedge against inflation, and it shows strong stability in financial crises and volatility. Yet the social value that the gold business adds to societies worldwide, particularly in poorer nations, is less appreciated and frequently distorted. Of late, the worldwide gold price trend has attracted a great deal of attention, with the price of gold showing an alarming spike compared with the historical trend. In times of uncertainty, investors consider gold a hedge against unexpected disasters, so the forecasted price of gold has been a subject of the highest interest. The primary focus of this research is to develop a forecasting model for the gold price, done through Box-Jenkins ARIMA (Auto-Regressive Integrated Moving Average) time series analysis. The main objective is to investigate the factors influencing the price in the gold market. Data for analysis is collected from various data sources in order to examine the impact and contribution of these factors to the gold price; based on the identified factors, the gold price is predicted for upcoming years.
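A minimal statsmodels sketch of a Box-Jenkins fit of the kind described; the synthetic series and the (1, 1, 1) order are placeholders, since a real study would select the order from ACF/PACF diagnostics.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly "gold price" series standing in for the real data.
rng = np.random.default_rng(0)
prices = pd.Series(1200 + np.cumsum(rng.normal(2, 15, size=120)),
                   index=pd.date_range("2006-01-01", periods=120, freq="MS"))

# AR(1) on the first difference with an MA(1) term: ARIMA(1, 1, 1).
model = ARIMA(prices, order=(1, 1, 1)).fit()
print(model.forecast(steps=12))   # forecast the next 12 months
```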

Published by: P. M. Dhanalakshmi, P. R. S. Reddy
Research Area: Forecasting

Organisation: S.V. University, Tirupati, A.P.
Keywords:

22. Palm Print: A Bio-Metric for Human Identification

Security is a main concern in today's life, so a lot of research is going on in security fields such as passwords, security questions, pattern matching, and, very importantly, biometric security. This work studies palm print recognition for identifying humans. Palm print recognition systems have proved their efficiency with many machine learning techniques, such as LBP, repeated line tracking, and junction point matching. This paper compares these different techniques. Previous research on palm prints shows that palm codes from different palms are similar, with 45° streaks.
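As a small illustration of the LBP features mentioned (scikit-image; the random stand-in image, radius, and histogram settings are assumptions):

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Stand-in palm image; a real system would use a cropped palm-print ROI.
rng = np.random.default_rng(0)
palm = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)

radius, n_points = 2, 16
codes = local_binary_pattern(palm, n_points, radius, method="uniform")

# The histogram of LBP codes is the texture descriptor compared between palms.
hist, _ = np.histogram(codes, bins=n_points + 2, range=(0, n_points + 2),
                       density=True)
print(hist)
```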

Published by: Kalyani, Sunil Pathania
Research Area: Electronics and Communication Engineering

Organisation: Shoolini University, Solan (H.P.)
Keywords:

23. Human Identification using Iris Recognition

Iris recognition is a non-invasive biometric technique used to identify human beings. The iris is the annular region between the pupil and the sclera of the human eye, and it exhibits extraordinary texture that is unique to each individual. This imposes various challenges on accurate iris segmentation and feature extraction techniques, providing many opportunities for researchers pursuing work in this area. This paper presents a study of different techniques previously used for iris recognition.

Published by: Geetanjali Sharma, Neerav Mehan
Research Area: Electronics and Communication Engineering

Organisation: Baddi University of Emerging Sciences & Technology, Baddi, H.P.
Keywords:

24. Traffic Jam and Accident Detection Techniques

Traffic jams are a crucial problem, growing day by day throughout the world. To overcome this problem, many sensors and algorithms have been developed for traffic jam detection, and they have played an important role in every region in terms of accuracy, detection time, and signal management. This paper presents a review of the various algorithms proposed in the past.

Published by: Raveena Shaili, Sunil Pathania
Research Area: Electronics and Communication Engineering

Organisation: Shoolini University, Solan (H.P.)
Keywords:

25. A Deterministic GPSR Routing Protocol for MANET

In a VANET, communication takes place between vehicles, and since the vehicles are in motion, the main aim is to provide efficient routing. Position-based routing protocols perform better than topology-based ones. Working with the GPSR routing protocol, the main aim is to improve the performance of both the GPSR and E-GPSR routing protocols. The proposed methodology considers the border node and the angle of orientation of the neighboring nodes; on this basis, D-GPSR outperforms the GPSR and E-GPSR routing protocols in terms of throughput, average end-to-end delay, packet delivery ratio, and network load. The D-GPSR methodology and algorithm are studied in detail.
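The greedy core of GPSR-style forwarding, as a minimal sketch; the coordinates are illustrative, and a full implementation would add GPSR's perimeter-mode recovery at local maxima.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(me, neighbors, dest):
    """Forward to the neighbor geographically closest to the destination.

    Returns None at a local maximum, where GPSR would switch to
    perimeter (face) routing instead of dropping the packet.
    """
    best = min(neighbors, key=lambda n: dist(neighbors[n], dest))
    return best if dist(neighbors[best], dest) < dist(me, dest) else None

me = (0.0, 0.0)
neighbors = {"v1": (40.0, 10.0), "v2": (25.0, -30.0)}  # illustrative positions
print(greedy_next_hop(me, neighbors, dest=(100.0, 0.0)))  # -> v1
```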

Published by: Kumari Monika, Mr. Ashish Sharma
Research Area: Computer Science Engineering

Organisation: Bells Institute of Management Technology, Shimla (H.P.)
Keywords:

26. Adaptive Security Mechanism for Cognitive Radio Communications based on Robust Authentication

The fundamental techniques of the cognitive radio standard generate new security threats and primary operational issues and challenges for wireless communications. Spectrum occupancy failures, policy management failures, wireless node localization failures, transceiver failures, framework problems, and other concerns must be efficiently covered and resolved when imposing security techniques over cognitive channels. In the process, a class of security threats arises: primary user emulation attacks. When an attacker detects idle spectrum, it sends out a signal similar to the primary user's signal in that band; the attack affects the secondary users' detection of idle spectrum, so the spectrum cannot be used effectively. An attacker occupies the unused channels by emitting a signal shaped like the primary user's so as to prevent other secondary users from accessing the vacant frequency bands. The performance of the proposed security model for cognitive radio channels is evaluated through parameters such as security packet volume, authentication delay, and security system overhead. The proposed trust- and authentication-based model is designed to offer a robust security level over the cognitive channel.

Published by: Mehakpreet Kaur, Navleen Kaur
Research Area: Network and Communications

Organisation: CEC Landran, Mohali
Keywords:

27. Bandwidth-Aware Stochastic Uplink Scheduling in WIMAX

Bandwidth allocation in WiMAX networking is the primary process used to accommodate the maximum number of wireless users, and WiMAX link scheduling plays a significant role in managing resources efficiently and serving internet services to the maximum number of users. In this study, various scheduling strategies in wireless mesh networks are examined, focusing mainly on the rtPS class, as it is the most challenging one. A detailed study of the HAS and SADP algorithms is carried out, analyzing their performance for all the service classes. We then modify the existing algorithm to overcome its disadvantages and further improve its performance. The modified algorithm is implemented and simulated in NS2; the results obtained are studied, compared with previous results, and conclusions drawn.

Published by: Komaljot Kaur, Amitabh Sharma
Research Area: Network and Communications

Organisation: CEC Landran, Mohali
Keywords:

28. Optimized Healthcare Data Management and Critical Handling using Smart Data Categorization Method

Cloud-based healthcare models are emerging quickly and growing their roots across the globe, empowering active healthcare services. Wearable body sensors are used to track patients' health when they are outside healthcare premises, and telemedicine and remote healthcare monitoring applications have allowed healthcare systems to reach remote areas where providing healthcare services or setting up hospitals and dispensaries is a very tough task. Telemedicine practices empower doctors to remotely monitor patients' health and prescribe the best medicines or precautionary practices. But such healthcare applications suffer from many performance issues, such as critical data handling and slow data delivery. Healthcare-specific network data classification and flow prioritization methods can mitigate these problems by decongesting heavily loaded healthcare networks, smartly optimizing the data outcome at the dominating controller nodes to optimize healthcare data inflow volumes. The proposed model is expected to solve the problems associated with existing systems designed for healthcare data management.

Published by: Divisha Poonia, Satvir Bajwa
Research Area: Healthcare Monitoring

Organisation: CEC Landran, Mohali
Keywords:

29. Implementation of Value Stream Mapping Methodology in Bearing Industry

Lean manufacturing is the best way to reduce non-value-added cost; the term "lean" was coined by Krafcik in 1988 and popularized in the book The Machine That Changed the World. Lean means becoming as thin as possible to reduce cost, and lean manufacturing gives benefits without investment, through modification alone. This paper studies a bearing industry at Ahmedabad, Gujarat, aiming to reduce product lead time and fulfil customer demand; the plant was not fulfilling customer demand, which caused an increase in production lead time. In this study a medium-size spherical roller bearing is selected and value stream mapping techniques are applied: the flow of non-value-added cost is detected with the help of a current state map and analysed, and a future state map is then prepared for the proposed implementation. After applying all the proposed lean tools, the benefits are derived and customer demand is fulfilled with reduced lead time.

Published by: Mehul Mayatra, Mr. N.D. Chauhan, Mr. Parthiv Trivedi, Mr. M.N. Qureshi
Research Area: Lean Manufacturing

Organisation: SVBIT, Gandhinagar, Gujarat
Keywords:

30. Document Image Binarization Technique for Degraded Document Images by using Morphological Operators

Segmentation of badly degraded document images, discriminating text from the background, is a very challenging task. Many binarization techniques have been used to make document images robust, but in existing binarization techniques thresholding and filtering remain unsolved problems. The existing method performs edge-based segmentation using the Canny edge detector. Our proposed image binarization technique for degraded document images uses region-based segmentation. First, the RGB image is converted into a gray image, then image filtering is done with Wiener and Gaussian filters. Second, morphological operators are used to discriminate foreground from background, and Otsu and Sauvola thresholding are then applied for better results. Finally, the results of the proposed method are compared with the methods of the DIBCO 2011 dataset, with evaluation based on parameters such as F-measure, PSNR, DRD, and MPM.
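A minimal sketch of the filtering, thresholding, and morphology chain described (scikit-image; the input file name, window size, and structuring element are assumptions):

```python
import numpy as np
from skimage import io, filters, morphology, color

page = io.imread("degraded_page.png")          # hypothetical input scan
gray = color.rgb2gray(page)                    # RGB -> gray
smooth = filters.gaussian(gray, sigma=1.0)     # noise suppression

# Sauvola adapts the threshold to local mean/std, which suits stained pages.
thresh = filters.threshold_sauvola(smooth, window_size=25)
text_mask = smooth < thresh                    # ink is darker than background

# Morphological opening removes speckle smaller than the structuring element.
clean = morphology.opening(text_mask, morphology.square(2))
io.imsave("binarized.png", (~clean * 255).astype(np.uint8))
```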

Published by: Divya Jyoti, Bodh Raj, Kapil Kapoor, Arun Sharma
Research Area: Electronics and Communication

Organisation: Abhilashi Group of Institutions School of Pharmacy and Engineering & Technology, Chailchowk, Mandi, Himachal Pradesh
Keywords: Filtering, Morphological operators, Thresholding.

31. A Novel Hybrid Classification Technique for Blur Detection

Images, audio, and video are popular entertainment and communication services of the internet, but they sometimes suffer from many problems, and blur is one of them: a factor that degrades the quality of an image. In this paper we compare four different blur detection classifiers and introduce our proposed technique, a hybrid classifier. To verify the accuracy of the hybrid classifier we collected 1000 images from the internet and predicted results on them. The results and discussion show that the proposed classifier gives 96% accuracy, 10% more than the existing classifier (SVM).
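The paper does not publish its hybrid design, so the sketch below shows one generic way to hybridize classifiers, soft voting over per-image blur features; the synthetic features and base models are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-image blur features (e.g. Laplacian variance,
# gradient statistics); the real work extracts these from 1000 web images.
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

hybrid = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier()),
                ("knn", KNeighborsClassifier())],
    voting="soft")  # average predicted probabilities across the three models
hybrid.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, hybrid.predict(X_te)))
```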

Published by: Indu Kharb, Abhishek Sharma
Research Area: Electronics and Communication Engineering

Organisation: Maharishi Markandeshwar University, Mullana, Haryana
Keywords:

32. Study of Various Vehicle Detection Techniques – A Review

Vehicle detection is a method of detecting vehicles in image or video data; it is a branch of object detection in which the vehicle is the primary object, and it can be performed on various kinds of vehicle data obtained from horizontal, aerial, parking, or road surveillance cameras. In this paper, a vehicle detection and classification method is proposed that uses a hybrid deep neural network over image and video data obtained from aerial and satellite images to determine vehicle density. Non-negative matrix factorization (NMF) is used for feature extraction and compression for the purpose of vehicle detection and classification, and a second-level feature compression is performed to create a quick-response vehicle detection and classification system. The model is programmed to detect the maximum number of vehicles visible as full or partial objects in the image; vehicle density reporting, vehicle movement reporting, and upside and downside reporting for highways are performed to achieve the goal of vehicle detection and classification. The aim of this review is to produce a robust algorithm that detects and analyses vehicle features, such as whether a vehicle is heavy or light, in images and videos with high accuracy and precision.
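As a small illustration of NMF-based feature extraction and compression (random patches stand in for vehicle image crops; the component count is an assumption):

```python
import numpy as np
from sklearn.decomposition import NMF

# Rows are flattened 16x16 image patches standing in for vehicle crops.
rng = np.random.default_rng(0)
patches = rng.random((200, 256))

# NMF factorizes patches ~= W @ H with non-negative parts; the rows of W are
# the compressed per-patch features fed to the downstream classifier.
nmf = NMF(n_components=32, init="nndsvda", max_iter=400, random_state=0)
W = nmf.fit_transform(patches)
print(patches.shape, "->", W.shape)  # (200, 256) -> (200, 32)
```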

Published by: Geetika Garg, Amardeep Kaur
Research Area: Computer Science Engineering

Organisation: Punjabi University Regional Centre for Information Technology and Management, Mohali, Punjab
Keywords:

Review Paper

33. Review of Diabetes Detection by Machine Learning and Data Mining

The most common action in data mining is classification, which recognizes the patterns that describe the group to which an item belongs. It does this by examining existing items that have already been classified and inferring a set of rules. Similar to classification is clustering, the major difference being that no groups have been predefined. Prediction is the construction and use of a model to assess the class of an unlabeled object, or to assess the value or value range that a given object is likely to have. The next application is forecasting.

Published by: Preeti Verma, Inderpreet Kaur, Jaspreet Kaur
Research Area: Computer Science and Engineering

Organisation: Rayat Bahra Group of Institutes, Patiala, Punjab
Keywords: Data mining, Diabetes, Classifier, Precision, Machine learning.

Research Paper

34. Software Defect Prediction using Ensemble Learning Survey

Machine learning is a science that explores the building and study of algorithms that can learn from data. Machine learning is a union of statistics and artificial intelligence and is closely related to computational statistics; it takes decisions based on the qualities of the studied data, using statistics and adding more advanced artificial intelligence heuristics and algorithms.

Published by: Ramandeep Kaur, Er. Harpreet Kaur, Er. Jaspreet Kaur
Research Area: Computer Science and Engineering

Organisation: Rayat Bahra Group of Institutes, Patiala, Punjab
Keywords: Software defect, Machine learning, Decision making, Promise dataset.

Review Paper

35. Intrusion Detection System by Machine Learning-A Review

Efficient intrusion detection is needed as a defense of the network system, to detect attacks over the network. A feature selection and classification based intrusion detection model is presented: by applying feature selection, the dimensionality of the NSL-KDD data set is reduced, and by applying a machine learning approach, an intrusion detection model is built that finds attacks on the system and improves detection using the captured data. With the increasing number of new unseen attacks, the purpose of this model is to develop an intrusion detection system capable of detecting new and previously unseen attacks using the basic signatures and features of known attacks.
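A minimal sketch of the feature-selection-plus-classification chain (a synthetic matrix stands in for NSL-KDD's 41 features; k and the classifier are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

# Synthetic stand-in for NSL-KDD's 41 features with a normal/attack label.
X, y = make_classification(n_samples=5000, n_features=41, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ids = make_pipeline(
    SelectKBest(mutual_info_classif, k=15),   # keep 15 most informative features
    RandomForestClassifier(n_estimators=100, random_state=0))
ids.fit(X_tr, y_tr)
print(classification_report(y_te, ids.predict(X_te)))
```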

Published by: Aanchal Kumar, Er. Jaspreet Kaur, Er. Inderpreet Kaur
Research Area: Computer Science and Engineering

Organisation: Rayat Bahra Group of Institutes, Patiala, Punjab
Keywords: Intrusion Detection, Machine Learning.

36. Multi-level Authentication for Internet of Things to Establish Secure Healthcare Network

Cloud-based healthcare monitoring sensor networks (C-HMSNs) consist of a number of wireless nodes connected to each other over wireless links. As these wireless nodes connect to base stations, they are highly prone to hacking attacks, so cryptographic keys must be secured while the HMSN nodes are operating, for secure propagation of sensitive information. An efficient corporate key management and distribution scheme is required to maintain data security in HMSNs, but existing cryptographic key management and distribution techniques usually consume a large amount of energy and put large computational overheads on wireless sensor nodes. Cryptographic keys are used at different levels of HMSN communication: neighbor nodes, cluster heads, and base stations. In this paper we present an improved corporate key management architecture, called SECURE KEY EXCHANGE, adapted to HMSNs to enable comprehensive, trustworthy, user-verifiable, and cost-effective key management. It allows only authorized applications to use the keys, and an administrator can remotely issue authenticated commands and verify system output. In addition, it must work within HMSN constraints, using little of the nodes' computational power, so that the wireless sensor nodes remain energy efficient and the lifetime of the wireless sensor network increases.

Published by: Shilpa Kansal, Navpreet Kaur
Research Area: Ubiquitous Computing

Organisation: Punjabi University Regional Centre for Information Technology and Management, Mohali, Punjab
Keywords: Authentication, IoT security, Multi-level security in IoT, HMSN, Secure key exchange.

37. Hybrid Exemplar-Based Image Inpainting Algorithm using Non-Local Total Variation Model

Exemplar-based algorithms are a popular technique for image inpainting. They have two important phases: deciding the filling-in order and selecting good exemplars. Traditional exemplar-based algorithms search source regions for suitable patches to fill in the missing parts, but they face the problem of improper exemplar selection. To address this, we introduce a modified exemplar-based method using a non-local total variation model, which includes two main steps: patch priority and patch completion. Experimental results show the superiority of the proposed method over competitive methods; it may be used for the restoration of digital images of defective or damaged artifacts.

Published by: Preeti Gupta, Kuldip Pahwa
Research Area: Electronics and Communication Engineering

Organisation: Maharishi Markandeshwar University, Mullana, Haryana
Keywords: Exemplar, Non-local total variation model, Inpainting.

38. A Review on IEEE-754 Standard Floating Point Arithmetic Unit

Floating point operations in digital systems form an integral part of the design of many digital processors; the digital signal processor is the most important application of floating point operations. In recent years many approaches to floating point operations have been proposed, and their merits and demerits compared. For floating point operations the operands are first converted into the IEEE 754 format, in either single precision or double precision, and the arithmetic operations are performed on the significand part of the IEEE format. In this paper various floating point unit architectures are reviewed: some designers work on high-speed architectures that reduce the delay of the overall circuit, while others work on area utilization. A conclusion is then drawn from the various architectural analyses.
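For reference, a short standard-library sketch of how a single-precision IEEE 754 value splits into sign, exponent, and significand fields:

```python
import struct

def fields(x: float):
    """Split a float into IEEE 754 single-precision bit fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF        # biased by 127
    significand = bits & 0x7FFFFF         # 23 stored fraction bits
    return sign, exponent, significand

s, e, m = fields(-6.25)
print(s, e, m)                            # 1 129 4718592
# value = (-1)^s * 1.f * 2^(e-127): here (-1)^1 * 1.5625 * 2^2 = -6.25
```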

Published by: Monika Maan, Abhay Bindal
Research Area: Electronics and Communication Engineering

Organisation: Maharishi Markandeshwar University, Mullana, Haryana
Keywords: Floating Point Unit, FPGA, IEEE 754.

Review Paper

39. Noise Reduction: A Review

Most current speckle reduction systems use various filters, but due to certain drawbacks these traditional filters cannot remove speckle noise efficiently, so a hybrid technique, speckle noise reduction using an anisotropic filter based on wavelets, is used. In this paper, recovering the underlying uncorrupted image from a noisy image is identified as "denoising", and choosing the best method plays a very important role in getting the desired image. This paper therefore presents a study of speckle noise reduction using an anisotropic filter based on wavelets, and reviews the existing techniques that, due to their drawbacks, cannot remove speckle noise efficiently.

Published by: Bodh Raj, Arun Sharma, Kapil Kapoor, Divya Jyoti
Research Area: Computer Science and Engineering

Organisation: Abhilashi Group of Institutions School of Pharmacy and Engineering & Technology, Mandi, H.P.
Keywords:

Research Paper

40. A Novel Approach for the Reduction of Noise

Most current speckle reduction systems use various filters, but due to certain drawbacks these traditional filters cannot remove speckle noise efficiently, so a hybrid technique, speckle noise reduction using an anisotropic filter based on wavelets, is used. In this paper, recovering the underlying uncorrupted image from a noisy image is identified as "denoising", and choosing the best method plays a very important role in getting the desired image; this report therefore studies speckle noise reduction using an anisotropic filter based on wavelets. The adaptive filters, namely the Kuan, Lee, and Frost filters, are not able to remove speckle fully without losing edges, because they rely on local statistical data that depends on window size and shape. These existing filters are very sensitive to the window: a window that is too large causes over-smoothing, while a smaller window reduces the smoothing ability. To overcome these limitations, a new hybrid technique is proposed that combines wavelet-based denoising with an anisotropic diffusion filter. The wavelet transform depends on both the frequency and time domains, is frame-based, provides better resolution, and does not depend on window size; the anisotropic filter, in addition, is based on a partial differential equation approach.
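A minimal sketch of the wavelet half of the hybrid (PyWavelets soft thresholding; the synthetic speckled image, wavelet choice, and threshold rule are assumptions, and the anisotropic diffusion stage is omitted):

```python
import numpy as np
import pywt

# Synthetic speckled image: multiplicative noise on a smooth ramp.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(50, 200, 128), (128, 1))
noisy = clean * rng.gamma(shape=10, scale=0.1, size=clean.shape)

coeffs = pywt.wavedec2(noisy, "db4", level=3)
# Universal threshold from the finest diagonal band's noise estimate.
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(noisy.size))

denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(band, thr, mode="soft") for band in level)
    for level in coeffs[1:]]
denoised = pywt.waverec2(denoised_coeffs, "db4")
print(float(np.abs(denoised - clean).mean()))  # residual error vs clean ramp
```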

Published by: Bodh Raj, Arun Sharma, Kapil Kapoor, Divya Jyoti
Research Area: Computer Science and Engineering

Organisation: Abhilashi Group of Institutions School of Pharmacy and Engineering & Technology, Mandi, H.P.
Keywords:

41. Automated LED Text Recognition with Neural Network and PCA – A Review

Light-emitting diode dot-matrix text (LED text) is widely used for displaying information and announcements; LED displays are popular in a modernizing society for their versatile applications and many benefits. An existing paper used the k-nearest neighbor (k-NN) approach, a low-computation-complexity method for pattern recognition, to recognize character components as character classes, and Canny edge detection to detect character pixels appearing in the LED display area of scene images. The drawback of the existing system is that it cannot handle text lines with non-uniform color or with fewer than three characters, and it cannot detect continuous LED text. Our proposed system utilizes probabilistic neural network (PNN) classification to add robust classification and a higher level of adaptiveness. Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into values of linearly uncorrelated variables called principal components. Our proposed system will achieve a better detection and recognition rate than the existing system.

Published by: Sonu Rani, Navpreet Kaur
Research Area: Computer Science Engineering

Organisation: Punjabi University Regional Centre for Information Technology and Management, Mohali, Punjab
Keywords: Neural network, PCA, Images, Grey scale.

42. Complex Key Amalgamation Method for Secure Authentication (CKAM-SA) for 4G/LTE Networks

It is expected that a range of security threats will develop in 4G wireless due to various factors, including the departure from proprietary operating systems for handheld devices to open and standardized operating systems, and the open nature of the network architecture and protocols (IP-based). With this move to open protocols and standards, 4G wireless networks are now susceptible to the computer attack techniques present on the Internet, and will be increasingly vulnerable to a range of security attacks, including for example malware, trojans, and viruses. Apart from end-user hardware posing conventional security threats, it is expected that new trends such as SPIT (SPAM over VoIP) will also become a security concern in 4G LTE and WiMAX. Other VoIP-related security threats are also conceivable, for example SIP registration hijacking, where the IP address of the attacker is built into the packet header, thereby overwriting the correct IP address. In the proposed model, the major focus is a robust and secure authentication key mechanism for 4G/LTE, adding a higher level of security to 4G/LTE networks: a complex key mechanism has been designed to generate complex keys for the 4G/LTE network channels. The experimental results justify the performance of the proposed model in terms of time complexity, uniqueness of the keys, etc.

Published by: Zinnia
Research Area: Network Security

Organisation: Gurukul Vidyapeeth, Banur, Punjab
Keywords: AKA, Lightweight key management, LTE authentication, Multi-Factor authentication.

43. Implementation of GRP Routing Protocol in MANET

Mobile Ad hoc Networks (MANETs) are commonly used all around the world because their nodes can communicate with each other without any fixed network. A MANET can take decisions on its own, i.e., it operates autonomously, and it is generally known for being infrastructure-less. The bridges in the network are generally known as base stations. In this paper we simulate the GRP routing protocol to evaluate network performance in a MANET.

Published by: Vipin Verma, Saurabh Sharma
Research Area: Computer Science Engineering

Organisation: Sri Sai University, Palampur (H.P.)
Keywords: MANET, Routing Protocol, GRP.

44. On the Selection of Optimum Topology for QoS Aware ZigBee-WiMAX based Healthcare Monitoring System

In this work I have proposed different network architectures in OPNET consisting of combinations of ZigBee and WiMAX topologies. ZigBee is used to sense data from the human body, whereas WiMAX serves as a backbone to deliver the data to a distant location over microwave links. The need for using WiMAX with ZigBee arises because a ZigBee network's coverage area is limited to a few meters. This paper provides the layout architecture of a healthcare monitoring system developed using different combinations of WiMAX and ZigBee topologies in OPNET. The successful implementation of a ZigBee-WiMAX based healthcare network depends upon the performance of the different proposed networks. The performance comparison covers six ZigBee-WiMAX topology combinations: ZigBee mesh with WiMAX mesh, ZigBee star with WiMAX mesh, ZigBee tree with WiMAX mesh, ZigBee mesh with WiMAX point-to-multipoint (P2MP), ZigBee star with WiMAX P2MP, and ZigBee tree with WiMAX P2MP. Through simulations performed in OPNET, QoS parameters such as throughput, load, and delay have been evaluated to assess the performance of the proposed system and to select the best-suited topology combination among the proposed network architectures.

Published by: Karanvir Singh, Jyoteesh Malhotra
Research Area: Network and Communications

Organisation: Guru Nanak Dev University Regional Campus, Jalandhar, Punjab
Keywords: OPNET; ZigBee; WiMAX; Microwave; Coverage; Healthcare; Monitoring; Mesh; Star; Tree; P2MP.

45. A Region based Offloading Mechanism in Mobile Cloud Computing Environment

Cloud computing permits the end user to access required software or hardware on demand, which reduces the cost of installation and maintenance. Mobile Cloud Computing (MCC) was introduced to improve the end user's experience by providing services at their best. The development of cloud computing and virtualization techniques enables smartphones to overcome their resource limitations by offloading computation, transferring several parts of an application to powerful cloud servers for execution. The proposed system is based on the user's movement path: it estimates the user's region in order to finish the process. The proposed system will reduce response time as well as improve load balancing.
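
Read charitably, the region-based decision could look like the hypothetical sketch below: offload to the server of the region the user is predicted to occupy when the task finishes, rather than the region where it was submitted. The Region type, the uniform dwell-time assumption, and all names are illustrative:

```python
# Sketch: choose an offload target along the user's predicted path.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    server: str  # cloudlet/server responsible for this region

def predict_region(path: list, task_seconds: float,
                   secs_per_region: float) -> Region:
    # Assume a uniform dwell time per region along the user's known path.
    hops = int(task_seconds // secs_per_region)
    return path[min(hops, len(path) - 1)]

def choose_offload_target(path, task_seconds, secs_per_region=30.0):
    region = predict_region(path, task_seconds, secs_per_region)
    return region.server  # results are delivered where the user will be

# Usage: a user moving A -> B -> C with a 70 s task lands in region C.
# path = [Region("A", "srvA"), Region("B", "srvB"), Region("C", "srvC")]
# print(choose_offload_target(path, 70))  # -> "srvC"
```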

Published by: Arshdeep Singh, Neena Madan
Research Area: Mobile Cloud Computing

Organisation: Guru Nanak Dev University Regional Campus, Jalandhar, Punjab
Keywords: Mobile-Cloud Computing; MCC; Offloading; Smart Mobile Cloud.

46. Efficient Scheduling by Genetic Algorithm and Simulated Annealing

Multiprocessing is the ability of a system to support more than one processor and to allocate tasks between them. The main advantage of using a multiprocessor system is getting more work done in a shorter period of time. To improve CPU efficiency, we perform scheduling using genetic algorithms. Genetic algorithms are powerful and widely applicable stochastic search and optimization methods based on the concepts of natural selection and natural evolution. Simulated annealing is a generic probabilistic metaheuristic for the global optimization problem of finding a good approximation to the global optimum of a given function. In this paper, an efficient genetic algorithm and simulated annealing have been proposed for solving the CPU scheduling problem. The operators used to implement the genetic algorithm are real-value encoding for encoding, the roulette-wheel method for selection, the uniform crossover operator for crossover, and interchange for mutation.
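
A compact sketch of such a GA for assigning tasks to processors, using exactly the operators named above (roulette-wheel selection, uniform crossover, interchange mutation), is given below; the fitness function (inverse makespan), population size, and rates are illustrative assumptions rather than the paper's settings:

```python
# Sketch: GA for task-to-processor scheduling, minimizing makespan.
import random

def makespan(assign, burst, m):
    loads = [0.0] * m
    for task, proc in enumerate(assign):
        loads[proc] += burst[task]
    return max(loads)  # completion time of the busiest processor

def roulette(pop, fits):
    r, acc = random.uniform(0, sum(fits)), 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

def uniform_crossover(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def interchange_mutation(ind):
    i, j = random.sample(range(len(ind)), 2)
    ind[i], ind[j] = ind[j], ind[i]  # swap two genes

def ga_schedule(burst, m, pop_size=40, gens=200, pm=0.1):
    n = len(burst)
    pop = [[random.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [1.0 / (1.0 + makespan(ind, burst, m)) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            child = uniform_crossover(roulette(pop, fits), roulette(pop, fits))
            if random.random() < pm:
                interchange_mutation(child)
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda ind: makespan(ind, burst, m))

# Usage: best = ga_schedule(burst=[5, 3, 8, 2, 7, 4], m=2)
```

Simulated annealing would plug into the same representation by perturbing a single assignment with the interchange move and accepting worse makespans with a temperature-dependent probability.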

Published by: Pooja Nehra, Mr. Sunil Ahuja
Research Area: Computer Science and Engineering

Organisation: DIET, Karnal, Haryana
Keywords: Multiprocessor System, Processes, CPU Scheduling, Heuristic Methods.

47. Trusted Key Management with RSA based Security Policy for MANETS

A mobile ad hoc network (MANET) is a wireless communication network which does not rely on any centralized management or pre-existing infrastructure. Usually, several key-management authorities are distributed over the network, each holding a periodically updated share of the secret key. Many efforts have therefore been made to adapt the key-management authority's tasks to the dynamic environment of MANETs and to distribute those tasks among MANET nodes. At present, various cryptographic techniques are deployed to meet ever-changing needs, which compels us to devise a unique security mechanism for MANETs, enabling individual and corporate entities to protect their data transmission against illegal intrusion. Cryptographic techniques can be based on symmetric-key cryptography, asymmetric-key cryptography, or hash functions. A symmetric cryptosystem requires a common shared secret key between two communicating nodes, whereas an asymmetric cryptosystem maintains a unique key pair between any two communicating nodes (peers). An asymmetric cryptosystem is more efficient for a given task-oriented key-utilization process. In this mechanism, the private key must be kept secret by one entity, while the authenticity of the corresponding public key for the same entity must somehow be guaranteed by a trusted third party. In this paper, a novel mutual authentication and key-management (agreement) protocol has been developed for one-hop communication in mobile ad hoc networks. The protocol has several salient features, including mutual authentication, confidentiality, integrity, and key agreement, and it utilizes the RSA signature generation and verification algorithm.
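
Since the protocol messages are not reproduced in the abstract, the sketch below only illustrates the RSA signature-based challenge-response building block between one-hop neighbours, using the `cryptography` package; certification of public keys by the trusted third party is assumed rather than shown:

```python
# Sketch: signed-nonce challenge-response between two MANET neighbours.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

class Node:
    def __init__(self):
        self._key = rsa.generate_private_key(public_exponent=65537,
                                             key_size=2048)
        self.public_key = self._key.public_key()  # TTP-certified (assumed)

    def challenge(self) -> bytes:
        return os.urandom(32)  # fresh nonce prevents replay

    def respond(self, nonce: bytes) -> bytes:
        return self._key.sign(nonce, PSS, hashes.SHA256())

    @staticmethod
    def verify(peer_public_key, nonce: bytes, signature: bytes) -> bool:
        try:
            peer_public_key.verify(signature, nonce, PSS, hashes.SHA256())
            return True
        except InvalidSignature:
            return False

# Mutual authentication: run the exchange once in each direction.
# a, b = Node(), Node()
# nonce = a.challenge()
# assert Node.verify(b.public_key, nonce, b.respond(nonce))
```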

Published by: Vandana Arora, Sunil Ahuja
Research Area: MANET

Organisation: D.V.I.E.T, Karnal, Haryana
Keywords: MANETs, Protocols, Algorithm

48. A Novel Pre-Shared Information Defense Mechanism for Spoofed IP Attack in SDN – A Review

Software-defined networks are used in many scenarios where ordinary or traditional network management becomes a real problem. Such networks are defined and managed through the SDN platform, which treats the network as programmable rather than merely configurable. User legitimacy is a big issue on cloud platforms, and assuring it is quite important to protect them from several types of attack. User-legitimacy assurance must be performed at two events: pre-setup and post-setup. Existing models incorporate authentication only in the post-setup phase, leaving the pre-setup phase immature and open to attack. In this paper, we propose a model for cloud security that assures user legitimacy during the pre-setup phase using pre-shared information in the form of an RUID (rigid user ID), which is provided to the user during registration. The RUID adds a new layer of security by mitigating the threat of user session hijacking, making the cloud infrastructure highly secure in comparison with existing models.
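
A hypothetical sketch of the pre-setup RUID check follows; no wire format is given in the abstract, so here the controller simply issues the RUID at registration and later challenges the client to prove possession of it with an HMAC over a fresh nonce, so the RUID itself never crosses the network:

```python
# Sketch: pre-setup legitimacy check with a pre-shared RUID.
import hmac, hashlib, os

class Registrar:
    def __init__(self):
        self._ruids = {}

    def register(self, user: str) -> bytes:
        ruid = os.urandom(32)          # rigid user ID, shared out-of-band
        self._ruids[user] = ruid
        return ruid

    def pre_setup_challenge(self) -> bytes:
        return os.urandom(16)          # fresh nonce per setup attempt

    def verify(self, user: str, nonce: bytes, proof: bytes) -> bool:
        expected = hmac.new(self._ruids[user], nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)  # constant-time compare

def client_proof(ruid: bytes, nonce: bytes) -> bytes:
    return hmac.new(ruid, nonce, hashlib.sha256).digest()

# Usage:
# reg = Registrar(); ruid = reg.register("alice")
# nonce = reg.pre_setup_challenge()
# assert reg.verify("alice", nonce, client_proof(ruid, nonce))
```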

Published by: Manisha Lalotra, Meenakashi Sharma, Gurjeet Kaur
Research Area: Software Defined Networking

Organisation: S.S.C.E.T, Badhani, Amritsar
Keywords: Defence Mechanism, Spoofed IP Attack, SDN

49. Optimise the Gain of Optical Signal by SOA with Saturated ASE and Unsaturated ASE

Optical amplifiers are essential components in long-haul fiber-optic systems. An amplifier is an electronic device that can increase the power of a signal; an optical amplifier is effectively the opposite of an attenuator, in that the amplifier provides gain while the attenuator provides loss. When a signal travels through an optical fiber medium, it suffers various losses such as fiber losses, attenuation losses, and fiber-splice losses. To reduce these losses we use the semiconductor optical amplifier (SOA) method with saturated and unsaturated amplified spontaneous emission (ASE), which reduces the phase shift and recovers the original signal.
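
For orientation, the textbook gain-saturation relation for an SOA, G = G0 * exp(-(G - 1) * Pin / Psat), can be solved numerically as below; this is the standard model rather than the paper's simulation, and the gain and power values are illustrative:

```python
# Sketch: fixed-point solution of the SOA gain-saturation equation.
import math

def saturated_gain(g0: float, p_in_mw: float, p_sat_mw: float,
                   tol: float = 1e-9, max_iter: int = 1000) -> float:
    g = g0  # start from the unsaturated (small-signal) gain
    for _ in range(max_iter):
        g_next = g0 * math.exp(-(g - 1.0) * p_in_mw / p_sat_mw)
        if abs(g_next - g) < tol:
            return g_next
        g = 0.5 * (g + g_next)  # damped update keeps the iteration stable
    return g

# Example: a 30 dB small-signal gain (G0 = 1000) compresses as the input
# power approaches the saturation power.
# for p in (0.001, 0.01, 0.1):
#     print(p, 10 * math.log10(saturated_gain(1000.0, p, 10.0)), "dB")
```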

Published by: Shilpa Thakur, Er. Vivek Gupta
Research Area: Electronics and Communication Engineering

Organisation: Rayat Bahra University, Mohali
Keywords: Optical Fiber Communication, Optical Amplifiers, SOA, Self Phase Modulation, Amplified Spontaneous Emission.

50. A Novel Self Organizing Clustering Scheme for Clusters Setup

Wireless Sensor Networks (WSNs) consist of nodes with limited power deployed in an area of interest. Nodes cooperate to collect, transmit, and forward data to a base station, and in WSNs clustering and scheduling techniques ensure that data are collected in an energy-efficient manner. In this work we have reviewed many papers relating to clustering and scheduling in sensor networks; taking the most recent as the base paper, we suggest modifications to its work in this report. This proposal gives a basic description of wireless sensor networks and their importance for energy efficiency, and briefly describes the most famous protocol, LEACH, and its improved versions. We propose a novel self-organizing clustering scheme that considers real-time parameters when setting up the clusters for data collection. Unlike several proposed algorithms, this scheme re-clusters the network only when a cluster head (CH) falls below a threshold energy level, since repeated unnecessary clustering in every round depletes the network's energy more quickly. We have also introduced heterogeneity in the proposed work; by virtue of heterogeneity in terms of energy, the lifetime of the network can be extended. An algorithm is functional if the area of interest is covered by active nodes, and the period for which the network is functional is termed the persistent period in our work. Simulation results show that the proposed scheme is comparatively more energy efficient, scalable, and robust, and has a longer persistent period. The later part of the proposal gives the advantages and disadvantages of these protocols.
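
An illustrative sketch of the core rule, re-clustering only when a cluster head's residual energy drops below the threshold rather than in every round, is given below; node counts, energy costs, and the CH-election heuristic are assumptions, not the paper's parameters:

```python
# Sketch: threshold-triggered re-clustering instead of per-round re-clustering.
from dataclasses import dataclass

@dataclass
class SensorNode:
    node_id: int
    energy: float

def elect_cluster_heads(nodes, n_heads):
    # Favour high-residual-energy nodes as CHs (one common heuristic).
    alive = [n for n in nodes if n.energy > 0]
    return sorted(alive, key=lambda n: n.energy, reverse=True)[:n_heads]

def run_rounds(nodes, n_heads=5, ch_threshold=0.2, rounds=100,
               ch_cost=0.02, member_cost=0.005):
    heads = elect_cluster_heads(nodes, n_heads)
    reclusterings = 0
    for _ in range(rounds):
        # Re-cluster only if some current CH fell below the threshold.
        if any(h.energy < ch_threshold for h in heads):
            heads = elect_cluster_heads(nodes, n_heads)
            reclusterings += 1
        for n in nodes:
            if n.energy <= 0:
                continue
            n.energy -= ch_cost if n in heads else member_cost
    return reclusterings

# Usage:
# import random
# nodes = [SensorNode(i, random.uniform(0.5, 1.0)) for i in range(50)]
# print(run_rounds(nodes), "re-clustering events over 100 rounds")
```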

Published by: Sashi, Pooja Dhankar
Research Area: Network and Communication

Organisation: CBS Group of Institution, Jhajjar, Haryana
Keywords: WSN, LEACH Protocol, Optimization Technique, Distributed Protocol, Network Architecture

51. Wireless Sensor Network: A Review

Wireless Sensor Networks (WSNs) consist of nodes with limited power deployed in an area of interest. Nodes cooperate to collect, transmit, and forward data to a base station, and in WSNs clustering and scheduling techniques ensure that data are collected in an energy-efficient manner. In this work we have reviewed many papers relating to clustering and scheduling in sensor networks; taking the most recent as the base paper, we suggest modifications to its work in this report. This review gives a basic description of wireless sensor networks and their importance for energy efficiency, and briefly describes the most famous protocol, LEACH, and its improved versions.

Published by: Sashi, Pooja Dhankar
Research Area: Network and Communication

Organisation: CBS Group of Institution, Jhajjar, Haryana
Keywords: WSN, LEACH Protocol, Optimization Technique.

52. Breast Cancer: Classification of Breast Masses Mammograms using Artificial Neural Network and Support Vector Machine

This paper presents the diagnosis of breast cancer using an artificial neural network (ANN) and a support vector machine (SVM). To deal with the different kinds of abnormalities causing cancer, this report covers the modalities that help in detecting cancer as well as different methods of feature extraction. Such modalities include mammography, ultrasound, and MRI [1]; currently, electrical impedance and nuclear medicine are also widely used for diagnosis. These modalities are based on image processing, i.e., abnormalities are identified by reading and retrieving information from images; this research, however, is based on mammogram images. Before retrieving information one should know about the kinds of abnormality, such as microcalcification, masses, architectural distortion, asymmetry, and breast density [2]. The abnormal part, the region of interest (ROI), is then extracted so that treatment can be targeted; various methods are used to extract the ROI, such as region growing, edge detection, and segmentation [3][4]. Feature extraction then produces many candidate features, to which feature selection is applied to obtain higher accuracy. After going through the research done so far, I conclude that researchers use different features to determine the presence of cancer, but until now only a few have used the two features named shape and texture, which need a good classification technique [1]. The images are then classified into normal and abnormal classes. Statistical studies show an increasing trend in cancer every year; thus, the best and most effective way to cure cancer is the removal of the cancerous part.
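
As a sketch of the final classification stage only, the snippet below fits an SVM and an ANN (an MLP) on shape and texture descriptors and compares them by cross-validated accuracy; the feature names and the synthetic stand-in data are assumptions, since the paper's mammogram dataset is not reproduced here:

```python
# Sketch: SVM vs. ANN on ROI shape/texture descriptors.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def evaluate(X, y):
    # X: rows of ROI descriptors, e.g. [circularity, compactness,
    # GLCM contrast, GLCM homogeneity]; y: 0 = normal, 1 = abnormal.
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    ann = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
    for name, model in (("SVM", svm), ("ANN", ann)):
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f} mean CV accuracy")

# Usage with synthetic stand-in data (real work would use mammogram ROIs):
# rng = np.random.default_rng(0)
# X = rng.normal(size=(200, 4)); y = (X[:, 0] + X[:, 2] > 0).astype(int)
# evaluate(X, y)
```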

Published by: Kamaldeep Kaur, Er. Pooja
Research Area: Digital Image Processing

Organisation: Patiala Institute of Engineering and Technology, Punjab
Keywords: Support Vector Machine, Mammogram, Breast Cancer, Machine Learning.