Many organizations are planning to exploit, or have already exploited, RFID to achieve greater automation, more efficient business processes, and better inventory visibility. For instance, after implementing an RFID system, Wal-Mart reportedly reduced out-of-stocks by 30 percent on average at the selected stores. While RFID provides promising benefits in many applications, there are issues that must be overcome before these benefits can be fully realized. One major hurdle is the problem of duplicate readings generated dynamically in very large data streams. Duplicate readings in RFID are a serious issue that requires an efficient solution [1]. While current RFID reader accuracy is improving, redundant data transmission within the network still occurs in RFID systems.
Factors that contribute to duplicate data generation include the unreliability of the readers and duplicate readings generated by adjacent readers. A reading is defined as a duplicate when it is repeated and delivers no new information to the system. Duplicate readings unnecessarily consume system resources and impose a traffic burden on the system. Because RFID-enabled applications primarily use RFID data to automate business processes, inaccurate and duplicate readings could misguide application users [2]. Therefore, RFID data must be processed to filter out duplicates before an application can use it. Although there are many approaches in the literature for filtering duplicate readings [3–6], most of the existing approaches focus on data-level filtering.
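To make the notion of a duplicate concrete, the following is a minimal sketch of time-window duplicate filtering over an RFID reading stream. It is illustrative only and is not the approach proposed in this paper: a reading of a tag is dropped if the same tag was already accepted within a given time window, regardless of which reader produced it. All names (`filter_duplicates`, the tuple layout, the window length) are assumptions for this example.

```python
def filter_duplicates(readings, window=5.0):
    """Filter an RFID stream of (tag_id, reader_id, timestamp) tuples,
    assumed to arrive in time order. A reading is a duplicate if the
    same tag was already accepted within the last `window` time units,
    so it delivers no new information to the application."""
    last_seen = {}  # tag_id -> timestamp of the last accepted reading
    for tag_id, reader_id, ts in readings:
        if tag_id not in last_seen or ts - last_seen[tag_id] > window:
            last_seen[tag_id] = ts
            yield (tag_id, reader_id, ts)
        # otherwise: repeated sighting within the window, drop it

stream = [
    ("tag1", "R1", 0.0),  # first sighting -> keep
    ("tag1", "R2", 1.0),  # same tag, adjacent reader, inside window -> drop
    ("tag2", "R1", 2.0),  # new tag -> keep
    ("tag1", "R1", 7.0),  # outside window -> keep (new information)
]
print(list(filter_duplicates(stream)))
```

Note that the drop of the reading from reader `R2` shows how adjacent readers observing the same tag produce duplicates, which is exactly the multi-reader redundancy discussed below.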
They also tend to have high computation costs and do little to reduce the transmission overhead. Moreover, they tend to focus on single-reader RFID systems, whereas we focus on a system with multiple readers. Many applications use multiple readers for different purposes, including increasing reading coverage [7], reading objects passing through different doors at a warehouse [8], and supply chain management [9]. However, the use of multiple readers creates duplicate readings when readers perform readings on the same tagged object. In this paper, we propose an approach to detect and remove duplicate readings at the reader level. The contribution of this paper can be summarized as follows: (a) an approach to detect and remove duplicate readings from an RFID data stream, and (b) an experimental analysis of the proposed approach and a comparison with the sliding window-based approach [3] and several other approaches proposed in [5,10]. The results show that our proposed approach delivers superior performance compared to the other baseline approaches. The proposed approach extends our previous work [6] in two ways.