Organizations around the world are looking at new ways to combat fraud, as they recognize that fraud is no longer merely a cost of doing business; it is undermining their ability to do business at all. The Association of Certified Fraud Examiners (ACFE) estimates that organizations lose $3.5 trillion in revenue to fraud each year.
To combat the fraudsters, more organizations are thinking big: employing new approaches built on Big Data to convert the volumes of information now available to them into useful insight, and that insight into real action.
With the explosion of social networks over the past few years, Big Data has become a hot topic for business. But it's important to note that Big Data is much more than social media. It is structured and unstructured data residing in databases across multiple geographies. It is text on Web-based forms and in PDFs, email, and documents of every other kind. Big Data has opened the door to a world of new capabilities that, when deployed appropriately, can help organizations tackle key business challenges, including fraud.
But how and where does an organization begin? Based on the experience of hundreds of businesses, from banks to insurance companies to government tax agencies, five best practices for using Big Data to fight fraud have emerged: Establish a Flexible and Open Central Data Environment; Identify the Knowns and Unknowns; A Picture is Worth 1,000 Words; Not Every Anomaly is Fraud; and Develop Policies and Best Practices.
Establish a Flexible and Open Central Data Environment.
Data can only be valuable if the right people have access to it at the right time. Many financial institutions have found that the most effective way to rapidly detect and prevent fraud is to create a central data analysis environment and apply advanced statistical analysis, entity resolution, and link analysis to spot trends, patterns, and anomalies that could be potential fraud indicators.
With evolving threats, time is of the essence. If data must be extracted from numerous silos across an enterprise and manually compared by an analyst, time becomes a major inhibitor. There is also a high probability that key connections will be missed, whether through human error or through inadequate entity resolution, given the high volume and the disparate types and formats of the data.
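To make the idea concrete, here is a minimal sketch of the two steps named above: entity resolution (recognizing that superficially different records describe the same person) followed by a simple form of link analysis (flagging multiple claims that resolve to one identity). The records, field names, and matching rule are all hypothetical illustrations, not any particular vendor's method; real systems use far more sophisticated probabilistic matching.

```python
from collections import defaultdict

# Hypothetical claim records drawn from two siloed systems. The same
# person appears twice with different name and phone formatting, which
# is exactly the situation entity resolution must handle.
claims = [
    {"claim_id": "C1", "name": "John Q. Smith", "phone": "555-0101"},
    {"claim_id": "C2", "name": "SMITH, JOHN Q", "phone": "(555) 0101"},
    {"claim_id": "C3", "name": "Ann Lee",       "phone": "555-0199"},
]

def resolve_entity(record):
    """Reduce a record to a normalized key so that variant spellings
    and formats of the same identity collapse to one entity."""
    name = "".join(sorted(
        record["name"].replace(",", " ").replace(".", "").upper().split()
    ))
    phone = "".join(ch for ch in record["phone"] if ch.isdigit())
    return (name, phone)

# Link analysis: group claims by resolved entity. An identity linked to
# more than one claim is an anomaly worth routing to an analyst.
links = defaultdict(list)
for claim in claims:
    links[resolve_entity(claim)].append(claim["claim_id"])

flagged = {key: ids for key, ids in links.items() if len(ids) > 1}
# Here C1 and C2 resolve to the same entity and are flagged together,
# even though the raw records would not match on a naive string compare.
```

Note that this grouping happens in a single pass over a single pool of data, which is the practical argument for the central environment: the same comparison across silos would require an analyst to extract, reformat, and reconcile each source by hand.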
By centralizing all their data--which may previously have been stored in multiple locations across several...