Adapting the "I" to AI
Apr 11, 2019
Garbage in, garbage out
Modern organizations’ data management and analytics solutions increasingly lean on artificial intelligence (AI) to decipher the ever-growing volume of data at their disposal. At the same time, business data – along with its drivers, side effects, and derived products – has become a key input for predicting market trends, managing operational risk, and evaluating product performance. AI solutions outperform even the most capable human analysts at forecasting, pattern recognition, classification, and other inferential or rule-based tasks.
However, AI’s effectiveness can be nullified by lackluster data practices. Without solid data management or a well-defined business need for AI, layering machine learning or neural networks between a data warehouse and decision makers is a headache, at best. While examples abound of AI products gone ‘bad’, the scenarios and questions below illustrate typical problems an organization might face.
|An audit unit migrates from manual case management and tracking to a sleek, off-the-shelf AI and reporting solution for case selection. However, the mainframe extracts that drive the old, manual method still exist as hidden inputs to the new product.|
- Scenario 1: A simple change in data entry guidelines alters data within the mainframe extracts, causing the audit selection models to misinterpret manually keyed cases as audit targets. How does the organization configure the AI and ensure consistent, sanitized inputs to its models?
- Scenario 2: The new solution is eventually phased out in favor of yet another AI product, but all staff familiar with the inputs from the mainframe extracts are long retired. How does the organization manage cases between products or ensure the replacement product is accurate?
- Scenario 3: A constituent files a lawsuit against the organization, claiming racial profiling influenced her selection for a recent audit. How does the organization ensure an ‘explainable’ path of reasoning behind its audit selection models?
These questions highlight two key takeaways:
First, AI does not solve business problems – it simply makes the problems more manageable. A strong AI implementation fits the iceberg cliché: the visible ‘tip’ of AI – the machine learning, neural networks, deep learning, etc. – stays above water only because of the significant data foundation and body of work beneath it. A well-implemented AI solution generates value, reduces cost, or mitigates risk by shrinking the volume of business activity needing a human touch. A critical part of this is cleaning, automating, and logging the data pipelines feeding into the solution’s models – ensuring a strong foundation for a streamlined and explainable solution.
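A minimal sketch of that pipeline hygiene might look like the following. The schema, field names, and rejection rules are all hypothetical, invented for illustration; the point is that every record entering a model is either validated or rejected with a logged reason, so bad mainframe extracts surface in the logs rather than in the model's output.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

# Hypothetical schema: each case record needs a non-empty ID
# and a recognized data-entry method.
VALID_ENTRY_METHODS = {"electronic", "manual"}

def sanitize_cases(records):
    """Drop malformed records, logging every rejection for later review."""
    clean = []
    for i, rec in enumerate(records):
        if not rec.get("case_id"):
            log.warning("record %d rejected: missing case_id", i)
            continue
        if rec.get("entry_method") not in VALID_ENTRY_METHODS:
            log.warning("record %d rejected: unknown entry_method %r",
                        i, rec.get("entry_method"))
            continue
        # Stamp each accepted record so downstream models can be audited.
        rec["ingested_at"] = datetime.now(timezone.utc).isoformat()
        clean.append(rec)
    log.info("kept %d of %d records", len(clean), len(records))
    return clean
```

A step like this sits between the raw extract and the model, so a change in data-entry guidelines shows up as a spike in logged rejections instead of a silent shift in audit selections.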
Second, AI is not a substitute for business understanding. A well-integrated AI solution should feel intuitive to users, with each conclusion traceable to a path of reasoning. ‘Explainable’ AI is most valuable when it supplements business understanding by identifying strengths or gaps in existing knowledge. In order to maintain an explainable AI solution, users must understand these knowledge gaps and be familiar with how the AI algorithms are addressing them.
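To make "traceable to a path of reasoning" concrete, here is a toy rule-based selector – the rules, weights, and threshold are invented for illustration, not drawn from any real audit system. Each decision returns the list of rules that fired, so a selection can be defended (or challenged) on its recorded reasoning rather than reconstructed after the fact.

```python
# Hypothetical audit-selection rules: (name, predicate, weight).
RULES = [
    ("large_adjustment", lambda c: c["adjustment"] > 10_000, 2),
    ("repeat_filer_flag", lambda c: c["prior_flags"] >= 2, 1),
    ("manual_entry", lambda c: c["entry_method"] == "manual", 1),
]

def score_case(case, threshold=2):
    """Score a case; return (selected, trail) where trail names each rule that fired."""
    trail, score = [], 0
    for name, predicate, weight in RULES:
        if predicate(case):
            score += weight
            trail.append(f"{name} (+{weight})")
    return score >= threshold, trail
```

For example, a case with a $15,000 adjustment keyed manually is selected, and its trail shows exactly why: `large_adjustment (+2)` and `manual_entry (+1)`. Real models are far less transparent than a rule table, which is precisely why logging the inputs and reasoning behind each conclusion matters.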
Prior to integrating AI into a business toolkit, technical leaders must understand their data and how those data can bring value or reduce risk for their business. A self-explanatory, self-learning tool driving rapid or complex business decisions drastically reduces operational overhead and improves business sustainability in the long run. Organizations poised to benefit from these tools often tap outside experts to lead the way. This is a natural method of jump-starting these solutions and ensuring a healthy business trajectory, as knowledgeable teams are required to keep up with industry standards and best practices. However, it is imperative that organizations absorb this expertise over time, ensuring the effectiveness of the implementation, the stability of the solution, and the explainability of the results.
At the end of the day, ‘AI’ is just the ‘A’; the ‘I’ comes from the supporting engineers, developers, and evaluators. To learn more about how ASR is driving the I in AI, visit our company’s blog.