Wednesday, September 18, 2019

AI Digital Assistants Are Working for Knowledge Management


Black & Veatch, an employee-owned global leader in building critical human infrastructure in energy, water, telecommunications, and government services, is on a journey to leverage AI-assisted bots, called virtual experts, to better capture and interact with engineering knowledge and standards. The goal of this emerging effort is to experiment with ways to better capture knowledge and expertise within the company. Ultimately, the initiative should reduce the time required to locate desired information and create opportunities for continued innovation. Knowledge management is a difficult problem and has had challenges as a discipline. Click here for my take on KM problems in the past.





The Problem

Knowledge and standards were generally captured in written form, which led to an abundance of Microsoft Word documents that were difficult to search and a burden to continually refresh. Up to this point, access to content relied on best-practice document and folder organization and traditional search functions. In addition, it was difficult to obtain feedback on what knowledge professionals were seeking or whether they found it.

The Solution

Black & Veatch began working with AI technology by passing these engineering documents through a natural language processing scan to identify topics, which were then stored in a knowledge ontology. That ontology is leveraged in real-time chats, initially to answer specific questions for engineers working across a substantial number of projects. The company has a team of 30 digital assistants online today that are being rolled out for general use. The professionals who have been engaged so far are pleased with the results and optimistic about the impact of the technology. While there are no hard metrics, the comments have included positive developments such as reduced searching, better feedback on dated content, and engaged knowledge sharing. For the first time, Black & Veatch has visibility into the actual use and usefulness of its content, through dashboards built by monitoring traffic on the bots.
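For readers who want a concrete picture, here is a minimal sketch of the general pattern described above: topic extraction over documents, a topic-keyed ontology, and a question-answering virtual expert. It is illustrative only; the class and function names are hypothetical and this is not the actual exClone or Black & Veatch implementation.

```python
# Illustrative sketch only (not the actual virtual-expert pipeline):
# extract candidate topics from engineering documents, index them in a tiny
# ontology keyed by topic, and answer chat questions by topic overlap.
import re
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "of", "for", "and", "to", "in", "is", "what", "how"}

def extract_topics(text: str) -> set[str]:
    """Naive stand-in for an NLP topic scan: keep non-stopword terms."""
    return {t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOPWORDS}

class KnowledgeOntology:
    """Maps topics to the document passages that discuss them."""
    def __init__(self):
        self.topic_index: dict[str, list[str]] = defaultdict(list)

    def ingest(self, doc_id: str, passage: str) -> None:
        for topic in extract_topics(passage):
            self.topic_index[topic].append(f"{doc_id}: {passage}")

class VirtualExpert:
    """Answers a question by returning the passages sharing the most topics."""
    def __init__(self, ontology: KnowledgeOntology):
        self.ontology = ontology

    def ask(self, question: str) -> list[str]:
        hits: dict[str, int] = defaultdict(int)
        for topic in extract_topics(question):
            for passage in self.ontology.topic_index.get(topic, []):
                hits[passage] += 1
        return sorted(hits, key=hits.get, reverse=True)[:3]

# Example usage with a made-up standard
ontology = KnowledgeOntology()
ontology.ingest("STD-101", "Pipe supports shall be spaced per the seismic design standard.")
expert = VirtualExpert(ontology)
print(expert.ask("What is the spacing standard for pipe supports?"))
```

A production system would replace the keyword matching with real NLP topic extraction and a richer ontology, but the flow of document ingest, topic index, and chat lookup is the same shape.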


The Future


Black & Veatch expects to continue expanding its knowledge sharing to include more topics if the current pilot continues to succeed. In addition, the company expects the content to become more diverse, including voice, video, and image content, which could also be used for equipment installation and maintenance training in the future.

The case study was made possible by exClone Technology and Methods 








Thursday, September 12, 2019

The Unexpected Consequences of Big Data


Big Data is the unexpected resource bonanza of the current century.  Moore’s Law-driven advances in computing power, the rise of cheap storage, and advances in algorithm design have enabled the capture, storage, and processing of many types of data that were previously unavailable to computing systems.  Documents, email, text messages, audio files, and images can now be transformed into usable digital formats for analysis systems, especially artificial intelligence.  AI systems can scan massive amounts of data and find both patterns and anomalies that were previously unthinkable, and do so in a timeframe that was unimaginable.  While most uses of Big Data have been coupled with AI/machine learning algorithms so companies can understand their customers' choices and improve their overall experience (think recommendation engines, chatbots, navigation apps, and digital assistants, among others), there are uses that are truly industry transforming.



In healthcare, big data and analytics are helping the industry move from a pay-for-service model that reimburses hospitals, physicians, and other caregivers after a service is performed to a new approach that reimburses them based on the outcome of the service, specifically the post-service health of the patient.  This approach is only possible if there is enough data to understand how the patient relates to the vast population of other patients who have had the same procedure or service and the expected outcome.  While a variety of other factors, such as the patient's cooperation with the treatment plan, are involved, those factors can be tracked and analyzed as well, providing a clear path to best practices and expected results based on evidence.  When this is combined with diagnostic improvements made possible by using AI to find patterns in blood and tissue samples or in radiology image scanning and anomaly detection, the physician's ability to determine the exact issue and suggest the best treatment pathway for a given situation is unparalleled.  The expected result for society in this example is a dramatic increase in efficiency and a lower cost of service. However, the same technologies that deliver these unparalleled benefits are also capable of providing the platform for a previously unimaginable set of fraudulent uses.

Examples of Issues

An interesting case of the unexpected occurred in the UK, where a group of criminals with very sophisticated knowledge of AI and big data scammed a number of organizations into transferring large sums of money to fraudulent accounts.  According to the BBC, the criminals captured a number of voice recordings from CEOs making investor calls.  They analyzed the recordings with an AI pattern-matching program to re-create words and parts of speech.  They then created a new recording in the CEO's voice directing the CFO to wire funds to a specific account on an emergency basis.  They sent the recording via voice mail to the CFO and even spoofed the CEO's number. Think of this as an extremely sophisticated fraudulent “robocall” attack using AI to replicate the voice of a known and trusted person sending explicit instructions requiring urgent compliance.  While this would normally fail due to organizational processes and security protections, given the right set of circumstances it can succeed.  Also, the level of knowledge, time, and money it takes to prepare and launch this type of attack limits how easily it can be replicated.  However, as more voice data becomes available and the AI algorithms and techniques become easier to use, we can expect these types of data and technology misuse to become more prevalent.  One can imagine a case where the voice of a loved one in distress is sent to a parent or grandparent asking for money to be sent immediately to a card or account.  Here the same techniques, applied over a large population, could have devastating results.

Similarly, facial recognition technology has the potential to identify and authenticate people using the sophisticated camera technology found in mobile phones and the other camera and video recording devices that have become pervasive in our world.  However, few people really understand the limitations of these devices when it comes to accurately identifying people under different environmental conditions.  For the best commercially available technology, the accuracy rate under sufficient lighting and in a “penned” or confined space is over 90%. This drops to around 65% if the lighting conditions change or the person is in a place like a mall or an outdoor arena.  Now add the significant error rate that occurs for people with skin tones that are closer in color to their accessories, as well as the inability to accurately recognize a person with a hat, scarf, sunglasses, or facial hair, and it is easy to see why communities such as San Francisco have banned its use in law enforcement activities.

Efforts to Consider

So, the question is: what can we do to get the benefits of AI and big data yet protect ourselves from the downside risks these technologies bring?  First, realize that, as the old adage goes, the Genie cannot be put back into the bottle.  We will need to live with and be prepared to manage the risks each of these technologies brings. In our practice, we work with clients to identify the critical data types, decision types, and actions/outcomes that require elevated levels of protection.  This is a comprehensive effort that results in a digital asset threat matrix with corresponding required actions.  However, every individual and organization, no matter the size, can start by:

  •       Understanding the types of data both you and your organization have in your possession (images/pictures, text, spreadsheets) and deciding what data you are willing to share and under what circumstances. This is particularly important for individual biometric data. Keep engaged with papers and events emerging on the topic of “The Data of You.”
  •       Developing specific rules for when you will take actions such as transferring money, who (perhaps multiple people) is able to authorize the transaction, and under what circumstances.
  •       Asking your analytics vendor or analytics team to show you the tested current and historic accuracy rates of any software used to make critical decisions (a minimal sketch of such an audit follows this list). Why would you allow something with a marginal accuracy rate to aid in decision-making, especially when dealing with something as important as law enforcement? This also applies to other analytical software, such as blood and urine testing services.
  •       Safeguarding your data in the context of use through tracking, mining, and random audits. There are usually trends and tells in the usage of your data, internally and externally.
  •       Staying abreast of activities and outcomes from “deep-fake” events and publications. The use of AI and algorithms to fool institutions and individuals is on the rise, leading to alternative realities.
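To make the accuracy-audit bullet concrete, here is a minimal sketch of what asking for tested accuracy per operating condition might look like. The sample data and the predict() function are made up for illustration; the point is simply to demand measured accuracy broken out by condition rather than a single headline number.

```python
# Minimal sketch of an accuracy audit. The predict() function stands in for
# whatever model the vendor or analytics team supplies; samples are invented.
from collections import defaultdict

def audit_accuracy(samples, predict):
    """samples: list of (features, true_label, condition) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, true_label, condition in samples:
        total[condition] += 1
        if predict(features) == true_label:
            correct[condition] += 1
    # Report accuracy separately for each operating condition.
    return {c: correct[c] / total[c] for c in total}

# Example with a made-up model and labeled samples
samples = [
    ({"img": "a"}, "alice", "controlled_lighting"),
    ({"img": "b"}, "bob", "controlled_lighting"),
    ({"img": "c"}, "carol", "outdoor_crowd"),
    ({"img": "d"}, "dan", "outdoor_crowd"),
]
predict = lambda f: {"a": "alice", "b": "bob", "c": "carol", "d": "eve"}[f["img"]]
print(audit_accuracy(samples, predict))
# {'controlled_lighting': 1.0, 'outdoor_crowd': 0.5}
```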


Net; Net: 

Lastly, on an individual level, remember it is your data.  Do not agree to share it with every app or information request, especially online lotteries or emails that tell you that you are a winner and just need to provide your contact information. These may be scams, and you do not want to end up a victim of the unintended consequences of big data and AI!





This post is a collaboration with Dr. Edward Peters 



Edward M.L. Peters, Ph.D. is an award-winning technology entrepreneur and executive. He is the founder and CEO of Data Discovery Sciences, an intelligent automation services firm located in Dallas, TX.   As an author and media commentator,  Dr. Peters is a frequent contributor on Fox Business Radio and has published articles in  The Financial Times, Forbes, IDB,  and  The Hill. Contact- epeters@datadiscoverysciences.com













Thursday, September 5, 2019

The Power & Speed of Workflow, RPA & Integration

This is a case study that shows the power of Low-code Workflow, RPA, and Integration for a large healthcare insurance company.  It's great to see a case study in which an organization enters a market swiftly at a reasonable cost. The power of this combination is illustrated in this video.


The Challenge:

When a large American health insurance company wanted to serve a new marketplace that became available after the Affordable Care Act (ACA) was enacted, it found itself tangled in a web of manual, cumbersome internal processes that needed to be digitized, automated, and integrated. The company, which wanted to grow this market in less than three months’ time, desperately needed help selling and provisioning insurance since its multi-step customer onboarding process involved several systems, including older mainframe technology. Penetrating the targeted market effectively was simply beyond reach without a digital overhaul. What this organization needed was someone to tackle a multi-pronged project: provide an improved customer experience and coordinate a long list of processes across disparate technologies. And fast, before missing out on open enrollment for 2019, which started Nov. 1, 2018.

Since the ACA went into effect in 2010, millions of new clients have flooded the insurance market, and many insurance companies have scrambled to revamp their systems to reach this steady stream of customers, especially since newer, digitally native insurance companies continue popping up to snag their share of the business. “Our focus was to create an easy, smooth experience for our customers and sales partners,” said an executive of the large, multimillion-dollar insurance company. “Equally important, we needed to catch up with the rest of the marketplace. We were lagging behind our competition, so we needed to move the needle quickly.” Like many companies undergoing digital transformation, the U.S. insurance provider was trying to leverage both legacy and newer systems, including Robotic Process Automation (RPA), but was having difficulty doing so. Therefore, it searched for a solution to help it collect, validate, and clean incoming customer data – 75 percent of which was inaccurate or incomplete – to ensure systems’ interoperability with limited manual intervention.

The Solution:

This organization picked an integrated solution that combined a low-code workflow capability with industrial-strength integration and robotic process automation (RPA). The platform orchestrated the data flow processes after the collection and validation of data through the solution’s customer-facing portal. Specifically, the platform delivered workflow automation with five different web service integrations, including the creation of documents, the collection of electronic signatures, and the initiation and monitoring of RPA.  “The platform enabled us to streamline, automate and coordinate processes through multiple mechanisms – not just web services – while removing the manual processing required for everything other than exceptions,” the insurance company executive said. “This allowed us to be open for business 24 hours a day, seven days a week.”
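As a rough illustration of the orchestration pattern described here (not the actual PMG platform API), the sketch below shows a workflow that validates incoming customer data, calls stand-ins for the document-generation and e-signature services, and then starts and monitors an RPA bot. All service names, endpoints, and fields are hypothetical.

```python
# Hedged sketch of the orchestration pattern, with placeholder services.
import time

def validate_customer(record: dict) -> dict:
    """Catch the kind of incomplete records the portal had to clean up."""
    required = ("name", "dob", "plan_id")
    missing = [f for f in required if not record.get(f)]
    if missing:
        raise ValueError(f"Return to portal for correction: missing {missing}")
    return record

def call_service(name: str, payload: dict) -> dict:
    """Placeholder for one of the web-service integrations."""
    print(f"calling {name} with {payload}")
    return {"status": "ok", "service": name}

def onboard(record: dict) -> None:
    record = validate_customer(record)
    call_service("document-generation", record)      # create enrollment documents
    call_service("e-signature", record)              # collect electronic signatures
    job = call_service("rpa-bot/start", record)      # hand off to the mainframe bot
    while call_service("rpa-bot/status", job)["status"] != "ok":
        time.sleep(5)                                # monitor the bot until it finishes
    call_service("policy-admin/activate", record)    # final activation step

onboard({"name": "Pat Example", "dob": "1980-01-01", "plan_id": "ACA-BRONZE"})
```

The design point is that the workflow engine owns the sequencing, retries, and exception routing, so humans only touch the records that fail validation or fall out of the automated path.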

The Results: 

The insurance provider was able to cut its two-to-three-week customer onboarding timeline in half, which has improved relations with insurance brokers and customers and enhanced its overall net promoter score. The platform’s orchestration allowed the company to offer a digital, self-service customer onboarding experience, which was implemented in about 10 weeks – a significantly shorter time than the five months the original solution was going to take. Furthermore, the cost for the platform to orchestrate this new process was one-tenth of the initial quote. Constantly looking for ways to improve, automate, and compete, the insurance company hopes its new processes continue to improve so it can reach an even wider market during the next open enrollment.

Net; Net:

In this case, necessity was the mother of invention. The challenge drove this organization to the powerful combination of Workflow, RPA, and integration. I expect to see more organizations moving to digital platforms of all types that offer this powerful combination. See a compelling infographic by clicking here.  I had a small role in creating this short and sharp video.

This solution was enabled by PMG https://www.pmg.net/ 

Tuesday, September 3, 2019

Why Knowledge Management Failed Spectacularly

Taking a look back, many have blamed the failure of knowledge management (KM) on the lack of a solid program backed by top management. While such soft issues are common factors in failures, there was one primary reason that KM failed en masse: the knowledge was organized around a taxonomy that was centrally controlled and unresponsive. Today there is a combination of new knowledge approaches that will make KM a reality and more likely to be backed by management.




The Problems with Taxonomies:

Taxonomies are rigid hierarchies that limit the kinds of relationships a topic can have to "parent-child," with minor exceptions for multiple inheritance.  This required an overseer who ended up being a bottleneck to organizing knowledge. It kept knowledge from adapting in real time and assumed someone had to manage knowledge acquisition. The taxonomy idea came from the classification of genus and species, where it was easier to classify kinds of living things and there was no pressure to complete the task. Limited taxonomies limit knowledge management, and the early efforts that leveraged them ended up in dead-end streets.



Opportunities with Ontologies:

Ontologies are easy to expand in that they support real-time change and can hold a multitude of knowledge relationships, creating a multitude of shapes. They can be reviewed later for accuracy and unnecessary redundancies. Flexible and fast ontologies combine well with AI in both learning and reasoning modes. Proven, general-use ontologies can be easily combined with specific ontologies to solve both general and specific problem domains. Technology and humans alike can follow ontology paths with ease. In fact, ontologies can support taxonomies within themselves.
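A small sketch may make the contrast concrete: a taxonomy permits only a parent-child relation, while an ontology records many typed relationships as subject-relation-object triples and can still embed a taxonomy through "is_a" edges. The topics and relations below are illustrative only.

```python
# Illustrative contrast between a taxonomy and an ontology.
from collections import defaultdict

# Taxonomy: every topic has exactly one parent.
taxonomy = {
    "centrifugal pump": "pump",
    "pump": "rotating equipment",
    "rotating equipment": "equipment",
}

# Ontology: arbitrary typed relationships, extendable at any time.
ontology = defaultdict(list)

def relate(subject: str, relation: str, obj: str) -> None:
    ontology[subject].append((relation, obj))

relate("centrifugal pump", "is_a", "pump")                    # the taxonomy fits inside
relate("centrifugal pump", "governed_by", "API 610")
relate("centrifugal pump", "maintained_with", "vibration analysis")
relate("API 610", "published_by", "American Petroleum Institute")

def neighbors(topic: str) -> list:
    """Follow every relationship from a topic, not just its single parent."""
    return ontology.get(topic, [])

print(neighbors("centrifugal pump"))
```

Adding a new relationship type is just another triple, which is why an ontology can adapt in real time where a centrally managed taxonomy cannot.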


Net; Net: 

KM in its early generations got data, information, knowledge, and wisdom structures all wrong. No wonder top management backed away from it, especially when it was failing early. Let's not throw out the baby with the bathwater; let's finally attack knowledge management with the help of AI in all its forms. Big Data is waiting on it, and so are a goodly number of business outcomes.