Friday, October 4, 2019

AI & Big Data: a Lethal Combo

Big data, whether structured or unstructured, fast or slow, in one context or many, is a beast to manage. It is growing fast, fueled by the democratization of data and the IoT environment. Often organizations simply manage what they know they get results from and store the rest for future leverage. In fact, most organizations use less than 20% of their data, leaving the remaining 80%, and the insights it contains, outside their operational and decision-making processes. Imagine paying for a service every month but using only 20% of it and ignoring the other 80%! This is exactly what we are doing with data. Fortunately, there is hope, as this is where big data can start to rely on AI and engage in a “cycle of leverage”. Presently, the interaction between AI and big data is in the early stages, and organizations are discovering helpful methods, techniques, and technologies to achieve meaningful results. Typically these efforts are neither architected nor managed holistically. Our work has shown there is an emerging “Cycle of Big Data” that we would like to describe and share with you, highlighting where we see AI can help.

Big Data Cycle

The “Big Data Cycle” is the typical set of functional activities that surround the capture, storage, and consumption of big data. Big data is defined as a field that treats ways to manage, analyze, systematically extract information from, or otherwise deal with data sets that are too large and complex to be managed with traditional software. The “Cycle” is, in short, the process of leveraging big data into desired outcomes. Typically the cycle flows left to right, with iteration.

(Data → Trigger → Pattern → Context → Decision → Action → Outcome → Feedback → Adjustments)
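The left-to-right flow can be sketched as a simple function pipeline, with feedback implied by re-running the cycle on adjusted inputs. This is only an illustrative model, not any product's API: the stage implementations, the sensor-readings example, and the threshold of 90 are all invented for the sketch.

```python
from functools import reduce

def run_cycle(data, stages):
    """Left-to-right flow: each stage consumes the previous stage's output."""
    return reduce(lambda state, stage: stage(state), stages, data)

# Hypothetical stages for a stream of sensor readings.
def trigger(readings):
    # Data -> Trigger: keep only notable events (arbitrary threshold)
    return [r for r in readings if r > 90]

def pattern(events):
    # Trigger -> Pattern: summarize what the triggers look like
    return {"spikes": len(events), "peak": max(events, default=0)}

def decide(ctx):
    # Pattern/Context -> Decision: pick a response
    return "alert" if ctx["spikes"] >= 2 else "log"

def act(decision):
    # Decision -> Action -> Outcome: carry it out and record the result
    outcome = "dispatched" if decision == "alert" else "stored"
    return {"action": decision, "outcome": outcome}

result = run_cycle([72, 95, 88, 103], [trigger, pattern, decide, act])
# Two readings exceed 90, so the cycle ends in an "alert" action.
```

In a real deployment the feedback and adjustment steps would tune the thresholds and stage logic between passes; here they are left implicit to keep the shape of the cycle visible.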

Data Management

Data management is a process that includes acquiring, validating, storing, protecting, and processing the required data to ensure its accessibility, reliability, and timeliness for various users. Today this is a more complicated process due to the increasing speed of data (near real time) and the increased complexity of data resources (text, voice, images, and video). This situation has outstripped the processing capabilities of both humans and traditional computing systems.

AI can assist here in several ways, including enabling hyper-personalization through machine learning and profiles that learn and adapt. AI can also help extract knowledge from streams of data through NLP categorization and relationship capture. AI can watch static or in-motion images to find and manage related knowledge. Not only can AI recognize and learn by watching human, system, or machine interactions, it can do so almost instantaneously, whether at the edge, in the cloud, or across an IoT network. AI combined with other algorithms can help find “black swan” events that can be used to update strategies.
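The NLP categorization idea can be pictured with a deliberately tiny sketch. Real systems would use trained language models rather than keyword sets, but the stream-message-to-category shape is similar; the category names and keywords below are invented for illustration.

```python
# Hypothetical category vocabulary; a production system would learn this.
CATEGORIES = {
    "billing": {"invoice", "payment", "charge"},
    "outage":  {"down", "offline", "unreachable"},
}

def categorize(message):
    """Tag an incoming message with every category whose keywords it hits."""
    words = set(message.lower().split())
    hits = [cat for cat, keywords in CATEGORIES.items() if words & keywords]
    return hits or ["uncategorized"]

categorize("The payment portal is offline")  # matches both categories
```

A single message can land in multiple categories at once, which is the relationship-capture point made above: knowledge from a stream rarely fits one bucket.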

Pattern Management

Organizations need to keep their pulse on incoming signals and events to stay in tune with the current state of the world, industries, markets, customers, and other constituents while sifting out distracting noise. While savvy organizations employ strategic planning to actively look for specific patterns of threat and opportunity, most organizations are reactive, suffering at the whims of events. Both types of organizations should be continually looking for “patterns of interest” from which to make decisions or to initiate actions that are already defined and stored for execution.

AI can help by recognizing both expected and unexpected signals, events, and patterns, surfacing anomalies that might warrant attention. When combined with analytics, AI can learn and expose the potential for additional responses. AI also recognizes and learns adaptations for patterns, decision opportunities, and the need for further actions. In some cases, automation opportunities can be identified to deliver faster and higher-quality results.
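One common statistical starting point for surfacing “patterns of interest” is simple anomaly detection: flag events that deviate sharply from the recent norm. A minimal sketch follows, with an invented data set and an arbitrary z-score threshold that a real deployment would tune to its own noise levels:

```python
from statistics import mean, stdev

def anomalies(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [v for v in values if abs(v - mu) / sigma > threshold]

# A flat series with one spike: only the spike is flagged.
anomalies([10, 11, 9, 10, 12, 10, 11, 50])
```

Techniques like this catch the expected kind of deviation; the learning systems described above go further by adapting what “normal” means as the stream evolves.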

Context Management

The understanding of data can often change with the context from which it is viewed and the outcome for which it can be leveraged. The “subject” of data can mean something slightly or significantly different in one context versus another.
Understanding the context is as important as understanding the data itself. Information about the context and the interaction of its contents (aka its “worlds”) is essential to capture and maintain. This allows for a classification of data in context, and especially in relation to other contexts, as big data sources may contain many contexts and relationships within them.

AI can assist the dynamic computing processes that use “subjects” of data in one context (industry, market, process, or application) to point to data resident in a separate context that contains the same subject. AI can learn the subtle differences and context-specific nuances to track the evolution of the data’s meaning across multiple contexts, whether they are “interacting” or not. This is particularly useful in understanding conversations and human interactions with NLP, as interpretation grids often differ.
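The idea that the same “subject” resolves differently per context can be pictured as a tiny lookup table, a stand-in for the richer context models a real system would learn. The subject and its meanings below are invented examples:

```python
# Hypothetical subject -> context -> meaning map.
MEANINGS = {
    "jaguar": {
        "automotive": "a British car marque",
        "wildlife":   "a large cat native to the Americas",
    },
}

def resolve(subject, context):
    """Look up what a subject means within a given context."""
    return MEANINGS.get(subject, {}).get(context, "unknown in this context")

resolve("jaguar", "wildlife")
```

The hard part AI addresses is not the lookup itself but discovering and maintaining these context-specific entries as meanings drift over time.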

Decision Management

Decision management (aka enterprise decision management, or EDM) covers all aspects of designing, building, and managing the automated decision-making systems that an organization uses to manage its decision-making processes, both internally and in interactions with outside parties such as customers, suppliers, vendors, and communities. The impact of decision management is felt in how organizations run their business in pursuit of efficiency and effectiveness. Organizations depend on descriptive, predictive, and prescriptive analytics leveraging big data to provide the fuel that drives this environment.

AI can play a crucial role in supercharging the use of knowledge and expertise in a continually evolving and changing world. AI can also help scale key resources by leveraging an ever-growing base of big data at the ever-increasing speed of business, while supporting today's operational requirements and meeting ever-growing user expectations. Specifically, increasing the use of AI in human interactions will be a significant contribution to improving customer experiences and speeding the resolution of customer issues. AI can also suggest where to look for decision opportunities, model decisions and their outcomes, and actively monitor performance against key performance indicators.

Action Management

Action management involves planning and organizing the desired proactive or reactive actions and work activities of all humans, processes, bots, applications, and devices employed by the organization. It includes managing, coordinating, and orchestrating tasks, developing project plans, monitoring performance, and achieving desired outcomes represented by goals in accordance with approved principles and agreed parameters. The logging of these actions also feeds the big data pools for further analysis and potential optimizations, or for increased freedom levels through goal adjustments.

AI can help by associating the proper actions with the decisions made in the previous steps. That may mean selecting an inventoried action, changing some of the rules or parameters of an inventoried action, or suggesting the creation of new actions not available in the current inventory. AI can be embedded in any of the steps or detailed tasks performed within the selected actions. AI can monitor the actions and report the outcomes to management. AI, along with other algorithms, can pre-test and suggest changed actions before deployment, helping ensure the desired outcome will be achieved.

Goal Management

Goal management is the process of defining and tracking goals to provide guidance and direction, help evaluate performance, and give feedback to all resources (humans, processes, applications, bots, and managers) for performance improvement. This also includes the “people-pleasing” and optimization arenas. As organizations move to implement increased employee empowerment, edge computing, and dynamic bots, the importance of self-directed goal attainment increases. New freedom levels that ratchet up autonomy include a heightened focus on goal attainment and monitoring.

AI can help guide autonomous humans, bots, process snippets, apps, and flexible infrastructures through the automatic adjustment of goals that take advantage of edge conditions or “just-in-time learning” within the guardrails of constraints and rules. All of these resources can receive new guidance from real-time learning AI capabilities, either built in or “externally called,” depending on the feedback loops and logs contributing to the big data pools.

Risk Management

Risk management is the identification, evaluation, and prioritization of risks, followed by the coordinated and intelligent application of resources to minimize, monitor, and control the impact of threats. This requires tapping into the big data pool to continually monitor events and identify emerging threats and opportunities.

AI can help organizations recognize the emergence of situations that might require a response and enable mitigation responses. Key patterns and anomalies can be recognized in events, system logs, and human feedback (including social networks) for potential or emerging risks. Additionally, any attacks or issues that exist within the perimeter, such as cultural behavior, can be detected early and the development of necessary defenses enabled.

Net; Net:

Big Data development and management is a core capability that an organization needs to master in order to become or remain competitive. It is clear to us that AI is the engine that will create value from the ever-increasing Big Data resource. Big Data has a critical role to play over time as we journey deeper into the new digital world. AI can handle speed, volume, and change much better than any technology we have worked with, and this is just what Big Data needs!

For more information see:

This post is a collaboration with Dr. Edward Peters 

Edward M.L. Peters, Ph.D. is an award-winning technology entrepreneur and executive. He is the founder and CEO of Data Discovery Sciences, an intelligent automation services firm located in Dallas, TX.   As an author and media commentator,  Dr. Peters is a frequent contributor on Fox Business Radio and has published articles in  The Financial Times, Forbes, IDB,  and  The Hill. Contact-

Tuesday, October 1, 2019

Art for 3Q 2019

Now that the challenges of the first half of 2019 are behind me, creativity has started to flow again. Here are a couple of fun pieces that I worked on to get my momentum back in the art world. If you would like to see more of my portfolio, click here. I have several pieces that I'm working on now for my 4Q 2019 update, including a portrait of my late daughter.

I was fortunate enough to be selected to have a piece displayed at the Shemer Art Center for the last month, along with a goodly number of pieces from accomplished artists under nature themes. Shemer is at the base of Camelback Mountain, here in the Phoenix metropolitan area. Mr. Turtle was selected from the three pieces I submitted.

Black Canvas Pieces

Black Lagoon 

Bright Night 

Digital Pieces

Electric Night

Crystal Cone

Pick Up Sticks

Wednesday, September 18, 2019

AI Digital Assistants Are Working for Knowledge Management

Black & Veatch, an employee-owned global leader in building critical human infrastructure in energy, water, telecommunications, and government services, is on a journey to leverage AI-assisted bots, called virtual experts, to better capture and interact with engineering knowledge and standards. The goal of this emerging effort is to experiment with ways to better capture knowledge and expertise within the company. Ultimately, the initiative would reduce the amount of time required to locate desired information and create an opportunity for continued innovation to better support the future. Knowledge management is a difficult problem and has had challenges in the past as a discipline. Click here for my take on KM's problems in the past.

The Problem

Knowledge and standards were generally captured in written form, which led to an abundance of Microsoft Word documents that were difficult to search and a burden to continually refresh. Up to this point, access to content was through best-practice document and folder organization and traditional search functions. In addition, it was difficult to obtain feedback on the type of knowledge professionals were seeking or whether they found it.

The Solution

Black & Veatch began working with AI technology, passing these engineering documents through a natural language processing scan to identify topics, which were then stored in a knowledge ontology and leveraged in real-time chats, initially to answer specific questions for engineers working on a substantial number of projects. The company has a team of 30 digital assistants online today that are being rolled out for general use. The professionals engaged so far are pleased with the results and optimistic about the impact of the technology. While there are no hard metrics, the comments have included positive developments such as reduced searching, better feedback on dated content, and engaged knowledge sharing. For the first time, Black & Veatch now has visibility into the actual use and usefulness of the content, with various dashboards captured by monitoring traffic on the bots.
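The document-to-ontology-to-chat flow described above can be approximated in miniature: scan documents for topics, build a topic index (a crude stand-in for the knowledge ontology), then match a question's topics back to documents. The documents, topic list, and matching rule below are invented for illustration; they are not Black & Veatch's actual system.

```python
# Hypothetical engineering-standards corpus and topic vocabulary.
DOCS = {
    "std-101": "Pump foundations require anchor bolt torque checks.",
    "std-202": "Substation grounding grids must be tested annually.",
}
TOPICS = ["pump", "grounding", "torque"]

def build_index(docs):
    """NLP-scan stand-in: map each topic to the documents mentioning it."""
    index = {}
    for doc_id, text in docs.items():
        for topic in TOPICS:
            if topic in text.lower():
                index.setdefault(topic, []).append(doc_id)
    return index

def answer(question, index):
    """Chat stand-in: return documents matching the question's topics."""
    hits = {d for t, ids in index.items() if t in question.lower() for d in ids}
    return sorted(hits)

idx = build_index(DOCS)
answer("What torque applies to pump anchor bolts?", idx)  # finds std-101
```

A real virtual expert would return passages rather than document IDs and use an ontology with typed relationships, but the round trip from corpus to question is the same shape.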

The Future

Black & Veatch expects to continue to expand its knowledge sharing to include more topics if the success continues with the current pilot. In addition, the company expects the content will become more diverse to include voice, video and image content. These can be used for equipment installation and maintenance training in the future as well.

The case study was made possible by exClone Technology and Methods 

Thursday, September 12, 2019

The Unexpected Consequences of Big Data

Big Data is the unexpected resource bonanza of the current century. Moore's Law-driven advances in computing power, the rise of cheap storage, and advances in algorithm design have enabled the capture, storage, and processing of many types of data that were previously unavailable for use in computing systems. Documents, email, text messages, audio files, and images can now be transformed into a usable digital format for analysis systems, especially artificial intelligence. AI systems can scan massive amounts of data and find both patterns and anomalies that were previously unthinkable, and do so in a timeframe that was unimaginable. While most uses of Big Data have been coupled with AI/machine learning algorithms so companies can understand their customers' choices and improve their overall experience (think recommendation engines, chatbots, navigation apps, and digital assistants, among others), there are uses that are truly industry-transforming.

In healthcare, big data and analytics are helping the industry move from a pay-for-service model, which reimburses hospitals, physicians, and other caregivers after a service is performed, to a new approach that reimburses them based on the outcome of the service, specifically the post-service health of the patient. This approach is only possible if there is enough data to understand how the patient relates to the vast population of other patients who have had the same procedure or service and the expected outcome. While a variety of other factors, such as the patient's cooperation with the treatment plan, are involved, those factors can be tracked and analyzed as well, providing a clear path to best practices and expected results based on evidence. When this is combined with diagnostic improvements made possible by using AI to find patterns in blood and tissue samples or to scan radiology images for anomalies, the physician's ability to determine the exact issue and suggest the best treatment pathway for a given situation is unparalleled. The result for society in this example is expected to be a dramatic increase in efficiency, resulting in a lower cost of service. However, the same technologies that can deliver these unparalleled benefits are also capable of providing the platform for a previously unimaginable set of fraudulent uses.

Examples of Issues

An interesting case of the unexpected occurred in the UK, where a group of criminals with very sophisticated knowledge of AI and big data were able to scam a number of organizations into transferring large sums of money to fraudulent accounts. According to the BBC, the criminals captured a number of voice recordings of CEOs making investor calls. They analyzed the voice recordings with an AI pattern-matching program to re-create words and parts of speech. They then created a new recording in the CEO's voice directing the CFO to wire funds to a specific account on an emergency basis. They sent the recording via voice mail to the CFO and even spoofed the CEO's number. Think of this as an extremely sophisticated fraudulent "robocall" attack using AI to replicate the voice of a known and trusted person sending explicit instructions requiring urgent compliance. While normally this would not work due to organizational processes and security protections, given the right set of circumstances, it can succeed. Also, the level of knowledge, time, and money it takes to prepare and launch this type of attack limits how easily it can be replicated. However, as more voice data becomes available and AI algorithms and techniques become easier to use, we can expect these types of data and technology misuse to become more prevalent. One can imagine a case where the voice of a loved one in distress is sent to a parent or grandparent asking for some amount of money to be sent immediately to a card or account. Here the same techniques, applied over a large population, could have devastating results.

Similarly, facial recognition technology has the potential to identify and authenticate people using the sophisticated camera technology found in mobile phones and the other camera and video recording devices that have become pervasive in our world. However, few people really understand the limitations of these devices when it comes to accurately identifying people under different environmental conditions. For the best commercially available technology, the accuracy rate, under sufficient lighting and in a "penned" or confined space, is over 90%. This drops to around 65% if the lighting conditions change or the person is in a place like a mall or an outdoor arena. Now, add to that the significant error rate that occurs for people with skin tones that are closer in color to their accessories, as well as its inability to accurately recognize a person with a hat, scarf, sunglasses, or facial hair, and it is easy to see why communities such as San Francisco have banned its use in law enforcement activities.

Efforts to Consider

So, the question is: what can we do to get the benefits of AI and big data yet protect ourselves from the downside risks these technologies bring? First, realize that, as the old adage goes, the genie cannot be put back into the bottle. We will need to live with and be prepared to manage the risks each of these technologies brings. In our practice, we work with clients to identify the critical data types, decision types, and actions/outcomes that require elevated levels of protection. This is a comprehensive effort that results in a digital asset threat matrix with corresponding required actions. However, every individual and organization, no matter the size, can start by:

  •  Understanding the types of data both you and your organization have in your possession (images/pictures, text, spreadsheets) and deciding what data you are willing to share and under what circumstances. This is particularly important for individual biometric data. Keep engaged with papers and events emerging on the topic of "The Data of You."
  •  Developing specific rules for when you will take actions such as transferring money, and for who (perhaps multiple people) is able to authorize the transaction and under what circumstances.
  •  Asking your analytics vendor or analytics team to show you the tested current and historical accuracy rates of any software used to make critical decisions. Why would you allow something with a marginal accuracy rate to aid in the decision-making process, especially when dealing with something as important as law enforcement? This also applies to other analytical software, such as blood and urine testing services.
  •  Safeguarding your data in the context of use through tracking, mining, and random audits. There are usually trends and tells in the usage of your data, internally and externally.
  •  Staying abreast of activities and outcomes from "deep-fake" events and publications. The use of AI and algorithms to fool institutions and individuals is on the rise, leading to alternative realities.

Net; Net: 

Lastly, on an individual level, remember it is your data. Do not agree to share it with any app or information request, especially online lotteries or emails that tell you that you are a winner and just need your contact information! These may be scams, and you do not want to end up a victim of the unintended consequences of big data and AI!

For more information see:

This post is a collaboration with Dr. Edward Peters 

Edward M.L. Peters, Ph.D. is an award-winning technology entrepreneur and executive. He is the founder and CEO of Data Discovery Sciences, an intelligent automation services firm located in Dallas, TX.   As an author and media commentator,  Dr. Peters is a frequent contributor on Fox Business Radio and has published articles in  The Financial Times, Forbes, IDB,  and  The Hill. Contact-

Thursday, September 5, 2019

The Power & Speed of Workflow, RPA & Integration

This is a case study that shows the power of low-code workflow, RPA, and integration for a large healthcare insurance company. It's great to see a case study where an organization enters a market swiftly for a reasonable cost. The power of this combo is illustrated in this video.

The Challenge:

When a large American health insurance company wanted to service a new marketplace that became available after the Affordable Care Act (ACA) was enacted, it found itself tangled in a web of manual, cumbersome internal processes that needed to be digitized, automated, and integrated. The company, which wanted to grow this market in less than three months' time, desperately needed help selling and provisioning insurance, since its multi-step customer onboarding process involved several systems, including older mainframe technology. Penetrating the targeted market effectively was simply beyond reach without a digital overhaul. What this organization needed was someone to tackle a multi-pronged project: improving the customer experience and coordinating a long list of processes across disparate technologies. And fast, before missing out on open enrollment for 2019, which started Nov. 1, 2018.

Since the ACA went into effect in 2010, millions of new clients have flooded the insurance market, and many insurance companies have scrambled to revamp their systems to reach this steady stream of customers, especially since newer, digitally native insurance companies continue popping up to try to snag their share of the business. "Our focus was to create an easy, smooth experience for our customers and sales partners," said an executive of the large, multimillion-dollar insurance company. "Equally important, we needed to catch up with the rest of the marketplace. We were lagging behind our competition, so we needed to move the needle quickly." Like many companies undergoing digital transformation, the U.S. insurance provider was trying to leverage both legacy and newer systems, including Robotic Process Automation (RPA), but was having difficulties doing so. Therefore, it searched for a solution to help it collect, validate, and clean incoming customer data, 75 percent of which was inaccurate or incomplete, to ensure systems' interoperability with limited manual intervention.

The Solution:

This organization picked an integrated solution that combined a low-code workflow capability with industrial-strength integration and robotic process automation (RPA). The platform orchestrated the data flow processes after the collection and validation of data through the solution's customer-facing portal. Specifically, the platform delivered workflow automation with five different web service integrations, including the creation of documents, the collection of electronic signatures, and the initiation and monitoring of RPA. "The platform enabled us to streamline, automate and coordinate processes through multiple mechanisms – not just web services – while removing the manual processing required for everything other than exceptions," the insurance company executive said. "This allowed us to be open for business 24 hours a day, seven days a week."

The Results: 

The insurance provider was able to cut its two-to-three-week customer onboarding timeline in half, which has improved relations with insurance brokers and customers and enhanced its overall net promoter score. The platform's orchestration allowed the company to offer a digital, self-service customer onboarding experience, which was implemented in about 10 weeks, a significantly shorter time than the five months the original solution was going to take. Furthermore, the cost for the platform to orchestrate this new process was one-tenth of the initial quote. Constantly looking for ways to improve, automate, and compete, the insurance company hopes its new processes continue to improve so it can reach an even wider market during the next open enrollment.

Net; Net:

In this case, necessity was the mother of invention. The challenge drove this organization to the powerful combination of workflow, RPA, and integration. I expect to see more organizations moving to powerful digital platforms of all types that have this combination. See a compelling infographic by clicking here. I had a small role in creating this short and sharp video.

This solution was enabled by PMG 

Tuesday, September 3, 2019

Why Knowledge Management Failed Spectacularly

Taking a look back, many have blamed the failure of knowledge management (KM) on the lack of a solid program backed by top management. While these soft issues are common factors in failures, there was one primary reason that KM failed en masse. The big mistake was that knowledge was organized around a taxonomy that was centrally controlled and unresponsive. Today there is a combination of new knowledge approaches that will make KM a reality, and one likely to be backed by management.

The Problems with Taxonomies:

Taxonomies are rigid hierarchies that limit the kinds of relations a topic can have to "parent-child," with minor exceptions for multiple inheritance. This required an overseer who ended up being a bottleneck to organizing knowledge. It kept knowledge from adapting in real time and assumed someone had to manage knowledge acquisition. The taxonomy idea came from the classification of genus and species, where it was easier to classify kinds of living things and there was no pressure to complete the task. Limited taxonomies limit knowledge management, and the early days of KM leveraged them into dead-end streets.

Opportunities with Ontologies:

Ontologies are easy to expand in that they support real-time change and can support a multitude of knowledge relationships, creating a multitude of shapes. They can be reviewed later for accuracy and unnecessary redundancy. The use of flexible and fast ontologies combines well with AI in both learning and reasoning modes. Proven, general-use ontologies can be easily combined with specific ontologies to solve both general and specific problem domains. Technology and humans alike can follow ontology paths with ease. In fact, ontologies can support taxonomies within themselves.
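The structural difference is easy to see in code: a taxonomy permits only parent-child edges, while an ontology holds arbitrary typed relations, often stored as subject-relation-object triples. The entities and relation names below are invented examples:

```python
# Taxonomy: a strict parent -> children tree; the only relation is "contains".
taxonomy = {
    "equipment": ["pump", "valve"],
}

# Ontology: (subject, relation, object) triples -- any relationship shape,
# and a taxonomy can live inside it via "is_a" edges.
ontology = [
    ("pump", "is_a", "equipment"),
    ("pump", "connects_to", "valve"),
    ("pump", "maintained_by", "field_team"),
]

def related(entity, triples):
    """Follow every relation type from an entity, not just parent-child."""
    return [(rel, obj) for subj, rel, obj in triples if subj == entity]

related("pump", ontology)
```

Adding a new relationship to the ontology is just appending a triple; no central hierarchy has to be redesigned, which is the real-time adaptability point made above.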

Net; Net: 

KM in the early generations got data, information, knowledge, and wisdom structures all wrong. No wonder top management backed away from it especially when it was failing early. Let's not throw out the baby with the bathwater and finally attack knowledge management with the help of AI in all its forms. Big Data is waiting on it and so are a goodly number of business outcomes.

Tuesday, August 27, 2019

Disrupting with AI Assisted Processes

Usually, economies of scale, size, and reputation win the day, but this spunky and highly adaptive ad agency competes with AI plus better processes to put more pressure on the large ad agencies. I think this will be a trend in many industries, with upstarts starting to scare the incumbent "big dogs." See how industries can be influenced by nimble competitors by clicking here.

The Challenge:

The hourly agency model favors longer timelines and complex hierarchies to create more billable hours, and as a result, more clients are bringing this work in-house. This proved to be an excellent opportunity for this smaller agency to apply technology to outmaneuver the big-dog players in the ad business.

This Solution:

By applying AI and automated processes together to the ad creation process, big gains are being experienced. This allows for speedier creation of more targeted ads at a lower cost. Since the agency works with many smaller clients, requiring a stronger fee-to-media ratio, it is able to take on many smaller accounts. In other words, AI and process technologies allow a smaller, more nimble firm to compete with the big ad agencies. By offering flat-rate packages and turnkey services, enabled by technology, clients know what they will pay. This creates a shared interest in moving quickly and efficiently. Once clients experience that power, more targeted ads become a reality.

The Result:

While most agencies produce at most 3-5 social ads for their clients, this firm runs an average of 30-50 social ads per month for each client (an order of magnitude more). This is all accomplished in days, not weeks or months, and without a high cost. There is a significant time reduction, with a minimum of 37% for a small scope and up to 94% for a broader scope. The average decrease in time spent from a client briefing email to an ad launch with 80 ads is 67%.
For the auto industry, this firm can link up to inventory quickly and deliver inventory-specific ads that take prospects to a product page on a client's website. The system will automatically generate creative in seconds featuring inputs from the brief, all within a brand-compliant, creative-approved layout. This reduces the creatives needed from 3 to 1, implying significant cost savings.

Net; Net:

This is an example of how digital approaches can help the SMBs compete with the big dogs. This particular lethal combination of process and AI will be a popular approach, going forward. There will be other combinations that will prove to deliver too.

This Case Study was implemented by Constellation Agency