Tuesday, November 5, 2019

A Digital Assist for Active Shooter Incidents


If you have not heard of active shooter incidents recently, you have been living under a rock. Imagine what 2D floor plans and 3D models can do to assist in these situations. Even without intelligent digital assistants, law enforcement can get a better handle on the structure involved in any particular incident. While these digital models are helpful, they are difficult and time-consuming to create. This case study is about the evolving scanning, creation, and leverage of visual digital models on a large scale.


The Problem:

Usually, we think of scanning on a small scale, in and around smaller-sized products. Now imagine a much larger scale, like all the K-12 schools in the US. According to scanning professionals, to map just the K-12 schools in the U.S., it would take a scanning team, scanning 100,000 square feet per day, seven days a week, a total of 188 years to complete. This estimate addresses only the public schools and does not include the many private schools, let alone post-secondary facilities.
Robert W. Meyers, J.D., of the Entropy Group LLC says, “Active shooter incidents are a growing concern in the United States, with death tolls, most predominantly in schools, rapidly rising and law enforcement resources stretched beyond the breaking point. With the unpredictability of these incidents, both in scale and location, the team at Entropy Group LLC has been working alongside law enforcement and US attorneys nationwide in order to compress response times by utilizing 2D floor plans and 3D models.”

The Solution:

When confronted with the magnitude of the effort, it was immediately obvious to Entropy Group that it needed to join forces with the 3D mobile mapping and monitoring technology specialists GeoSLAM, because their ZEB REVO line of scanners provides the necessary accuracy and is much more time-efficient than other laser scanner technologies. To finalize the proof of efficacy for the patent filing, Entropy Group LLC recently completed a simulated active shooter incident in which six law enforcement officers were tested by responding to a fictitious scenario. Officers were provided a detailed floor plan of the two-building campus currently used as a church and parochial school facility. The structure is quite complex, with many classrooms, counseling rooms, a worship sanctuary, multi-media studios, a café area, and church offices.

The Results:

The results of the exercise indicate that officers who have access to 2D floor plans ahead of time improve their situational awareness and their confidence in responding to a facility they have never visited, gaining “facility familiarity” through review of floor plans and other data prior to their response. Additionally, response times were documented to decrease by up to 21%. This improvement in response will directly result in fewer deaths and casualties. GeoSLAM claims to deliver at least a 10X time reduction in the scanning process.

Net; Net:

The ability to scan large spaces and facilities effectively and efficiently will start to deliver more successful digital implementations, including city planning, real estate inventory, and other large-scale geospatial problems.

Entropy Group LLC https://entropygroup.net/

Entropy Group LLC is a full-service Forensics and Security Consultancy firm providing services for Executive Protection, Accident Reconstruction, Security Threat Assessments, Building Information Modelling, Security Design Reviews, Security Program Reviews / Audits, Litigation Support, Pre-Travel Security Front Team Assessments, and Access Control Assessments.


Designed for surveyors, engineers, and geospatial professionals, and serving the surveying, engineering, mining, forestry, facilities, and asset management sectors, GeoSLAM technology is used globally by anyone needing to create a digital twin of their world quickly and accurately. GeoSLAM's geospatial hardware and software solutions provide rapid, easy mapping and highly accurate monitoring.



Thursday, October 17, 2019

Fujitsu Bolsters Its (OCA) Partner Alliance with Strong Digital Vision


Fujitsu relaunched its OCA conference with a new digital vision, strong new products, and a commitment to strong partner support. The digital vision outlined an intelligent content journey, starting with powerful new scanners sending the highest-quality content to the cloud for categorization and preparation tasks, and on to legacy applications or processes, thus completing the content journey. This is over and above the typical enterprise content management archiving capabilities available for decades.




The intelligent journey is now being bolstered by two new powerful scanners that help the OCA partners reach into the mid-market, adding to Fujitsu's powerful presence in the enterprise market, where it holds a 53% share worldwide. I observed some of the heavy-duty scanners in action, with incredible scanning speeds, high accuracy, and the ability to categorize and capture key data elements for further processing options. The first product introduction was the fi-7300NX. The second was the fi-800R, which supports thick documents in a one-handed push-push mode and has a miniature footprint.




Fujitsu sees that the future is not just hardware, so it's linking up to the cloud to allow a content push approach or process/application pull support. This means that content of all kinds will be leveraged with smart cloud platforms. Fujitsu provides a super-secure channel for content entry with a one-touch approach, combined with RPA and process vendors, to support straight-through processing. To that end, Fujitsu invited content management, RPA, and process vendors to set up at the OCA conference. Fujitsu is stepping up its support of partners to reach new markets and supporting new digital uses, demonstrating that data capture is still relevant.





Microsoft, a long-time partner, was invited to describe its boost to structured content and unstructured forms of big data with an Azure platform that is growing more intelligent all the time. Ian Story explained Microsoft's progress in adding more intelligence, to the point of replicating human functions, including the senses.





Fujitsu also thinks that RPA will be essential on the content journey and gathered a panel of RPA/process vendors to contribute their real-world experience with the combination of content and RPA. I was fortunate to moderate the panel and pull out stories about speedy ROI and bot momentum. The partners were also encouraged to link up with RPA to extend the content journey beyond just the capture moment.




Additional Reading on RPA:

Process n RPA
Future of RPA
Top 5 RPA On Ramps


Net; Net:

Capture is the first step in the journey of content. Fujitsu dominates the enterprise capture market, and it will extend its lead with RPA, AI, and process support to extend that journey. At the same time, the OCA partners will be equipped with new mid-market opportunities and support from Fujitsu.

Friday, October 4, 2019

AI & Big Data: a Lethal Combo

Big data, unstructured or structured, fast or slow, in multiple contexts or one, is a beast to manage. Big data is growing fast, fueled by the democratization of data and the IoT environment. Often organizations simply control what they know they get results from and then store the rest for future leverage. In fact, most organizations use less than 20% of their data, leaving the remaining 80%, and the insights it contains, outside their operational and decision-making processes. Imagine if you used only 20% of a service you paid for every month and ignored the other 80%! This is exactly what we are doing with data. Fortunately, there is hope, as this is where Big Data can start to rely on AI and engage in a “cycle of leverage.” Presently, the interaction between AI and Big Data is in its early stages, and organizations are discovering helpful methods, techniques, and technologies to achieve meaningful results. Typically, these efforts are neither architected nor managed holistically. Our work has shown there is an emerging “Cycle of Big Data” that we would like to describe and share with you, along with where we see AI can help.




Big Data Cycle


The “Big Data Cycle” is the typical set of functional activities that surround the capture, storage, and consumption of big data. Big data is defined as a field that treats ways to manage, analyze and systematically extract information from, or otherwise deal with, data sets that are too large and complex to be managed with traditional software.  The “Cycle” is, in short, the process of leveraging big data into desired outcomes. Typically the cycle flows in a left to right fashion with iteration.

(Data → Trigger → Pattern → Context → Decision → Action → Outcome → Feedback → Adjustments)
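The cycle can be sketched as a simple pipeline of stages. This is a minimal sketch under stated assumptions: the threshold, pattern rule, and action names below are hypothetical placeholders, not part of the framework itself.

```python
# A minimal sketch of the "Big Data Cycle" as a pipeline of stages.
# Every rule here (the >100 threshold, the "spike" pattern, the actions)
# is an illustrative placeholder, not a real implementation.

def detect_trigger(data):
    # Flag readings that exceed a simple threshold (placeholder rule).
    return [x for x in data if x > 100]

def match_pattern(triggers):
    # A "pattern of interest" here is simply repeated triggers (placeholder rule).
    return "spike" if len(triggers) >= 3 else None

def decide(pattern, context):
    # Map a recognized pattern, in context, to a stored decision.
    return "escalate" if pattern == "spike" and context == "production" else "log"

def act(decision):
    # Execute an inventoried action for the chosen decision.
    return {"escalate": "page on-call team", "log": "record for later analysis"}[decision]

def run_cycle(data, context):
    triggers = detect_trigger(data)
    pattern = match_pattern(triggers)
    decision = decide(pattern, context)
    outcome = act(decision)
    # Feedback: in a real system the outcome would be logged back
    # into the big data pool to adjust future rules.
    return outcome

print(run_cycle([90, 120, 150, 95, 130], "production"))  # -> page on-call team
```

In practice, each stage is backed by one of the management disciplines described in this post, with the feedback loop writing outcomes back into the big data pool.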

Data Management

Data management is a process that includes acquiring, validating, storing, protecting, and processing the required data to ensure the accessibility, reliability, and timeliness of the data for various users. Today this is a more complicated process due to the increase of speed of data (near real-time) and the increased complexity of the data resources (text, voice, images, and videos).  This situation has had the effect of outstripping the processing capabilities of both humans and traditional computing systems.

AI can assist here in several ways, including hyper-personalization by leveraging machine learning and profiles that can learn and adapt. AI can also help in recognizing knowledge from streams of data through NLP categorization and relationship capture. AI can watch static or in-motion images to find and manage similar knowledge. Not only can AI recognize and learn by watching human, system, or machine interactions, but it can do so in less than an instant, whether at the edge, in the cloud, or through an IoT network. AI combined with other algorithms can help in finding “black swan events” that can be used to update strategies.


Pattern Management

Organizations need to keep their pulse on incoming signals and events to stay in tune with the current state of the world, industries, markets, customers, and other constituents, while sifting out distracting noise. While savvy organizations employ strategic planning to actively look for specific patterns of threat and opportunity, unfortunately, most organizations are reactive, suffering at the whims of events. Both types of organizations should be continually looking for “patterns of interest” from which to make decisions or to initiate actions that are already defined and stored for execution.

AI can help by recognizing both expected and unexpected signals, events, and patterns, spotting anomalies that might warrant attention. When combined with analytics, AI can learn and expose the potential for additional responses. AI can also recognize and learn adaptations for patterns, decision opportunities, and the need for further actions. In some cases, automation opportunities can be identified to deliver faster, higher-quality results.
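As an illustration of pattern management on signals, even a simple statistical rule can flag anomalies in a stream of readings. This is a hedged sketch only; production systems would rely on learned models rather than the fixed z-score threshold assumed here.

```python
# A minimal sketch of statistical anomaly detection over a stream of signal values.
# The z-score rule and the sample readings are illustrative assumptions only.
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    # Flag values more than `threshold` standard deviations from the mean.
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

readings = [10, 11, 9, 10, 12, 10, 11, 48, 10, 9]
print(find_anomalies(readings))  # -> [48]
```

The same shape of check, applied continuously over events and logs, is what lets a pattern-management layer separate “patterns of interest” from background noise.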

Context Management

The understanding of data can often change with the context from which it is viewed and the outcome for which it can be leveraged. The “subject” of data can mean something slightly or significantly different in one context versus another.
Understanding the context is as important as understanding the data itself. Information about the context and the interaction of its contents (aka worlds) is essential to capture and maintain. This allows for a classification of data in context, and especially in relation to other contexts, as big data sources may contain many contexts and relationships within them.

AI can assist the dynamic computer processes that use “subjects” of data in one context (industry, market, process, or application) to point to data resident in a separate context that also contains the same subject. AI can learn the subtle differences and context-specific nuances to track the evolution of the data's meaning across multiple contexts, whether it is “interacting” or not. This is particularly useful in understanding conversations and human interactions with NLP, as interpretation grids often differ.

Decision Management

Decision management (aka EDM) covers all aspects of designing, building, and managing the automated decision-making systems that an organization uses to manage its decision-making processes, both internally and in any interactions with outside parties such as customers, suppliers, vendors, and communities. The impact of decision management is felt in how organizations run their business toward the goals of efficiency and effectiveness. Organizations depend on descriptive, prescriptive, and predictive analytics leveraging big data to provide the fuel that drives this environment.

AI can play a crucial role in supercharging the utilization of knowledge and expertise in a continually evolving and changing world. AI can also help scale key resources by leveraging an ever-growing base of big data at the ever-increasing speed of business, while supporting today's operational requirements and meeting ever-growing user expectations. Specifically, increasing the use of AI in human interactions will significantly improve customer experiences and increase the speed of resolving customer issues. AI can also suggest where to look for decision opportunities, model decisions and their outcomes, and actively monitor performance against key performance indicators.

Action Management

Action management involves planning and organizing the desired proactive or reactive actions and work activities of all humans, processes, bots, applications, and devices employed by the organization. It includes managing, coordinating, and orchestrating tasks, developing project plans, monitoring performance, and achieving desired outcomes represented by goals, in accordance with approved principles and agreed parameters. The logging of these actions also feeds the big data pools for further analysis and potential optimizations, or for increased freedom levels through goal adjustments.

AI can help by associating proper actions with the decisions of the previous steps. That may mean selecting an inventoried action, changing some of the rules or parameters of an inventoried action, or suggesting the creation of new actions not available in the current inventory. AI can be embedded in any of the steps or detailed tasks performed within the selected actions. AI can monitor the actions and report the outcomes to management. AI, along with algorithms, can pre-test and suggest changed actions before deployment, thus helping ensure the desired outcome will be achieved.


Goal Management

Goal management is the process of defining and tracking goals to provide guidance and direction, help evaluate performance, and give feedback to all resources (humans, processes, applications, bots, and managers) for performance improvement. This also includes the “people-pleasing” and optimization arenas. As organizations move to implement increased employee empowerment, edge computing, and dynamic bots, the importance of self-directed goal attainment increases. New freedom levels that ratchet up autonomy include a heightened focus on goal attainment and monitoring.

AI can help guide autonomous humans, bots, process snippets, apps, and flexible infrastructures through the automatic adjustment of goals that take advantage of edge conditions or “just in time learning” within the guardrails of constraints and rules. All of these resources can receive new guidance from real-time learning AI capabilities either built-in or “externally called” depending on the feedback loops and logs contributing to the big data pools.

Risk Management

Risk management is the identification, evaluation, and prioritization of risks, followed by the coordinated and intelligent application of resources to minimize, monitor, mitigate, and control the impact of threats. This requires tapping into the big data pool to continually monitor events and identify emerging threats and opportunities.

AI can help organizations recognize the emergence of situations that might require a response and enable mitigation responses. Key patterns and anomalies can be recognized in events, system logs, and human feedback (including social networks) to surface potential or emerging risks. Additionally, any attacks or issues that exist within the perimeter, such as cultural behavior, can be detected early and the necessary defenses developed.

Net; Net:

Big Data development and management is a core capability that an organization needs to master in order to become or remain competitive. It is clear to us that AI is the engine that will create value from the ever-increasing Big Data resource. Big Data has a critical role to play over time as we journey deeper into the new digital world. AI can handle speed, volume, and change much better than any technology we have worked with, and this is just what Big Data needs!



This post is a collaboration with Dr. Edward Peters 



Edward M.L. Peters, Ph.D. is an award-winning technology entrepreneur and executive. He is the founder and CEO of Data Discovery Sciences, an intelligent automation services firm located in Dallas, TX. As an author and media commentator, Dr. Peters is a frequent contributor on Fox Business Radio and has published articles in The Financial Times, Forbes, IDB, and The Hill. Contact- epeters@datadiscoverysciences.com



Tuesday, October 1, 2019

Art for 3Q 2019

Now that the challenges of the first half of 2019 are behind me, creativity has started to flow again. Here are a couple of fun pieces that I worked on to get my momentum back in the art world. If you would like to see more of my portfolio, click here. I have several pieces that I'm working on now for my 4Q 2019 update, including a portrait of my late daughter.

I was fortunate enough to be selected to have a piece displayed at the Shemer Art Center for the last month, along with a goodly number of pieces from accomplished artists under nature themes. Shemer is at the base of Camelback Mountain, here in the Phoenix metropolitan area. Mr. Turtle was selected from the three pieces I submitted.



Black Canvas Pieces


Black Lagoon 


Bright Night 

Digital Pieces


Electric Night


Crystal Cone


Pick Up Sticks




Wednesday, September 18, 2019

AI Digital Assistants Are Working for Knowledge Management


Black & Veatch, an employee-owned global leader in building critical human infrastructure in energy, water, telecommunications, and government services, is on a journey to leverage AI-assisted bots, called virtual experts, to better capture and interact with engineering knowledge and standards. The goal of this emerging effort is to experiment with ways to better capture knowledge and expertise within the company. Ultimately, the initiative would reduce the time required to locate desired information and create an opportunity for continued innovation to better support the future. Knowledge management is a difficult problem and has had challenges in the past as a discipline. Click here for my take on KM problems in the past.





The Problem

Knowledge and standards were generally captured in written form, which led to an abundance of Microsoft Word documents that were difficult to search and a burden to continually refresh. Up to this point, access to content was through best-practice document and folder organization and traditional search functions. In addition, it was difficult to obtain feedback on the type of knowledge professionals were seeking or whether they found it.

The Solution

Black & Veatch began working with AI technology, passing these engineering documents through a natural language processing scan to identify topics, which were then stored in a knowledge ontology and leveraged in real-time chats, initially to answer specific questions for engineers working on a substantial number of projects. The company has a team of 30 digital assistants online today that are being rolled out for general use. The professionals who have engaged so far are pleased with the results and optimistic about the impact of the technology. While there are no hard metrics, comments have included positive developments such as reduced searching, better feedback on dated content, and engaged knowledge sharing. For the first time, Black & Veatch has visibility into the actual use and usefulness of the content, with various dashboards captured by monitoring the traffic on the bots.
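At its simplest, tagging documents with topics for later question answering can be sketched as below. This is only a hedged illustration: the topic names and keyword lists are hypothetical stand-ins, and the actual solution uses NLP and a knowledge ontology rather than literal keyword matching.

```python
# A highly simplified sketch of tagging engineering documents with topics
# so a chat bot can later retrieve them for specific questions.
# TOPIC_KEYWORDS and the sample text are hypothetical illustrations.

TOPIC_KEYWORDS = {
    "grounding": ["ground", "earthing", "bonding"],
    "cable_sizing": ["ampacity", "conductor", "gauge"],
}

def tag_topics(document_text):
    # Return the sorted list of topics whose keywords appear in the text.
    text = document_text.lower()
    return sorted(
        topic for topic, keywords in TOPIC_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    )

doc = "Conductor ampacity must be derated; bonding to the ground grid is required."
print(tag_topics(doc))  # -> ['cable_sizing', 'grounding']
```

Once documents carry topic tags, a chat front end can answer an engineer's question by retrieving only the documents tagged with the matching topic, and the retrieval traffic itself becomes the usage data the dashboards report on.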


The Future


Black & Veatch expects to continue to expand its knowledge sharing to include more topics if the success continues with the current pilot. In addition, the company expects the content will become more diverse to include voice, video and image content. These can be used for equipment installation and maintenance training in the future as well.

This case study was made possible by exClone technology and methods.








Thursday, September 12, 2019

The Unexpected Consequences of Big Data


Big Data is the unexpected resource bonanza of the current century. Moore's Law-driven advances in computing power, the rise of cheap storage, and advances in algorithm design have enabled the capture, storage, and processing of many types of data that were previously unavailable for use in computing systems. Documents, email, text messages, audio files, and images can now be transformed into a usable digital format for analysis systems, especially artificial intelligence. AI systems can scan massive amounts of data and find both patterns and anomalies that were previously unthinkable, and do so in a timeframe that was unimaginable. While most uses of Big Data have been coupled with AI/machine learning algorithms so companies can understand their customers' choices and improve their overall experience (think of recommendation engines, chatbots, navigation apps, and digital assistants, among others), there are uses that are truly industry-transforming.



In healthcare, big data and analytics are helping the industry move from a pay-for-service model that reimburses hospitals, physicians, and other caregivers after a service is performed to a new approach that reimburses them based on the outcome of the service, specifically the post-service health of the patient. This approach is only possible if there is enough data to understand how the patient relates to the vast population of other patients who have had the same procedure or service and the expected outcome. While a variety of other factors, such as the patient's cooperation with the treatment plan, are involved, those factors can be tracked and analyzed as well, providing a clear path to best practices and expected results based on evidence. When this is combined with diagnostic improvements made possible by using AI to find patterns in blood and tissue samples or in radiology image scanning and anomaly detection, the physician's ability to determine the exact issue and suggest the best treatment pathway for a given situation is unparalleled. The expected result for society in this example is a dramatic increase in efficiency and a lower cost of service. However, the same technologies that deliver these unparalleled benefits are also capable of providing the platform for a previously unimaginable set of fraudulent uses.

Examples of Issues

An interesting case of the unexpected occurred in the UK, where a group of criminals with very sophisticated knowledge of AI and big data were able to scam a number of organizations into transferring large sums of money to fraudulent accounts. According to the BBC, the criminals captured a number of voice recordings of CEOs making investor calls. They analyzed the recordings with an AI pattern-matching program to re-create words and parts of speech. They then created a new recording in the CEO's voice directing the CFO to wire funds to a specific account on an emergency basis. They sent the recording via voice mail to the CFO and even spoofed the CEO's number. Think of this as an extremely sophisticated fraudulent “robocall” attack using AI to replicate the voice of a known and trusted person sending explicit instructions requiring urgent compliance. While normally this would not work due to organizational processes and security protections, given the right set of circumstances, it can be successful. Also, the level of knowledge, time, and money it takes to prepare and launch this type of attack limits how easily it can be replicated. However, as more voice data becomes available and the AI algorithms and techniques become easier to use, we can expect these types of data and technology misuse to become more prevalent. One can imagine a case where the voice of a loved one in distress is sent to a parent or grandparent asking for money to be sent immediately to a card or account. Here the same techniques, applied over a large population, could have devastating results.

Similarly, facial recognition technology has the potential to identify and authenticate people using the sophisticated camera technology found in mobile phones and the other camera and video recording devices that have become pervasive in our world. However, few people really understand the limitations of these devices when it comes to accurately identifying people under different environmental conditions. For the best commercially available technology, the accuracy rate, under sufficient lighting and in a “penned” or confined space, is over 90%. This drops to around 65% if the lighting conditions change or the person is in a place like a mall or an outdoor arena. Now add the significant error rate that occurs for people with skin tones closer in color to their accessories, as well as the inability to accurately recognize a person with a hat, scarf, sunglasses, or facial hair, and it is easy to see why communities such as San Francisco have banned its use in law enforcement activities.

Efforts to Consider

So, the question is: what can we do to get the benefits of AI and big data yet protect ourselves from the downside risks these technologies bring? First, realize that, as the old adage goes, the genie cannot be put back into the bottle. We will need to live with and be prepared to manage the risks each of these technologies brings. In our practice, we work with clients to identify the critical data types, decision types, and actions/outcomes that require elevated levels of protection. This is a comprehensive effort that results in a digital asset threat matrix with corresponding required actions. However, every individual or organization, no matter the size, can start by:

  •  Understanding the types of data both you and your organization have in your possession (images/pictures, text, spreadsheets) and deciding what data you are willing to share and under what circumstances. This is particularly important for individual biometric data. Keep engaged with the papers and events emerging on the topic of “The Data of You.”
  •  Developing specific rules for when you will take actions such as transferring money, and for who (maybe multiple people) is able to authorize the transaction and under what circumstances.
  •  Asking your analytics vendor or analytics team to show you the tested current and historic accuracy rates of any software used to make critical decisions. Why would you allow something with a marginal accuracy rate to aid in the decision-making process, especially when dealing with something as important as law enforcement? This also applies to other analytical software, such as blood and urine testing services.
  •  Safeguarding your data in the context of use through tracking, mining, and random audits. There are usually trends and tells in the usage of your data, internally and externally.
  •  Staying abreast of activities and outcomes from “deep-fake” events and publications. The use of AI and algorithms to fool institutions and individuals is on the rise, leading to alternative realities.


Net; Net: 

Lastly, on an individual level, remember it is your data. Do not agree to share it with any app or information request, especially online lotteries or emails that tell you that you are a winner if you will just provide your contact information! These may be scams, and you do not want to end up a victim of the unintended consequences of big data and AI!


This post is a collaboration with Dr. Edward Peters 
















Thursday, September 5, 2019

The Power & Speed of Workflow, RPA & Integration

This is a case study that shows the power of low-code workflow, RPA, and integration for a large healthcare insurance company. It's great to see a case study in which an organization enters a market swiftly for a reasonable cost. The power of this combo is illustrated in this video.


The Challenge:

When a large American health insurance company wanted to serve a new marketplace that became available after the Affordable Care Act (ACA) was enacted, it found itself tangled in a web of manual, cumbersome internal processes that needed to be digitized, automated, and integrated. The company, which wanted to grow this market in less than three months' time, desperately needed help selling and provisioning insurance, since its multi-step customer onboarding process involved several systems, including older mainframe technology. Penetrating the targeted market effectively was simply beyond reach without a digital overhaul. What this organization needed was someone to tackle a multi-pronged project: provide an improved customer experience and coordinate a long list of processes across disparate technologies. And fast – before missing out on open enrollment for 2019, which started Nov. 1, 2018.

Since the ACA went into effect in 2010, millions of new clients have flooded the insurance market, and many insurance companies have scrambled to revamp their systems to reach this steady stream of customers – especially since newer, digitally native insurance companies continue popping up to try to snag their share of the business. “Our focus was to create an easy, smooth experience for our customers and sales partners,” said an executive of the large, multimillion-dollar insurance company. “Equally important, we needed to catch up with the rest of the marketplace. We were lagging behind our competition, so we needed to move the needle quickly.” Like many companies undergoing digital transformation, the U.S. insurance provider was trying to leverage both legacy and newer systems, including Robotic Process Automation (RPA), but was having difficulty doing so. Therefore, it searched for a solution to help it collect, validate, and clean incoming customer data – 75 percent of which was inaccurate or incomplete – to ensure systems' interoperability with limited manual intervention.

The Solution:

This organization picked an integrated solution that combined a low-code workflow capability with industrial-strength integration and robotic process automation (RPA). The platform orchestrated the data flow processes after the collection and validation of data through the solution's customer-facing portal. Specifically, the platform delivered workflow automation with five different web service integrations, including the creation of documents, the collection of electronic signatures, and the initiation and monitoring of RPA. “The platform enabled us to streamline, automate and coordinate processes through multiple mechanisms – not just web services – while removing the manual processing required for everything other than exceptions,” the insurance company executive said. “This allowed us to be open for business 24 hours a day, seven days a week.”
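The orchestration pattern described here, chaining service steps and routing only exceptions to manual handling, can be sketched as follows. The step names are hypothetical illustrations; the actual platform configures such flows through low-code workflow rather than hand-written code.

```python
# A schematic sketch of workflow orchestration with exception routing.
# The three steps (validate, create_documents, start_rpa_bot) are hypothetical
# stand-ins for the web service integrations described in the case study.

def onboard_customer(record, steps, exception_queue):
    # Run each step in order; any failure routes the record to manual handling.
    for step in steps:
        record, ok = step(record)
        if not ok:
            exception_queue.append(record)  # manual processing only for exceptions
            return "exception"
    return "completed"

def validate(record):
    # Placeholder validation: require a customer name.
    return record, bool(record.get("name"))

def create_documents(record):
    # Placeholder document-creation service call.
    record["documents"] = ["enrollment_form.pdf"]
    return record, True

def start_rpa_bot(record):
    # Placeholder kickoff of an RPA bot for mainframe data entry.
    record["mainframe_entry"] = "queued"
    return record, True

exceptions = []
status = onboard_customer({"name": "Jane Doe"},
                          [validate, create_documents, start_rpa_bot], exceptions)
print(status)  # -> completed
```

The key design point, echoed in the executive's quote, is that the happy path runs straight through with no human touch, while only exceptions land in a queue for manual work.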

The Results: 

The insurance provider was able to cut its two-to-three-week customer onboarding timeline in half, which has improved relations with insurance brokers and customers and enhanced its overall net promoter score. The platform's orchestration allowed the company to offer a digital, self-service customer onboarding experience, which was implemented in about 10 weeks – a significantly shorter time than the five months the original solution was going to take. Furthermore, the cost for the platform to orchestrate this new process was 10 times less than the initial quote. Constantly looking for ways to improve, automate, and compete, the insurance company hopes its new processes continue to improve so it can reach an even wider market during the next open enrollment.

Net; Net:

In this case, necessity was the mother of invention. The challenge drove this organization to the powerful combination of workflow, RPA, and integration. I expect to see more organizations moving to powerful digital platforms of all types that have this combination. See a compelling infographic by clicking here. I had a small role in creating this short and sharp video.

This solution was enabled by PMG https://www.pmg.net/