Showing posts with label IoT. Show all posts

Friday, August 1, 2025

New Book to Guide Your Digital & AI Journeys

Organizations face numerous opportunities and challenges in leveraging both maturing and emerging technologies. The authors of an exciting new book believe that processes and customer journeys are a great place to innovate and gain traction with AI, both for planning processes ahead of time and for depicting the behaviors and actions of AI in a process model. The book demonstrates how processes can facilitate the creation, completion, and verification of desired outcomes. Please click here for a link to Amazon, where you can purchase a copy for your reference.

Additional books of interest include:

Business Process Management: The Next Wave 

Digital Transformation: A Brief Guide for Game Changers




Thursday, April 4, 2024

Music Helps with Life

“Ready or Not” is the name of my second album, which is about being a better human being, something we all struggle with on a daily basis. Ethan Foxx (co-writer, creative producer, guitar, and drums) and Jimmy (CAT) Caterine (our creative engineer and lead guitarist) are my mentors on my musical journey. Our team was assisted by Pete Crane with great bass playing, excellent strings by Cathie King, significant keyboard boosts and a terrific trumpet solo from Eric Barker, a lovely harmonica outro by Robert Vincent, and a steaming blues lead guitar and solo by the talented Don Supplee. Lead vocals are by me. We hope you enjoy the music, so take some time to listen on your favorite streaming source, or hear a quick summary by clicking here.

So far, the most popular songs are Forgive, Mercy Me, and Kinder. If you want to hear my granddaughter sing, go for Cry for Creation. If you want something different, listen to Restoration. If you want to understand my attitude toward life, try Thankful Now.

Related Gen AI Music Videos:




Tuesday, February 27, 2024

Top 5 Technology Trends for 2024


Last week, I published the Top 5 Business Trends for 2024 (click here), and this week, I narrowed down several technology trends to my top 5 that organizations need to start responding to intensely in 2024.

Harnessing Usable AI

Most organizations will probably have some form of the many types of AI in progress, ranging from experimentation to production-enabled and active in several business and technology domains. Since organizations no longer fear another AI Winter, thanks to broad-based data-driven successes, they are looking to take advantage of various kinds of AI (click here for AI Tributaries and Types for 2024). Significant efforts in and around Natural Language Processing (NLP) will allow for human understanding and appropriate responses, such as generating human-like interfaces in chatbots and language translation services. There will be more virtual assistants that supercharge customers and employees to be more effective, even beyond their inherent knowledge and skill levels. AI will expand to voice, image, and video analysis to create a more inclusive context for decisions and actions by carbon-based participants and robotic assistants. There will be an emphasis on emotion recognition to deal with the human factors of doing business. This new capability and power will need to be protected, so intelligent cybersecurity will get a boost to detect and prevent threats that leverage AI. Expect organizations to use AI until governance issues become the focus.




Leveraging Intelligent Customer Experiences and Processes/Applications

Organizations will likely start switching from flow-directed approaches to goal-directed ones, where the flow is based on the changing goals of a customer journey or process. Savvy organizations will combine their own goals with the goals of customers, partners, and employees and balance seemingly conflicting objectives. Personalization will now consider goals and measure feedback through real-time observation and analysis. Of course, better user experiences and omni-channel experiences will continue as table stakes, but more will be demanded. User-centered design employing more gamification components will play a role as AI and algorithms expand their reach to customers, employees, and partners to advance Customer Relationship Management (CRM). Human/tech collaboration will get a fresh look, including new forms of augmented reality over time. Continuous improvement and aggressive automation will continue in times of stability; however, changing conditions may unhinge current optimization patterns. Intelligence will be used to adapt processes and user experiences at an accelerated pace. Organizations will leverage predictive methods and more aggressive scenario management and monitoring, which will be particularly necessary for supply chain shifts and optimization.

Moving to Convergent Business and Technology Platforms

While individual technology stacks bring benefits, costs, and challenges, organizations will eagerly watch for the convergence of focused functionality into platforms that more easily integrate technology functions to enable faster and cheaper business results. Demand will force broader technology options at a more affordable cost, along with potential mergers and buyouts. Convergence will create both aggregated specialty platforms and generalized digital business platforms. The effect is fewer vendors for organizations to manage and more integrated business/technical functionality. Examples include generalized Digital Business Platforms (DBP), Business Application/Package Platforms, Sales/Customer Platforms, Process Platforms, Collaboration Platforms, Data Science/Analytic Platforms, Automation Platforms, Low-code Platforms, Cloud Platforms, Data Mesh Platforms, and Security Platforms. For a quick overview of the players, click here. I expect AI platforms to emerge as success is experienced and integration becomes necessary.

Building on Intelligent Infrastructure

As business-driven intelligence and agility become competitive weapons, the need for intelligent infrastructure will emerge quickly. Infrastructure players that leverage AI and analytics, whether reactively or proactively, will flourish. The result will be a race to embed intelligence under the covers of processes, systems, and applications. Edge computing and IoT integration are perfect examples of where putting intelligence at the edge, or even outside of a business process, will be necessary. At first the edge will simply be monitored; soon after will come the recognition that decisions must be made close to the edge, with intelligent actions to deal with changing conditions. Eventually, AI-driven intelligent bots or agents will broker response patterns at the edge. Smart Cities infrastructure is an example of success today. Digital twins will flourish in intelligent infrastructure, leveraging clever hybrid and multi-cloud along with smart data meshes. All of this will require smart security that is blockchain-enabled. Further out, quantum computing exploration will keep a watchful eye on the swarms of agile AI bots responding to infrastructure and business needs.

Living with Governed Leverage with Sustainability

Like it or not, organizations will have to balance their business results with the trail of impact their business activities create. Renewable energy integration will emerge where it makes sense. Recycling and re-creation will get more emphasis in 2024, along with eco-friendly packaging solutions. Smart buildings that leverage AI for energy efficiency will optimize energy consumption across many aspects of an organization's activities. Remote work will play a role in the delicate balance of progress and preservation. Technology will be essential to an organization's ability to measure, monitor, and reduce its carbon footprint across its complete operation and its supply chain. Water is becoming a vital resource to monitor and optimize, leveraging advanced water and waste management technology and techniques.

Monday, November 6, 2023

AI Tributaries & Types for 2024

While it is imperative to understand what AI is, where it is going, and where it offers promise and downsides, it is also essential to know all the technology tributaries. These tributaries offer strengths that can contribute to business outcomes, but they also have challenges in implementation and operation. I gathered the most common AI technologies, depicted in Figure 1, and briefly described where to use them and where to avoid or bolster use. Often, organizations combine several of these tributaries to accomplish their desired outcomes and keep them current in a more automatic way. Keep in mind that these tributaries are maturing fast and independently today, so organizations will have to package a number of them to reach higher-order desired outcomes. I hope this enumeration will assist in spending your 2024 AI budget.



Figure 1: AI Tributaries

Logical

Machine Learning

Definition

Machine learning is the kind of AI that teaches computers to learn from experiences represented by data and information, without relying on a predetermined equation or set of rules. Machine learning algorithms adaptively improve their performance as the number of data samples increases, strengthening the learning process.

When to Use

Use machine learning when you can't code rules, such as human tasks involving recognition where there are many variables with frequent change.

When Not to Use

When the data is problematic, including too much noise, too dirty, or grossly incomplete.
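As a minimal sketch of this idea, the toy classifier below never encodes a rule for what distinguishes the two classes; it simply learns from labeled samples, and its answers improve as more samples are added. The fruit data and labels are purely illustrative, not from any real dataset:

```python
# A hand-rolled 1-nearest-neighbor classifier: no rules are coded for
# what makes a fruit an "orange"; the model learns from labeled samples.
def predict(samples, labels, point):
    # Return the label of the training sample closest to the query point
    # (squared Euclidean distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(samples)), key=lambda i: dist(samples[i], point))
    return labels[best]

# (weight in grams, diameter in cm) -> fruit; an illustrative toy data set
samples = [(120, 6.5), (130, 7.0), (150, 7.5), (170, 8.0)]
labels = ["apple", "apple", "orange", "orange"]

print(predict(samples, labels, (125, 6.8)))   # a small, light fruit -> apple
print(predict(samples, labels, (165, 7.9)))   # a large, heavy fruit -> orange
```

Note how the "When Not to Use" caveat shows up directly: with noisy or dirty samples, the nearest neighbor is unreliable and the predictions degrade.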

Deep Learning

Definition

Deep learning is a distinct/specialized form of machine learning that attempts to learn like humans by identifying objects and linking them to each other using a neural network, which is layered with interconnected nodes called neurons that work together to process and learn from data. It's a form of patterned learning.

When to Use

Deep learning is used where a large amount of data is available and there is a requirement for higher accuracy. Typically, deep learning learns from its mistakes and incorporates the lessons learned.

When Not to Use

Deep learning has a high computational cost that must be factored into solutions. There is, of course, a high dependence on the data quality. The scope of the data it is trained on may limit its ability to deal with unforeseen consequences.

Pattern Recognition/Perception

Definition

Pattern recognition is the automated recognition of patterns and regularities in data of various sorts. These patterns can be classified and leveraged to make decisions or predictions, and new, emergent patterns can be detected for further analysis.

When to Use

Pattern recognition is critical in improving comprehension of the intricacies of complex problems. It is beneficial for recognizing objects in images, scanning, and photo-related interpretations.

When Not to Use

Again, the state of the data is critical, but dealing with significant variations in the data may disqualify pattern recognition as a solution.

Natural Language Processing (NLP)

Definition

NLP is a form of AI that allows computers to understand human language in any form and leverage it in a more seamless human-computer experience.

When to Use

NLP is a significant bridging mechanism between humans in their own language and computers. It is often used for computers to read text or hear speech to interpret and measure sentiment, helping to identify important words/phrases.

When Not to Use

NLP is not as helpful when a particular language is inconsistent or ambiguous, particularly regarding sarcasm and culture.
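A crude sketch of the sentiment-measuring use case above is a keyword scorer; real NLP systems use statistical or neural models, but the toy version below (word lists are illustrative) makes the idea concrete, and its failure on sarcasm illustrates exactly the caveat just mentioned:

```python
# A minimal keyword-based sentiment scorer: a toy stand-in for the
# statistical models real NLP systems use. Word lists are illustrative.
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "hate", "terrible", "slow", "poor"}

def sentiment(text):
    # Strip trailing punctuation and lowercase each token before lookup
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The service was great, I love it!"))   # positive
print(sentiment("Terrible wait times, bad support."))   # negative
# Sarcasm defeats it: "Oh great, another outage" still scores positive.
```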

Real-time Universal Translation

Definition

Real-time translation helps people translate one language to another instantly. People can hold a conversation or meeting in different languages with minimal delays or accuracy issues.

When to Use

Universal translation is an essential tool for breaking down language barriers and facilitating cross-cultural communication.

When Not to Use

Universal translation cannot correctly translate expressions, idioms, slang, abbreviations, or acronyms, nor can it provide an accurate yet creative translation. Therefore, it should be used with caution.

Chatbots

Definition

A chatbot is a software application or web interface that aims to mimic human conversation through text or voice interactions. Chatbots that represent real-world interactions and incremental learning are the most effective.

When to Use

Chatbots are used for timely, always-on assistance for customers or employees. Often, they are helpful in social media, messaging, and phone calls.

When Not to Use

Chatbots are not helpful when addressing customer grievances as every individual is unique, and the problem could be complex over a more extended period than any one business event or transaction.
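At its simplest, a chatbot is a loop that matches patterns in a message to canned responses; production chatbots layer NLP and learning on top. The sketch below uses made-up keywords and replies, and its fallback branch is precisely where the complex, long-running grievances described above exceed a bot's reach:

```python
# A minimal rule-based chatbot: keyword pattern -> canned response.
# Keywords and responses here are illustrative, not from any product.
RULES = [
    ("hours", "We are open 9am-5pm, Monday through Friday."),
    ("refund", "Refunds are processed within 5 business days."),
    ("human", "Connecting you to a live agent..."),
]

def reply(message):
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    # Fallback: complex, multi-event grievances land here, which is why
    # chatbots struggle with them.
    return "Sorry, I didn't understand. Could you rephrase?"

print(reply("What are your hours?"))
print(reply("My issue spans three months of billing errors"))
```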

Real-time Emotion Analytics (EA)

Definition

Emotion analytics collects data and analyzes how a person communicates verbally and nonverbally to understand a person’s mood or attitude in the context of an interaction. EA provides insights into how a customer perceives a product or service.

When to Use

EA can help you improve the usability, engagement, and satisfaction of your users, as well as identify and address any pain points or frustrations.

When Not to Use


Like other forms of technology, emotional AI can display biases and inaccuracies. Consumers have to consent to being analyzed by emotional AI, which may present some privacy concerns.

Virtual Companions

Definition

A virtual companion is an embodied AI character that advances multiple forms of companionship. It includes not only the experience of togetherness with an AI character but can also augment the nurturing of companionship between people or animals.

When to Use

These interactive programs, accessible through the web or mobile, serve as companions or partners for therapy and mentorship. Early uses center on boyfriend or girlfriend relationships, fulfilling some of the functions usually associated with those relationships, but they are also used for elderly care, with emerging benefits around mentorship and collaboration in business.

When Not to Use

Be careful, as they can cause harm, such as hurting users emotionally or giving dangerous advice. Sometimes their use perpetuates biases and problematic dynamics.

Expert Systems

Definition

Expert systems leverage AI to simulate the judgment and behavior of a human or an organization with expertise or experience in a particular field.

When to Use

Expert systems can be used standalone or to assist non-experts. They are helpful when skills are locally scarce or expensive, or when humans are error-prone or too slow.

When Not to Use

Expert systems do not leverage common sense and often lack creative or sensitive responses that humans can deliver. Often, expert systems lack explainability.
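The classic expert-system pattern is a forward-chaining rule engine: encoded expert rules fire against known facts until no new conclusions can be derived. The tiny sketch below uses invented medical-triage rules purely for illustration; its opaque chain of firings also hints at the explainability gap noted above:

```python
# A tiny forward-chaining rule engine, the classic expert-system pattern.
# Rules (conditions -> conclusion) are illustrative, not real medical advice.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    facts = set(facts)
    fired = True
    while fired:                        # keep firing rules until stable
        fired = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # derive a new fact from the rule
                fired = True
    return facts

print(sorted(infer({"fever", "cough", "short_of_breath"})))
```

Note the chaining: the second rule can only fire after the first has added `flu_suspected` to the fact base.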

Generative AI

Definition

Generative AI refers to models or algorithms that create brand-new output, such as text, photos, videos, code, data, or 3D renderings, from the vast amounts of data they are trained on. The models 'generate' new content by referring to the data they have been trained on, making new predictions and output.

When to Use

Generative AI creates new and often original content, responses, designs, and synthetic data. It’s valuable in creative fields and novel problem-solving while generating new types of outputs.

When Not to Use

Generative AI can provide helpful outputs based on users' queries, but sometimes, the material generated can be offensive, inappropriate, or inaccurate. Human guidance can correct the result and put it into context.

Physical

Edge AI

Definition

Edge AI is all about putting intelligence closest to any device or edge computing environment. Edge AI allows computations to be done close to where the data is collected rather than at a centralized cloud computing facility or offsite data center.

When to Use

When speedy, always-on decisions are necessary close to where data is sensed and collected.

When Not to Use

Edge AI devices may not all have the same level of encryption, authentication, and protection, making them more vulnerable to cyberattacks. Scalability is also a challenge.
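The core edge pattern can be sketched in a few lines: decide locally when latency matters, and defer to the cloud only for non-urgent analysis. The sensor, threshold, and action names below are illustrative assumptions, not any particular platform's API:

```python
# Sketch of the edge-AI decision pattern: act immediately on the device,
# batch everything else for central analysis. Threshold is illustrative.
def edge_decision(sensor_temp_c, critical_threshold=90.0):
    """Runs on the device itself, so no round trip to a data center."""
    if sensor_temp_c >= critical_threshold:
        return "SHUTDOWN"        # latency-critical: decide at the edge
    return "LOG_TO_CLOUD"        # non-urgent: defer to central analytics

readings = [72.5, 88.0, 93.2]
print([edge_decision(t) for t in readings])
```

The design point: only the branch that needs millisecond response lives at the edge, which keeps the on-device footprint small.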

Sensing AI

Definition

Sensing AI is an AI awareness that is driven by one or many human-replicated sensing capabilities such as voice, vision, touch, taste, or smell. Sensing AI gives a presence in one or more physical contexts to present data to the logical side of AI.

When to Use


Any time in-context computing will assist, any or all of these senses give immediate and vital feedback to computing systems and humans. They are often used in dangerous environments.

When Not to Use

When physical senses do not contribute to desired outcomes or where immediate feedback is unnecessary.

Autonomous Robotics (AR)

Definition

ARs are autonomous intelligent machines that can perform tasks and operate in environments independently without human intervention.

When to Use

ARs are great at automating manual or repetitive activities in corporate or industrial settings, but they also excel at working in unpredictable or hazardous environments.

When Not to Use

Robots only do what they are programmed to do and can't do more than expected unless some kind of learning AI powers them.

Next-Gen Cloud Robotics

Definition

Cloud robotics is the use of cloud computing, cloud storage, and other internet technologies in the field of robotics. One of the main advantages of cloud robotics is its ability to provide vast amounts of data to robotic devices without incorporating it directly via onboard memory.

When to Use

Cloud-based robot systems are capable of collaborative tasks. For example, a series of industrial robotic devices can process a custom order, manufacture it, and deliver it all on their own, without human operators.

When Not to Use

Tasks that involve real-time execution require on-board processing. Cloud-based applications can become slow or unavailable due to high-latency responses or network hitches.

Robotic Personal Assistants

Definition

A robot personal assistant is an artificial intelligence that assists you with routine domestic chores and improves your quality of life.

When to Use

Today, these robots are used in specialized services such as cleaning.

When Not to Use

For tasks that require empathy or dynamic adaptability.

Management & Control

Artificial General Intelligence (AGI)

Definition

AGI represents generalized human cognitive abilities in software that can solve unfamiliar tasks.

When to Use

If realized, an AGI could learn to accomplish any intellectual task humans or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in most economically valuable tasks.

When Not to Use

It is not here yet.

Digital Twin

Definition

A digital twin is the digital representation of a physical object, person, or process contextualized in a digital version of its environment. A digital twin links the logical side of AI and the physical side of AI in an artificial environment to visualize, simulate, and try actions without real consequences, ultimately promoting better decisions by humans or machines.

When to Use

Digital twin technology enables you to create higher-quality products, buildings, or even entire cities. By creating a simulation of a system or a physical object, designers can test different design scenarios, identify potential design flaws, and make improvements before construction begins.

When Not to Use

It is challenging to maintain a digital asset. Many digital twin efforts fail because the digital assets don't receive the same maintenance effort as the physical ones. The digital twin requires consistent upkeep, significant observation, and time to document all real-time changes.
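The sync-then-simulate loop described above can be sketched in miniature: mirror the physical asset's state from telemetry, then try an action in the twin before committing it in the real world. The pump, its telemetry fields, and the linear temperature model below are all illustrative assumptions:

```python
# A toy digital twin of a pump: sync state from real telemetry, then
# test an action virtually, without real-world consequences.
class PumpTwin:
    def __init__(self, rpm, temp_c):
        self.rpm, self.temp_c = rpm, temp_c

    def sync(self, telemetry):
        # The consistent-upkeep step: keep the twin current with sensors
        self.rpm = telemetry["rpm"]
        self.temp_c = telemetry["temp_c"]

    def simulate_speedup(self, factor):
        # Toy physics: assume temperature scales linearly with speed
        predicted_temp = self.temp_c * factor
        return "safe" if predicted_temp < 80.0 else "unsafe"

twin = PumpTwin(rpm=1000, temp_c=50.0)
twin.sync({"rpm": 1200, "temp_c": 60.0})
print(twin.simulate_speedup(1.1))   # predicts 66.0 C
print(twin.simulate_speedup(1.5))   # predicts 90.0 C
```

The maintenance caveat above maps directly onto `sync`: if telemetry stops flowing, the twin's predictions silently drift away from the physical asset.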

Smart Self-Generating/Adaptive Applications, Processes and Journeys

Definition

Self-adaptive software systems can adjust their behavior in response to their perception of the environment and the system itself. Applications, processes, and journeys coordinate smart and not-so-smart resources and must constantly be tweaked to stay current with needs.

When to Use

When a system or process must support emerging conditions and desired outcomes.

When Not to Use


When the system or process exhibits long-term stability.

Goal-Driven & Constraint Behavior

Definition

When management goals change to reflect the latest thinking or emerging governance constraints, systems and processes seek those goals within governance boundaries.

When to Use

When volatility is a crucial consideration, or there is a robust environment of emergence.

When Not to Use

When stability creates constancy.

Cognitive Cybersecurity

Definition

Cognitive security is the intersection of cognitive science and artificial intelligence techniques used to protect institutions against cyberattacks.

When to Use

When bad actors generate intelligent attacks.

When Not to Use

It is not optional today; it is part of the intelligent infrastructure.

Net; Net:

It is essential to understand all the flavors of AI so that solutions can leverage AI where it makes sense in current and future business environments. The AI tributaries will combine into solutions that are more business- or consumer-ready. Leading organizations will not wait long to take advantage of these tributaries and emerging combinations. Even fast-following organizations need to understand these tributaries to ask the right questions of vendors or internal developers. AI is shape-shifting, so let's stay on top of this emerging movement.

Additional Reading:

Definition of AI

Monday, October 23, 2023

Why Do You Need to Define AI?

Everyone is talking about AI, and nearly half of digital/technology budgets for 2024 are aimed at it. Because of this, everyone should care about what AI is and what kind of AI they want to meet their objectives. You need to know whether something is AI or whether a vendor or internal developer is just AI-washing their offering. It is essential to understand what AI is today because AI is on a journey, and it's vital to know where it came from, what can be done with it now, and where it is headed. That way, organizations aren't pioneering without knowing they are on the edge, using traditional technology in an AI wrapper to leverage the AI movement, or claiming AI victories by putting window dressing on conventional technology. AI will be a critical player on your team soon, so to help you get to know AI early, I tried to identify authoritative definitions and put them in one place for you, including what one version of AI thinks. I also attempted a value-added definition of AI, leveraging my past business experience with AI, to get started.

General:

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

The Association for the Advancement of Artificial Intelligence (AAAI): AI’s primary goal is to build an intelligent machine. The second goal is to find out about the nature of intelligence.

WIKI: Artificial intelligence (AI) – intelligence exhibited by machines or software. It is also the name of the scientific field which studies how to create computers and computer software that are capable of intelligent behavior.

Gartner defines artificial intelligence (AI) as applying advanced analysis and logic-based techniques, including machine learning (ML), to interpret events, support and automate decisions, and take actions. This definition is consistent with the current and emerging state of AI technologies and capabilities, and it acknowledges that AI now generally involves probabilistic analysis (combining probability and logic to assign a value to uncertainty).

Forrester defines “generative AI” as: “A set of technologies and techniques that leverage massive corpuses of data, including large language models, to generate new content (e.g., text, video, images, audio, code). Inputs may be natural language prompts or other non-code and non-traditional inputs.”

Investopedia

The simulation of human intelligence by software-coded heuristics

ChatGPT

AI, or Artificial Intelligence, refers to the simulation of human intelligence processes by machines, particularly computer systems. These processes include learning, reasoning, problem-solving, perception, and language understanding. AI technologies aim to enable computers and machines to perform tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, making decisions, and adapting to new information.

Jim Sinur


"AI is the leverage of software and machines to add perception/intelligence to individuals, customer/constituent experiences, processes/tasks and devices to optimize balanced outcomes by interpreting patterns of interest, making highly informed decisions with speed, and taking appropriate proactive or reactive actions considering wide and deep implications all within the context of changing conditions and governance guardrails."

Net; Net:

It is essential to know what kind of AI you are buying. Machine intelligence or generative AI is often represented as the only and most advanced AI. Ensure you understand the AI you are buying or building, as many technologies participate in the AI disciplines. It may mean understanding the multiple streams of AI contributing to a solution you may buy or build. It influences what problems you are trying to solve, your testing/debugging, and where it can go off the rails and get into trouble in places you don’t anticipate. If you want a picture of where AI has come from or where it is likely to head, please click here. The Gartner AI Hype Cycle is another good resource. Better yet, you need to define what kind of AI you want to pursue in line with your business objectives and get ahead of AI’s projected core competencies and technologies for the future.

It would be best if you defined AI for yourself continuously as it evolves toward the ideals declared in the general definitions available. AI is just a set of methods, techniques, and tools that will be used for both good and bad outcomes. Remember, AI is not God, nor should it be used by actors to create an artificial god. Humankind will eventually be supercharged with AI. As always, some will fly too close to the sun.


Additional Reading:


Wednesday, October 11, 2023

What Have Peeps Been Reading in the 3Q 2023?

Topics of customer journeys and processes continue to gather interest, but surprisingly, the combination of IoT and process hit the top spot. Of course, AI is peaking because of the momentum of generative AI. Managing new balances with collaboration, balancing new digital efforts with legacy maintenance, and creating the elusive management cockpit for management visibility also gathered interest. Sweden and Canada were the most active offshore countries.

Hot Topics for 3Q2023



Third Quarter 2023 Offshore Activity (non-US & China)




Friday, March 31, 2023

A Ransomware Recovery Maturity Model is a Must


Ransomware is one of the biggest cyber security threats in 2023 and seriously threatens businesses of all sizes. Ransomware attacks work by infecting your network and locking down your data and computer systems until a ransom is paid to the hacker. A user or organization's critical data is encrypted, so they cannot access files, databases, or applications. A ransom is then demanded to provide access and keep data resources from downstream data sales. Ransomware is often designed to spread across a network and target database and file servers and can thus quickly paralyze an entire organization.

The overall amount of damages paid for ransomware attacks in 2021 was around $20 billion, with payouts in 2030 estimated to total approximately $231 billion. That is just the tip of the cost iceberg, because all organizations will pay significant sums of money to defend in depth against Ransomware. Once struck, the time to recover using traditional methods ALWAYS requires far more time and effort than is ever anticipated. According to the IST Ransomware Task Force, the average downtime can be 21 days, with full recovery taking an average of 287 days from the initial ransomware incident response. The threats and costs are growing so fast that Ransomware has risen to the number three concern during this era of critical infrastructure attacks. Gartner says businesses are shoring up their defenses by spending 11% more in 2023. Therefore, a Ransomware Recovery Maturity Model is essential and is becoming part of an overall security effort for covering and recovering from threats and attacks.


 

Figure 1 Ransomware Recovery Maturity Model

The Dangers

As cybercrime escalates, the dangers and costs increase dramatically. It may not be apparent, but adversaries are stockpiling your vulnerabilities. Once made public, there can be a feeding frenzy. A growing number of threats from various sources and kinds of attacks should concern businesses. There is now a sophisticated and growing ecosystem of harmful sources, including:

· Corporate Gangs/Mafia

· Developers

· Access Brokers

· Competitive Forums

· Affiliates

· Crypto Brokers/Money Launderers

· Dark Public Relations

Today, Ransomware is plenty sophisticated, with not only lockdowns of data but also the selling of exfiltrated credentials, data, and even direct access to data and systems. The bad actors are stealing from accounts, committing personal extortion, hacking for hire, and selling sensitive customer/lead data. They use various methods and techniques, including:

· Installing Adware

· Crypto mining

· Credential Theft

· Launching Attacks

· Sending Spam Emails

· Creating Proxy Sites

· Resource Renting

Ransomware of the future intends to maximize the haul, optimizing the revenue per event and victim by leveraging advanced automation and intelligent bots that can swarm to opportunities.

Why a Ransomware Recovery Maturity Model?

Ransomware is rising to the point of being a ubiquitous threat, morphing to become more lethal by the day. A growing Ransomware Ecosystem makes the perpetrators seem like a regular organization: bad actors release press releases to put a veneer on top of the gangs, bribers, opportunistic developers, and brokers. These bribers are out to take your money, so laying down strategies and tactics is undoubtedly worth the time and money. If they can't bribe your organization, they will sell your data for profit, or even do both, trying to maximize their profit per victim. The model above (Figure 1) lays out the progressive steps toward reactively or proactively dealing with Ransomware. The model can be used as a standard classification of ransomware protection efforts while evaluating ransomware software and service providers, becoming a gauge for protection levels.

It is essential to visualize the efforts that can be taken to head off the inevitable attacks or sneaky events. Ransomware is the fastest-growing vulnerability associated with cybersecurity and deserves its own set of detection techniques, proven faster reactive approaches, and proactive steps for evolving assurances. Organizations need to have a plan to deal with this growing menace. A ransomware maturity model overlaid over a well-accepted and established security model is presented here. While security gets significant attention and investment from top management in most organizations, Ransomware has not. The model phases below outline the necessary maturity steps in dealing with Ransomware.

What are the Standard Maturity Levels?

Aware

Aware is the level where management realizes that ransomware is an issue that needs action. Security folks recognize that bad actors start small and low-risk, then accelerate and expand. Bad actors see a compromised victim as a growing bag of money to tap and can't be trusted once the ransom is paid. Sometimes they steal data and credentials to sell later. Later, they often crypto-mine and install adware. In some cases, they use an advanced attack to steal money or leverage a campaign to phish trusted partners or customers. Education is the key to awareness even as nasty new twists emerge, but data is the essential asset under attack.

Active

There needs to be a commitment to detection and recovery that protects people, processes, and data. Active effort puts up some resistance and foils some simple, early attacks. A vital defensive action here is informing your people and notifying constituents to watch out for phishing attacks that open holes in the security perimeter. It means better-communicated policies to mitigate social engineering attacks that entice people to open emails and links, creating a gateway for further malicious actions. Multi-factor authentication is a typical response. It may also mean teaching users to spot rogue URLs.
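To give a feel for the kind of rogue-URL spotting this level calls for, here is a minimal Python sketch of a few common phishing heuristics. The trusted-domain list and the rules themselves are illustrative assumptions, not a vetted detector:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains the organization trusts.
TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}

# A few TLDs frequently abused in phishing campaigns (illustrative).
SUSPICIOUS_TLDS = {".zip", ".xyz", ".top"}

def looks_rogue(url: str) -> bool:
    """Flag URLs that use common phishing tricks.

    Heuristics only -- a real deployment would layer reputation
    feeds and ML scoring on top of checks like these.
    """
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # Raw IP addresses instead of names are a classic phishing sign.
    if host.replace(".", "").isdigit():
        return True
    # Punycode hosts can disguise look-alike characters.
    if host.startswith("xn--") or ".xn--" in host:
        return True
    # An '@' in the authority hides the real destination after it.
    if "@" in parsed.netloc:
        return True
    # A trusted brand name buried inside another site's hostname.
    if any(d in host and not host.endswith(d) for d in TRUSTED_DOMAINS):
        return True
    return any(host.endswith(tld) for tld in SUSPICIOUS_TLDS)
```

A check like this is cheap enough to run in a mail gateway or a browser plug-in, which is where user education and tooling meet.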

Operational

Operational is where there is a concerted effort to put good practices into place that make it hard for ransomware perpetrators to cash in. It means focusing on understanding the risky areas of your organization's assets. There needs to be a repeating process for classifying data and processes by their risk level to the organization. Risk analysis and prioritization are vital ongoing efforts. Organizations must assume they have already been infected and look for dormant attachments to patches and other code parasites. Key data sources must be clean before backups can be trusted, which means data changes must be tracked and analyzed. Once the data is clean, mass data restoration procedures must be in place.
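Tracking data changes so that backups can be trusted can start as simply as a hash manifest. The sketch below is an illustrative Python example, not a production integrity monitor: it records a SHA-256 hash per file and reports anything that differs from a baseline.

```python
import hashlib
from pathlib import Path

def build_manifest(root: str) -> dict:
    """Record a SHA-256 hash for every file under `root`."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            manifest[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return manifest

def changed_files(baseline: dict, current: dict) -> set:
    """Files added, removed, or altered since the baseline manifest."""
    keys = set(baseline) | set(current)
    return {k for k in keys if baseline.get(k) != current.get(k)}
```

Comparing the manifest of a candidate backup against a known-clean baseline is one simple way to decide whether a restore point can be trusted.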

Managed

Managed is where the efforts turn to early detection, focus, and isolation. Detection moves from batch to real time. Intrusions are found early, and affected data is isolated whenever possible to prevent the infection from spreading. Isolation allows for a more focused recovery that optimizes speed to restoration. Even if isolation is not possible, automation of the recovery process should be established. Knowing that a clean backup is available, closely in sync with current operations, allows for automated mass recovery at a minimum, or focused recovery ideally. It makes data defense and protection a cornerstone of the response to ransomware.
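One way to picture moving detection from batch to real time: ransomware mass-encryption shows up as a sudden burst of file modifications. The Python sketch below (thresholds are illustrative, not tuned) flags a host for isolation when the modification rate in a sliding window exceeds a limit:

```python
from collections import deque

class EncryptionBurstDetector:
    """Flag a host for isolation when file modifications exceed a
    rate threshold -- a crude stand-in for real-time ransomware
    detection (the defaults here are illustrative)."""

    def __init__(self, max_events: int = 100, window_secs: float = 10.0):
        self.max_events = max_events
        self.window_secs = window_secs
        self.events = deque()

    def record_modification(self, timestamp: float) -> bool:
        """Return True when the host should be isolated."""
        self.events.append(timestamp)
        # Drop events that have aged out of the sliding window.
        cutoff = timestamp - self.window_secs
        while self.events and self.events[0] < cutoff:
            self.events.popleft()
        return len(self.events) > self.max_events
```

In practice the "isolate" signal would trigger network quarantine of the host, which is exactly the focused containment this maturity level aims for.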

Optimized

The next step in the maturity model makes this automation smarter and closer to self-healing. Recovery happens without human intervention, except for notification that it has occurred. It means that AI and analytics are used to detect cyberattacks in progress, respond to threats intelligently, and eventually enable bots that detect advanced malware. It now becomes "good bots vs. bad bots."
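A taste of the statistical side of such detection: the sketch below flags telemetry readings that deviate sharply from the norm. It is a deliberately simple stand-in; real AI-driven detection would use far richer models and features.

```python
import statistics

def anomaly_indices(history, threshold=3.0):
    """Return indices of observations more than `threshold` standard
    deviations from the mean of the series -- the simplest statistical
    stand-in for the AI-driven detection described above."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(history)
            if abs(x - mean) / stdev > threshold]
```

Fed with metrics like I/O rates or outbound traffic volumes, even a check this simple can trigger the notifications that a self-healing pipeline would act on.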

Net; Net:

A ransomware maturity model is necessary to determine the level of protection and understand what is being done to avoid paying the bad guys. The maturity model also serves as a guide for the ransomware-protection journey, giving directions and guideposts so that progress can be shown and understood in context. Ensure your ransomware technology and service providers subscribe to a maturity model to track progress toward better protection. It is an escalating war that needs constant tuning. Organizations can't wait to be attacked, as the probability of being hit by a ransomware event grows by the day. It's not just the crooks: wars and rumors of wars generate cyberattacks that may include payoffs. Getting ahead of these attacks is crucial, which means spending more time and effort upfront to defend, detect, and data-proof your organization. Hiring an experienced set of services or buying the right software is wise.

Additional Resources:

CIS Controls

Blog Posts 

Sample Vendors


Tuesday, January 18, 2022

2022 Top 5 Technical Trends

It is safe to say that organizations will focus on assured success in 2022. While the allure of new digital solutions and the temptation of true transformation will still call, organizations will stick with what works. That does not mean organizations will not innovate; it means innovation will be undertaken with wisdom while staying congruent with the Top 5 Business Trends in 2022 (click here for more information). Part of that wisdom will be keeping an eye out for emergent situations and technologies that could derail these focused efforts, which are highly synched to stakeholder and executive directions. Organizations will double down on successful technologies and expand their uses while keeping a watchful eye on technical innovation and experimentation.



Automation Will Pay the Bills Now


All the focus on hyper-automation is starting to pay off. Organizations are getting substantial benefits from newer automation approaches. Consequently, there will be additional bets on combinations of technologies that deliver the best returns. These returns will not only contribute to the bottom line for current earnings; they will also help fund any new tech efforts that management deems essential to compete. The silo technologies coming together to deliver great automation include process/workflow, RPA, process/data mining, business-led low code, monitoring, simulation, mapping, and analytics. A variety of combinations have compelling case studies, and many organizations have had significant successes. Putting together a portfolio of initiatives that deliver savings and opportunities to support business directives is a must for 2022. Having a platform of integrated technologies is a big help in providing the benefits of technical combinations. Click here to see example combinations and vendors with proven success as a Digital Business Platform (DBP).

AI Drives Deeper and Expands Current Roles


AI has proven its value in learning from data, and that will continue to gather steam, focused in and around desired business outcomes. When combined with analytic and statistical models, AI can move into more thinking situations on top of the detection and pattern-recognition duties AI is known for today. The data sources for detection mining will expand beyond traditional data to include images, videos, voice, and communications. The kinds of thinking situations AI can move into in the short term include knowledge acquisition/leverage, modeling, projections, and autonomous actions on emergent situations. Conversational and explainable AI will make substantial headway in 2022, building on existing success. A new movement in AI will revolve around intelligent chatbots, smart automation bots, and intelligent applications. As AI ethics mature, AI interactions with our employees and customers will become routine.

Employee-Focused Technology Gathers Steam


With the emphasis on hybrid work, organizations have changed how work is accomplished, both for the operational support of business activity and for the projects meant to improve operations or tactics while staying on top of business directives. It’s safe to say that we are all finding plusses and minuses of working independently. Suffice it to say that we have to make remote work easier for the workers first, because of the skills shifts and a supply-and-demand equation that favors employees. Right now, workers are matrixed to multiple bosses at any given time and have to deal with many communication channels and a lot of noisy communication. It's not easy to sort through the mass of collaborations because it’s not always clear what task supports what priority directive; consequently, employees are confused and frustrated. In addition, leaders do not have enough visibility into behavior, so they don't know if their assignments are progressing. All of this can be helped in 2022 with better or new results-oriented collaboration tools tied to real progress. Investing in employees is a crucial differentiator for 2022, and employees will smell insincere actions and vote with their feet.

Decision Focus Drives a Better Data Mesh for Key Analytics


Managers will be making more integrative decisions that require more detailed data, often sifted by AI, and need a lateral view that looks for implications across multiple contexts. This will drive two significant activities over and above the resurgent analytic sectors. One is integrated visibility that looks across the organization and even outside it. Often there will be an integrated monitoring or management cockpit that visualizes results, notifies managers of significant detections, and allows them to try different alternatives, leveraging prediction, simulation, and various analytical models to take appropriate and quick action. The simple decisions will eventually get automated, and autopilot actions will be suggested or enacted. The other is establishing, growing, and managing a data mesh. All of this depends entirely on an intelligent data mesh that knows where the data is and the level of quality of that data, regardless of data type (operational databases, behavioral data, voice, or video) and no matter where it resides. This huge vacuum is being filled as we speak by emergent data management software that catalogs and reaches into various sources (cloud or not), notifying managers of data quality scores.
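A minimal sketch of what such a catalog might track, with hypothetical asset names and quality scores assumed to come from profiling checks:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    """One entry in a minimal data-mesh catalog (illustrative fields)."""
    name: str
    location: str          # e.g. "s3://sales/orders" or "onprem:db2"
    kind: str              # "table", "stream", "video", ...
    quality_score: float   # 0.0 - 1.0, produced by profiling checks

class Catalog:
    def __init__(self):
        self.assets = {}

    def register(self, asset: DataAsset):
        self.assets[asset.name] = asset

    def fit_for_decision(self, min_quality: float = 0.8):
        """Assets trustworthy enough to feed a management cockpit,
        best quality first."""
        return sorted(
            (a for a in self.assets.values()
             if a.quality_score >= min_quality),
            key=lambda a: a.quality_score,
            reverse=True,
        )
```

The point of the sketch is the shape of the answer a cockpit needs: where the data lives, what kind it is, and whether its quality clears the bar for a given decision.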
 

Most Executives Ride Maturing Technology


Innovative management will take advantage of new tech as it matures, such as 5G, digital twins/IoT, security mesh, NFTs, and digital currencies. Savvy organizations will try to pick up emergent technologies at the stage appropriate to their risk culture. If you integrate all of Gartner's hype cycles, you can get a good feel for what is mature now and when things might mature for organizations with low risk tolerance. Some organizations proactively pick emergent technology through self-experimentation, others wait for success stories, some wait for competitors to show usage, and others jump on technologies with momentum. Some let the CIO do the dirty work and make suggestions; others push ahead without checking the latest tech, as long as they don't miss a big-push technology like 5G. I would offer a cooperative approach between business and IT that integrates operational and project plans with tech-savvy folks in IT and special folks in the business. Pick at least one in a new category for 2022.

Net; Net:


While growth and innovation with better connections to constituents will dominate organizations' desires in 2022, there will be a strong vein of agile practicality dominating the technical scene. While organizations need to treat customers better and measure their behavior, the employees and partners supporting the organization will also need significant attention in 2022. This means some automation savings will be allocated to investing in the people touching the organization, inside and out. It will require focusing on the use of existing technologies while expanding to new ones. As a result, organizations will have fewer unfocused technology activities in 2022, but managers will keep their eyes open for technologies not to miss.



Wednesday, December 8, 2021

Linking Strategy to Operations

There is a constant balancing act between strategy and operations, and it gets even more challenging during periods of change, or near impossible during significant chaotic events. COVID has brought this issue front and center with the increased speed of decisions and intelligent actions on several fronts. We aren’t through this threat scenario yet, and we can expect more threat and opportunity situations to emerge globally and locally. That puts a premium on superior insights, optimal decisions, and an excellent management overview. It means the links between strategy and operations need to be optimal while allowing for emergent change. The days of the steady course are numbered at worst and only temporary at best. It means that an "insight first" approach, based on delivering "contextual insights" to stakeholders, is needed.


What Are the Links Between Operations Strategy and Business Strategy?

The business strategy is the overall business vision, looking further ahead and anticipating the direction the business wants over a long period. The operations strategy provides a plan for the operational functions to make the best use of an organization's resources. The operations strategy must therefore be aligned with the business strategy to enable the company to achieve its long-term plan. In addition, operations must be agile enough to support strategy changes while feeding back monitoring information that might indicate trends implying a change to the current strategy.

Example Checklist for Methods Linking Strategy to Operations

The recipe for linking strategy to operations has some essential ingredients that must be put together to support optimal results in several changing contexts. First, the connection from "what" to "how" is critical to keep optimal and ready for change. Ideally, each company should have an integrated method and supporting toolset. The methodology must link the what and the how in a well-woven way across organizational stovepipes and business boundaries.

WHAT FACTORS:

Vision and Mission

Strategy / Scenarios

Critical Success Factors

Risks / Patterns / Events


HOW FACTORS:

Policy / Rules / Boundaries

Objectives / Goals

Projects / Initiatives / Milestones

Processes / Orchestrations

Organization / Partners



Example Checklist of Functionality in Tools Linking Strategy to Operations

Supporting a comprehensive methodology that links strategy and operations and delivers results should be a tool suite, generally a digital business platform, focused on results, change opportunities, and change management. This platform/tool suite should link and integrate many functions and features that deliver business outcomes. The highly integrated platform has to work seamlessly with the method across functional silos and help focus participants on results. A management cockpit often visualizes the results, building a base for automated management functionality on top of this integrated platform.

Strategy Planning Features

Strategy Performance Reporting

Balanced Scorecards

Operational Performance

Contextual Analytics

BI Dashboards / Fast boards

Collaboration Features Organized by Results

Real-Time Chat That is Visible to All

Audit Trails by Data Point

Action Plans Linked to Stakeholders

Integrated Risk Management

Integrated Process Management

Integrated Low Code Features

Integrated Process Mining


Net; Net:


There is a delicate balance between operations and strategy. The obvious and traditional approach is to lay out a plan and implement it optimally at the operational level. That works well in periods of stability, when active monitoring points to further optimization. However, in a more emergent world of change, operations can surface emergent signals and patterns that suggest a need for strategic change. A solid methodology linked with an integrated platform allows for both proactive and reactive strategy changes while monitoring the effect of major and minor changes. Organizations should pursue this balance with the help of an integrated method supported by a corporate performance digital platform that embraces the management cockpit.

 
Please help with a survey on management cockpits by clicking here. If you leave an email address, you will receive a summary of the results in February. Please be patient with the initial screen and use your down arrow on the drop-down selections.

Additional Reading:

Frictionless Management

Real-Time Fast Boards

Management Cockpits

Real-Time Strategy

Management by Wire

AutoPilot Management


Thursday, June 3, 2021

What’s Driving Data-Intensive Applications?

Today and in the foreseeable future, huge waves of data-intensive applications are breaking over us, with more waves to come.

It’s not just the data volume, often referred to as "Big Data" or "Monster Data," that pushes opportunities toward organizations. It’s also the demand-pull of applications, processes, and journeys growing in importance for organizations to compete. These data sources are often measured in terabytes or petabytes, but “being large” is just the obvious, in-your-face description of what comprises a data-intensive application. In these apps, the data is commonly persisted in several formats, distributed across many locations, and must be cared for in various ways for organizations to flourish. Coping mechanisms will be described in future posts; this post identifies the drivers of data-intensive applications.


The Demand-Pull Drivers are Data Hungry

Because of the pressures on organizations to expand their views on the scope and impact of applications, there is considerable demand for more data as focused, simplistic applications transition to intelligent, large-span applications. In addition, the speed to detect emergent signals, events, and patterns is ever-increasing, putting pressure on follow-up decisions and appropriate actions. It’s much like a fighter plane that has to make decisions and take action in seconds; however, management is used to working in days, weeks, or months.

Moving from Dashboards to Fast Boards

Today's organizations need to anticipate critical patterns to intercept opportunities and threats, at ever-greater speeds, to make decisions and take appropriate actions. Some organizations are crafting technical sentinels that sit at the edge to sense and sometimes respond, if given the freedom to do so.

Excellence with Management Cockpits

The idea of people watching many individual dashboards/fast boards and integrating their contexts at speed is a somewhat unrealistic expectation. Minimally, these need to be brought together into a management cockpit to grok the intersections of the visible measures. These measures range from KPIs to out-of-tolerance situations. Eventually, the management cockpit will be assisted by bots/agents that notify management of threats and opportunities. Ideally, these management cockpits could help in a "fly-by-wire" fashion within practiced business scenarios.

Decision Management and Assistance

Besides the speed to sense, decide, and respond, data has to be available to venture into new contexts, to aid the decision-making process, and to play out the ramifications of any action about to be taken. Operational adjustments may require simple tuning or kick off other individual efforts. Tactical moves require new versions of rules and critical adjustments to guardrails and constraints resulting from decisions. Strategic moves require some form of advanced analytics and potentially gaming alternatives through a management cockpit.

Value Chain & Supply Chain Extensions

Today, an awakening is occurring that requires knocking down organizational and skill walls to eliminate silo thinking and actions. There is a race to kill silos in value-chain and supply-chain situations, encouraged by businesses partnering to produce products or services. There is a premium on innovative collaboration that crosses all kinds of boundaries to achieve overarching goals and results while satisfying individual organizational units at the same time. The goals, rules, policies, and constraints need to be tweaked simultaneously during operations in the middle of changing conditions.

Supporting Journeys

There is a considerable push to define constituent journeys, especially customer journeys, which are often integrated with employee/support journeys. Journeys require an outside-in perspective and more data to represent the specific goals of the personas and individuals interacting with an organization. The customer experience is tracked, measured, and recorded with sentiment data, often drawn from voice interactions, whether live via a representative or through chatbots. Data around loyalty and satisfaction proliferates with an outside-in journey perspective.

 

The Data Push Drivers Overflow

Data offers opportunities in its new forms, amounts, locations, and captured contexts. Until now, the "Big Data" headline has been driven mostly by the volume story. That is about to change with the new data types and formats entering the organization. In addition, there is a new generation of distributed data types, and a movement that says views can be constructed no matter where the data resides at the moment. While location complicates data quality and compatibility issues, there are alternative ways to cope, with new tolerances for perfection depending on the usage described in the demand-pull section above.

 

Voice Data

Voice is a key new tributary for organizations to tap, offering tempting leverage for competitive advantage, particularly in servicing processes/applications. Voice can be analyzed to see how often competitors are mentioned in calls. It can also be analyzed for emotional reactions in the context of the servicing experience. It is often helpful in unscripted situations, which frequently occur with skill-specialized service representatives. Now it’s not just NPS scores that count for customer satisfaction measurements.

Image Data

Images can be helpful when brought up in context. Not only can physical plant layouts and machinery be checked for safety purposes through image analysis, but broken machinery can also be detected before a significant or cascading problem occurs. Not only can out-of-bounds situations be seen, but optimal real-time planning can be enhanced by image detection. Real-time image help is a must for some jobs.

Video Data

Videos can be leveraged for better productivity, such as optimizing worker movement for better quality and faster processing. Video can be used to identify resources in action, such as people and machinery, for various kinds of operational optimization and training opportunities. Imagine showing an inexperienced worker a video of how to service a particular component, such as a pump, captured while it was in operation: self-paced, on-demand training without asking experienced workers for help.

Edge Computing Data

Data can be detected at the edge before it hits mainstream processes/applications. An unexpected event can trigger notification of an emergent set of conditions or patterns for real-time decisions or actions. With the proper freedom levels, goals, and guardrails, this reporting creates a bevy of data for each node at the edge, whether a sentinel or an actor.
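A sense-report-act edge sentinel can be sketched in a few lines of Python; the goal range and the "action" taken here are illustrative assumptions, not a real device protocol:

```python
def edge_sentinel(readings, goal_range=(10.0, 80.0), act=False):
    """Minimal edge-node sketch: sense readings, report out-of-bounds
    events upstream, and (optionally) act within its freedom level.

    Returns (events, actions_taken), where events is a list of
    (index, value) pairs for the out-of-bounds readings.
    """
    low, high = goal_range
    events, actions = [], 0
    for i, value in enumerate(readings):
        if value < low or value > high:
            events.append((i, value))   # notify upstream systems
            if act:
                actions += 1            # e.g. throttle a device locally
    return events, actions
```

Every notification and local action becomes a data point of its own, which is exactly how edge nodes end up feeding the data-intensive applications behind them.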

Distributed Meta-data

The data about the data is often called metadata. With the sophistication of data storage and state combinations, metadata is important for properly managing distributed data sources. In addition to the physical state of data, such as its location (on-premise or cloud storage), its state of meaning, transformation, source, and context can be managed with metadata.

Net; Net:

There are so many trends contributing to data-intensive applications that there will likely be another new one later today. New ways to manage traditional and new coping mechanisms will emerge over the next few years. I'm betting that both the demand-pull and the data push drivers will only accelerate. We are in the middle of a massive revolution for managing data differently. Be aware and get ready - data-intensive applications are coming for your organization.



Tuesday, May 11, 2021

Speed, Scale & Agility Delivered with Distributed Joins

Organizations are driving toward faster decisions and actions across wider-ranging data sources than ever. Broader scope means multiple data sites for business reasons alone. A distributed join is a query operator that combines two relations stored at different locations. Because cloud-based distributed databases create many more data storage sites, the trend toward distributed joins is strong. The implication is that there will be many more distributed joins in your future. This situation puts a premium on handling larger, broader scales of data and on dynamic join capabilities.


Why the Move to Distributed Databases?

We all know that distributed databases allow local users or bots to manage and access the data in local databases while providing global data management that gives global users a global view of the data. Because distributed databases store data across multiple computers, they may improve performance at end-user worksites by allowing transactions to be processed on many machines instead of being limited to one. Well-tuned distributed databases can support business transactions plus analytics-driven business strategy and tactics. The drive to the cloud leveraging incremental relocation, and more operations occurring at the edge with intelligent automation, all feed the distributed database trend.

Advantages of Distributed Databases

Distributed databases provide some real benefits in the agile world, typically falling into these four categories:

·        Better Transparency: Users have the freedom from the operational details of the network, the replication (multiple copies of the data), or fragmentation issues in the data.

·        Increased Reliability/Availability: Because data can be distributed over many sites, one site can fail, and the data usage can continue.

·        Easier Expansion: The expansion of the system in adding more data sources, increasing data size, or adding more processors is much easier.

·        Improved Performance: A distributed DBMS can achieve interquery parallelism by executing multiple queries at different sites, and intraquery parallelism by breaking a query into several subqueries that run in parallel.

Distributed Joins 

To make distributed joins scalable for high-throughput workloads, it’s best to avoid data movement as much as possible. Some options for doing this are:

·        Make small and rarely updated tables that you regularly join against into reference tables, thus avoiding broadcasting these small tables around.

·        Try to choose shard key columns that are commonly joined upon. This approach promotes local joins, minimizing data movement and enabling parallel joins.

·        Try to restrict the number of rows in joins that cause any of the joined tables to reshuffle.
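The co-partitioning idea behind these options can be sketched in Python: hash-partition both tables on the join key so each "node" joins only its local shards, and no rows move between nodes at join time. This is a toy model of the concept, not a real distributed engine:

```python
from collections import defaultdict

def partition(rows, key, n_nodes):
    """Hash-partition rows across nodes by the join key
    (co-partitioning both tables the same way)."""
    nodes = [[] for _ in range(n_nodes)]
    for row in rows:
        nodes[hash(row[key]) % n_nodes].append(row)
    return nodes

def distributed_join(left, right, key, n_nodes=4):
    """Join two tables hash-partitioned on the same key: each node
    performs a local hash join on its own shards, so no rows need
    to be reshuffled between nodes at join time."""
    left_parts = partition(left, key, n_nodes)
    right_parts = partition(right, key, n_nodes)
    result = []
    for lpart, rpart in zip(left_parts, right_parts):  # one local join per node
        index = defaultdict(list)       # build side of the local hash join
        for r in rpart:
            index[r[key]].append(r)
        for l in lpart:                 # probe side
            for r in index[l[key]]:
                result.append({**l, **r})
    return result
```

When the two tables are not partitioned on the same key, a real engine must reshuffle or broadcast one of them first, which is exactly the data movement the bullets above try to design away.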

Net; Net:

Most users of SQL databases have a good understanding of the join algorithms in a single-server environment. They understand the trade-offs and uses of nested loop joins and hash joins. Distributed join algorithms tend not to be as well understood and require a much different set of trade-offs to account for table data spread among a cluster of machines. The data movement trade-offs are key here, so designing for them in the user views and the joins they imply is crucial. It was once thought that you could not cost-effectively scale distributed relational databases, or in other words, have a scale-out relational database. This is now possible, and this type of modern database is table stakes. Modern databases are distributed-native and also combine NoSQL and SQL data access patterns, thus reducing the need for special-purpose datastores.