We are now seeing trucks that drive themselves with human overrides. We are also seeing portable ATMs that drive themselves to you for money based on a mobile summons. Processes will take on the same characteristics, with the ability to self-compile in real time by setting goals, constraints, and so on, on the fly. I call this a "Just in Time Swarm."

Process managers, builders, and participants will have agents that they want or don't want involved in "the Swarm." How will those people determine proper swarming? I would like to see a rating system eventually, but until then, how about understanding where system and people components play best? For the best swarm, the agents need to seek the goals mentioned in my last post, and that implies a certain level of intelligence and the ability to collaborate while seeking goals.
I would like to suggest the following set of scales depicted in the figure below:
Ability to Seek a Set of Changing Goals without Collisions:
If an agent has no ability to respond to goals set outside of itself (explicit goals), it probably does not play well in the swarm world. If it can seek goals set outside of itself and then adjust to dynamic changes in those goals, it has a play here. It must also seek goals without causing collisions with other agents, whether they are physical in representation (robots, for instance) or other software agents. This scale is represented by the gold arrow (vector).
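To make the idea concrete, here is a minimal sketch of goal-seeking with collision avoidance. The 2D grid, the one-cell-per-step movement, and the agent names are illustrative assumptions, not a prescribed design; the point is that the goal is set (and changed) from outside the agent, and the agent never steps onto a cell another agent occupies.

```python
from dataclasses import dataclass

@dataclass
class SwarmAgent:
    name: str
    pos: tuple   # current (x, y) cell
    goal: tuple  # (x, y) goal, set and updated from outside the agent

    def set_goal(self, goal):
        """Explicit goal set outside the agent; it may change at any time."""
        self.goal = goal

    def step(self, occupied):
        """Move one cell toward the current goal, never onto an occupied cell."""
        x, y = self.pos
        gx, gy = self.goal
        dx = (gx > x) - (gx < x)  # -1, 0, or +1 toward the goal
        dy = (gy > y) - (gy < y)
        candidates = []
        if dx:
            candidates.append((x + dx, y))
        if dy:
            candidates.append((x, y + dy))
        for nxt in candidates:
            if nxt not in occupied:  # collision avoidance
                self.pos = nxt
                return self.pos
        return self.pos  # blocked, or already at the goal: stay put

# Usage: two agents share the grid; one goal changes mid-run.
a = SwarmAgent("a", (0, 0), (3, 0))
b = SwarmAgent("b", (2, 0), (2, 3))
for _ in range(5):
    a.step({b.pos})
    b.step({a.pos})
a.set_goal((0, 3))  # dynamic goal change, set from outside
a.step({b.pos})
```

An agent with no `set_goal` path at the top of this sketch is the kind that "does not play well in the swarm world."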
Ability to Collaborate with Other Swarm Participants:
If an agent can cooperate with other agents (people or things), it is ideal for a swarming situation. This means that an agent can cooperate while it seeks goals. This is especially important if an agent has a specialty that it needs to complete and communicate status on as the swarm makes progress on the goals. This scale is represented in green.
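The "communicate status while working a specialty" part can be sketched as agents publishing to a shared channel the rest of the swarm reads. The queue-based bus, agent names, and status values below are illustrative assumptions only.

```python
import queue

# Shared channel any swarm participant can read status from.
status_bus = queue.Queue()

def report(agent, specialty, state):
    """Publish this agent's progress on its specialty to the swarm."""
    status_bus.put({"agent": agent, "specialty": specialty, "state": state})

# Each agent works its own specialty and reports as the swarm progresses.
report("truck-1", "haul", "en-route")
report("atm-7", "dispense", "ready")

# Any participant (or a process manager) can consume the status stream.
while not status_bus.empty():
    print(status_bus.get())
```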
Ability to Have Enough Intelligence to Participate:
If an agent can only complete a simple task, it can certainly be leveraged as a black box. However, if it has the ability to induce or deduce based on knowledge, information, conditions, or patterns, the value of the agent to "the swarm" goes up. Knowing the relative intelligence of the agent is crucial. Can it think? If the agent is a human who is not trained, this may create a dilemma where an intelligent agent in the cloud can step in (a cognitive agent, AKA a COG). This scale is represented in blue.
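The fallback described here can be sketched as a simple routing rule: if the human agent's training does not cover the task, a cloud-based COG steps in. The numeric training levels, threshold comparison, and agent records are illustrative assumptions.

```python
def pick_agent(task_level, human, cog):
    """Route a task to the agent whose training/intelligence covers it."""
    if human["trained_level"] >= task_level:
        return human["name"]
    return cog["name"]  # cloud COG steps in for the untrained human

# Hypothetical agents: a field human and a cloud cognitive agent (COG).
human = {"name": "field-operator", "trained_level": 3}
cog = {"name": "cloud-cog", "trained_level": 9}

print(pick_agent(2, human, cog))  # field-operator
print(pick_agent(7, human, cog))  # cloud-cog
```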
Ability to Interact with Humans in Proper Cultural Context:
Certain agents were designed to swarm with other agents that are not human. Humans require more sophisticated interfaces that are sensitive to who the person is, where they are, and what conditions they face at the moment. How much humanity is necessary to complete the swarm actions is crucial to the desired results. This scale is represented in red.
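Taken together, the four scales amount to a profile an agent could be rated on. Until a formal rating system exists, the sketch below shows one way to represent it; the 0-10 range and the minimum-threshold check are illustrative assumptions, not a prescribed scheme.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    name: str
    goal_seeking: int   # gold: seek changing goals without collisions
    collaboration: int  # green: cooperate with other swarm participants
    intelligence: int   # blue: induce/deduce vs. black-box task only
    human_context: int  # red: interact with humans in cultural context

    def fits_swarm(self, minimums):
        """True if the agent meets each scale's minimum for a given swarm."""
        scales = {
            "goal_seeking": self.goal_seeking,
            "collaboration": self.collaboration,
            "intelligence": self.intelligence,
            "human_context": self.human_context,
        }
        return all(scales[k] >= v for k, v in minimums.items())

# Usage: a delivery robot swarming with other robots needs little
# human-context ability; a customer-facing swarm demands much more.
robot = AgentProfile("delivery-bot", goal_seeking=8, collaboration=7,
                     intelligence=4, human_context=2)
print(robot.fits_swarm({"goal_seeking": 6, "collaboration": 5}))  # True
print(robot.fits_swarm({"human_context": 7}))                     # False
```

Different swarms would set different minimums on each scale, which is how "where system and people components play best" could be made explicit.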
As we move from non-differentiating components necessary for standardized processing to self-directed, swarming processes that leverage several kinds of agents in real time, our level of understanding will have to expand in order to leverage and build them.
Additional Reading on Swarming Processes: