“All our knowledge has its origins in our perceptions.”
— Leonardo da Vinci
Cerebral Process
The conceptual models derived from Merger-1 are implemented in the first cognitive simulation experiment as thought-simulation 1 (see below).
Figure X illustrates the parameters of the model environment and agents. The environment (Є) contains three elements (E1, E2, E3). Under each of the three defined conditions in Є, each element has access to four distinct stratagems, yielding a total strategy space of 36 stratagems for the miniature swarm (3 elements × 3 conditions × 4 stratagems).
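For concreteness, this strategy space can be enumerated directly. The snippet below is a minimal sketch, assuming the elements, conditions, and stratagems are simply labeled by index; the labeling scheme is illustrative and not part of the model.

```python
from itertools import product

# Hypothetical labels for the miniature swarm described above.
ELEMENTS = ["E1", "E2", "E3"]
CONDITIONS = [1, 2, 3]
STRATAGEMS_PER_CONDITION = 4

# Each (element, condition) pair exposes four distinct stratagems.
strategy_space = {
    (element, condition): [
        f"{element}-c{condition}-s{k}" for k in range(1, STRATAGEMS_PER_CONDITION + 1)
    ]
    for element, condition in product(ELEMENTS, CONDITIONS)
}

total = sum(len(stratagems) for stratagems in strategy_space.values())
print(total)  # 3 elements x 3 conditions x 4 stratagems = 36
```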
Figure Y depicts the functions of the Information Retrieval Network (I.R.N.) within the swarm. The I.R.N. enables stigmergic communication between agents, provides contextual framing of the globally-perceived situation, represents collective memory of collaborative behaviors, and identifies coherent purposes for the swarm elements.
Figure Z shows the internal cognitive processes and highlights the I.R.N.’s role in formulating stratagems or tactics. It expands upon Figure Y to link the I.R.N. to the broader emergent phenomenon, connecting the deterministic assertions to the underlying cerebral mechanisms.
Key points restated from Merger-1:
- Individual element performance mediated through the I.R.N. corresponds to memory limited to external stimuli. Self-enforced collaboration mediated through the I.R.N. corresponds to memory encompassing both external stimuli and inputs from other cognitive elements.
- Macroscopic View: Swarm elements are homogeneous and have predefined stratagems for conditions 1, 2, and 3. Elements coordinate strategies by combining predefined stratagems, though the communication method is unspecified. The resulting strategies enable global emergent phenomena. All possible strategies are presumed to produce identical global emergence.
- Microscopic View: Action stratagems are generated via perception, analysis, and action. Swarm elements are modeled as cerebral units of intelligence. Each element has a frame system, difference engine, and similarity engine operating on conditions 1, 2, and 3 (a minimal sketch of this architecture follows the list).
- Evaluation: Stratagems (or tactics) are predefined in the macro view and formed in the micro view. If all strategies hypothetically yield the same global emergence, indirect coordination of individual stratagems must exist. The coordination mechanism enabling strategy harmony remains unclear, though it should be indirect communication if macro-level determinism holds; otherwise the models would be redundant.
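The microscopic view restated above can also be read as a small data structure. The sketch below is one hypothetical rendering in Python, assuming frames are dictionaries of assignment/marker conditions; every class, attribute, and label name is chosen for illustration rather than taken from the model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# A frame is modeled as a dictionary of assignment/marker conditions; a frame
# system is a collection of frames indexed by the condition (1, 2, or 3) it describes.
Frame = Dict[str, str]
FrameSystem = Dict[int, List[Frame]]

@dataclass
class CerebralElement:
    """One swarm element in the microscopic view: perception feeds analysis, analysis feeds action."""
    frame_system: FrameSystem = field(default_factory=dict)

    def perceive(self, condition: int) -> int:
        # Perception: register which of conditions 1, 2, or 3 is observed.
        return condition

    def analyse(self, condition: int, actual_inputs: Frame) -> Frame:
        # Analysis: the similarity engine picks the stored frame closest to the actual inputs.
        candidates = self.frame_system.get(condition, [{}])
        return min(candidates, key=lambda f: sum(f.get(k) != v for k, v in actual_inputs.items()))

    def act(self, ideal: Frame, actual: Frame) -> List[str]:
        # Action: the difference engine spawns one stratagem per mismatch between ideal and actual.
        return [f"reduce difference on '{slot}'" for slot, value in ideal.items() if actual.get(slot) != value]

# A single element handling condition 2.
element = CerebralElement(frame_system={2: [{"threat": "high", "terrain": "open"}]})
ideal = element.analyse(element.perceive(2), {"threat": "low", "terrain": "open"})
print(element.act(ideal, {"threat": "low", "terrain": "open"}))  # ["reduce difference on 'threat'"]
```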
In Figure A (from Thought-simulation 1), only the elements and their stratagems are shown. This representation can illustrate swarms in general, including swarms of ants, bird flocks, electron swarms, etc. Interpreting Figures X, Y, and Z as coded subsets of Figure A elucidates the I.R.N.’s key role in enabling dynamism throughout the process.
- The I.R.N., or memory, introduces dynamism into the cognitive elements’ cerebral processes, permeating the entire process both conceptually, as described, and in practice as usable memory.
- When examining a continuous event or situation, a society of cognitive elements creates dynamic memory by varying the frames or frame systems held in collective memory (a sketch of this idea follows the list).
- The I.R.N.’s functions and role are clarified through a Perception-Analysis-Action (P.A.A.) process.
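One way to picture this dynamism is as a collective store whose frame systems are varied while a continuous event unfolds, as sketched below. The update rule and all names here are assumptions made for illustration; the model itself does not prescribe how frames are varied.

```python
from collections import defaultdict
from typing import Dict, List

Frame = Dict[str, str]

class CollectiveMemory:
    """A shared I.R.N.-like store whose frame systems vary as new observations arrive."""

    def __init__(self) -> None:
        self.frame_systems: Dict[int, List[Frame]] = defaultdict(list)

    def vary(self, condition: int, observation: Frame) -> None:
        # If an identical frame already exists, nothing changes; otherwise the frame system
        # for this condition is varied by recording the new observation as a frame.
        if observation not in self.frame_systems[condition]:
            self.frame_systems[condition].append(dict(observation))

    def retrieve(self, condition: int) -> List[Frame]:
        # Every cognitive element reads from the same, continually varied frame system.
        return self.frame_systems[condition]

# A continuous event observed by a society of elements, one snapshot at a time.
memory = CollectiveMemory()
for snapshot in [{"wind": "calm"}, {"wind": "rising"}, {"wind": "rising"}, {"wind": "storm"}]:
    memory.vary(condition=2, observation=snapshot)
print(memory.retrieve(2))  # three distinct frames: calm, rising, storm
```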
In cognitive simulation experiment 2 (Thought-simulation 2 above), the barbed wire depicts the I.R.N. interfacing across the entire cerebral process. The powerful role of the I.R.N. is elucidated by outlining the P.A.A. process in stages (a code sketch of the full cycle follows the stages):
- Stage 1: An element perceives condition 1, 2, or 3. The I.R.N. provides a frame system that closely resembles the perceived condition.
- Stage 0: The similarity engine in the element compares the frame system from Stage 1 to the actual inputs received by the difference engine. It examines both the assignment and marker conditions of the frames, predicting the optimal frame for the given condition. The frame whose conditions match most closely is retrieved into the difference engine to furnish ideal inputs. At this stage, the percentage error by which the frame deviates from the actual inputs may redefine the similarity engine’s function: if the resemblance error is unacceptable, the similarity engine loops back into the I.R.N. to process other relevant frame systems until a satisfactory frame with appropriate conditions is retrieved.
- Stage -1: The assignment and marker conditions of the optimal frame provide ideal inputs to the difference engine to generate a goal description countering the actual situation. Finally, reliable stratagems are spawned.
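The three stages can be read as one retrieval-and-matching loop. The sketch below is a minimal rendering under simplifying assumptions that are not in the original: frames are dictionaries of assignment and marker conditions, the resemblance error is the fraction of mismatched slots, the I.R.N. is a plain mapping from conditions to frame systems, and the tolerance threshold is arbitrary. All names are illustrative.

```python
from typing import Dict, List

Frame = Dict[str, str]        # assignment/marker conditions of a single frame
FrameSystem = List[Frame]     # candidate frames the I.R.N. offers for one condition

def resemblance_error(frame: Frame, actual_inputs: Frame) -> float:
    """Fraction of assignment/marker slots in which the frame deviates from the actual inputs."""
    if not actual_inputs:
        return 0.0
    mismatches = sum(1 for slot, value in actual_inputs.items() if frame.get(slot) != value)
    return mismatches / len(actual_inputs)

def perception_analysis_action(
    condition: int,
    actual_inputs: Frame,
    irn: Dict[int, List[FrameSystem]],
    tolerance: float = 0.25,
) -> List[str]:
    # Stage 1: the element perceives a condition; the I.R.N. yields frame systems resembling it.
    best_frame: Frame = {}
    best_error = float("inf")
    for frame_system in irn.get(condition, []):
        # Stage 0: the similarity engine compares each candidate frame to the actual inputs.
        for frame in frame_system:
            error = resemblance_error(frame, actual_inputs)
            if error < best_error:
                best_frame, best_error = frame, error
        if best_error <= tolerance:
            break  # a satisfactory frame was retrieved; stop looping back into the I.R.N.
    # Stage -1: the retrieved frame supplies ideal inputs; the difference engine turns the
    # remaining deltas into a goal description and spawns one stratagem per deviation.
    return [
        f"counter deviation on '{slot}': move '{actual_inputs.get(slot)}' toward '{ideal}'"
        for slot, ideal in best_frame.items()
        if actual_inputs.get(slot) != ideal
    ]

# Example: a two-frame-system I.R.N. and an element perceiving condition 1.
irn = {
    1: [
        [{"terrain": "open", "threat": "low"}],
        [{"terrain": "open", "threat": "high"}],
    ]
}
print(perception_analysis_action(1, {"terrain": "wooded", "threat": "high"}, irn))
# -> ["counter deviation on 'terrain': move 'wooded' toward 'open'"]
```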
Key points restated from Merger-1:
- As explained previously, the difference engine (D.E.) contains specifications for a desired situation and subagents triggered by discrepancies between the desired and actual states. These subagents act to reduce their triggering difference. The D.E.’s capabilities enable actions relative to the delta between desired and actual conditions. The resulting subagents constitute action stratagems that diminish deviations between desired and real settings. The D.E. must hold representations of the desired situation.
- Within this modeling framework, the similarity-difference engine (S.D.E.) integrates into the current architecture. The S.D.E. completes the cycle of dynamically programming decentralized knowledge into the machines. In this context, the similarity engine connects the frame system to the D.E. By selecting an appropriate frame from the available frames, the similarity engine generates a goal description for the D.E. In essence, the similarity engine helps choose a suitable frame from the frame system based on analyzing the actual situation, providing the D.E. with the closest ideal inputs as a goal description (a minimal sketch of this coupling follows).
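The following is a minimal sketch of this difference-engine arrangement, under the assumption that a subagent can be modeled as a callable registered for one slot of the desired-situation description and invoked only when that slot deviates from the actual state; the class and label names are illustrative.

```python
from typing import Callable, Dict, List

State = Dict[str, str]

class DifferenceEngine:
    """Holds a desired-situation description and subagents triggered by deviations from it."""

    def __init__(self, desired: State) -> None:
        self.desired = desired
        self.subagents: Dict[str, Callable[[str, str], str]] = {}

    def register(self, slot: str, subagent: Callable[[str, str], str]) -> None:
        # Each subagent knows how to reduce the difference on exactly one slot.
        self.subagents[slot] = subagent

    def react(self, actual: State) -> List[str]:
        # Compare desired vs. actual and trigger only the subagents whose slot deviates.
        stratagems = []
        for slot, desired_value in self.desired.items():
            actual_value = actual.get(slot, "")
            if actual_value != desired_value and slot in self.subagents:
                stratagems.append(self.subagents[slot](actual_value, desired_value))
        return stratagems

# The similarity engine's role in the S.D.E. coupling is to supply `desired` below,
# i.e. the goal description extracted from the frame it judged closest to the situation.
de = DifferenceEngine(desired={"formation": "cluster", "speed": "slow"})
de.register("formation", lambda actual, goal: f"re-form from {actual} to {goal}")
de.register("speed", lambda actual, goal: f"adjust speed from {actual} to {goal}")
print(de.react({"formation": "scattered", "speed": "slow"}))  # ['re-form from scattered to cluster']
```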
Thought-simulation 2 delineates the cerebral processes underlying each individual element, but it does not address collective behavioral intelligence or stigmergy. The following example illustrates both concepts:
Consider a modified model of the miniature swarm depicted in Thought-simulation 3 (see below), wherein:
- Chaotic environmental conditions are spatially distributed in discrete areas 1, 2, and 3.
- Two elements are confined to non-overlapping circular regions, able to rotate and observe areas 1, 2, or 3.
- Each element perceives a different spatially distributed area – the pink element perceives condition 1, while the blue element perceives condition 3.
- The elements are unaware of each other (a configuration sketch of this setup follows the list).
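The modified setup can be captured as plain configuration data. The sketch below is purely illustrative: the coordinates, radii, and field names are assumptions, and only the relationships stated above (non-overlapping regions, different observed areas, no mutual awareness) are taken from the model.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ConfinedElement:
    """An element restricted to one circular region; it can rotate to observe area 1, 2, or 3."""
    name: str
    region_center: Tuple[float, float]
    region_radius: float
    observed_area: int              # 1, 2, or 3
    aware_of: Optional[str] = None  # None: the elements are unaware of each other

# Chaotic conditions are spatially distributed over discrete areas 1, 2, and 3.
pink = ConfinedElement("pink", region_center=(0.0, 0.0), region_radius=1.0, observed_area=1)
blue = ConfinedElement("blue", region_center=(5.0, 0.0), region_radius=1.0, observed_area=3)

# Non-overlapping regions: the distance between centers exceeds the sum of the radii.
dx = pink.region_center[0] - blue.region_center[0]
dy = pink.region_center[1] - blue.region_center[1]
assert (dx * dx + dy * dy) ** 0.5 > pink.region_radius + blue.region_radius
```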
Now consider a scenario where both elements perceive condition 2, as in Thought-simulation 4 (see below):
- The elements become cognizant of one another and recognize their shared species membership, having encountered analogous situations and learned from similar environments. Biological swarms could not exhibit their observed behaviors without having evolved through developmental stages necessitating cooperation. Ergo, learning in swarms is likely a process coupled to memory.
- The cerebral architecture of the elements contains frameworks intended for indirect cooperation or self-imposed collaboration.
- Cooperation is self-enforced; the cognitive elements involved already comprehend that cooperation amplifies their efficiency.
The cerebral process underlying the collective behavior of a society of elements is expounded as follows; a code sketch of the collective cycle appears after the stages. Refer to Thought-simulations 5 and 6 below:
- Stage 1: Elements perceive conditions 1, 2, or 3, and the I.R.N. furnishes the frame-systems most congruous with those conditions. At this point, the cognitive elements are considered aware of one another; thus, frame-systems supporting either non-cooperative performance or self-enforced collaborative performance are also emitted here.
- Stage 0: Similarity engines within each element compare the frame-systems from Stage 1 to the actual sensory inputs received by the difference engines. They analyze both the assignment and marker stipulations of the frames, predicting the most suitable frames for any given condition. Frames with analogous stipulations are retrieved from the information retrieval network into the difference engines to provide optimal inputs. All frames or frame-systems must complement self-enforced cooperative behavior; chosen frames must be capable of simulating the situation with respect to the anticipated conduct of other elements. At this juncture, the percentage error by which a frame deviates from the actual inputs may recalibrate the function of the similarity engine. If the resemblance error is unacceptable, the similarity engine cycles back to the information retrieval network to process other applicable frame-systems. This process continues until the information retrieval network furnishes frames with appropriate stipulations.
- Stage -1: The assignment and marker stipulations of the furnished frames provide ideal inputs to the difference engines for generating goal-descriptions for collective behavior. In the final phase, reliable stratagems are spawned. Default assignments which offer a technique for circumventing logic may play a significant role here. Default assignments enable the swarm to retrieve an identical pre-recorded experience from “memory”, allowing elements to foresee and anticipate their future collective behavior.
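Under the same simplifying assumptions as the earlier single-element sketch, the collective version can be rendered as every element querying one shared I.R.N. Here a default assignment is modeled, hypothetically, as a pre-recorded cooperative frame returned whenever the elements report mutual awareness, so that all elements retrieve the same frame and spawn compatible stratagems without addressing each other directly. All names and values are illustrative.

```python
from typing import Dict, List

Frame = Dict[str, str]

# One shared collective memory: a cooperative frame per condition, acting as a
# pre-recorded "default assignment" used when elements are aware of one another.
SHARED_IRN: Dict[int, Frame] = {
    2: {"role": "converge-on-area-2", "pace": "matched"},  # hypothetical default assignment
}

def collective_cycle(elements: List[str], condition: int, actual: Frame, aware: bool) -> Dict[str, List[str]]:
    """Stages 1 / 0 / -1 for a society of elements that perceive the same condition."""
    stratagems: Dict[str, List[str]] = {}
    for name in elements:
        # Stage 1: the shared I.R.N. furnishes the frame most congruous with the condition;
        # when the elements are aware of each other, that frame already encodes cooperation.
        frame = SHARED_IRN.get(condition, {}) if aware else {}
        # Stage 0: the similarity check is elided here; the default assignment is accepted as
        # the best match, letting each element anticipate what the others will do.
        # Stage -1: each difference engine turns the remaining deltas into stratagems.
        stratagems[name] = [
            f"{name}: adjust '{slot}' from '{actual.get(slot)}' to '{ideal}'"
            for slot, ideal in frame.items()
            if actual.get(slot) != ideal
        ]
    return stratagems

# Both elements perceive condition 2 and are aware of each other.
print(collective_cycle(["pink", "blue"], 2, {"role": "idle", "pace": "matched"}, aware=True))
# Both retrieve the same frame from collective memory, so their stratagems are compatible
# without any direct message passing; the stigmergy lives inside the shared memory.
```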
Technically, the modification of a difference engine’s goal description in a cerebral unit of intelligence is indirectly administered by the information retrieval network and its frames in the swarm’s memory, but directly retrieved by the similarity engine. Hence, in a homogeneous swarm of elements perceiving the same situation (or reacting to the same stimulus), the information retrieval network provides analogous frame-systems to all elements. This creates the illusion that stigmergy operates beyond cognitive elements, when in fact the frames enabling stigmergy are intrinsic within the elements as mutual (or collective) memory. See thought-simulation 7 below.
The cognitive architecture of cogitative elements has been modeled to incorporate stigmergy. Macroscopic and microscopic perspectives of the miniature swarm achieve a degree of synchronization stemming from a foundation in stigmergy. The analytical framework devised for cogitative elements demonstrates that decentralized decision-making at the microscopic scale can be reproduced using neural structures that leverage distributed logic.
While macroscopic interactions still lack a definitive explanation, the deterministic assumptions established for the miniature swarm remain intact for further analysis. The following blog post introduces game theory as an instrument to explain macroscopic decentralized decision-making in swarms. Three distinct interpretations of Nash equilibrium are presented: the conventional interpretation, the mass-action interpretation (M.A.I.), and the mass action of the mass action interpretation (M.M.A.I.). The first two are addressed in Merger-3 and Merger-4, while M.M.A.I. is modeled in Merger-5 to elucidate the Singularity.