AI robotics originated within the Symbolic AI paradigm, in which robots were treated as reasoning systems that used logical representations and search-based planning to generate action sequences in known environments. This approach, dominant from the 1960s through the 1970s, aimed to replicate high-level cognitive functions but faced fundamental challenges with the noise, uncertainty, and real-time demands of the physical world. Its limitations in robustness and adaptability spurred a critical reevaluation of the role of central models and deliberation in robotic intelligence.
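The search-based planning at the heart of this paradigm can be sketched minimally: breadth-first search over discrete world states until a goal state is reached. The toy grid domain and action names below are hypothetical illustrations, not drawn from any specific historical planner.

```python
from collections import deque

def plan(start, goal, actions):
    """Find a shortest action sequence from start to goal via breadth-first search."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, apply_fn in actions:
            nxt = apply_fn(state)  # None means the action is inapplicable here
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None  # no plan exists

# Toy domain: a robot on a 3x3 grid, moving from (0, 0) to (2, 1).
actions = [
    ("right", lambda s: (s[0] + 1, s[1]) if s[0] < 2 else None),
    ("up",    lambda s: (s[0], s[1] + 1) if s[1] < 2 else None),
]

print(plan((0, 0), (2, 1), actions))  # -> ['right', 'right', 'up']
```

In a fully known, noise-free world this works well; the paradigm's difficulties arose precisely when real sensors and actuators violated those assumptions.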
The Behavior-Based Robotics paradigm emerged in the 1980s as a direct reaction, advocating for agents composed of simple, reactive behaviors without centralized world models. Inspired by ethology and embodied cognition, this school, exemplified by the subsumption architecture, prioritized interaction with the environment through parallel layers of competence. It established a durable rival agenda to Symbolic AI, emphasizing real-time performance, robustness, and bottom-up intelligence, and it profoundly influenced mobile and situated robot design.
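The layered-competence idea can be sketched as a priority-ordered stack of reactive behaviors, where a higher layer's output suppresses those below it, in the spirit of the subsumption architecture. The sensor fields, thresholds, and commands here are hypothetical.

```python
def avoid_obstacle(sensors):
    """Highest priority: turn away if something is close ahead."""
    if sensors["front_dist"] < 0.3:
        return "turn_left"
    return None  # not applicable; defer to lower layers

def seek_light(sensors):
    """Middle priority: steer toward the brighter side."""
    diff = sensors["light_left"] - sensors["light_right"]
    if abs(diff) > 0.1:
        return "turn_left" if diff > 0 else "turn_right"
    return None

def wander(sensors):
    """Lowest priority: default exploratory behavior."""
    return "forward"

LAYERS = [avoid_obstacle, seek_light, wander]  # high to low priority

def control(sensors):
    # The first applicable layer's command subsumes everything below it.
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(control({"front_dist": 0.2, "light_left": 0.9, "light_right": 0.1}))
# -> turn_left  (obstacle avoidance subsumes light-seeking)
```

Note there is no world model anywhere: each layer maps current sensor readings directly to a motor command, which is exactly the design stance this school advocated.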
By the 1990s, the Probabilistic Robotics paradigm rose to prominence by systematically addressing uncertainty through Bayesian estimation and stochastic models. This framework integrated techniques such as Kalman and particle filtering for state estimation, enabling reliable simultaneous localization and mapping (SLAM) and control under noise. It marked a shift toward formal uncertainty management, marrying probability theory with robotic perception and decision-making, and became a cornerstone of autonomous navigation in unstructured settings.
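The Bayesian predict-update cycle behind these filters can be illustrated with a one-dimensional Kalman filter tracking a robot's position from noisy odometry and range readings. The noise variances and measurements below are hypothetical values chosen for illustration.

```python
def kalman_step(mean, var, motion, motion_var, measurement, meas_var):
    """One predict-update cycle of a 1-D Kalman filter."""
    # Predict: incorporate the motion command (uncertainty grows).
    mean = mean + motion
    var = var + motion_var
    # Update: fuse the noisy measurement (uncertainty shrinks).
    k = var / (var + meas_var)            # Kalman gain
    mean = mean + k * (measurement - mean)
    var = (1 - k) * var
    return mean, var

# Robot starts near 0 with high uncertainty, commands +1.0 m per step,
# and receives noisy position measurements after each move.
mean, var = 0.0, 1.0
for motion, z in [(1.0, 1.1), (1.0, 2.05), (1.0, 2.95)]:
    mean, var = kalman_step(mean, var, motion, 0.1, z, 0.2)

print(round(mean, 2), round(var, 3))
```

The key property, and the paradigm's central insight, is that the estimate carries an explicit uncertainty (the variance), which grows during motion and shrinks with each measurement instead of being ignored.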
The rise of machine learning catalyzed the Learning-Based Robotics paradigm, which encompasses Connectionist, Reinforcement Learning, and Evolutionary Computation approaches to acquiring robot skills from data or experience. Initially focused on optimization and adaptation, this agenda expanded dramatically with Deep Learning, which revolutionized perceptual processing and policy learning through neural networks. This paradigm shifted the field from pre-programmed behaviors to data-driven competency acquisition, emphasizing scalability and generalization across tasks.
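Competency acquisition from experience can be sketched with tabular Q-learning, one of the classic reinforcement-learning methods named above: a robot on a one-dimensional corridor learns to reach the rightmost cell purely from trial-and-error reward. The environment, rewards, and hyperparameters are hypothetical.

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    """Toy corridor environment: reward 1.0 only on reaching the goal cell."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration
rng = random.Random(0)

for _ in range(500):                     # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r, done = step(s, a)
        # Q-learning update toward the bootstrapped target.
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = nxt

# Greedy policy after learning: move right from every non-goal state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Nothing about "move right" is programmed in; the behavior emerges from reward alone, which is the shift from hand-coded control to data-driven skill acquisition that this paradigm represents.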
Contemporary developments increasingly explore integrated paradigms such as Neuro-Symbolic Robotics, which seek to combine the representational and reasoning strengths of Symbolic AI with the learning and perceptual capabilities of neural networks. These hybrid approaches aim to address complex reasoning, explainability, and generalization in robotics, reflecting an ongoing synthesis of historical paradigms. The field continues to evolve through the interplay of these durable schools, balancing robustness, uncertainty handling, and adaptive intelligence.