mlstack

Feedback Systems

Feedback systems use sensory information to regulate behavior in real time, constantly comparing what is actually happening with what was expected and making corrections as needed. They contrast with purely feedforward (predictive) approaches, which send commands based only on a plan without checking the outcome.

Closed Loop

Closed-loop control continuously compares actual performance against the desired goal and adjusts its actions based on real-time feedback, monitoring what is happening in the physical world and correcting course as it goes. This contrasts with open-loop control, which follows a pre-planned sequence of actions without checking…
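
The compare-and-correct loop described above can be sketched in a few lines. This is a minimal illustration, not any particular controller: the plant model, gain, and disturbance values are all assumed for the example.

```python
# Minimal sketch contrasting open- and closed-loop control of a 1-D
# position system. Plant model, gain, and disturbance are illustrative
# assumptions, not taken from any real robot.

def simulate(closed_loop, steps=100, dt=0.1, target=1.0):
    """Return the final position after driving toward `target`."""
    x = 0.0
    disturbance = -0.2                   # unmodeled push the plan knows nothing about
    planned_u = target / (steps * dt)    # open-loop plan: constant velocity
    for _ in range(steps):
        if closed_loop:
            u = 2.0 * (target - x)       # proportional correction on measured error
        else:
            u = planned_u                # no measurement, no correction
        x += (u + disturbance) * dt      # simple integrator plant
    return x
```

Run both ways and the open-loop run drifts far off target under the disturbance, while the feedback run settles near it (a pure proportional loop still leaves a small steady-state error, which is why integral terms exist).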

Audio Localization

Audio and sound localization allow embodied agents to detect, identify, and locate sound sources in 3D space. While vision is limited to what the robot can see, audio provides awareness of events happening outside the visual field — behind the robot, around corners, or in…
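
A common starting point for locating a sound source is time difference of arrival (TDOA) between two microphones. The sketch below is a toy version under assumed geometry (0.2 m spacing, 48 kHz sampling, far-field source); real systems typically use GCC-PHAT and more microphones to get full 3D bearings.

```python
import math

# Toy TDOA localization: find the cross-correlation lag between two
# microphone channels, then convert it to a bearing. Spacing, sample
# rate, and the test signal are assumed values for illustration.

SPEED_OF_SOUND = 343.0    # m/s in air
MIC_SPACING = 0.2         # m between the two microphones (assumed)
SAMPLE_RATE = 48_000      # Hz (assumed)

def best_lag(left, right, max_lag):
    """Sample lag of `right` vs `left` that maximizes cross-correlation."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(left[i] * right[i + lag]
                    for i in range(len(left)) if 0 <= i + lag < len(right))
        if score > best_score:
            best, best_score = lag, score
    return best

def bearing_from_lag(lag_samples):
    """Far-field bearing (degrees from broadside) implied by the lag."""
    tau = lag_samples / SAMPLE_RATE
    s = max(-1.0, min(1.0, tau * SPEED_OF_SOUND / MIC_SPACING))
    return math.degrees(math.asin(s))

# Demo: a click that reaches the right channel 10 samples late.
click = [1.0, 0.8, 0.5]
left, right = [0.0] * 120, [0.0] * 120
left[50:53] = click
right[60:63] = click
lag = best_lag(left, right, max_lag=28)   # 28 ≈ spacing / c * sample rate
```

The recovered lag (10 samples ≈ 0.21 ms) maps to a bearing of roughly 21 degrees off broadside; the sign convention depends on how the array is mounted.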

Touch Haptics

Touch and haptics provide rich information about contact forces, textures, slippage, and object properties through direct physical interaction. While vision gives a broad overview of the scene, touch reveals details that cameras often miss — how heavy something feels, whether it is slippery, soft, or…
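
One concrete use of contact-force sensing is slip detection with a Coulomb friction cone: when the tangential force approaches the friction limit for the current grip force, the object is about to slide. The sketch below uses an assumed friction coefficient and safety margin; real grippers estimate these online.

```python
# Hedged sketch of friction-cone slip detection for grip control.
# FRICTION_COEFF and SAFETY_MARGIN are assumed values for illustration.

FRICTION_COEFF = 0.5   # assumed static friction between finger and object
SAFETY_MARGIN = 0.8    # react before the cone boundary is actually reached

def slipping(normal_force, tangential_force):
    """Coulomb model: slip is imminent once |Ft| approaches mu * Fn."""
    if normal_force <= 0.0:
        return True    # no contact force means the object is not held at all
    return abs(tangential_force) > SAFETY_MARGIN * FRICTION_COEFF * normal_force

def adjust_grip(normal_force, tangential_force, step=0.5):
    """Raise the commanded normal force until slip is no longer predicted."""
    while slipping(normal_force, tangential_force):
        normal_force += step
    return normal_force
```

This is exactly the kind of property the teaser mentions that cameras miss: the tangential load (how heavy or slippery the object is) is only observable through contact.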

Vision Action

Vision for action focuses on using visual information specifically to guide movement and physical interaction, rather than for passive object recognition or scene description. It shifts the emphasis from “what is this?” to “what can I do with this?” — helping the robot decide how…
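
A classic example of vision in the control loop is visual servoing: instead of recognizing the object and planning blindly, the robot moves to shrink the error between where a feature appears in the image and where it should appear. The sketch below is heavily simplified; it assumes motion maps 1:1 to pixel displacement, whereas real image-based visual servoing uses an image Jacobian (interaction matrix).

```python
# Simplified image-based visual servoing sketch: a proportional law
# drives a tracked feature's pixel coordinates toward a goal position.
# The gain and the 1:1 pixel-to-motion mapping are assumptions.

GAIN = 0.5

def servo_step(feature, goal):
    """One control step: motion command proportional to the pixel error."""
    ex, ey = goal[0] - feature[0], goal[1] - feature[1]
    return GAIN * ex, GAIN * ey

def track(feature, goal, steps=20):
    """Iterate the loop; the error shrinks geometrically each step."""
    x, y = feature
    for _ in range(steps):
        vx, vy = servo_step((x, y), goal)
        x, y = x + vx, y + vy    # assumed: commanded motion shows up as pixels
    return x, y
```

The point of the structure, not the numbers, is what matters here: the camera is inside the control loop, answering "which way do I move next?" rather than "what is this object?".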
