<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Computer Vision on Fondsites</title><link>https://fondsites.com/tags/computer-vision/</link><description>Recent content in Computer Vision on Fondsites</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 11 May 2026 11:34:07 +0300</lastBuildDate><atom:link href="https://fondsites.com/tags/computer-vision/feed.xml" rel="self" type="application/rss+xml"/><item><title>Robot Perception: Sensors, Scenes, and Uncertainty</title><link>https://fondsites.com/physical-ai-lab/guidebooks/robot-perception/</link><pubDate>Mon, 11 May 2026 02:35:00 +0300</pubDate><guid>https://fondsites.com/physical-ai-lab/guidebooks/robot-perception/</guid><description>&lt;p&gt;&lt;img
 src="https://fondsites.com/physical-ai-lab/images/guidebooks/robot-perception-sensor-bench.avif"
 alt="A robot perception lab bench with a mobile base, robot arm, cameras, lidar, calibration boards, lights, and test objects"
 loading="lazy"
 decoding="async"&gt;
&lt;/p&gt;
&lt;p&gt;Robot perception begins with a humbling fact: the robot does not see the room. It receives measurements.&lt;/p&gt;
&lt;p&gt;A camera gives pixels. A depth camera estimates distance. Lidar returns points. Wheel encoders report rotation. An inertial sensor reports acceleration and angular velocity. Force sensors notice contact only after the world has already pushed back. None of these signals is the room itself. Each is a partial, noisy, delayed, calibration-dependent, failure-prone slice of it.&lt;/p&gt;</description></item></channel></rss>