How to Build a (Practical and Safe) Robot

By Clara Vu, co-founder and Vice President Engineering, Veo Robotics

Most people’s ideas about robotics come from science fiction, but robots—or at least automated machines—are already prevalent in our day-to-day lives; they just don’t look like R2D2 or the Terminator.

In sci-fi, robots are essentially mechanical people—they can do the same things people do, and more. But in real life, robots and people have very different capabilities, strengths, and weaknesses. For example, robots can be much stronger than humans, but they can’t match a human’s dexterity or flexibility. To build a real-life robot that is actually helpful for a given task, you need a clear understanding of what it can and cannot do. What’s more, you need to understand what the people who will be using the robot actually want and don’t want it to do.

In my experience, the best application for robotics is physical labor. In the consumer world, most tasks that require a lot of manual labor, like washing dishes or clothes, have already been automated, which means the most valuable deployments for robots today are in commercial applications in industries like agriculture, manufacturing, maritime, and logistics. The people who work in these industries don’t necessarily need a robot that can do everything and anything—they need solutions to specific problems.

Figuring out where robotic applications can add value requires spending a lot of time talking with the people who will be using and interacting with the technology. The process might sound pretty straightforward, but it’s not. It takes a lot of time, iteration, and discovery.

Before co-founding Veo, I worked in robotics for almost 20 years and I learned many of the foundational skills and techniques that made building Veo’s tech possible.

I started my career at iRobot, where I learned just how difficult it could be to get a robot to complete what seemed like a simple task: driving around a living room. To actually be useful to people, the Roomba couldn’t just follow a pre-planned path; it had to interact with the real world in real time. So while it was a pretty simple robot with only two bump sensors, it required complex software to deal with complex, changing environments.

In helping to develop the deceptively simple Roomba, I learned a lot of valuable development strategies. In order to build a system that is capable of interacting with the analog world, you must: 1) build the simplest thing that could work; 2) test it in the actual environment in which it will be used; 3) record, monitor, and understand what the system is doing; and 4) iterate.

[Image: The Harvest Automation team]

After leaving iRobot, I co-founded Harvest Automation, where we built a robotic system for agriculture. We researched the industry and talked to many growers to figure out what the robot needed to do, and in response we built an autonomous system that worked collaboratively with human workers. Specifically, the robot used 2D lidar to determine where and how to move plants in commercial nurseries and greenhouses, supervised by a human operator who could intervene when it encountered a condition it couldn’t handle.

But we’d picked an unfortunate time to launch a company—right in the middle of the 2008 financial crisis. So while we continued trying to raise a round that year, we all took consulting gigs to pay our babysitters. I got in touch with Rethink Robotics, where Patrick (Veo’s CEO and co-founder) was president at the time. We hit it off right away because we shared a fundamental approach to product development: starting by building a deep understanding of the customer’s needs. After Harvest successfully raised a Series A and I left Rethink, Patrick and I kept in touch.

By 2015, the Harvest team had deployed our system at farms all over the country (it’s still the only broadly deployed commercial system I’m aware of that does both fully autonomous manipulation and mobility), and I felt it was time to tackle something new. Since it had become a habit to connect with Patrick every so often for career advice, I reached out to see what he was up to. He’d had the idea for Veo back in the Rethink days, but the time was now right to make it a reality.

The application he proposed was in manufacturing, an industry with huge opportunities for robotics solutions. Because I’d experienced the difficulties of working in the relatively small market of nurseries and greenhouses, the massive potential of the manufacturing market was exciting to me—in 2017, the global manufacturing market was valued at $628.5 billion and manufacturing accounted for 11.6 percent of U.S. GDP.

The more I thought about it, the more I realized that my specialty was building companies and products from scratch.

I was looking for a zero-to-one project and I needed to believe in the thing I was building, as well as the people I was working with. Scott, our third Veo co-founder, rounded out the team with his deep experience developing custom sensing and signal processing solutions, and I decided that working with him and Patrick on Veo was the exact opportunity I’d been searching for.

[Image: Scott Denenberg, Sr. Director of Hardware and co-founder, Veo]

So we founded Veo and started talking with customers in the manufacturing industry. And through those conversations, I found a real sense of solidarity. Like me, the manufacturing engineers we spoke with had spent their careers trying to build useful robotic systems. Though we had very different backgrounds, we all understood robots on a level others didn’t, and we were all committed to making them work better.

Patrick, Scott, and I compiled our potential customers’ needs and reached out to the Robotic Industries Association (RIA) to get a feel for what other technologies and solutions were already available. We heard again and again from industry stakeholders at the RIA and at factories that they were still searching for a way to make high-performance industrial robots safe and collaborative in manufacturing settings.

The problem we needed to address was validated, so our next step was to identify the right solution. We knew that, in order for a robot to operate safely around people, it would need to be able to sense where people were. From our conversations with potential customers, we also knew that we wanted our system to integrate seamlessly with what manufacturing engineers were already doing, so that they could program the robots the way they always had and layer our solution on top without it interfering with their other systems.

Similar to the work I’d done with the Roomba and Harvest robots, we were essentially trying to close the loop with the analog world by allowing a robot to sense and then react to its surroundings. This meant we couldn’t just develop our algorithms against a dataset because the input to the robot’s sensors was entirely dependent on how the robot had responded to the previous input. And, because we were developing a solution to allow for human-robot collaboration in manufacturing settings, where industrial robots are tasked with moving parts that could weigh several tons, the stakes were very high.

So I returned to the technique I’d learned back in the day at iRobot: build the simplest thing that could work, see how it fails, and then iterate and improve it incrementally. We started out by building a simulated model, since it would have been impossible to fit an industrial robot in the Siemens conference room we were initially working out of. Then, using off-the-shelf sensors, a gaming PC, and a Kuka robot, we built a prototype system that classified objects in a scene and controlled the robot’s speed in proportion to its distance from any people it detected.
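
To make that behavior concrete, here is a minimal sketch of proportional speed control, written as a plain Python function rather than our prototype’s actual code. The thresholds and names (MIN_DIST_M, FULL_SPEED_DIST_M, speed_fraction, supervise) are hypothetical stand-ins chosen for illustration; the only idea carried over from the prototype is scaling speed with the distance to the nearest detected person.

```python
from typing import Iterable

# Hypothetical thresholds, chosen only for illustration.
MIN_DIST_M = 0.5         # at or inside this distance, stop the robot entirely
FULL_SPEED_DIST_M = 3.0  # at or beyond this distance, run at full programmed speed


def speed_fraction(nearest_person_m: float) -> float:
    """Map the distance to the nearest detected person onto a speed override in [0, 1]."""
    if nearest_person_m <= MIN_DIST_M:
        return 0.0
    if nearest_person_m >= FULL_SPEED_DIST_M:
        return 1.0
    # Linear ramp between the stop distance and the full-speed distance.
    return (nearest_person_m - MIN_DIST_M) / (FULL_SPEED_DIST_M - MIN_DIST_M)


def supervise(person_distances_m: Iterable[float]) -> float:
    """Given distances to every person detected this frame, return the speed override."""
    nearest = min(person_distances_m, default=float("inf"))
    return speed_fraction(nearest)


if __name__ == "__main__":
    # One person detected 1.75 m away and another 4.2 m away -> run at half speed.
    print(supervise([1.75, 4.2]))  # 0.5
```

A production system also has to account for sensor noise, occlusion, and the robot’s stopping distance, but this toy version captures the basic relationship.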

By iterating on this prototype, we began to design a system that could reliably interact with humans in a safety-critical environment by adjusting its speed.

Today, we are perfecting the Veo system, which combines data from our custom 3D time-of-flight cameras with our computer vision algorithms to build a semantic representation of its surroundings.

To ensure a fail-safe implementation of our design, we’re building custom hardware and developing software from the ground up. By analyzing the data collected from our cameras, the Veo system determines all possible future states of a robot’s environment (i.e., where people and parts are and how, when, and where else they could go). It anticipates possible undetected items and monitors for potentially unsafe conditions. Then, it communicates with the robot’s control system to slow or stop the robot before it comes into contact with any person or unrecognized object. The system executes this sense-and-respond cycle 30 times every second.
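
To illustrate the shape of that loop, here is a hedged sketch in Python. It is not Veo’s software: every function and class in it (capture_frames, nearest_person_distance, separation_violated, PrintingRobot) is a made-up placeholder for the perception, prediction, and control stages described above, and the only detail taken from the text is the 30-times-per-second cycle.

```python
import random  # used only to fake sensor data in this sketch
import time

CYCLE_HZ = 30             # the loop described above runs 30 times per second
CYCLE_S = 1.0 / CYCLE_HZ


def capture_frames():
    """Placeholder for reading the 3D time-of-flight cameras."""
    return [random.random() for _ in range(4)]


def nearest_person_distance(frames):
    """Placeholder for the perception step that finds people in the scene."""
    return 0.5 + 3.0 * min(frames)


def separation_violated(distance_m, stopping_envelope_m=1.0):
    """Placeholder safety check: could the robot's stopping envelope reach a person?"""
    return distance_m < stopping_envelope_m


class PrintingRobot:
    """Stand-in for the robot's control interface; it just reports each command."""

    def slow_or_stop(self):
        print("slow or stop")

    def allow_full_speed(self):
        print("full speed")


def monitoring_loop(robot, cycles=30):
    """Sense -> anticipate -> decide -> command, repeated at a fixed rate."""
    for _ in range(cycles):
        start = time.monotonic()

        frames = capture_frames()
        distance = nearest_person_distance(frames)
        if separation_violated(distance):
            robot.slow_or_stop()
        else:
            robot.allow_full_speed()

        # Sleep out the remainder of the cycle to hold the target rate.
        time.sleep(max(0.0, CYCLE_S - (time.monotonic() - start)))


if __name__ == "__main__":
    monitoring_loop(PrintingRobot())  # runs for roughly one second
```

In the real system each placeholder corresponds to fail-safe hardware and rigorously tested software, but the rhythm is the same: sense, anticipate, decide, command.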

Everything about our system is unprecedented. It’s designed to work seamlessly with any modern industrial robot and with minimal interference to engineers’ existing programs. Our bespoke architecture is built to meet demanding performance requirements, and we continue to run trials with our customers and partners to validate that our technology is a safe and efficient tool for human-robot collaboration in factory environments.

Though the idea itself is simple, making robots safe in collaborative manufacturing settings is incredibly complicated. I’m proud of what we’ve built, from the hardware, to the software, to the relationships with manufacturing engineers and industrial robotic systems providers. I’ve spent my entire career building robots, and the Veo system is a culmination of many of the best practices I’ve learned. And, more than that, it’s a manifestation of all of the experience and expertise our team has accumulated over our collective robotics careers.

For more information about our work and to see our system in action, check out this video.

If you’re interested in joining our team of rigorous and creative engineers, take a look at our jobs page.
