One morning in early 2018, I [Brian Ringley] searched FieldLens for Manhattan jobsites where the pipe chases — open vertical cavities for running plumbing from floor to floor — had yet to be closed in with drywall and tile. I was on WeWork’s Construction Automation team, and I wanted to understand how our local contractors installed plumbing as research for a modular bathroom product. WeWork had recently acquired FieldLens, one of the earliest jobsite management software platforms to capitalize on the use of digital tablets and cameras in the field, to better manage its vast real estate portfolio.

Our superintendents were mandated to photo-document their jobsite each day using the Ricoh Theta 360 camera’s wireless connection to FieldLens, where the photos were pinned to blueprints to provide spatial context. To my frustration, the data I expected wasn’t available. Daily captures were either incomplete or out of date. I scheduled a set of interviews with some local superintendents to understand why this was happening, and a picture formed of busy superintendents working long days and eager to go home to their families, only to be stuck on site struggling with wireless connectivity and fussy technology. It was no wonder the data was inconsistent, or lacking altogether: Capturing it was difficult and, frankly, dull. Jobsite documentation, however important to management, was the lowest priority in a super’s day filled with urgent construction issues and demanding deadlines.

Over the course of that year, I evaluated market solutions for automating jobsite reality capture. From a homegrown project combining an MIT open-source RC car robotics kit with a mounted camera, to meeting with a Swiss company famous for providing micro-drone swarms as “backup dancers” for Drake’s concert tour, nothing was capable of reliably navigating the chaotic environment of a construction site, let alone doing it autonomously. That is, until Boston Dynamics released a video that fall showing its legged robots deftly navigating a series of sites in Japan.

I’ve worked at Boston Dynamics since 2019. The day I saw that video convinced me, and everything I’ve seen since has reinforced it: this new class of dynamic sensing with agile, mobile robots is the future of project delivery, aligning real-time site progress with design intent in a way not previously possible. But I have also directly experienced the unique challenges of implementing this new technology in construction environments, first as a customer, then as a solutions provider.

In construction environments, a map can be invalidated before it has finished being recorded. The building itself is an ephemeral work in progress. And with extremely tight margins, automation is at a premium: requiring any labor to record environments, let alone requiring it routinely, erodes ROI. Captured data becomes valuable only when it is processed into reports on progress, safety, and deviation from BIM (to name a few), but the tools that can reliably convert unstructured reality capture data — point clouds from 3D scanners or imagery from handheld 360 cameras — into those reports lack product maturity, are scattered across many software platforms, and require skilled labor to use.

This leaves us with gaps in an end-to-end solution for construction site management. Reliable robotic mobility for reality capture with Spot works, and works well, but assumes a relatively static building structure. The front-end challenge, then, is enabling robotic systems to operate in these structures without prior knowledge, adapt to changes in real time, and make decisions autonomously even when faced with incomplete or unpredictable data. On the back-end, it requires robots that not only collect data but also understand its context, yielding actionable reporting and meaningful business insights. Enter Field AI.

Field AI

Field AI is developing a new class of robotic AI models called Field Foundation Models (FFMs). These are the world’s first autonomy systems designed for safe, resilient, and adaptable operation in Dull, Dirty, and Dangerous (DDD) environments.
FFMs directly address these gaps in construction site management by delivering three key capabilities. First, FFMs enable robots to generalize to environments they have never mapped or seen, allowing them to adapt to new and unknown situations. Second, they handle dynamic, ever-changing environments, ensuring consistent performance in challenging conditions. Finally, FFMs leverage the data captured by these robots to deliver an end-to-end workflow, providing meaningful insights to project teams.

The first time I [Stuart Maggs] saw this in action was on a large-scale data center project — a labyrinth of complex mechanical, electrical, and plumbing (MEP) systems. For David, a superintendent on the project, manual data capture consumed a full workday every week.

Versatile (multi-use cases)

With a fleet of robots deployed to work alongside the project team, 360 jobwalks and laser scans were captured daily, both during working hours and at night, expanding the available productive time. When project teams logged in each morning, the data was already uploaded to their common data environment (CDE) and reality capture tools. This shift freed up David’s time for problem-solving, allowing him to focus on what truly mattered: driving the project forward.

Massive ROI

On another project, Field AI focused on the challenge of laser scan data capture, a traditionally labor-intensive task. With our in-house scan-versus-BIM analysis tool, NASKA, Field AI automatically analyzed over 50,000 installations, identifying 14 critical clashes in a single month. This fully automated process saved over 500 hours of manual work, reducing risk and helping keep the project on track.
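To make the scan-versus-BIM idea concrete, here is a minimal sketch of the underlying check: compare planned element positions from a BIM against captured scan points and flag any install with no nearby evidence in the scan. Everything here is illustrative — the element names, the tolerance, and the brute-force nearest-point search are assumptions, not NASKA’s actual implementation.

```python
import math

def nearest_distance(point, cloud):
    """Distance from a planned position to the closest captured scan point."""
    return min(math.dist(point, p) for p in cloud)

def find_clashes(planned, scan_cloud, tolerance_m=0.05):
    """Return IDs of planned elements with no scan point within tolerance."""
    return [
        elem_id
        for elem_id, position in planned.items()
        if nearest_distance(position, scan_cloud) > tolerance_m
    ]

# Hypothetical planned hanger positions from the BIM (x, y, z in meters).
planned = {
    "hanger-01": (1.0, 2.0, 3.0),
    "hanger-02": (4.0, 2.0, 3.0),
}
# Points captured by the laser scan; hanger-02 was installed ~30 cm off.
scan_cloud = [(1.01, 2.0, 3.0), (4.3, 2.0, 3.0)]

print(find_clashes(planned, scan_cloud))  # ['hanger-02']
```

A production tool would compare full point clouds against solid BIM geometry with spatial indexing, but the report it produces is the same in spirit: a list of installations that deviate from design intent beyond tolerance.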

Quality-aware Autonomy

These two projects highlight a crucial lesson: Field AI’s focus on DDD domains has led to new opportunities. In these settings, how data is captured is inseparable from how it is used. Whether it’s 360-degree images to document key milestones or point cloud data to assess construction quality, the intent behind data capture shapes the insights it can deliver. Field AI’s robots, powered by FFMs, ensure that every piece of data is collected with purpose.

Tokyo learnings deployed in San Diego

Robots continuously learn as the project evolves, coming to understand the layout, rules, and nuances of the site. This capability uniquely positions Field AI’s robots to remain resilient in dynamic environments where constant surprises are the norm. Lessons learned in Tokyo are leveraged the next day in San Diego to improve performance. The more robots deployed, the faster the learning accelerates, delivering behaviors tailored to a specific contractor.

No reliance on maps, no pre-defined routes, no GPS

We are at an inflection point, delivering an end-to-end solution with the intelligence to fetch data from job sites without pre-determined routes. No other company has achieved Field AI’s level of autonomy in these environments. Its robots navigate with intent, seeking out relevant physical information and bringing it into the digital world, where it becomes actionable information for high-quality decisions that fundamentally change how projects are delivered and operated.
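Navigating without a prior map or pre-defined routes can be illustrated with a textbook frontier-exploration toy: the robot holds only the grid it has observed so far and plans toward the nearest frontier, a known-free cell bordering unexplored space. This is a generic teaching example, not Field AI’s FFM internals; the grid, symbols, and BFS planner are all assumptions for illustration.

```python
from collections import deque

FREE, UNKNOWN, WALL = ".", "?", "#"

def neighbors(cell, grid):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < len(grid) and 0 <= c + dc < len(grid[0]):
            yield r + dr, c + dc

def is_frontier(cell, grid):
    """A frontier is a known-free cell adjacent to unexplored space."""
    r, c = cell
    return grid[r][c] == FREE and any(
        grid[nr][nc] == UNKNOWN for nr, nc in neighbors(cell, grid)
    )

def nearest_frontier(start, grid):
    """BFS through known-free space to the closest frontier cell."""
    queue, seen = deque([start]), {start}
    while queue:
        cell = queue.popleft()
        if is_frontier(cell, grid):
            return cell
        for nxt in neighbors(cell, grid):
            if nxt not in seen and grid[nxt[0]][nxt[1]] == FREE:
                seen.add(nxt)
                queue.append(nxt)
    return None  # nothing left to explore

# The robot's partial world model: free space, a wall, and unknown cells.
grid = [
    list("...#?"),
    list("...#?"),
    list("....?"),
]
print(nearest_frontier((0, 0), grid))  # (2, 3)
```

Each time the robot reaches a frontier, its sensors reveal more of the grid and the cycle repeats, so exploration proceeds with intent even though no complete map or route ever existed up front.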
