The accuracy of mobile 3D scanning depends heavily on the user and the environment, so unfortunately we can't provide an absolute tolerance (e.g., "all measurements will always be within X millimeters"). If you're looking to use Canvas for installation and construction projects that require extremely tight tolerances, you may want to verify the few critical measurements you need manually, and then adjust your CAD model once you receive it. We primarily recommend this option for initial trial purposes (i.e., before you commit to purchasing a LiDAR-enabled iPad or iPhone), or for working with clients remotely.
We always recommend scanning a few rooms around your own house to understand how Canvas handles different kinds of environments before you take it on a job site.
An even more common example is wall, floor, and ceiling thickness: we do not observe these directly, and have to use construction standards and the rest of the scans to determine the best value (which isn't perfect). You may also find that smaller dimensions (such as those under one foot) have slightly wider variances than 1-2%, because being even 1/16th of an inch off can exceed 2% on a small dimension, and sub-inch precision is simply hard to maintain consistently through the simplification process of Scan To CAD.
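As a quick illustration of why fixed-size errors dominate on small dimensions (a minimal sketch that simply restates the 1/16-inch and 1-2% figures above; the example dimensions are arbitrary):

```python
# Back-of-the-envelope check: a fixed 1/16" error expressed as a percentage
# of the dimension being measured. Example dimensions are arbitrary.
error_in = 1 / 16  # absolute error, inches

for dimension_in in (120, 12, 6, 3):  # 10 ft, 1 ft, 6 in, 3 in
    pct = 100 * error_in / dimension_in
    print(f'{dimension_in:>4} in dimension: 1/16" off = {pct:.2f}%')

# On a 10 ft wall, 1/16" is ~0.05% of the dimension; on a 3 in dimension it
# already exceeds 2%, which is why sub-foot features fall outside a 1-2%
# relative tolerance much more easily.
```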
Some kinds of environments can be a little harder to scan in general — such as a completely empty space with no furnishings and nothing on the walls (i.e., nothing for the sensor to track against), or a room with many very large mirrors — and may lead to slightly wider tolerances. We have debugged many orders where someone was certain that our measurements were inaccurate, but it turned out that they had held their laser range-finder against the wrong surface, made a typo, or had some other issue that had nothing to do with the scans.
Evaluation of the Apple iPhone 12 Pro LiDAR for an Application in Geosciences
Points within the square were counted on photos taken with a Raspberry Pi Camera Board NoIR v2.1 (8 MP), which comes without an infrared filter, making the laser dots visible (Supplementary Figs.). The SfM MVS point cloud was scaled using the local RTK reference system of the UAV. When using the above-mentioned applications, a mesh is compiled on the go with the built-in three-axis gyroscope working as an inertial measurement unit.
Apple Inc.'s proprietary software platform ARKit triangulates the mesh internally based on the raw point measurements.
Scanning with the ‘3d Scanner App’ was conducted by walking along the cliff as well as up and down the beach with the hand-held iPhone or iPad, covering every angle of the object of interest. In that scanning mode, a mesh of the close surroundings (< 5 m) is generated based on the iPhone’s LiDAR sensor.
Data recording is performed by walking along the beach close to the water line, pointing the phone sideways from the direction of travel towards the object of interest. The iPad and iPhone LiDAR data of Roneklint cliff obtained with the '3d Scanner App' in December 2020 and September 2021, as well as the 'EveryPoint' point clouds, were exported and loaded into CloudCompare [25].
The M3C2 distance calculation is based solely on point cloud comparison and is therefore chosen over methods that interpolate surfaces. Furthermore, the M3C2 approach is well suited to calculating distances between two point clouds on a cliff, as it works for vertical as well as horizontal surfaces and gives both positive and negative distance values [34].
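For readers who want to reproduce a rough version of this comparison themselves, the sketch below computes a simple nearest-neighbour cloud-to-cloud distance with the open-source Open3D Python library. This is deliberately not the M3C2 algorithm used in the study (M3C2 works along locally estimated normals and yields signed distances); it is only a quick, unsigned sanity check, and the file names are placeholders.

```python
# Simplified cloud-to-cloud (C2C) comparison between an iPhone LiDAR scan and
# an SfM MVS reference cloud. NOT the M3C2 algorithm referenced above; this
# only measures unsigned nearest-neighbour distances as a quick sanity check.
import numpy as np
import open3d as o3d

lidar = o3d.io.read_point_cloud("iphone_lidar_scan.ply")      # placeholder name
reference = o3d.io.read_point_cloud("sfm_mvs_reference.ply")  # placeholder name

# Distance from every LiDAR point to its nearest reference point (assumes both
# clouds are registered in the same metric coordinate system).
distances = np.asarray(lidar.compute_point_cloud_distance(reference))

print(f"mean: {distances.mean():.3f} m | "
      f"median: {np.median(distances):.3f} m | "
      f"95th percentile: {np.percentile(distances, 95):.3f} m")
```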
The iPhone LiDAR scan of the entire cliff from December 2020 is compared to the reference SfM MVS cloud.
iPad Pro LiDAR sensor 3D scanning accuracy compared to a laser scanner with sub-millimeter accuracy (r/augmentedreality)
However, he is very concerned with accuracy and is asking whether an app that 3D scans objects using ARKit 3.5 and the LiDAR sensor on iPad would give more or less accurate results than the laser scanner.
SiteScape: LiDAR scanning on the iPhone / iPad
In December 2020 we saw the launch of SiteScape, a free iOS app that offers 3D scanning to anyone with a LiDAR-equipped iPhone or iPad. It raises the question: why use an expensive laser scanner or handheld SLAM, or make do with pen and paper, when you can use a commodity device to capture certain aspects of a site very quickly?
Once captured, scans can be exported to the .PLY or .E57 file formats (and soon RCP) and brought into CAD, BIM, point cloud or collaboration software such as Revit, AutoCAD, Archicad, Sketchup, Navisworks, Recap, CloudCompare, Revizto and many others.
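Before bringing an export into CAD or BIM software, it can be worth sanity-checking the file programmatically. A minimal sketch, assuming a .PLY export and a hypothetical file name (E57 would need a separate reader such as pye57):

```python
# Quick inspection of an exported .PLY scan before importing it into CAD/BIM
# tools. The file name is a placeholder.
import open3d as o3d

cloud = o3d.io.read_point_cloud("sitescape_export.ply")
bbox = cloud.get_axis_aligned_bounding_box()

print(f"points: {len(cloud.points):,}")
print(f"extent: {bbox.get_extent()}  (in the export's units, typically metres)")
print(f"has colors: {cloud.has_colors()}")
```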
But it's not just about the data – it's what you do with it – and SiteScape founder Andy Putch sees a big opportunity for streamlining workflows. As he explains, "The capture side, just being able to create the content, is increasingly becoming a commodity." Users can unlock unlimited scans in the cloud and access all past projects by upgrading to SiteScape Pro. There are best practices for scanning, such as following a set path or keeping the device steady, but as Putch explains, "You can really just kind of wave it around, and still get workable data."
Drift can be an issue, admits Putch, but this can be minimised by performing multiple scans and stitching them together in a third-party app. To capture as-built conditions on site, the company scans one or two rooms at a time and then compiles the resulting point clouds inside Archicad to use as a visual reference.
One year later, the areas in which SiteScape is being used have expanded massively, particularly into construction, with workflows like progress monitoring and as-built documentation.
At the beginning of a project, particularly for renovation or retrofit work, Putch explains that an architect just needs to take stock of what’s there and get that into their design. “Of course, if you have specific mechanical or electrical equipment that have to fit in tight spaces, then you’re going to go in and want to do an actual survey to make sure that you know you’re ordering things [correctly] and everything is going to mesh together nicely.” SiteScape is currently being used by a top 100 construction firm to help resolve issues that are encountered on site when retrofitting buildings.
“They were able to use SiteScape to capture where those tees were in the context of the rest of the room, send the scans out to each of their forming, mechanical, fire suppression subcontractors, and then everyone was able to reroute parts of their system around these obstructions within the 48 hours that work was scheduled to commence in that area.” Putch admits that when it comes to something like building code safety or LEED certification, where the stakes are really high, SiteScape is not the best solution, as firms have to put in extra effort to make sure everything is verified very precisely.
“The only real way to do this today, without any kind of scanning hardware, is for a project engineer or someone on site to come in and look at the plans on an iPad and then the actual conditions and try to tease out where things are off, if they’re off. “And then manually going and validating if there are deviations from that spec, red-lining the plan, sending them back to the architect or engineer, who then goes into Revit or AutoCAD and makes those changes, so that they have an accurate set of handover documents.” Beyond proving out and solidifying these three core use cases, the next step for SiteScape is to extend the reach of its software and improve workflows through integration with established tools like Procore, PlanGrid (part of Autodesk Construction Cloud), and Bluebeam. “What they all do quite well is give the project team all the information they need at their fingertips to solve challenges, whether that’s design coordination or construction operations.”
Point clouds and reality meshes are being used more widely in many different workflows, from scan-to-BIM and design visualisation to construction verification and as-built documentation. SiteScape can either deliver a low-cost alternative to highly accurate reality capture workflows, or augment or replace basic photography and manual measurements with 3D data that gives a more complete record and better insight into as-built conditions.
For the technology to become more widely adopted, it would certainly benefit from integration with established workflows and platforms, with single sign-on, user roles and permissions.
The company rightly seems to be focusing on enterprise platforms like Procore and PlanGrid, but we think there’s also strong potential for integrations in more widely accessible tools like Revizto.
With SiteScape’s mobile 3D scanning, Holo-Blok was able to capture as-builts and translate record drawings more than 10x faster, saving over 100 hours in one project.
In the summer of 2020, as it was beginning a new project – the Dentistry Pharmacy Building Redevelopment at the University of Alberta – the Holo-Blok team decided to trial a new workflow using SiteScape to replace as-built drawings and better produce record models.
For the Dentistry Pharmacy project, Holo-Blok selected a block of 17 rooms of similar size (approximately 225 square feet each), with a combination of MEP components, layout, and complexity. According to SiteScape, given an average size of 20,000 sq ft for new commercial buildings, this could yield a savings of at least 500 hours between the different trades/consultants, or ~$50k per project.
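The article does not spell out how the 17-room trial is extrapolated to the per-project figures, but one possible reading is a simple scaling of the roughly 100 hours saved on the trial; the hourly rate below is an assumption added purely to show how a ~$50k value could be reached.

```python
# One possible back-of-the-envelope reading of the extrapolation above.
# All figures except the hourly rate come from the article; the rate is an
# assumed value used only for illustration.
rooms, room_sqft = 17, 225
trial_sqft = rooms * room_sqft            # ~3,825 sq ft scanned in the trial
hours_saved_trial = 100                   # "over 100 hours" saved on the trial
project_sqft = 20_000                     # quoted average new commercial building

scale = project_sqft / trial_sqft         # ~5.2x
hours_saved_project = hours_saved_trial * scale
assumed_rate = 100                        # $/hr, assumption for illustration only

print(f"scale factor: {scale:.1f}x")
print(f"estimated hours saved: {hours_saved_project:.0f}")              # ~520
print(f"estimated value: ${hours_saved_project * assumed_rate:,.0f}")   # ~$52,000
```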
iPad Pro LiDAR Sensor Specs
I wish Apple would develop, or integrate into Measure, the ‘point cloud’ mapping ability for creatives like me who do 3D printing, work with the Unreal Engine & dabble in code.
Comparison of iPad Pro®’s LiDAR and TrueDepth Capabilities with an Industrial 3D Scanning Solution
To ensure the future competitiveness of manufacturing companies, it is necessary to face constantly changing customer requirements and market turbulence. Consequently, a shortening of the product development cycle is constantly pursued in order to adapt to the dynamic market efficiently and quickly [1]. Additive manufacturing technologies offer high potential for individualization at low cost in a short period of time [2].
However, with the LiDAR and TrueDepth technologies included in the newest devices by Apple, even more enhanced possibilities to digitize real objects are offered. Reference [5] defines the processes of designing, manufacturing, assembling and maintaining products and systems as engineering and divides it into two types. While forward engineering describes the process of moving from a highly abstracted idea to the physical implementation of a system, reverse engineering (RE) refers to already existing objects.
This process is also known as Computer Aided Reverse Engineering (CARE), which involves the steps of data collection, mesh construction and surface fitting [6]. For data collection, different hardware can be used, whereas mesh construction and surface fitting focus mainly on software solutions. Non-contact methods are subdivided into different scan technologies, including photogrammetry, structured light and Time of Flight (ToF) [13,14]. Recent developments of commercial devices such as smartphones and tablets have shown that, in addition to photogrammetry, scanning is also feasible with LiDAR (light detection and ranging) and TrueDepth. LiDAR relies on ToF measurements, which determine the time it takes for an object, particle or wave to travel a distance. TrueDepth, by contrast, projects a pattern of infrared light dots onto the object; the infrared camera picks up these light dots again and the pattern is analyzed by software to create a depth map.
In iOS, the operating system of Apple’s smartphones and tablets, TrueDepth is mainly used for 3D face authentication and recognition, while LiDAR enables new Augmented Reality features by accelerating plane detection. A literature review has shown that influencing factors include the reflectance, shape and color of the object, as well as surface texture and ambient lighting.
2020 iPad Pro: Does the LiDAR sensor improve spatial tracking?
The new 2020 iPad Pro comes with a promise to improve augmented reality (AR) thanks to its new built-in LiDAR sensor. If it can live up to the hype, the LiDAR sensor may be exactly the missing piece we need to deliver a superb AR experience—a quality of experience that, until now, has been somewhat elusive. The quality of such LiDAR-supported AR primarily depends on how well the LiDAR sensor works with traditional hardware and with optimized software that can enhance tracking algorithms by incorporating new sensory outputs.
The LiDAR Scanner measures the distance to surrounding objects up to 5 meters away, works both indoors and outdoors, and operates at the photon level at nanosecond speeds. In everything from self-driving cars to surveying equipment, LiDAR is deployed to help us capture and understand the properties of the world. However, given how prominently Apple is now featuring the AR-assisting capability of LiDAR, the company may be signaling that it has much bigger plans for the technology.
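To make those figures concrete, the basic time-of-flight relationship d = c·t/2 ties the quoted 5-metre range to a round trip of roughly 33 nanoseconds (a back-of-the-envelope sketch, not a description of Apple's actual implementation):

```python
# Time-of-flight arithmetic behind the quoted figures: a pulse travelling to an
# object 5 m away and back covers 10 m, which takes roughly 33 nanoseconds.
C = 299_792_458  # speed of light, m/s

def round_trip_time_ns(distance_m: float) -> float:
    """Round-trip time for a ToF pulse to an object at distance_m metres."""
    return 2 * distance_m / C * 1e9

def distance_from_round_trip_m(time_ns: float) -> float:
    """Distance implied by a measured round-trip time: d = c * t / 2."""
    return C * (time_ns * 1e-9) / 2

print(f"{round_trip_time_ns(5.0):.1f} ns")           # ~33.4 ns at the 5 m limit
print(f"{distance_from_round_trip_m(33.4):.2f} m")   # ~5.01 m back from that time
```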
Traditional AR relies mainly on processing the live video feed captured by a device’s cameras. For vGIS, accurate spatial tracking is a capability of utmost importance, since it would dramatically improve the user’s experience.
Early reports have suggested that with no compelling software yet available to exploit the new hardware features, iPad’s LiDAR is a solution in search of a problem.
Our objective was to measure the relative performance of the new iPad’s LiDAR sensor and to determine whether it improves positional tracking. The main test groups included Team Android, represented by two of the (almost) best devices Android has to offer, the Samsung Galaxy S10 and Samsung Galaxy Tab S6; and Team Apple, represented by the latest hardware in the iOS lineup, the iPhone 11 and 2020 iPad Pro. At the end of each test, we recorded the performance of each device as measured by the deviation of the final position in AR from the milestone marker.
Our main hypothesis was that the LiDAR would offer noticeably better tracking capabilities for the iPad, especially under difficult light conditions. In about 70% to 75% of all cases, Android provided higher accuracy than the iOS devices, ending up closer to the finish marker. In two instances, at the end of the round trip, Galaxy Tab S6 finished 3.5 meters (11.5 feet) from the destination.
Typically, optical tracking relies on the ability of devices to establish spatial anchors or waypoints.
As the device recognizes an area where it has been before, it matches the alignment of AR overlays to the spatial anchors that it is tracking. Despite the challenge, both Android devices performed well during the test, recognizing the starting point accurately and correcting the accumulated drift.
This behavior suggests that iOS (and ARKit) either does not use session-wide spatial anchors or, more likely, releases them from memory faster than Android does. The similarity of these results suggests that Apple has not yet incorporated LiDAR into their spatial-tracking algorithms, instead still relying on the same spatial tracking already included in other iOS devices.
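As a rough conceptual sketch of the anchor mechanism described above (plain Python with NumPy, not actual ARKit or ARCore code): when a previously mapped anchor is re-observed, the offset between its stored pose and its currently estimated pose yields a correction that can be applied to the device pose to remove accumulated drift.

```python
# Conceptual sketch of anchor-based drift correction (not ARKit/ARCore code).
# Poses are 4x4 homogeneous transforms expressed in a shared world frame.
import numpy as np

def correction_from_anchor(stored_pose: np.ndarray,
                           observed_pose: np.ndarray) -> np.ndarray:
    """Transform that maps the drifted estimate back onto the stored map."""
    return stored_pose @ np.linalg.inv(observed_pose)

# Anchor pose as stored when the area was first mapped.
stored = np.eye(4)

# The same anchor as currently estimated after the device accumulated drift
# (here: 0.30 m of translational drift along x, an arbitrary example value).
observed = np.eye(4)
observed[0, 3] = 0.30

correction = correction_from_anchor(stored, observed)

# Applying the correction to the drifted device pose re-aligns the AR overlays.
drifted_device_pose = np.eye(4)
drifted_device_pose[0, 3] = 2.30           # device believes it is at x = 2.30 m
corrected = correction @ drifted_device_pose
print(round(corrected[0, 3], 2))           # 2.0 -> drift removed
```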
As the sun went down and darkness began to fall, the dim light caused Android cameras to lose focus, resulting in agonizingly frequent drift. As soon as the iOS devices arrived in the open parking lot, tracking quality declined and became only marginally better than that of the Galaxy S10.
Longer walks did cause the HoloLens to drift slightly, especially when challenged by the flatness of the parking lot. We confirmed that with respect to accuracy, precision, and ability to correct drift, HoloLens 2 is far superior to both Android and iOS.
We had expected LiDAR to add an additional source of information that could improve the device’s ability to track in poor light conditions. Even with all of the cameras completely covered, the iPhone continued to accurately project AR visuals for some time. It looks as if Apple managed either to squeeze more accurate hardware into the iPhone or to use its internal sensors much more efficiently than Android can. This behavior suggests that Android relies mostly on optical tracking, whereas iOS engages a wide variety of sensors to keep AR stable.
The tablet couldn’t keep up with its smaller and older cousin, the Galaxy S10, and it lagged behind the iOS devices in precision and user experience. However, the current implementation of the LiDAR scanner embedded in the 2020 iPad Pro falls short of expectations. On the other hand, perhaps LiDAR is simply unable to improve positional tracking, and an alternative technology is required to do so. The iPad’s AR experience is geared to work best in indoor environments with consistent lighting, vertical surfaces, and short travel distances around the play area. Yet as a hardware platform, the iPad offers many advantages for outdoor tasks as well, and its inability to make use of LiDAR outputs to improve spatial tracking looks like a missed opportunity.
But in localized environments like intersections, we would expect Android devices to do a better job than iOS by “remembering” their surroundings and thereby correcting any positioning drift.
iPad Pro’s LiDAR Scanner isn’t accurate enough for 3D printing
Tests by the developers of the Halide camera app found that LiDAR built into this new tablet is well-suited for scanning furniture-size objects, but not anything smaller. Sebastiaan De With, a co-founder of Lux Optics, maker of Halide, and his team did some in-depth testing of the tablet’s scanner. That included making “a reality-capture proof of concept app called Esper to showcase the abilities of the LIDAR sensor in the new iPad Pro,” according to a blog post by De With. The capacities of this software are limited because Apple doesn’t give developers access to the data coming from the LiDAR scanner. More than just iPad users should be interested — the 2020 iPhone Pro models coming this fall are widely expected to also include LiDAR scanners.