Augmented reality (AR) experiences can be implemented with three common patterns: tabletop, flyover, and world-scale.
Flyover – With flyover AR you can explore a scene using your device as a window into the virtual world. A typical flyover AR scenario starts with the scene’s virtual camera positioned over an area of interest. You can walk around and reorient the device to focus on specific content in the scene.
Tabletop – Tabletop AR provides scene content anchored to a physical surface, as if it were a 3D-printed model. You can walk around the tabletop and view the scene from different angles.
World-scale – A kind of AR scenario where scene content is rendered exactly where it would be in the physical world. This is used in scenarios ranging from viewing hidden infrastructure to displaying waypoints for navigation. In AR, the real world, rather than a basemap, provides the context for your GIS data.
Flyover – On screen, flyover is visually indistinguishable from normal scene rendering.
Tabletop – In tabletop, scene content is anchored to a real-world surface.
World-scale – In world-scale AR, scene content is integrated with the real world.
Support for augmented reality is provided through tools available in each ArcGIS Runtime API Toolkit.
ArcGISARView uses an underlying ARKit or ARCore view and an ArcGIS Runtime AGSSceneView. Use the sceneView property to access the Runtime scene view.
Use the following methods on ArcGISARView to configure AR:
translationFactor - controls the relationship between physical device position changes and changes in the position of the scene view's camera. This is useful for tabletop and flyover AR.
originCamera - controls the initial position of the scene view's camera. When position tracking is started, ArcGISARView transforms the scene view camera's position using a transformation matrix provided by ARKit or ARCore. Once the origin camera is set, the manipulation of the scene view's camera is handled automatically.
setInitialTransformation – takes a point on the screen, finds the surface represented by that point, and applies a transformation such that the origin camera is pinned to the location represented by that point. This is useful for pinning content to a surface, which is needed for tabletop AR.
arSCNViewDelegate – forwards messages from ARKit to your app. You can use this to listen for tracking status changes or to render content using native iOS rendering (SceneKit).
locationChangeHandlerDelegate - forwards messages from the location data source associated with the ArcGISARView to your app. You can use this to listen for location, heading and status changes.
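As a sketch, the first two properties above might be set as follows. This assumes an ArcGISARView from the toolkit already exists in the view hierarchy; the coordinates and translation factor are illustrative values only.

```swift
import ArcGIS
import ArcGISToolkit

// Illustrative configuration; `arView` is an ArcGISARView already added to the view.
func configureCamera(arView: ArcGISARView) {
    // Start the virtual camera 250 m above a point of interest
    // (the coordinates here are placeholders).
    arView.originCamera = AGSCamera(latitude: 34.05, longitude: -118.24,
                                    altitude: 250, heading: 0, pitch: 90, roll: 0)

    // Each meter of physical device movement moves the virtual camera 50 m.
    arView.translationFactor = 50
}
```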
In addition to the toolkit, you'll need to use the following ArcGIS Runtime features provided by the underlying scene view when creating AR experiences:
Scene view space effect control — Disable rendering the 'starry sky' effect to display scene content on top of a camera feed.
Scene view atmosphere effect control — Disable rendering the atmosphere effect to avoid obscuring rendered content.
Surface transparency — Hide the ground when rendering world-scale AR because the camera feed, not the basemap, is providing context for your GIS content. You can use a semitransparent surface to calibrate your position in world-scale AR.
Scene view navigation constraint — By default, scene views constrain the camera to stay above the ground. Disable this constraint to let users take world-scale AR underground (for example, into a basement). The constraint also interferes with tabletop AR if the user attempts to look at the scene from below.
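The three scene view settings above might be applied as in the following sketch. Property names follow the Runtime iOS API (spaceEffect, atmosphereEffect, navigationConstraint); the surface opacity value is an assumption for calibration use.

```swift
import ArcGIS

// Sketch: configure a scene view for AR rendering over a camera feed.
// `sceneView` is assumed to come from ArcGISARView.sceneView.
func configureForWorldScale(sceneView: AGSSceneView) {
    // Show the camera feed instead of the 'starry sky'.
    sceneView.spaceEffect = .transparent
    // Don't obscure real-world imagery with the atmosphere effect.
    sceneView.atmosphereEffect = .none

    if let surface = sceneView.scene?.baseSurface {
        // Hide the ground; use a small value (e.g. 0.3) instead while calibrating.
        surface.opacity = 0
        // Allow the camera to go below ground level.
        surface.navigationConstraint = .none
    }
}
```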
To use ArcGISARView, first add it to the view, then configure the lifecycle methods to start and stop tracking as needed.
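A minimal lifecycle sketch, assuming the toolkit's startTracking(_:completion:) method and its ARLocationTrackingMode parameter (the .ignore mode relies on ARKit motion tracking only):

```swift
import UIKit
import ArcGISToolkit

class ARViewController: UIViewController {
    let arView = ArcGISARView()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Add the AR view so it fills the screen.
        arView.frame = view.bounds
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(arView)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Start tracking when the view is visible.
        arView.startTracking(.ignore, completion: nil)
    }

    override func viewDidDisappear(_ animated: Bool) {
        super.viewDidDisappear(animated)
        // Stop tracking to release the camera and sensors.
        arView.stopTracking()
    }
}
```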
Before you can use augmented reality, you'll need to request location and camera permissions.
On iOS, ensure the following properties are set in Info.plist:
Privacy - Camera Usage Description
Privacy - Location When In Use Usage Description
The deployment target should be set to a supported version of iOS (see System requirements for details).
If you’d like to restrict your app to installing only on devices that support ARKit, add arkit to the required device capabilities section of Info.plist:
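For reference, the raw Info.plist entries corresponding to the settings above look like the following; the usage-description strings are placeholders you should replace with your own wording.

```xml
<key>NSCameraUsageDescription</key>
<string>Camera access is required for augmented reality.</string>
<key>NSLocationWhenInUseUsageDescription</key>
<string>Location access is required to position AR content.</string>
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>arkit</string>
</array>
```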
Once you have installed the toolkit, configured your app to meet privacy requirements, requested location permissions, and added an ArcGISARView to your app, you can begin implementing your AR experience.
Understand Common AR Patterns
There are many AR scenarios you can achieve with Runtime. This SDK recognizes the following common patterns for AR:
Flyover – Explore a scene using your device as a window into the virtual world, starting with the scene’s virtual camera positioned over an area of interest.
Tabletop – Scene content is anchored to a physical surface, as if it were a 3D-printed model.
World-scale – Scene content is rendered exactly where it would be in the physical world, with the real world rather than a basemap providing the context for your GIS data.
Each experience is built using a combination of Runtime and toolkit features, together with some basic behavioral assumptions:
| AR pattern | Origin camera | Translation factor | Scene view | Base surface |
|---|---|---|---|---|
| Flyover AR | Above the tallest content in the scene | A large value to enable rapid traversal; 0 to restrict movement | Space effect: Stars. Atmosphere: Realistic | Displayed |
| Tabletop AR | On the ground at the center or lowest point of the scene | Based on the size of the target content and the physical table | Space effect: Transparent. Atmosphere: None | Optional |
| World-scale AR | At the same location as the physical device camera | 1, to keep virtual content in sync with the real-world environment | Space effect: Transparent. Atmosphere: None | Optional (useful for calibration) |
Add tabletop AR to your app
Tabletop AR allows you to use your device to interact with scenes as if they were 3D-printed models sitting on your desk. You could, for example, use tabletop AR to virtually explore a proposed development without needing to create a physical model.
Implement tabletop AR
Tabletop AR often allows users to place scene content on a physical surface of their choice, such as the top of a desk, for example. Once the content is placed, it stays anchored to the surface as the user moves around it.
Listen for ARKit tracking status changes and provide feedback to the user, for example when the user needs to move the phone more slowly or turn on more lights.
When tracking is ready and at least one plane has been found, wait for the user to tap. ARSCNViewDelegate defines renderer(_:didAdd:for:), which you can use to detect planes.
Once the user has tapped a point, call setInitialTransformation. The toolkit uses the native platform’s plane detection to position the virtual camera relative to the detected plane. If the method returns true, the transformation was set successfully and you can place the scene.
Create and display the scene. Set the navigation constraint on the scene’s base surface to .none.
For demonstration purposes, this code uses the Philadelphia mobile scene package because it is particularly well-suited for tabletop display. You can download that .mspk and add it to your project to make the code below work. Alternatively, you can use any scene for tabletop mapping, but be sure to define a clipping distance for a proper tabletop experience.
Find an anchor point in the scene. You can use a known value, a user-selected value, or a computed value. For simplicity, this example uses a known value. Place the origin camera at that point.
Set the translation factor on the ArcGIS AR view so that the whole scene can be viewed by moving around it. A useful formula for determining this value is translation factor = virtual content width / desired physical content width. The desired physical content width is the size of the physical table while virtual content width is the real-world size of the scene content; both measurements should be in meters. You can set the virtual content width by setting a clipping distance.
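The steps above can be sketched as follows. The package file name, anchor coordinates, and table/content widths are illustrative assumptions, and the clippingDistance property (a radius in meters around the origin camera) is assumed from the toolkit API.

```swift
import ArcGIS
import ArcGISToolkit

// Sketch of tabletop setup; assumes a scene package named "philadelphia.mspk"
// is in the app bundle and `arView` is an ArcGISARView in the view hierarchy.
func loadTabletopScene(into arView: ArcGISARView) {
    guard let url = Bundle.main.url(forResource: "philadelphia",
                                    withExtension: "mspk") else { return }
    let package = AGSMobileScenePackage(fileURL: url)
    package.load { [weak package] error in
        guard error == nil, let scene = package?.scenes.first else { return }

        // Show the camera feed, not the sky, around the tabletop.
        arView.sceneView.spaceEffect = .transparent
        arView.sceneView.atmosphereEffect = .none
        // Let the user look at the model from below.
        scene.baseSurface?.navigationConstraint = .none
        arView.sceneView.scene = scene

        // Anchor the origin camera at a known point (illustrative coordinates).
        arView.originCamera = AGSCamera(latitude: 39.9579, longitude: -75.1705,
                                        altitude: 8.8, heading: 0, pitch: 90, roll: 0)

        // Limit the scene to a 400 m radius, i.e. 800 m of virtual content width.
        arView.clippingDistance = 400

        // translation factor = virtual content width / physical content width:
        // 800 m of scene shown on a 1 m physical table.
        arView.translationFactor = 800 / 1
    }
}
```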
Set expectations about lighting before starting the experience. AR doesn't work well in low light.
Don't allow users to view arbitrary content in AR. Use curated content that has been designed for tabletop viewing.
Keep scene content focused; this should ideally be a single geographic feature or a small area of a city.
Ensure that the scene looks good from below; your users may attempt to look at the scene from any angle.
When using scenes that aren’t confined to a small area, set a clipping distance to ensure that content is limited to a small area on the physical surface.
Add flyover AR to your app
Flyover AR displays a scene while using the movement of the physical device to move the scene view camera. For example, you can walk around while holding up your device as a window into the scene. Unlike other AR experiences, the camera feed is not displayed to the user, making flyover more similar to a traditional virtual reality (VR) experience.
Flyover is the simplest AR scenario to implement, as there is only a loose relationship between the physical world and the rendered virtual world. With flyover, you can imagine your device as a window into the virtual scene.
Place the origin camera above the content you want the user to explore, ideally in the center. Typically, you’ll want to place the origin camera above the highest point in your scene.
Set the translation factor to allow rapid traversal of the scene. The translation factor defines the relationship between physical device movement and virtual camera movement. To create a more immersive experience, set the space effect on the scene view to .stars and the atmosphere effect to .realistic. Disable the navigation constraint on the scene's base surface to prevent camera position problems near the ground.
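A flyover configuration sketch, assuming `arView` is an ArcGISARView in the view hierarchy and `scene` is a loaded AGSScene; the coordinates and translation factor are placeholders.

```swift
import ArcGIS
import ArcGISToolkit

// Sketch: configure a flyover experience.
func configureFlyover(arView: ArcGISARView, scene: AGSScene) {
    arView.sceneView.scene = scene

    // Start well above the area of interest (illustrative coordinates).
    arView.originCamera = AGSCamera(latitude: 37.80, longitude: -122.27,
                                    altitude: 1_000, heading: 0, pitch: 90, roll: 0)

    // 1 m of physical movement moves the virtual camera 1 km.
    arView.translationFactor = 1_000

    // Immersive rendering: keep the sky and atmosphere.
    arView.sceneView.spaceEffect = .stars
    arView.sceneView.atmosphereEffect = .realistic

    // Avoid camera clamping near the ground.
    scene.baseSurface?.navigationConstraint = .none

    // Flyover needs only ARKit motion tracking, not the device location.
    arView.startTracking(.ignore, completion: nil)
}
```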
Consider the following guidelines for creating high-quality flyover AR experiences:
Provide actionable feedback to the user when there are ARKit tracking issues. Tracking problems caused by environmental factors like low light can break the AR experience.
Set expectations about the environment before starting the AR experience. Flyover AR doesn't work well in low light or in tight, constrained spaces.
Start the AR experience in the center of the area of interest for your scene. Users are likely to move around freely, including rotating and looking behind where they originally start.
Add world-scale AR to your app
A world-scale AR experience is defined by the following characteristics:
The scene view's virtual camera precisely matches the position and orientation of the device's physical camera, placing scene content in the context of the real world.
Context aids, like the basemap, are hidden; the camera feed provides real-world context.
Some example use cases of world-scale AR include:
Visualizing hidden infrastructure, like sewers, water mains, and telecom conduits.
Maintaining context while performing rapid data collection for a survey.
Visualizing a route line while navigating.
Configure content for world-scale AR
The goal of a world-scale AR experience is to create the illusion that your GIS content is physically present in the world around you. There are several requirements for content that will be used for world-scale AR that go beyond what is typically required for 2D mapping.
Ensure that all data has an accurate elevation (or Z) value. For dynamically generated graphics (for example, route results) use an elevation surface to add elevation.
Use an elevation source in your scene to ensure all content is placed accurately relative to the user.
Don't use realistic 3D symbology to represent the physical features themselves. For example, do not use a generic tree model to represent tree features or a generic fire hydrant model to represent fire hydrant features. Generic symbology won’t capture the unique geometry of actual real-world objects and will highlight minor inaccuracies in position.
Consider how you present content that would otherwise be obscured in the real world, as the parallax effect can make that content appear to move unnaturally. For example, underground pipes will ‘float’ relative to the surface, even though they are at a fixed point underground. Have a plan to educate users, or consider adding visual guides, like lines drawn to link the hidden feature to the obscuring surface (for example, the ground).
By default, ArcGIS Runtime renders content over a large distance, which can be problematic when you are trying to view a limited subset of nearby features (just the pipes in your building, not the entire campus, for example). You can use the clipping distance to limit the area over which scene content renders.
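Assuming the toolkit's clippingDistance property (a radius in meters around the camera), limiting the render distance is a one-line change:

```swift
import ArcGISToolkit

// Sketch: limit world-scale rendering to nearby features.
// `arView` is the ArcGISARView used for the experience.
func limitRenderDistance(arView: ArcGISARView) {
    // Only render scene content within ~100 m of the camera;
    // restore the default behavior to render over the full distance.
    arView.clippingDistance = 100
}
```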
Location tracking options for world-scale AR
There are a few strategies for determining the device’s position in the world and maintaining that position over time:
Use the device’s location data source (for example, GPS) to acquire an initial position, then make further position updates using ARKit or ARCore only.
Use the location data source continuously.
With continuous updates, the origin camera is set every time the location data source provides a new update. With a one-time update, the origin camera is set only once.
There are benefits and drawbacks to each approach that you should consider when designing your AR experience:
One-time update
Advantage: ARKit/ARCore tracking is more precise than most location data sources.
Advantage: Content stays convincingly pinned to its real-world position, with minimal drifting or jumping.
Disadvantage: Error accumulates the further you venture from where you start the experience.
Continuous update
Advantage: Works over a larger area than ARKit or ARCore tracking alone.
Disadvantage: Visualized content will jump as you move through the world and the device’s location is updated (as infrequently as once per second rather than ARKit’s 60 times per second).
Disadvantage: Because the origin camera is constantly being reset, you can’t use panning to manually correct position errors.
You don’t need to make a binary choice between approaches for your app. Your app can use continuous updates while the user moves through larger areas, then switch to a primarily ARKit- or ARCore-driven experience when you need greater precision.
Using ArcGIS Runtime, the choice of location strategy is specified with a call to startTracking() on the AR view control. To change the location update mode, stop tracking and then resume tracking with the desired mode.
Configure the ArcGISARView with a location data source. The location data source provides location information for the device. The AR scene view uses the location data source to place the virtual scene camera close to the location of the physical device’s camera.
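Putting the last two steps together, a world-scale tracking sketch might look like this. It assumes the Runtime's AGSCLLocationDataSource (a Core Location-backed data source) and the toolkit's ARLocationTrackingMode values (.ignore, .initial, .continuous).

```swift
import ArcGIS
import ArcGISToolkit

// Sketch: world-scale tracking with the system location data source.
// `arView` is an ArcGISARView in the view hierarchy.
func startWorldScaleTracking(arView: ArcGISARView) {
    // Provide device positions from Core Location (GPS).
    arView.locationDataSource = AGSCLLocationDataSource()

    // .initial seeds the origin camera once, then relies on ARKit;
    // use .continuous to re-seed on every location update instead.
    arView.startTracking(.initial) { error in
        if let error = error {
            print("Failed to start AR tracking: \(error)")
        }
    }
}
```

To switch modes later, call stopTracking() and then startTracking with the desired mode, as described above.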