Build a custom AR view by rendering camera images and using position-tracking information to display overlay content.
Overview
ARKit includes view classes for easily displaying AR experiences with SceneKit or SpriteKit. However, if you instead build your own rendering engine (or integrate with a third-party engine), ARKit also provides all the support necessary to display an AR experience with a custom view.
In any AR experience, the first step is to configure an ARSession object to manage camera capture and motion processing. A session defines and maintains a correspondence between the real-world space the device inhabits and a virtual space where you model AR content. To display your AR experience in a custom view, you’ll need to:
Retrieve video frames and tracking information from the session.
Render those frame images as the backdrop for your view.
Use the tracking information to position and draw AR content atop the camera image.
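A minimal sketch of that first step, creating and running the session yourself (the class name here is illustrative, not from the template):

```swift
import ARKit

class CustomARRenderer: NSObject {
    // When you use a custom view, you own and manage the session directly.
    let session = ARSession()

    func startSession() {
        // World tracking provides the device's position and orientation
        // relative to real-world space, which the later steps rely on.
        let configuration = ARWorldTrackingConfiguration()
        session.run(configuration)
    }

    func pauseSession() {
        session.pause()
    }
}
```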
Note
This article covers code found in Xcode project templates. For complete example code, create a new iOS application with the Augmented Reality template, and choose Metal from the Content Technology popup menu.
Get Video Frames and Tracking Data from the Session
Create and maintain your own ARSession instance, and run it with a session configuration appropriate for the kind of AR experience you want to support. The session captures video from the camera, tracks the device’s position and orientation in a modeled 3D space, and provides ARFrame objects. Each such object contains both an individual video frame image and position-tracking information from the moment that frame was captured.
There are two ways to access ARFrame objects produced by an AR session, depending on whether your app favors a pull or a push design pattern.
If you prefer to control frame timing (the pull design pattern), use the session’s currentFrame property to get the current frame image and tracking information each time you redraw your view’s contents. The ARKit Xcode template uses this approach:
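A sketch of that pull approach, called once per render pass (for example from an MTKViewDelegate’s draw(in:) method); the helper names are illustrative, and similar sketches appear later in this article:

```swift
func update() {
    // currentFrame is nil until the session produces its first frame.
    guard let frame = session.currentFrame else { return }

    updateCapturedImageTextures(frame: frame)  // Y/CbCr textures for the camera backdrop
    updateSharedUniforms(frame: frame)         // camera matrices and light estimate
    updateAnchors(frame: frame)                // per-anchor model matrices for overlay content
}
```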
Alternatively, if your app design favors a push pattern, implement the session:didUpdateFrame: delegate method, and the session will call it once for each video frame it captures (at 60 frames per second by default).
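A sketch of the push approach, assuming your renderer adopts ARSessionDelegate (session(_:didUpdate:) is the Swift spelling of session:didUpdateFrame:); the processFrame helper is illustrative:

```swift
extension CustomARRenderer: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // ARKit calls this once for each captured video frame.
        processFrame(frame)
    }
}

// Assign the delegate before running the session:
// session.delegate = self
```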
Upon obtaining a frame, you’ll need to draw the camera image, and update and render any overlay content your AR experience includes.
Draw the Camera Image
Each ARFrame object’s capturedImage property contains a pixel buffer captured from the device camera. To draw this image as the backdrop for your custom view, you’ll need to create textures from the image content and submit GPU rendering commands that use those textures.
The pixel buffer’s contents are encoded in a biplanar YCbCr (also called YUV) data format; to render the image you’ll need to convert this pixel data to a drawable RGB format. For rendering with Metal, you can perform this conversion most efficiently in GPU shader code. Use CVMetalTextureCache APIs to create two Metal textures from the pixel buffer, one each for the buffer’s luma (Y) and chroma (CbCr) planes:
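A sketch of that texture creation, assuming a CVMetalTextureCache created once from your MTLDevice during renderer setup (error handling abbreviated):

```swift
import CoreVideo
import Metal

// Created once, e.g. CVMetalTextureCacheCreate(nil, nil, device, nil, &capturedImageTextureCache)
var capturedImageTextureCache: CVMetalTextureCache!

func createTexture(fromPixelBuffer pixelBuffer: CVPixelBuffer,
                   pixelFormat: MTLPixelFormat,
                   planeIndex: Int) -> MTLTexture? {
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex)

    var texture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(nil, capturedImageTextureCache,
                                                           pixelBuffer, nil, pixelFormat,
                                                           width, height, planeIndex, &texture)
    guard status == kCVReturnSuccess, let texture = texture else { return nil }
    return CVMetalTextureGetTexture(texture)
}

// The luma plane is single-channel and the chroma plane is two-channel:
// let textureY    = createTexture(fromPixelBuffer: frame.capturedImage, pixelFormat: .r8Unorm,  planeIndex: 0)
// let textureCbCr = createTexture(fromPixelBuffer: frame.capturedImage, pixelFormat: .rg8Unorm, planeIndex: 1)
```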
Next, encode render commands that draw those two textures using a fragment function that performs YCbCr to RGB conversion with a color transform matrix:
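On the CPU side, a sketch of that encoding might look like the following; the pipeline state, depth state, vertex buffer, and texture properties are assumed to be set up elsewhere, and the paired Metal fragment function multiplies the sampled luma and chroma values by a YCbCr-to-RGB conversion matrix:

```swift
func drawCapturedImage(renderEncoder: MTLRenderCommandEncoder) {
    guard let textureY = capturedImageTextureY,
          let textureCbCr = capturedImageTextureCbCr else { return }

    // Pipeline whose fragment function performs the YCbCr-to-RGB conversion.
    renderEncoder.setRenderPipelineState(capturedImagePipelineState)
    renderEncoder.setDepthStencilState(capturedImageDepthState)

    // Full-screen quad carrying positions and texture coordinates.
    renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)

    // Bind the luma and chroma planes for the fragment function to sample.
    renderEncoder.setFragmentTexture(textureY, index: 0)
    renderEncoder.setFragmentTexture(textureCbCr, index: 1)

    renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
}
```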
Note
Use the displayTransformForOrientation:viewportSize: method to make sure the camera image covers the entire view. For an example of how to use this method, as well as complete Metal pipeline setup code, see the full Xcode template. (Create a new iOS application with the Augmented Reality template, and choose Metal from the Content Technology popup menu.)
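In Swift the method is displayTransform(for:viewportSize:); a rough sketch of using it to remap the backdrop quad’s texture coordinates (the orientation, viewport, vertex-buffer, and source-coordinate values are assumptions):

```swift
// Unit-quad texture coordinates before the display transform is applied (u, v per vertex).
let planeTexCoords = [CGPoint(x: 0, y: 1), CGPoint(x: 1, y: 1),
                      CGPoint(x: 0, y: 0), CGPoint(x: 1, y: 0)]

func updateImagePlane(frame: ARFrame) {
    // Invert the display transform to map view coordinates back into captured-image coordinates.
    let displayToCameraTransform = frame.displayTransform(for: orientation,
                                                          viewportSize: viewportSize).inverted()

    // Each vertex in the buffer stores x, y, u, v; rewrite the u, v pair of all four vertices.
    let vertexData = imagePlaneVertexBuffer.contents().assumingMemoryBound(to: Float.self)
    for index in 0..<4 {
        let transformed = planeTexCoords[index].applying(displayToCameraTransform)
        vertexData[4 * index + 2] = Float(transformed.x)
        vertexData[4 * index + 3] = Float(transformed.y)
    }
}
```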
Track and Render Overlay Content
AR experiences typically focus on rendering 3D overlay content so that the content appears to be part of the real world seen in the camera image. To achieve this illusion, use the ARAnchor class to model the position and orientation of your own 3D content relative to real-world space. Anchors provide transforms that you can reference during rendering.
For example, the Xcode template creates an anchor located about 20 cm in front of the device whenever a user taps on the screen:
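A sketch of that tap handler; the translation of -0.2 along the camera’s z-axis places the anchor roughly 20 cm in front of the device:

```swift
@objc func handleTap(_ gestureRecognizer: UITapGestureRecognizer) {
    // Anchor placement is relative to the current camera pose.
    guard let currentFrame = session.currentFrame else { return }

    // Build a transform 0.2 m in front of the camera (negative z is "forward").
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.2
    let transform = simd_mul(currentFrame.camera.transform, translation)

    // The new anchor appears in subsequent frames' anchors array.
    let anchor = ARAnchor(transform: transform)
    session.add(anchor: anchor)
}
```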
In your rendering engine, use the transform property of each ARAnchor object to place visual content. The Xcode template uses each of the anchors added to the session in its handleTap method to position a simple cube mesh:
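A sketch of that per-anchor update, assuming a per-instance uniform buffer; the InstanceUniforms layout and the buffer properties are illustrative:

```swift
struct InstanceUniforms {
    var modelMatrix: simd_float4x4   // illustrative layout shared with the vertex shader
}

func updateAnchors(frame: ARFrame) {
    anchorInstanceCount = frame.anchors.count
    for (index, anchor) in frame.anchors.enumerated() {
        // An anchor's transform maps anchor-local space into world (session) space,
        // so it serves directly as the cube's model matrix.
        let instanceUniforms = anchorUniformBufferAddress
            .assumingMemoryBound(to: InstanceUniforms.self)
            .advanced(by: index)
        instanceUniforms.pointee.modelMatrix = anchor.transform
    }
}
```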
Note
In a more complex AR experience, you can use hit testing or plane detection to find the positions of real-world surfaces. For details, see the planeDetection property and the hitTest:types: method. In both cases, ARKit provides results as ARAnchor objects, so you still use anchor transforms to place visual content.
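For example, a sketch of opting into plane detection and hit-testing a frame against detected planes; note that ARFrame’s hitTest(_:types:) (the Swift spelling of hitTest:types:) takes a point in normalized image coordinates:

```swift
// Opt in to horizontal plane detection in the session configuration.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal
session.run(configuration)

// Hit-test a point (normalized image coordinates, 0...1) against detected planes,
// and anchor content at the resulting real-world position.
func addAnchor(at normalizedPoint: CGPoint, in frame: ARFrame) {
    if let result = frame.hitTest(normalizedPoint, types: .existingPlaneUsingExtent).first {
        session.add(anchor: ARAnchor(transform: result.worldTransform))
    }
}
```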
Render with Realistic Lighting
When you configure shaders for drawing 3D content in your scene, use the estimated lighting information in each ARFrame object to produce more realistic shading:
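For instance, the per-frame shared uniforms can carry the ambient light estimate, where a value of 1000 lumens corresponds to neutral lighting; the uniform buffer property here is illustrative:

```swift
func updateSharedUniforms(frame: ARFrame) {
    // View and projection matrices from frame.camera would also be written here.

    // Fall back to neutral lighting when no estimate is available for this frame.
    var ambientIntensity: Float = 1.0
    if let lightEstimate = frame.lightEstimate {
        ambientIntensity = Float(lightEstimate.ambientIntensity) / 1000.0
    }
    sharedUniforms.pointee.ambientLightIntensity = ambientIntensity
}
```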
Note
For the complete set of Metal setup and rendering commands that go with this example, see the full Xcode template. (Create a new iOS application with the Augmented Reality template, and choose Metal from the Content Technology popup menu.)