
STUDY / iOS SceneKit and ARKit from Zero

MY INTEREST in learning Apple Swift and developing iOS apps grew sky high once I started to understand the true capabilities of iOS devices, especially when Apple announced ARKit for Augmented Reality content creation at WWDC 2017 just a few weeks ago.

This is actually really huge for me, especially because, as an indie 3D artist, I have always had this urge to deliver 3D content directly into real-life environments.

Ideally this AR experience could be presented in realtime by a web service like Sketchfab, or perhaps authored in Unity or Unreal Engine and then easily transferred and presented as a full 3D render on a mobile device like an iPhone or iPad, or other smart mobile devices. And apparently yes, with iOS 11, Apple finally says: let's deliver AR.

Free / Paid Apple Developer and Xcode

I am fairly new to the Xcode environment, but I am starting to get familiar with it, and it is in fact quite enjoyable. The Swift language and the introduction of Swift Playgrounds seriously give me extra confidence to actually make my own little app.

What Apple did not mention is that even with a free Apple Developer account, anyone can actually build and test apps on their own device, with only a few limitations. If you would like to publish on the App Store and charge for your apps, you then need to pay the annual Apple Developer fee. Other than that, you can really experiment and make the most of your device via Swift and your own custom apps.

So if you own an iPhone or iPad, as a free Apple Developer you can technically create an app for yourself and quickly test it on your own device! This is really powerful and seriously fun. You will also need a MacBook Pro or another machine running macOS.

I am using my seven-year-old MacBook Pro with a newer iPhone 7 (dual camera) to run and study the ARKit and SceneKit environment. Currently Xcode 9 is still in beta, and likewise iOS 11 and macOS High Sierra; their final official releases will come in September 2017. We have a few months to study this AR thing.

For AR content delivery, you will need a device with an A9 chip or newer; that means iPhone 6s and up.
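As a quick sanity check, here is a minimal sketch (assuming the final iOS 11 API name, ARWorldTrackingConfiguration; the beta SDKs may still use an older name) that tests at runtime whether the device supports ARKit world tracking:

import ARKit

// World tracking requires an A9 chip or newer; this returns false on older devices.
if ARWorldTrackingConfiguration.isSupported {
    print("ARKit world tracking is supported on this device.")
} else {
    print("ARKit world tracking is not supported here.")
}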

Starting with ARKit and SceneKit

SceneKit and ARKit are related in the sense that ARKit can basically run and "project" any 3D scene (SCN) that you procedurally generate with Swift code, or that you bundle from a 3D package like Blender or others. A SCN file can be converted easily from Collada DAE, Alembic, or OBJ formats, all inside Xcode. It can contain cameras, mesh data, materials, textures, and animation, within certain limits.

If I am not wrong, you can also author your AR and 3D scene creation using Unity or Unreal Engine, but for what I am trying at the moment, I will do everything from inside Xcode. Each approach has its pros and cons. A scene (SCN) can also be built inside Xcode without coding; keep this in mind. I will try to cover the basics in this article.

There are actually a lot of topics and subjects I would like to explore and cover. I will start with a very basic SceneKit and ARKit exploration.

Starting from the provided ARKit template is the easiest route. From the Apple Developer website, you can also download their "Place Object" example for ARKit. That one is slightly more advanced, but it allows the user to place 3D objects and transform them (rotate, scale), with nice added lighting that reacts to the AR scene.


With that template, we get Apple's ARKit "3D spaceship" presented as a 3D object against a real-life background.

From the Xcode menu, choose Product > Run (Cmd-R); out of the box you are going to get a result like the one below:


Right away, you can see smooth realtime tracking, at 60 fps, of your 3D AR object against the real background on your device!

The "3D ship" is located inside a folder called art.scnassets, in a scene file called ship.scn.

This basic ARKit template example does not have full 3D lighting or an HDR environment light yet. That is something you can attach via Swift later on.
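As a small taste of what that could look like, here is a hedged sketch using two real view properties, assuming sceneView is the ARSCNView from the template:

// Let SceneKit add a default light that follows the camera,
// and let ARKit estimate real-world lighting for the scene.
sceneView.autoenablesDefaultLighting = true
sceneView.automaticallyUpdatesLighting = true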

DIY SCN via Swift

Let's try to create a basic 3D Scene using Swift code.

I am following this documentation from Apple:
https://developer.apple.com/documentation/arkit/arscnview/providing_3d_virtual_content_with_scenekit

This will draw a 3D cube, without material, into the AR scene.
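For reference, here is a minimal sketch of that idea, placed inside the template's ViewController (where sceneView already exists); the node name, cube size, and position are my own:

// Create a 10 cm cube and place it half a meter in front of the starting camera.
let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
let boxNode = SCNNode(geometry: box)
boxNode.position = SCNVector3(0, 0, -0.5)
sceneView.scene.rootNode.addChildNode(boxNode)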


Run the app, and now you can see a 3D cube floating in AR, but without material. And we do not have a light either.

We need to get familiar with SceneKit basics such as materials, lights, etc.:
https://developer.apple.com/documentation/scenekit
https://www.raywenderlich.com/83748/beginning-scene-kit-tutorial
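To give the cube from the previous sketch some shading, something like this adds a simple red Blinn material and an omni light (reusing the box and boxNode names from above, which are my own):

// A basic red material; Blinn is SceneKit's default lighting model.
let material = SCNMaterial()
material.diffuse.contents = UIColor.red
box.materials = [material]

// Without a light the cube renders flat, so add an omni light above it.
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light?.type = .omni
lightNode.position = SCNVector3(0, 2, 0)
sceneView.scene.rootNode.addChildNode(lightNode)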

You can code the SCN using Swift, attaching elements and objects one by one, or you can build your own basic SCN using the Xcode scene editor and then modify it using Swift.

Whichever works easiest for you. For the sake of testing it out, I decided to also try making a SCN using the Xcode interface. This should feel familiar to 3D artists:




I named my scene: "primitive.scn".
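Loading it from Swift is then just a couple of lines; a sketch, assuming the file sits inside art.scnassets like the template's ship.scn:

// Swap the template's ship scene for our hand-built one.
if let scene = SCNScene(named: "art.scnassets/primitive.scn") {
    sceneView.scene = scene
}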

Navigating inside the scene editor is pretty straightforward, with a bit of trial and error:
- LMB = tumble/rotate
- Alt/Option + LMB = pan
- Alt/Option + Mouse Scroll Wheel = zoom in and out
- Select an object and Cmd+F = focus on the selected object


Let's try adding some basic 3D primitives: Box, Sphere, Cylinder. Add materials as well via the Attributes inspector in Xcode's Utilities panel. There are a lot of object attributes to play around with, such as Chamfer, which adds a rounded edge to a Box for a nice highlight along the edge. You also have a Text object.

Actually, by default, each 3D object from the Library gets a Blinn material. Once you add a light, you can see some shading rendered.


Try the basic material models like Blinn, Phong, and Lambert, and also try the PBR (Physically Based) material, which will react nicely with HDR 3D lighting for sure. You have a lot of nice options out of the box. For 3D content to look good, you need to consider proper texturing for Diffuse, Reflection, etc. That is something to keep in mind.
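In Swift, switching an existing material to PBR is mainly a lighting-model change; a small sketch using SceneKit's material properties (the values here are arbitrary):

// Physically based shading; metalness and roughness drive the look.
material.lightingModel = .physicallyBased
material.metalness.contents = 0.8
material.roughness.contents = 0.2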

NOTE: See some great examples of 3D presentation at Sketchfab. I always keep referring to Sketchfab, my favourite 3D publishing platform, which will hopefully work with Apple AR via the web eventually. The materials, lights, etc. at Sketchfab work in realtime with WebGL.



If you test the 3D SCN as an AR scene view now, you will see that your primitive 3D objects (Box, Sphere, Cylinder) appear HUGE. The unit is the real-world meter. Let's resize each individual object and test again.
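You can resize in the editor's Node inspector, or from Swift; a quick sketch (the "sphere" node name here is hypothetical):

// Shrink a node to one tenth of its authored size (units are meters).
if let sphereNode = sceneView.scene.rootNode.childNode(withName: "sphere", recursively: true) {
    sphereNode.scale = SCNVector3(0.1, 0.1, 0.1)
}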

Watch how the AR objects are actually placed in reference to the world XYZ zero position. The SCN camera is not actually the AR camera. That is another gotcha!

You can use Swift code to enable camera control, but this can override your user's AR experience, since the AR camera no longer controls the six degrees of freedom motion:
sceneView.allowsCameraControl = true

Only if you place the SCN camera at world 0,0,0 can you roughly preview how the layout of the SCN will look through the user's camera when they run the app. But again, remember that we want the user to move their device with full six degrees of freedom, walking around to experience the AR. I will elaborate on and explore this area again in the future.

Of course, you can always change the layout using Swift code on the fly, whether through user interaction or via animation.
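As one example of the animation route, SceneKit's implicit transactions animate property changes; this sketch (reusing the hypothetical boxNode from earlier) slides the box to a new position over one second:

// Animate the node to a new position over one second.
SCNTransaction.begin()
SCNTransaction.animationDuration = 1.0
boxNode.position = SCNVector3(0.2, 0, -0.5)
SCNTransaction.commit()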


Above, I set the layout properly: the camera at world 0,0,0, and the Sphere around 0.5 meters in front of the camera along Z. The Cylinder and Box are placed at the top right.

If you check ship.scn, you will see the ship is placed at -0.8 on the Z axis and 0.1 up on the Y axis.



When I run the app, I see the default positions, exactly like that. One important thing: keep the real-world "floor" or reference anchor points in mind. AR objects will be placed correctly only if we have a proper reference anchor. From a quick guess, a flat floor works best, though you can go lower than the floor. It cannot track walls, unless you are clever; maybe not impossible.
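Horizontal plane detection is what gives ARKit those floor anchors; enabling it is a small addition to the session configuration (again assuming the ARWorldTrackingConfiguration name from the iOS 11 SDK):

// iOS 11 ARKit detects horizontal planes (floors, tables), not walls.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal
sceneView.session.run(configuration)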

The Sphere is at -0.5 units in Z (in front of the camera), and the Cylinder and Box are in the upper right corner.

Remember how we used Swift to place a Box, and then overrode the SCN with our own scene made inside Xcode? If you rearrange the code slightly, you can reattach the Box we made with Swift code to the new SCN.
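A minimal sketch of that rearrangement (names and sizes are my own): load the hand-built scene first, then attach the coded box node to its root before assigning the scene to the view.

// Load the hand-built scene, then attach the Swift-made box to it.
if let scene = SCNScene(named: "art.scnassets/primitive.scn") {
    let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.01)
    let boxNode = SCNNode(geometry: box)
    boxNode.position = SCNVector3(0, -0.2, -0.5)
    scene.rootNode.addChildNode(boxNode)
    sceneView.scene = scene
}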



That is pretty much it for our first exploration of SCN and the AR scene view. I hope this is useful!





