
SWIFT PLAYGROUND / Simple Basic AR

Being able to code AR content and experiences quickly and easily from an iPad has always been my dream. The Swift Playgrounds app is the perfect candidate: we don't have Xcode on iPadOS yet, but we do have this powerful coding app where we can write Swift and interactively generate 3D Augmented Reality. And that's just one area that can help me dig deeper into programming and eventually build a useful app.

AR presents users with the perfect immersive environment: interactive virtual 3D scenes, and much more, that they can simply drop into real life. The only limit is your imagination.

When Swift Playgrounds' AR capability is paired with another AR app such as Reality Composer to quickly compose a 3D scene and attach interactions, audio, and behaviours, you suddenly have a very powerful AR setup. And it can all be done on the iPad.

We are in luck: apparently we have been able to procedurally generate AR content and experiences since Swift 5.1!

I recommend using a 2020 iPad Pro with LiDAR, but earlier models might work as well.

Let's get started...

A STARTING "BASE" AR TEMPLATE:

import PlaygroundSupport
import RealityKit
import UIKit

let arscene = ARView(frame: CGRect(x: 0, y: 0, width: 400, height: 400), cameraMode: .ar, automaticallyConfigureSession: true)

// Create simple box
let box = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)

// Setup material
let boxMaterial = SimpleMaterial(color: .cyan, isMetallic: false)

// Attach material into Box
let boxEntity = ModelEntity(mesh: box, materials: [boxMaterial])

// Create Anchor Entity
let planeAnchor = AnchorEntity(plane: .horizontal)

// Adjust box position locally
boxEntity.position = [0.0, 0.1, 0.0]

// Attach all into arscene
arscene.scene.addAnchor(planeAnchor)
planeAnchor.addChild(boxEntity)


PlaygroundPage.current.setLiveView(arscene)


The code above creates a basic 3D AR scene containing just a box with a cyan plastic material.
You just created a simple 3D AR scene from Swift Playgrounds!


NOTE:
The code above was originally an example from Ash (of the USDZ Share website) that I modified slightly. Thanks Ash!
https://usdzshare.com


The Swift code above can be used as a preliminary study. It isn't the prettiest, and it doesn't encapsulate the code in a STRUCT or CLASS in a more organised way, but it should give you the idea, and you can start tinkering with it.
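
For reference, here is a minimal sketch of one way the same setup could be wrapped in a helper function; the function name is my own, not from the original example:

// Hedged sketch: the same scene setup, encapsulated in a function
func makeBoxScene() -> ARView {
    let view = ARView(frame: CGRect(x: 0, y: 0, width: 400, height: 400), cameraMode: .ar, automaticallyConfigureSession: true)

    let mesh = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)
    let material = SimpleMaterial(color: .cyan, isMetallic: false)
    let entity = ModelEntity(mesh: mesh, materials: [material])
    entity.position = [0.0, 0.1, 0.0]

    let anchor = AnchorEntity(plane: .horizontal)
    anchor.addChild(entity)
    view.scene.addAnchor(anchor)
    return view
}

// The live view then becomes a one-liner:
// PlaygroundPage.current.setLiveView(makeBoxScene())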

Once you have retyped the code above in Swift Playgrounds, try replacing parts of it and creating your own objects. Build your own AR scene.

Here are some more ideas to get you started:
1) Try making all the primitive objects you can create: Cube, Sphere, Plane, 3D Text. For example, here is 3D text:

let text3D = MeshResource.generateText(
    "hey you",
    extrusionDepth: 0.01,
    font: .systemFont(ofSize: 0.03),
    containerFrame: .zero,
    alignment: .right,
    lineBreakMode: .byTruncatingTail)

let text3DMat = SimpleMaterial(color: UIColor(hue: 0.8, saturation: 0.4, brightness: 0.7, alpha: 0.88), roughness: 0.5, isMetallic: true)

let text3dEntity = ModelEntity(mesh: text3D, materials: [text3DMat])
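
The other primitives follow the same pattern. A quick sketch (the sizes are arbitrary choices of mine, and I reuse the material defined above):

// Sphere and plane primitives, built the same way as the box and text
let sphere = MeshResource.generateSphere(radius: 0.05)
let sphereEntity = ModelEntity(mesh: sphere, materials: [text3DMat])

let plane = MeshResource.generatePlane(width: 0.2, depth: 0.2)
let planeEntity = ModelEntity(mesh: plane, materials: [text3DMat])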

2) Try making different kinds of materials, from SimpleMaterial to something more esoteric like a transparent material or the magic Occlusion Material!

// Give me a random number to make a random color material each time the code runs
let randomFloat = Float.random(in: 0.0...1.0)

let sphereMatAnim = SimpleMaterial(color: UIColor(hue: CGFloat(randomFloat), saturation: 1.0, brightness: 0.7, alpha: 0.88), roughness: 0.5, isMetallic: true)

// Let's create the magic occlusion material that will occlude everything behind it
let occlusionMaterial = OcclusionMaterial(receivesDynamicLighting: false)
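
For the transparent material, one approach worth trying (a sketch, not the only way) is to lower the alpha of a SimpleMaterial's base color, just as the alpha: 0.88 above hints at:

// A see-through, glassy material via a low-alpha base color
let glassMaterial = SimpleMaterial(color: UIColor.white.withAlphaComponent(0.3), roughness: 0.1, isMetallic: false)

// The occlusion material is applied like any other material,
// e.g. on a plane that should hide virtual objects behind it
let occluderEntity = ModelEntity(mesh: MeshResource.generatePlane(width: 0.3, depth: 0.3), materials: [occlusionMaterial])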


3) Try loading .usdz and also .reality files (from Reality Composer) into your 3D scene. You can find USDZ assets on Apple's website, the USDZShare website, Sketchfab, or even make your own using Blender 3D. A sketch of both loading calls follows below.
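
Here is a hedged sketch of both. The file names "toy" (for toy.usdz) and "Scene" (for Scene.reality) are hypothetical placeholders for resources you would add to the playground yourself:

// Load a .usdz model and attach it to the existing plane anchor
if let toyEntity = try? Entity.loadModel(named: "toy") {
    planeAnchor.addChild(toyEntity)
}

// Load the first anchor from a Reality Composer .reality file
if let realityAnchor = try? Entity.loadAnchor(named: "Scene") {
    arscene.scene.addAnchor(realityAnchor)
}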

4) Re-watching the AR video presentations from WWDC 2019 is probably a good place to start; also check out what people have been writing, creating, and sharing online.

I have really only touched the basics here, but there are still plenty of things I want to try:
- Creating a simple UI to interactively manipulate AR objects (see the sketch after this list)
- Multi-user experiences via Swift Playgrounds
- Making a multi-page Swift Playground that can help other people learn.
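
For the first idea, RealityKit already ships with built-in entity gestures. A minimal sketch, assuming the boxEntity and arscene from the template above:

// Gestures need collision shapes to hit-test against
boxEntity.generateCollisionShapes(recursive: true)

// Let the user drag and rotate the box with touch
arscene.installGestures([.translation, .rotation], for: boxEntity)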




Comments

Popular posts from this blog

PYTHON PROCESSING / It only really begins ...

Back into Processing again, with Python! While Daniel Shiffman is continuously inspiring us with his CODING TRAIN series on YouTube, using Processing with Java and Javascript language, I decided to free my mind and trying to really do something using Processing and Python language. Installing Python language version on Processing 3 is easy enough, just first download the latest Processing and install the Python language mode via Add Mode button. Other link that might help: https://github.com/jdf/processing.py http://py.processing.org/tutorials/ BLANK MODE As soon as Processing Python Mode, opens up and running I am presented with a  blank environment. Suddenly I recalled my journey in learning programming from zero until now... With Python, outside Processing, commonly people will be introduced to Python IDE or IDLE environment. Something that looks like Console Window or Command Prompt, where we type single liners command. Python Command Line and IDE normally have t

PYTHON / OpenCV, Recreate Uncanny Manga - Anime Style

Can you tell what it is? Computer Vision. Yesterday, I spend almost whole day exploring this opencv module using Python. What I discovered was revealing. Even at the very basic level, I could produce some interesting Image and Video manipulation using all the code collected from documentation and many, many blog tutorials. If you are a total noob like me, I am still getting used to knowing that the CV in OpenCV means Computer Vision! Actuallly, I recalled that I did try to get into OpenCV few years back ago, when I knew no Python and when Python opencv module was probably still early. It was all C++ code and it was a little bit too hard for me. I read a couple of books about opencv at the library, I did not understand a single thing. That was back then. Today, for strange reason, with a bit of knowledge of Python, I can go a little further. EDGE DETECT IN OPENCV Me holding you know what. What leads me this far is my curiosity on how we can replicate Wolfram Langu

WOLFRAM / Making Text With Rainbow Color

Continuing with my Wolfram Mathematica Trial Experience... I watched and went through some more Mathematica introduction videos, read lots of Mathematica documentation and also going through the Wolfram Lab Online again a few times. There are some major learning curves and Mathematica is a lot different from normal programming language. Sometimes there is a lot of interesting "shortcuts", say like: FindFaces[] , WordCloud[] . Sometimes I got a little confused on how one can do iterations. Normally FOR LOOP concept is introduced early, but in Wolfram, because everything is EXPRESSIONS and ENTITY (like OBJECTS), sometimes it gets quite quirky. Mind you I am still in the first impression and having to look at many tutorials. Lots of NEAT EXAMPLES from documentation, but sometimes I got lost. I found Wolfram to be really awesome with LIST and generating list. It's almost too easy when it works visually. I cannot explain them with my own words yet, but there are