Paris Buttfield-Addison


Dr Paris Buttfield-Addison is co-founder of Secret Lab, a game development studio based in beautiful Hobart, Australia. Secret Lab builds games and game development tools, including the multi-award-winning ABC Play School iPad games, the BAFTA- and IGF-winning Night in the Woods, the Joey Playbox games for Qantas, and the Yarn Spinner narrative game framework. Previously, Paris was mobile product manager for Meebo, a ground-breaking 'Web 2.0' startup that was acquired by Google. Paris particularly enjoys game design, statistics, the blockchain, machine learning, and human-centred technology research, and writes technical books on mobile and game development (more than 20 so far) for O'Reilly Media. He is currently writing 'Practical AI with Swift', 'Head First Swift', and the 'Unity Game Development Cookbook'. He holds a degree in medieval history and a PhD in computing.

On-device Neural Style Transfer

Level: General

Neural Style Transfer (NST) is a machine learning technique for applying the style of one image to an entirely different image. Using NST, you can make pictures of your cat look like Van Gogh's 'The Starry Night', or snaps of your dinner look like da Vinci's 'Mona Lisa'.

This session demonstrates how easy it is to perform previously complex machine learning tasks, like NST, locally on an iOS device using Core ML. The future of personal machine learning features may well be privacy-centric, on-device machine learning: there's no need to outsource your ML features to the cloud.

In a world of increasing awareness of the value of privacy and security, on-device ML features are an important component of any AI expert's toolkit. We'll explain how they work and what they can be used for, and demonstrate their power. Come and learn just how powerful a portable device can be.
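The kind of on-device inference the session covers can be sketched in a few lines of Swift. The sketch below assumes a hypothetical Xcode-generated model class named `StyleTransfer` (any image-to-image Core ML style model works the same way); Vision wraps the model and hands back the stylised image as a pixel buffer.

```swift
import CoreML
import Vision
import UIKit

// A minimal sketch of on-device style transfer with Core ML and Vision.
// "StyleTransfer" is a hypothetical class that Xcode generates from a
// .mlmodel file; swap in your own style-transfer model.
func applyStyle(to image: UIImage, completion: @escaping (UIImage?) -> Void) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: StyleTransfer().model) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Image-to-image models return their output as a pixel buffer.
        guard let result = request.results?.first as? VNPixelBufferObservation else {
            completion(nil)
            return
        }
        completion(UIImage(ciImage: CIImage(cvPixelBuffer: result.pixelBuffer)))
    }
    let handler = VNImageRequestHandler(cgImage: cgImage)
    // Run inference off the main thread; everything stays on-device.
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

Note that no network access is involved anywhere: the model file ships with the app, and Vision performs the inference locally.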

Practical Artificial Intelligence with Swift

Level: Beginner+

This tutorial explores the latest in machine learning using TensorFlow, and on-device, local AI with Swift and Apple platforms.

Learn how to apply the Vision, Core ML, and Create ML frameworks to solve practical problems in object detection, face recognition, and more. These frameworks run on-device, so they work quickly with no network access, making them cost-effective and user-privacy conscious.

You’ll combine Apple’s frameworks with open source libraries such as TensorFlow to create an iOS app that makes it easy to detect faces and facial features, detect and classify objects in photos, and expose these features to the user.

Topics include:

  • The basics of machine learning: The differences between, and reasons to be interested in, supervised learning, unsupervised learning, and reinforcement learning and the different types of problems each can address
  • What Apple’s Core ML, Create ML, and Vision frameworks do
  • How to set up your Swift-based iOS development environment for machine learning
  • How to work with TensorFlow, the popular open source machine learning library, to create and manipulate models and bring them into Core ML
  • How to implement machine learning-based features in your iOS apps, loading trained models for on-device use
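As a taste of the last topic, loading a trained Core ML model and running it over an image with Vision takes only a few lines of Swift. This is a sketch under assumptions: `FlowerClassifier` is a hypothetical Xcode-generated class standing in for whatever trained model you bring in.

```swift
import CoreML
import Vision

// A minimal sketch of loading a trained Core ML model and classifying
// an image with Vision. "FlowerClassifier" is a hypothetical model class
// generated by Xcode from a .mlmodel file.
func classify(cgImage: CGImage) throws -> [VNClassificationObservation] {
    // Wrap the Core ML model so Vision can drive it.
    let model = try VNCoreMLModel(for: FlowerClassifier().model)
    let request = VNCoreMLRequest(model: model)

    // Vision handles scaling and cropping the image to the model's input size.
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])

    // Classifier models return labelled observations, sorted by confidence.
    return request.results as? [VNClassificationObservation] ?? []
}
```

In the tutorial, models like this come either from Create ML or from TensorFlow models converted into the Core ML format.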

In a privacy-conscious world, practical on-device machine learning is the future. Learn it here.

Attendees will need to bring a Mac laptop capable of running the latest public version of Xcode (free). Attendees should have programming experience in a modern language; Swift experience is not required.