ARKit and Core ML: Core ML
The international AI market is anticipated to reach approximately $48 billion within the next ten years, growing at a compound annual growth rate of about 52%. As with AR/VR, Apple was not especially active on the artificial intelligence and machine learning fronts for a long time; Siri was its only notable advancement. Meanwhile, competitors such as IBM, Facebook, Amazon, and Google were working hard on these technologies and achieved tangible results. So we expect something from Apple that will steal the show; the company should have a few trump cards up its sleeve.
1. What is Core ML in general?
Core ML is a foundational framework that brings optimized machine learning to Apple products. Its introduction is important for app developers, because they can now weave the best of AI and ML into the apps they create without spending endless hours building such systems from scratch: very little manual coding is required. What's more, the framework supports sophisticated deep learning models with more than 30 layer types. Custom machine learning tools are now open to developers building for any Apple platform.
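To give a feel for how little manual coding is involved, here is a minimal sketch. FlowerClassifier is a hypothetical class that Xcode generates automatically once a FlowerClassifier.mlmodel file is added to the app target; the input and output names come from the model file itself.

```swift
import CoreML
import CoreVideo

// A minimal sketch. FlowerClassifier is a hypothetical Xcode-generated
// wrapper class with an image input and a classLabel output.
func classify(_ pixelBuffer: CVPixelBuffer) {
    let model = FlowerClassifier()
    if let output = try? model.prediction(image: pixelBuffer) {
        print("Predicted label: \(output.classLabel)")
    }
}
```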
2. NLP → Machine learning
Some groundwork was laid earlier. Natural language processing arrived with NSLinguisticTagger in iOS 5 (2011). iOS 8 introduced Metal, a framework giving low-level access to the graphics processing unit (GPU), initially aimed at an advanced gaming experience. Then, in 2016, Accelerate, the framework for signal and image processing, gained the Basic Neural Network Subroutines (BNNS). Because Core ML is built on top of both Accelerate and Metal, there is no longer any need to send data to a centralized server: the framework runs fully and efficiently on the device itself, which also improves the security of the user's information.
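As a quick illustration of the NLP side, here is a sketch of NSLinguisticTagger tagging each word of a sentence with its lexical class, using the unit-based API that shipped with iOS 11:

```swift
import Foundation

// Tag each word in a sentence with its part of speech.
let text = "Core ML brings machine learning to Apple devices"
let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass], options: 0)
tagger.string = text
let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation]
tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass,
                     options: options) { tag, tokenRange, _ in
    if let tag = tag {
        let word = (text as NSString).substring(with: tokenRange)
        print("\(word): \(tag.rawValue)")
    }
}
```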
3. How does Core ML function?
The Core ML workflow consists of two main stages. In the first, a machine learning algorithm is trained on the available data (the larger the dataset, the more precise the results), producing a trained model. In the second, the trained model is converted into a file in the .mlmodel format. Once that Core ML file is created, high-level AI and ML features can run inside iOS apps with the help of the ML model file: the trained model, converted into a Core ML model, produces smart predictions on the device.
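The conversion itself is normally done offline; Apple ships the coremltools Python package for that purpose. On the device side, the second stage might look like the sketch below. The file path is hypothetical, and in practice Xcode compiles a bundled .mlmodel at build time, but Core ML can also compile one at runtime.

```swift
import CoreML

do {
    // Hypothetical location of a raw .mlmodel file, e.g. one downloaded
    // at runtime. compileModel(at:) turns it into an .mlmodelc directory
    // that Core ML can actually load.
    let rawModelURL = URL(fileURLWithPath: "/path/to/SentimentClassifier.mlmodel")
    let compiledURL = try MLModel.compileModel(at: rawModelURL)
    let model = try MLModel(contentsOf: compiledURL)
    print("Loaded model: \(model)")
} catch {
    print("Model loading failed: \(error)")
}
```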
A Core ML model describes the layers the network uses, along with its inputs, outputs, and class labels. Once the model is included in an app project, the Xcode IDE can generate an Objective-C or Swift wrapper class for it.
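Those inputs and outputs are also visible at runtime through the model's description. A small sketch of inspecting any loaded MLModel:

```swift
import CoreML

// Inspect what a loaded model expects and produces.
func describe(_ model: MLModel) {
    let description = model.modelDescription
    print("Inputs:  \(Array(description.inputDescriptionsByName.keys))")
    print("Outputs: \(Array(description.outputDescriptionsByName.keys))")
    // For classifiers, the name of the output feature holding the label:
    print("Predicted feature: \(description.predictedFeatureName ?? "n/a")")
}
```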
4. Vision
Though ARKit and Core ML were the frameworks in the limelight in 2017, the introduction of Vision, a new computer vision and image analysis framework, is also of high importance. Vision works in cooperation with Core ML and provides a wide range of detection and recognition features: face recognition, scene recognition, evaluation of images with ML models, text detection, horizon and straight-line detection, image alignment, object tracking, and barcode reading. The Vision framework also creates wrappers around Core ML models. Note, however, that Vision is useful only for image-based models.
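A minimal sketch of that cooperation, again assuming the hypothetical FlowerClassifier model from earlier: Vision takes care of scaling and cropping the image before handing the pixels to Core ML.

```swift
import Vision
import CoreML
import CoreGraphics

// Run a Core ML image classifier through Vision.
func classifyWithVision(_ image: CGImage) throws {
    let visionModel = try VNCoreMLModel(for: FlowerClassifier().model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): confidence \(top.confidence)")
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```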
Like Metal and Accelerate, Vision ships with the iOS 11, tvOS 11, and macOS 10.13 (beta) SDKs.