# Hardware configuration

Studio supports up to 128 neural engines and/or GPUs.

TIP

The 128-engine limit comes from the current network configuration. Support for 1,024 engines is on our roadmap.

# Capability matrix

Each neural engine and GPU has its own detection capabilities, summarized below (FPS = frames per second).

| Processor | Capability | Supported models | FPS |
|---|---|---|---|
| Apple M1 | Classifier | CoreML | 50 |
| | Detector | CoreML, Darknet | 50 |
| | OCR | Built-in | 10 |
| | Pose | Built-in | 10 |
| | Anomaly | Built-in | 0.3 |
| | Track | Built-in | 5 |
| Apple M1 Pro/Max | Classifier | CoreML | 50 |
| | Detector | CoreML, Darknet | 50 |
| | OCR | Built-in | 10 |
| | Pose | Built-in | 10 |
| | Anomaly | Built-in | 0.3 |
| | Track | Built-in | 5 |
| Apple M1 Ultra | Classifier | CoreML | 100 |
| | Detector | CoreML, Darknet | 100 |
| | OCR | Built-in | 20 |
| | Pose | Built-in | 20 |
| | Anomaly | Built-in | 0.7 |
| | Track | Built-in | 12 |
| NVIDIA GeForce RTX 2080 Ti | Classifier | Tensorflow | 25 |
| | Detector | Tensorflow, Darknet | 25 |
| | OCR | Not supported | N/A |
| | Pose | Built-in | 8 |
| | Anomaly | Built-in | 1 |
| | Track | Built-in | 5 |
| NVIDIA GeForce RTX 3080* | Classifier | Tensorflow | 65 |
| | Detector | Tensorflow, Darknet | 65 |
| | OCR | Not supported | N/A |
| | Pose | Built-in | 20 |
| | Anomaly | Built-in | 2.5 |
| | Track | Built-in | 5 |

Note: * The FPS figures are estimated from FP32 TFLOPS performance relative to the NVIDIA GeForce RTX 2080 Ti. Actual performance may vary.
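The scaling rule behind this note can be sketched as follows. This is a minimal illustration of the estimation approach, not Studio tooling, and the TFLOPS values below are placeholders to be replaced with figures from the vendor's spec sheet.

```python
def estimate_fps(reference_fps: float, reference_tflops: float, target_tflops: float) -> float:
    """Scale a measured FPS figure by the FP32 TFLOPS ratio of the two GPUs."""
    return reference_fps * (target_tflops / reference_tflops)

# Placeholder example: a GPU rated at twice the FP32 TFLOPS of the
# RTX 2080 Ti reference would be estimated at roughly 2 x 25 = 50 FPS.
print(estimate_fps(reference_fps=25, reference_tflops=10.0, target_tflops=20.0))  # -> 50.0
```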

Training capabilities differ by cluster configuration.

| Processor | Capability | Supported models |
|---|---|---|
| Apple M1/M1 Pro/M1 Max | Classifier | CoreML |
| | Detector | CoreML |
| Apple M1 Ultra | Classifier | CoreML |
| | Detector | CoreML |
| Azure Custom Vision | Classifier | CoreML, Tensorflow |
| | Detector | CoreML, Tensorflow |

TIP

Currently, on-device training is supported only on Apple M1 processors. Training on NVIDIA processors can be performed with third-party tools such as JupyterLab, but models cannot be trained directly in Studio.

# Starter design

To start with a small team of 5 developers, you can subscribe to Studio Business. Each developer can install the EdgeAI App on their local machine for ML acceleration.

TIP

The Business edition of Studio does not support on-premise storage. However, it can use Amazon S3 or Azure Blob Storage as the storage engine with a simple configuration.

# EdgeAI App

The EdgeAI App is an application that runs on iOS, iPadOS, and macOS. Its purpose is to accelerate machine learning operations in Studio. The app can also run standalone as an edge device, executing pipelines in guest mode.

# On-premise cluster design

With all of these capabilities, you can design a cluster consisting of a pair of load balancers, a set of manager nodes, and a group of neural nodes built from Apple M1 and NVIDIA processors.

# Component view

# DMZ and load balancers

A set of load balancers should be in place to distribute incoming requests across the manager nodes. At least 2 are required for resilience.

# Manager nodes

Manager nodes are responsible for cluster management. At least 3 nodes are required to form a quorum.
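As a quick illustration of why three is the minimum, a majority quorum needs more than half of the manager nodes to stay reachable. The sketch below shows the standard majority calculation; it is illustrative, not Studio code.

```python
def quorum_size(manager_count: int) -> int:
    """Smallest majority of manager nodes that keeps the cluster operational."""
    return manager_count // 2 + 1

for n in (1, 3, 5):
    tolerated = n - quorum_size(n)
    print(f"{n} managers -> quorum of {quorum_size(n)}, tolerates {tolerated} failure(s)")
# 3 managers -> quorum of 2, tolerates 1 failure
```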

# Neural nodes

Neural nodes can be a mix of different processors. The manager nodes automatically discover the capabilities of each neural node.
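To picture what the managers track, the sketch below models a per-node capability record along the lines of the capability matrix above. The class and field names are hypothetical illustrations, not Studio's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class NeuralNode:
    """Hypothetical record of what a manager learns about one neural node."""
    name: str
    processor: str
    capabilities: dict[str, float] = field(default_factory=dict)  # capability -> sustained FPS

cluster = [
    NeuralNode("node-1", "Apple M1", {"Classifier": 50, "Detector": 50, "OCR": 10}),
    NeuralNode("node-2", "NVIDIA GeForce RTX 2080 Ti", {"Classifier": 25, "Detector": 25}),
]

# Aggregate detection throughput the managers could schedule across the cluster.
total_detector_fps = sum(node.capabilities.get("Detector", 0) for node in cluster)
print(total_detector_fps)  # -> 75
```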

# How many neural nodes shall be provisioned?

The number of nodes depends on how many frames per second the platform needs to process, as well as which capabilities it must support.

For example, suppose you want to provide a platform with capacity for 4 x 25-FPS streams of object detection. In total, the platform must process 100 FPS (= 4 x 25). Since one Apple M1 provides 50 FPS of detection performance (see the capability matrix above), 2 neural nodes (= 100 / 50) are needed.
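The same sizing rule can be written as a small calculation (a sketch rather than Studio tooling); rounding up turns any fractional requirement into whole nodes.

```python
import math

def nodes_required(streams: int, fps_per_stream: float, fps_per_node: float) -> int:
    """Neural nodes needed to cover the aggregate frame rate of all streams."""
    total_fps = streams * fps_per_stream
    return math.ceil(total_fps / fps_per_node)

# 4 streams x 25 FPS of object detection on Apple M1 nodes (50 FPS each, per the matrix)
print(nodes_required(streams=4, fps_per_stream=25, fps_per_node=50))  # -> 2
```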