This reference architecture illustrates how to use Azure Stack Edge to extend rapid machine learning inference from the cloud to on-premises or edge scenarios. Azure Stack Hub delivers Azure capabilities such as compute, storage, networking, and hardware-accelerated machine learning to any edge location.

Architecture

Download a Visio file of this architecture. The architecture consists of the following components:

- Machine Learning lets you build, train, deploy, and manage machine learning models in a cloud-based environment. These models can then deploy to Azure services, including (but not limited to) Azure Container Instances, Azure Kubernetes Service (AKS), and Azure Functions.
- Container Registry is a service that creates and manages the Docker Registry. Container Registry builds, stores, and manages Docker container images and can store containerized machine learning models.
- Azure Stack Edge is an edge computing device that's designed for machine learning inference at the edge. Data is preprocessed at the edge before transfer to Azure. Azure Stack Edge includes compute acceleration hardware that's designed to improve the performance of AI inference at the edge.
- Local data references any data that's used in the training of the machine learning model. The data can be in any local storage solution, including Azure Arc deployments.

This solution is ideal for the telecommunications industry. Typical uses for extending inference include when you need to:

- Run local, rapid machine learning inference against data as it's ingested and you have a significant on-premises hardware footprint.
- Create long-term research solutions where existing on-premises data is cleaned and used to generate a model. The model is then used both on-premises and in the cloud; it's retrained regularly as new data arrives.
- Build software applications that need to make inferences about users, both at a physical location and online.

Recommendations

Ingesting, transforming, and transferring data stored locally

Azure Stack Edge can transform data sourced from local storage before transferring that data to Azure. This transformation is done by an Azure IoT Edge device that's deployed on the Azure Stack Edge device. These IoT Edge devices are associated with an Azure IoT Hub resource on the Azure cloud platform.

Each IoT Edge module is a Docker container that does a specific task in an ingest, transform, and transfer workflow. For example, an IoT Edge module can collect data from an Azure Stack Edge local share and transform the data into a format that's ready for machine learning. Then, the module transfers the transformed data to an Azure Stack Edge cloud share. You can add custom or built-in modules to your IoT Edge device, or develop custom IoT Edge modules of your own. IoT Edge modules are registered as Docker container images in Container Registry.

In the Azure Stack Edge resource on the Azure cloud platform, the cloud share is backed by an Azure Blob storage account resource. All data in the cloud share automatically uploads to the associated storage account. You can verify the data transformation and transfer by either mounting the local or cloud share, or by traversing the Azure Storage account.

Training and deploying a model

After preparing and storing data in Blob storage, you can create a Machine Learning dataset that connects to Azure Storage. A dataset represents a single copy of your data in storage that's directly referenced by Machine Learning.

You can use the Machine Learning command-line interface (CLI), the R SDK, the Python SDK, the designer, or Visual Studio Code to build the scripts that are required to train your model.

After training and readying the model to deploy, you can deploy it to various Azure services, including but not limited to:

- You can deploy the models to a private Docker Registry such as Azure Container Registry, since they're Docker container images.
- You can deploy the model's Docker container image directly to an Azure Container Instances container group.
- You can use Azure Kubernetes Service to automatically scale the model's Docker container image for high-scale production deployments.
- You can package a model to run directly on an Azure Functions instance.
- You can use Machine Learning compute instances, managed cloud-based development workstations, for both training and inference of models.

You can also similarly deploy the model to on-premises IoT Edge and Azure Stack Edge devices. For this reference architecture, the model deploys to Azure Stack Edge to make the model available for inference on-premises.
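The ingest, transform, and transfer workflow described in the recommendations above can be sketched in plain Python, independent of the IoT Edge module runtime. This is a minimal sketch, assuming hypothetical share paths and a hypothetical JSON-to-CSV transform; a real IoT Edge module would wrap logic like this in the module SDK and read the actual local and cloud share mounts.

```python
import csv
import json
from pathlib import Path


def transform_and_transfer(local_share: Path, cloud_share: Path) -> list[str]:
    """Collect raw JSON records from the local share, transform them to CSV
    (a format that's ready for machine learning), and transfer the results to
    the cloud share, which Azure Stack Edge uploads to Blob storage.

    The JSON-in, CSV-out transform is a hypothetical stand-in for whatever
    preprocessing a real module performs.
    """
    transferred = []
    cloud_share.mkdir(parents=True, exist_ok=True)
    for raw_file in sorted(local_share.glob("*.json")):
        records = json.loads(raw_file.read_text())  # list of flat dicts
        out_file = cloud_share / (raw_file.stem + ".csv")
        with out_file.open("w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=sorted(records[0]))
            writer.writeheader()
            writer.writerows(records)
        transferred.append(out_file.name)
    return transferred
```

Keeping each module to one narrow task like this is what makes the ingest, transform, and transfer stages independently replaceable containers.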
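Whichever deployment target you choose, a containerized Machine Learning model is typically fronted by a scoring script that exposes `init` and `run` entry points: `init` loads the model once at container start, and `run` handles each scoring request. The sketch below shows that contract with a hypothetical threshold rule standing in for a real trained model artifact, so it stays self-contained.

```python
import json

model = None  # populated once by init(), before any run() calls


def init():
    """Load the model when the container starts.

    A real scoring script would deserialize a trained model file here;
    this stand-in uses a hypothetical threshold rule instead.
    """
    global model
    model = lambda features: [1 if sum(row) > 10 else 0 for row in features]


def run(raw_request: str) -> str:
    """Handle one scoring request: parse the JSON payload, run inference,
    and return the predictions as JSON."""
    features = json.loads(raw_request)["data"]
    return json.dumps({"predictions": model(features)})
```

For example, `init()` followed by `run('{"data": [[3, 4], [10, 5]]}')` returns a JSON body with one prediction per input row; the same script shape serves a container group, an AKS deployment, or an Azure Stack Edge device.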