Smart solutions 2: Programming Computer Vision in C/AL code
In the previous post, we looked at an example of how Microsoft has built Azure Machine Learning (ML) functionality into Dynamics NAV. We discussed the Image Analyzer extension, which uses the Computer Vision API from Microsoft Cognitive Services to identify attributes in images attached to items and contact persons.
We’re going to go a little deeper in this post, and look at how to analyze pictures from C/AL code.
Get started with the Computer Vision API
To start with, we need access to the Computer Vision API, either through the free trial or through the Azure portal if we have an Azure subscription. We covered how to get the service in the previous post. Once we have the Computer Vision API, we’re ready to start analyzing pictures from the Item or Contact cards in Dynamics NAV.
The first thing to point out is that there are some new helper objects in Dynamics NAV 2018 that we can use to send pictures for analysis in just a few steps:
- Specify the image source: Media(set), BLOB, or file.
- Specify the analysis type: Tags, Colors, or Faces.
- Run the analysis and get the results.
Code samples
Let’s first check that it works – import and compile this codeunit into Dynamics NAV 2018:
OBJECT Codeunit 90909 Image Analysis Demo
{
  OBJECT-PROPERTIES
  {
    Date=;
    Time=;
    Modified=Yes;
    Version List=;
  }
  PROPERTIES
  {
    OnRun=BEGIN
            // Specify the URL and key here. Remember to add /Analyze to the URL!
            ImageAnalysisManagement.SetUriAndKey('https://westeurope.api.cognitive.microsoft.com/vision/v1.0/Analyze','123456789');
            Item.GET('1908-S');
            ImageAnalysisManagement.SetMedia(Item.Picture.ITEM(Item.Picture.COUNT));
            ImageAnalysisManagement.AnalyzeTags(ImageAnalysisResult);
            MESSAGE(ImageAnalysisResult.TagName(1));
          END;
  }
  CODE
  {
    VAR
      Item@1001 : Record 27;
      ImageAnalysisManagement@1000 : Codeunit 2020;
      ImageAnalysisResult@1002 : Codeunit 2021;

    BEGIN
    END.
  }
}
As you can see, it takes just five lines of code to send an item picture to the Computer Vision API for analysis. The following walk-through covers the process in more detail.
Create a new codeunit and make a new variable:
- Name: ImageAnalysisManagement
- Type: Codeunit
- SubType: 2020
Start by initializing it; this reads the URL and key for the Computer Vision API from the setup:
ImageAnalysisManagement.Initialize;
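Alternatively, if the URL and key aren’t stored in the setup, they can be passed in directly, as the demo codeunit earlier does with SetUriAndKey. A sketch, with a placeholder key:

```
// Alternative to Initialize: set the endpoint and key in code.
// Remember to add /Analyze to the URL; '<your key>' is a placeholder.
ImageAnalysisManagement.SetUriAndKey(
  'https://westeurope.api.cognitive.microsoft.com/vision/v1.0/Analyze',
  '<your key>');
```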
The codeunit has three functions for specifying the image you want to analyze: SetMedia, SetImagePath, and SetBlob. Here are examples of each:
To specify the image
Example 1:
// New variable Item, Record 27
// Item uses a MediaSet for pictures
Item.GET('1908-S');
ImageAnalysisManagement.SetMedia(Item.Picture.ITEM(Item.Picture.COUNT));
Example 2:
// New variable Contact, Record 5050
// Contact uses Media for pictures
Contact.GET('CT200014');
ImageAnalysisManagement.SetMedia(Contact.Image.MEDIAID);
Example 3:
// New variable CompanyInformation, Record 79
// CompanyInformation uses a BLOB
CompanyInformation.GET;
CompanyInformation.CALCFIELDS(Picture);
// New variable TempBlob, TEMPORARY Record 99008535
TempBlob.Blob := CompanyInformation.Picture;
ImageAnalysisManagement.SetBlob(TempBlob);
Example 4:
// Just load the picture from disk. Remember that the code runs on the NST,
// so the file path is resolved on the NST machine.
ImageAnalysisManagement.SetImagePath('c:\Pics\MyPicture.jpg');
To specify the type of analysis
Another new variable:
- Name: ImageAnalysisResult
- Type: Codeunit
- SubType: 2021
Run either:
ImageAnalysisManagement.AnalyzeTags(ImageAnalysisResult);
-or-
ImageAnalysisManagement.AnalyzeColors(ImageAnalysisResult);
-or-
ImageAnalysisManagement.AnalyzeFaces(ImageAnalysisResult);
To get the results
Depending on the type of analysis, ImageAnalysisResult returns a number of tags, colors, or facial details. The recommended approach is to use ImageAnalysisResult to get the result, as described below. For a quick and easy way to see the “raw” result instead, design codeunit 2020 and add the MESSAGE line marked below in the InvokeAnalysis function:
Task := HttpContent.ReadAsStringAsync;
MESSAGE('Result ' + FORMAT(Task.Result)); // New line
JSONManagement.InitializeObject(Task.Result);
However, for a more correct way to get the result, copy the code below into a new GetResult function. It needs a ResultString text variable, plus Analysistype (Option: Tags,Colors,Faces) and i (Integer) variables:
FUNCTION GetResult()
// Needed variables: ResultString (Text), Analysistype (Option: Tags,Colors,Faces), i (Integer)
BEGIN
  ImageAnalysisResult.GetLatestAnalysisType(Analysistype);
  ResultString := 'Recog type: ' + FORMAT(Analysistype) + '\';

  // Tags, colors, and faces
  CASE Analysistype OF
    Analysistype::Tags:
      BEGIN
        FOR i := 1 TO ImageAnalysisResult.TagCount DO
          ResultString := ResultString + ImageAnalysisResult.TagName(i) + ' -- ' +
            FORMAT(ImageAnalysisResult.TagConfidence(i)) + '\';
      END;
    Analysistype::Colors:
      BEGIN
        ResultString := ResultString +
          'Foreground: ' + ImageAnalysisResult.DominantColorForeground + '\' +
          'Background: ' + ImageAnalysisResult.DominantColorBackground + '\';
        FOR i := 1 TO ImageAnalysisResult.DominantColorCount DO
          ResultString := ResultString + 'Dominant Color: ' + ImageAnalysisResult.DominantColor(i) + '\';
      END;
    Analysistype::Faces:
      BEGIN
        FOR i := 1 TO ImageAnalysisResult.FaceCount DO
          ResultString := ResultString + ImageAnalysisResult.FaceGender(i) + ' ' +
            FORMAT(ImageAnalysisResult.FaceAge(i)) + '\';
      END;
  END;
END;
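A quick way to try the function out, sketched here on the assumption that the GetResult function and ResultString variable above are in place, and that a picture exists at the given path on the NST machine:

```
// Sketch: assumes the GetResult function and ResultString variable above
ImageAnalysisManagement.Initialize;
ImageAnalysisManagement.SetImagePath('c:\Pics\MyPicture.jpg');
ImageAnalysisManagement.AnalyzeTags(ImageAnalysisResult);
GetResult;
MESSAGE(ResultString);
```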
To handle errors
// New variables: ErrorTxt (Text) and IsLimit (Boolean)
IF ImageAnalysisManagement.HasError THEN BEGIN
  ImageAnalysisManagement.GetLastError(ErrorTxt,IsLimit);
  MESSAGE(ErrorTxt);
END;
And that’s all the code you need to run an analysis on items, contacts, or any image you like.
Going further
Check out the API for Computer Vision here: https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa
Note that visualFeatures covers Tags, Faces, and Color in our implementation, but the API can do more than what we’ve implemented in codeunit 2020. If you’re interested in a quick experiment, design codeunit 2020 and find this line:
PostParameters := STRSUBSTNO('?visualFeatures=%1',FORMAT(AnalysisType));
Now try to replace it with this, to get a short description of the image:
PostParameters := STRSUBSTNO('?visualFeatures=%1,Description',FORMAT(AnalysisType));
or this to see if Computer Vision recognizes a landmark in the picture:
PostParameters := '?details=Landmarks';
Or this, to see if it recognizes a celebrity:
PostParameters := '?details=Celebrities';
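Whichever variant you try, a quick way to inspect what comes back is the same raw dump shown earlier. This sketch reuses the Task and HttpContent variables already present in the InvokeAnalysis function; the JSON shape in the comment is abridged:

```
// Sketch only: with Description added, the raw response also contains
// a "description" object, roughly:
//   "description": { "captions": [ { "text": "...", "confidence": ... } ] }
Task := HttpContent.ReadAsStringAsync;
MESSAGE('Raw result: ' + FORMAT(Task.Result));
// A proper implementation would extend codeunit 2021 to parse
// description.captions[0].text out of this response.
```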
If we change the implementation like this, we get a different response from the Computer Vision API, so we would also need to change the ImageAnalysisResult (codeunit 2021) implementation to decode the changed response. It’s an interesting exercise, but we’ll save that discussion for another time.