MultiChoice is Africa's leading video entertainment company, and we serve around 20 million customers across 50 countries. Captioning is very important to us at MultiChoice, particularly for the accessibility of our content. This is primarily about making content more accessible to our customers who may have hearing difficulties, for one. The second purpose is repurposing our content: making it more accessible outside of South Africa, particularly in Africa.

The brief from MultiChoice was actually quite simple. Firstly, help us reduce the time to edit content. Secondly, help us get content to more of our users by taking languages like English and translating them into languages that are quite unique to Africa, for example Zulu or Kiswahili, and vice versa: taking content from international countries and translating it into local languages, so that we can give content out there to a wider audience.

We produce a large volume of local content, and manually captioning this content is proving very difficult due to the volumes and the strict turnaround times needed for broadcast. Then we were introduced to AI, and it helped us create subtitles from scratch when you don't have supporting documents for the content. In that way it made things simpler, because it was faster once we compared the time taken starting from scratch with the time taken using AI.

When we started off with this project, there were basically two major challenges. The first challenge was how to run some of this international content on-premises. The second challenge was how to handle the various accents and dialects within South Africa. The English spoken here has peculiar accents, and the models needed to perform well on this type of content.
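As a concrete illustration of what "creating subtitles from scratch" produces, here is a minimal Python sketch of turning recognised speech segments into the SRT format commonly used for broadcast subtitles. The `Segment` structure and function names are illustrative assumptions, not MultiChoice's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One recognised utterance (hypothetical shape, for illustration)."""
    start: float   # seconds
    end: float     # seconds
    text: str

def to_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments: list[Segment]) -> str:
    """Render recognised speech segments as an SRT subtitle file."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{to_timestamp(seg.start)} --> {to_timestamp(seg.end)}\n{seg.text}\n"
        )
    return "\n".join(blocks)
```

An editor then only corrects the generated cues rather than typing and timing every subtitle by hand, which is where the time saving comes from.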
From that, what we did essentially was create a virtual team, because this is not something you fix with one person, and it's not just technology you can drop in and it works. It has to understand content; it has to understand languages. MultiChoice was the first customer to use South African English in their content. They were the first customer to look at Zulu, Afrikaans, et cetera. So we engaged our engineering teams from Microsoft Speech Services, bringing them into the discussion and into the fold, and created a programme around subtitling.

For this application, we basically used a hub-and-spoke kind of model, where the hub is the place where you have all the common services, which get shared across use cases, and each spoke represents a new use case that comes in and gets added onto this framework. So subtitling is one spoke; translation is another spoke which gets added to the main project hub.

Having this type of collaboration on the ground with MultiChoice, getting real ground-truth information in a real location and providing that feedback back into the organisation, really helps, because we can fine-tune these models and continue to improve them. Working collaboratively over six months really helped us accelerate this use case and also feed back some of our findings to Microsoft so they can improve their models.

The success of this service relies on the content that you give the service to train itself. What's really important, and specifically what makes this project unique, is that we were able to get a variety of different media sources through MultiChoice, whether local content or international. Why that's important is that the more content you give the speech service on Azure, and the more you tell it where it makes mistakes, the more it learns and the better it gets.
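The hub-and-spoke layout described above can be sketched in a few lines of Python. Everything here is illustrative under stated assumptions: the `Hub` class, the service names, and the handlers are hypothetical stand-ins, not the actual framework used in the project.

```python
# Hypothetical sketch of a hub-and-spoke layout: the hub holds shared
# services; each spoke is a use case registered against the hub.

class Hub:
    def __init__(self):
        self.shared_services = {}   # common services shared across use cases
        self.spokes = {}            # one entry per use case

    def register_service(self, name, service):
        self.shared_services[name] = service

    def add_spoke(self, name, handler):
        # a spoke handler receives the shared services plus a job payload
        self.spokes[name] = handler

    def run(self, spoke_name, payload):
        return self.spokes[spoke_name](self.shared_services, payload)

hub = Hub()
# Illustrative shared service: a stub standing in for a speech endpoint.
hub.register_service("speech", lambda audio: f"transcript of {audio}")
# Subtitling and translation are two spokes added to the same hub.
hub.add_spoke("subtitling", lambda svc, job: svc["speech"](job["audio"]))
hub.add_spoke("translation", lambda svc, job: f"{job['text']} -> {job['target']}")
```

The design point is that a new use case only adds a spoke; the shared services in the hub are written once and reused.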
I think one of the most unique features we got with the Microsoft stack is the ability to process content within our own buildings, so on-premises, because we do have sensitive content that we cannot send out of the MultiChoice building. With the Microsoft offering, we were able to leverage this and process that content within our own environment. Some of the services are managed by Microsoft, which is actually great for us, because with the speech-to-text model we don't need to worry about how it is being trained or how it's being improved; Microsoft handles that for us. So we as a user just have to integrate with their API and keep using it.

The accuracy, I think, can be measured in different ways. Firstly, in timing it's highly accurate, right? So we get it on... I'd personally give it between 90% and 95%.

Collaborating with Microsoft really allows us to accelerate our AI use cases and be leaders in Africa with this type of technology. Our company CEO, Satya Nadella, has been talking about the next AI revolution happening in Africa, and we are seeing it happen right before our eyes with these capabilities around languages and cultural considerations. Addressing issues that affect Africans, and for Africa, is the way to go, and Microsoft is really pushing that agenda forward.
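The transcript doesn't say which metric underlies the 90% to 95% figure; one standard way speech-to-text accuracy is measured is word error rate (WER), the word-level edit distance between the system's transcript and a human reference, divided by the reference length. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

A WER of 0.05 to 0.10 would correspond to the kind of 90% to 95% word-level accuracy mentioned above, though the speaker's figure may also reflect timing accuracy or an informal editorial judgement.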