The need

The creative design process begins with collaboration on a whiteboard, where designers share ideas. Once a design is drawn, it is usually translated manually into a working HTML wireframe. This manual step takes time and delays the design process.

The idea

We can use Computer Vision to build a system that understands what a designer has drawn on a whiteboard and then translates that understanding into HTML code. This way, we can generate HTML code directly from a hand-drawn image.

The solution

The Custom Vision service is trained to detect the design elements drawn on the whiteboard, while text recognition extracts any handwritten text in the design. By combining the detected elements with the extracted text, we can generate HTML snippets for the different design elements.
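
At a high level, the translation works as a short pipeline: detect the drawn elements, recognize the handwritten text, and merge the two into HTML. The sketch below is only an outline of that flow; the function names (detect_elements, extract_text, generate_html) are hypothetical placeholders standing in for the real service calls, not the actual Sketch2Code code.

```python
# Illustrative outline of a Sketch2Code-style pipeline; every stage is a placeholder.

def detect_elements(image: bytes) -> list[dict]:
    """Placeholder: run object detection (e.g. a trained Custom Vision model) and
    return boxes such as {"tag": "button", "left": 0.1, "top": 0.2, "width": 0.3, "height": 0.1}."""
    raise NotImplementedError

def extract_text(image: bytes) -> list[dict]:
    """Placeholder: run handwriting recognition and return fragments such as
    {"text": "Submit", "left": 0.12, "top": 0.22, "width": 0.1, "height": 0.05}."""
    raise NotImplementedError

def generate_html(elements: list[dict], texts: list[dict]) -> str:
    """Placeholder: pair each detected element with nearby text and emit HTML snippets."""
    raise NotImplementedError

def sketch_to_html(image_path: str) -> str:
    """Read a whiteboard photo and return the generated HTML."""
    with open(image_path, "rb") as f:
        image = f.read()
    return generate_html(detect_elements(image), extract_text(image))
```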

Technical details for Sketch2Code

Computer Vision Service

Computer Vision is a discipline within artificial intelligence that gives an application the capability to see and understand what it is seeing. Using Microsoft Cognitive Services, we can train the Custom Vision service with millions of images and enable object detection for a wide range of object types.
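
As a rough illustration of the detection step, a trained Custom Vision object-detection model can be queried through its prediction REST endpoint. The endpoint path, the published iteration name, and the response fields used below are assumptions based on the Custom Vision v3.0 prediction API; adapt them to your own resource and check the service documentation before relying on them.

```python
import requests

# Assumed values: replace with your own Custom Vision resource details.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
PROJECT_ID = "<project-guid>"
PUBLISHED_NAME = "<published-iteration-name>"
PREDICTION_KEY = "<prediction-key>"

def detect_elements(image_path: str, min_confidence: float = 0.5) -> list[dict]:
    """Send a whiteboard photo to the prediction endpoint and return the detected
    design elements as tag names with normalized bounding boxes."""
    url = (f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
           f"/detect/iterations/{PUBLISHED_NAME}/image")
    headers = {
        "Prediction-Key": PREDICTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    with open(image_path, "rb") as f:
        response = requests.post(url, headers=headers, data=f.read())
    response.raise_for_status()

    elements = []
    # Assumed response shape: a "predictions" list with tagName, probability,
    # and a normalized boundingBox (left, top, width, height).
    for p in response.json().get("predictions", []):
        if p["probability"] >= min_confidence:
            box = p["boundingBox"]
            elements.append({
                "tag": p["tagName"],   # e.g. "button", "textbox", "image"
                "left": box["left"],
                "top": box["top"],
                "width": box["width"],
                "height": box["height"],
            })
    return elements
```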

In this case, we trained the model to recognize hand-drawn web design elements such as a textbox or button. We use the text recognition functionality of the Computer Vision Service to extract the handwritten text in the design. By combining the detected design elements with the extracted text, we can generate HTML snippets for the different elements in the design. We can then infer the layout of the design from the positions of the identified elements and generate the final HTML code.
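
To make the combination and layout steps concrete, here is a minimal sketch under simplified assumptions: each recognized text fragment is attached to the nearest detected element, elements are grouped into rows by vertical position, and each row is emitted as a block of HTML snippets. The dictionary shapes and the tag-to-snippet mapping are illustrative assumptions, not the actual Sketch2Code implementation.

```python
# Minimal sketch: merge detected elements with recognized text and emit HTML.
# Assumed input shape for both lists: dicts with normalized "left", "top",
# "width", "height"; elements also carry "tag", texts also carry "text".

SNIPPETS = {
    # Assumed mapping from detected tag to an HTML snippet template.
    "button": '<button type="button">{text}</button>',
    "textbox": '<input type="text" placeholder="{text}">',
    "label": "<label>{text}</label>",
    "image": '<img alt="{text}" src="#">',
}

def center(box: dict) -> tuple[float, float]:
    return box["left"] + box["width"] / 2, box["top"] + box["height"] / 2

def attach_text(elements: list[dict], texts: list[dict]) -> None:
    """Assign each recognized text fragment to the nearest detected element."""
    if not elements:
        return
    for t in texts:
        tx, ty = center(t)
        nearest = min(elements,
                      key=lambda e: (center(e)[0] - tx) ** 2 + (center(e)[1] - ty) ** 2)
        nearest["text"] = (nearest.get("text", "") + " " + t["text"]).strip()

def generate_html(elements: list[dict], texts: list[dict], row_tolerance: float = 0.05) -> str:
    """Group elements into rows by vertical position and emit one <div> per row."""
    attach_text(elements, texts)
    rows, current, row_top = [], [], None
    for e in sorted(elements, key=lambda e: (e["top"], e["left"])):
        if row_top is None or abs(e["top"] - row_top) <= row_tolerance:
            current.append(e)
            row_top = e["top"] if row_top is None else row_top
        else:
            rows.append(current)
            current, row_top = [e], e["top"]
    if current:
        rows.append(current)

    html = []
    for row in rows:
        snippets = [SNIPPETS.get(e["tag"], "<div>{text}</div>").format(text=e.get("text", ""))
                    for e in sorted(row, key=lambda e: e["left"])]
        html.append('<div class="row">' + "".join(snippets) + "</div>")
    return "\n".join(html)
```

Grouping purely by vertical position is a simplification; a fuller layout inference could also account for nesting, relative sizes, and column alignment.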
