Natural Language Classifier

Interprets the intent behind texts and can be trained for any domain

My role: visual designer, prototyper, HTML/CSS, Sass

The IBM Watson Natural Language Classifier (GA 2015) enables developers with no prior knowledge of machine learning to easily set up their own cognitive application. The service interprets the intent behind what someone says. For example, if you're messaging with a customer service agent online and type, “How do I pay my bill?” the Natural Language Classifier service would understand the intent behind your question and provide the correct response.
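
To give a sense of how a developer would consume the service, here is a minimal sketch of sending that same question to a trained classifier over the NLC REST API. The endpoint URL, classifier ID, credentials, and response fields shown are assumptions based on the service as it existed at the time, not part of this project's deliverables.

```python
import requests

# Placeholders — real values come from your IBM Cloud (Bluemix) service instance.
NLC_URL = "https://gateway.watsonplatform.net/natural-language-classifier/api"
CLASSIFIER_ID = "your-classifier-id"

# Ask the trained classifier for the intent behind a user's question.
response = requests.get(
    f"{NLC_URL}/v1/classifiers/{CLASSIFIER_ID}/classify",
    params={"text": "How do I pay my bill?"},
    auth=("service-username", "service-password"),  # placeholder credentials
)
response.raise_for_status()

result = response.json()
print(result["top_class"])  # the most likely intent, e.g. "billing"
for c in result["classes"]:
    print(c["class_name"], c["confidence"])
```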

Visual exploration

I began the visual exploration phase by designing style tiles. Each consisted of a single page with a variety of elements and patterns such as the color palette, headers, text styles, buttons, form fields, and labels. These tiles provided a quick preview of what the overall UI could look and feel like and helped determine a design direction.

Style tile explorations

Two requirements needed to be followed during the design process:

  1. Align with the IBM Design Language
  2. Follow the WCAG and Section 508 accessibility requirements

The IBM Design Language was launched in 2014. It included best practices, resources, and inspiration to help designers achieve a consistent visual language across their products that aligned with the IBM brand. Getting continuous feedback from other designers was crucial to achieving this. I was fortunate enough to sit next to my design team, so I was able to get feedback on a daily basis. We also had weekly touchpoints with the development lead and offering manager to make sure we were aligned on requirements and priorities.

Making software accessible to all users requires special considerations. For example, content that conveys meaning needs to meet the WCAG 2.0 Level AA minimum contrast ratio of 4.5:1. This includes items such as text, buttons, graphs, and tooltips. A couple of online tools I found useful were the NCSU color palette accessibility evaluator and the WebAIM color contrast checker.
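
To make the 4.5:1 requirement concrete, the sketch below implements the WCAG 2.0 relative luminance and contrast ratio formulas that tools like these check against. The function names and sample colors are mine, used only for illustration.

```python
def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG 2.0 definition."""
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a hex color such as '#264a60'."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Example: white text on a dark blue background.
ratio = contrast_ratio("#ffffff", "#264a60")
print(f"{ratio:.2f}:1 — {'passes' if ratio >= 4.5 else 'fails'} AA for normal text")
```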

Exploration of color combinations on light and dark UI

One issue we often ran into was not being able to use light colors for important information: either the color lacked the necessary contrast ratio or it already carried implied meaning (think of red denoting an error). An upside we found when exploring dark UIs was the greater freedom to use bright, saturated colors.

High-fidelity designs

I worked closely with the UX designer on my team to translate the existing wireframes into high-fidelity designs. Each user flow addressed a specific task the user would want to perform. For example, I designed flows for first-time use, adding content, searching, and filtering.

First-time use modal
Training data screen that shows mapped classes and texts

User testing

Working closely with the UX designer and user researcher, we ran initial tests with proxy users in the studio. In one-on-one sessions, we tested interaction patterns, information architecture, and microcopy, and gathered general feedback.

Testing with users and getting continuous feedback allowed us to quickly refine and improve the designs.

Prototyping

During this time, we began developing a pattern library. (You can read about that next in the NLC pattern library section.) Having the elements and patterns easily accessible enabled me to quickly prototype the tool, using the pattern library as a starting point. It was a valuable asset for the development team, too, as they began building the front end: they could see the exact styles applied to each element. This drastically cut down on the need for red-lining documents or tedious back-and-forth communication between design and development.