Red Hat is the world's leading provider of open source enterprise software. Their Integration Service Registry helps developer teams work together more efficiently. As an interaction design intern – and the first designer on the product – my challenge was to establish core interaction patterns that could scale to accommodate future releases.

Context

Summer 2020
Enterprise web app

Tools

Sketch, Marvel, Patternfly design system

Background


"A datastore for artifacts: schemas and API designs"

Imagine you're a developer for PetCorp, and your team is working on a new adoption website. To get the data that is displayed about the pets on the site, developers politely make requests to the Petstore API. Suddenly, management barges in with a new requirement: they want the site to show the personality of each pet! Another developer makes the changes to the design of the API and updates the database's "Pet" schema:
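To make the scenario concrete, here is a sketch of what that schema change might look like. The record structure and field names are invented for illustration (Avro-style, as one plausible format):

```python
# Illustrative sketch only: a hypothetical "Pet" schema before and after
# management's new requirement. Field names are invented for this example.
import json

pet_schema_v1 = {
    "type": "record",
    "name": "Pet",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "species", "type": "string"},
        {"name": "age", "type": "int"},
    ],
}

# Version 2 adds the new "personality" field requested by management.
pet_schema_v2 = {
    **pet_schema_v1,
    "fields": pet_schema_v1["fields"]
    + [{"name": "personality", "type": "string"}],
}

print(json.dumps(pet_schema_v2, indent=2))
```

Every consumer of the Petstore API now needs to know about this new version of the schema, which is exactly the coordination problem the Service Registry addresses.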

This is where the Service Registry comes in. Now, developers can store all the data designs (APIs, database schemas) and update them as they evolve – so that everyone on the development team can stay up-to-date. It's kind of like git, but just for data.
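The store-and-update workflow can be sketched with a toy in-memory class. To be clear, the real Service Registry is a server with a REST API; the class and method names below are invented purely to illustrate the "publish new versions, everyone fetches the latest" idea:

```python
# Toy in-memory sketch of the registry workflow described above.
# The real Service Registry is a server with a REST API; this invented
# class only illustrates "store data designs, update them as they evolve".
class ToyRegistry:
    def __init__(self):
        self._artifacts = {}  # artifact_id -> list of versions

    def publish(self, artifact_id, content):
        """Store a new version of an artifact; return its version number."""
        versions = self._artifacts.setdefault(artifact_id, [])
        versions.append(content)
        return len(versions)

    def latest(self, artifact_id):
        """Fetch the most recent version, so the whole team stays up to date."""
        return self._artifacts[artifact_id][-1]

    def version(self, artifact_id, n):
        """Fetch a specific historical version (1-indexed), git-style."""
        return self._artifacts[artifact_id][n - 1]


registry = ToyRegistry()
registry.publish("petstore-pet-schema", '{"fields": ["name", "species"]}')
registry.publish(
    "petstore-pet-schema", '{"fields": ["name", "species", "personality"]}'
)
print(registry.latest("petstore-pet-schema"))  # the updated schema
```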

Design goal

The Service Registry was already in use without a UI. In preparation for future product development, the engineers had built an MVP with a basic one. My goal was to improve the design so that it would be compatible with future business requirements and consistent with other products in the Integration suite.

Some constraints for the design:

  1. Backwards compatibility – The new design had to be compatible with the way the existing backend of the Service Registry was structured.
  2. Aligned with Patternfly 4 – Avoid the use of UI components not supported by the design system.

My role

Within the product team, I worked primarily with the lead engineer. He was the main stakeholder with whom I validated most of the design decisions. While I was individually responsible for all of the design deliverables (competitive analysis, IA diagrams, early-stage explorations, prototypes), I had incredible support and feedback from 3 designers through weekly design reviews and 1:1s.

Gaining confidence


Starting with ambiguity

Welcome to the project kick-off! Features and requirements were still up in the air, and I was working in parallel with product and engineering's work. After scoping things down, we identified 4 main UX challenges. I mainly focused on these 2:

  1. All artifacts view – How might we help users find an artifact quickly and with confidence?
  2. Artifact details view – How might users interact with an artifact and its versions?

Go blue-sky, then get grounded in reality

We started doing some blue-sky thinking after discussing the competitive analysis. At low fidelity, I quickly created some explorations for further discussion. However, in the interest of time, we decided to focus on solving problems that could be addressed using the existing functionality.

For user goals, not just tasks

I was designing for a general "Developer" persona, although I had little information about their workflow for anything specific to the Service Registry. To gain more insight into user goals and context, I rewrote engineering's task-based use cases to follow the job story approach. For example:

Task: Find an artifact (from the list or through a search query)

Job story 1: When an artifact needs updating, I want to get quickly to the place to make changes so that I do not waste time looking for it.

Job story 2: When I need a data source, I want to find something relevant so that I can make proper use of it in a project I am working on.

What a game-changer! Before, I didn't even realize there were two distinct contexts for that one task. Imperfect as the stories were (they were full of unchecked assumptions), they informed my approach to solving the first design challenge.

The Design – Challenge #1


HMW help users find an artifact quickly and with confidence?

Displaying data in a table vs. grid

Based on the stories, it seemed like there were 2 distinct contexts to design for: hunting for a specific artifact and discovering an artifact. It would make sense to organize and display the data in a way that is suitable for those different contexts.

A table or list with filtering and sorting options would support a user who is searching/systematically locating a specific artifact. A grid view with cards is better suited for "discovery", where a user may need the help of longer descriptions to understand what the artifact is.

We decided it was feasible to go with both because the ability to toggle between a list and grid view already existed in another closely related Red Hat product.

Preview for confidence

To make up for any shortcomings in the data shown in the list or card items, I proposed a preview to help users confirm they are accessing the right artifact. Previewing is also a common pattern in other developer-loved applications such as GitHub and some API management software.

Option B was more desirable considering that, especially in the context of discovery, users may find it helpful to quickly click through and preview multiple artifacts.

Click on a table item to open the preview panel, or click the artifact's name to go directly to the detail page.

The Design – Challenge #2


How might users interact with an artifact and its versions?

The main challenge was understanding the functionality and data represented by this view. Because it had a decent amount of complexity to figure out, I didn't jump right into the design. I started by doing a content audit and using diagrams to facilitate Q&As with stakeholders.

The diagrams helped me uncover a few more technical details, but I was having trouble using them to communicate with the engineering stakeholders and needed to transition into wireframing for more clarity.

There were still some unanswered questions after exploring the information architecture through diagrams.

Handling multiple active versions

The most important insight here was that the versioning of an artifact isn't necessarily linear. So the version that could be of interest to the user isn't always the most recent one, especially when there may be different versions for multiple projects/applications.
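This nonlinear versioning is easier to see with a sample payload. The JSON shape below is invented for illustration (it is not the registry's actual API response), but it captures the detail that surfaced here: versions are a set of states, not a single chain ending in "latest":

```python
import json

# Invented example of a version-list response for one artifact.
# The shape is hypothetical; it only illustrates that the "interesting"
# version is not necessarily the most recent one.
sample_response = json.loads("""
[
  {"version": 1, "state": "ENABLED"},
  {"version": 2, "state": "DISABLED"},
  {"version": 3, "state": "ENABLED"}
]
""")

# The version a user cares about may be any of the active ones,
# e.g. different projects may pin different enabled versions.
active_versions = [
    v["version"] for v in sample_response if v["state"] == "ENABLED"
]
print(active_versions)  # [1, 3]
```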

After discussing the priority of use cases with the stakeholders, we went with Option A. It would allow users to easily toggle between versions, which supports their ability to make comparisons and contextualize changes.

Now that the larger design decisions had been ironed out with the team, it was time to explore the details and some secondary use cases. I quickly created some more low/med-fi wireframes and sought another round of stakeholder feedback. Except this was right when the lead engineer, AKA the main stakeholder, suddenly went on PTO for 3 weeks. Oh no!

I usually want to validate my riskier ideas before going into higher-fidelity and visual design. Since that was no longer an option and time was running out, it was best to start fleshing out the design using the Patternfly design system components.

Behold, the first and quite inaccurate attempt

I went through multiple rounds of explorations and design reviews in those 3 weeks. Unfortunately, when the lead engineer returned, we realized that many of the designs contradicted the back-end data model.

How did we finally work out the small, yet frustrating technicalities? Unconventional, but it worked: I got access to the API that powered the Service Registry and saw the functionality for myself. Being a computer science major certainly helped!

Iterate, iterate, iterate
The last iteration – many design decisions to reconsider, but it's ready for testing

Next steps


Usability testing

Once I wrapped up my iterations on the artifact details screen, it was time to identify the areas that were successful and those that needed more iteration from future designers on the product. I prepared a clickable prototype in Marvel and with the guidance of a user researcher, I drafted a 30-minute script that included a task analysis. The moderator was to record whether or not the participant had successfully completed each task and how difficult the participant thought it was.

We were not able to finish recruiting test participants before the end of my internship, although I was able to moderate an internal test run to find points in the script to edit. #testingthetest

And that's a wrap!

The solutions shared in the final presentations to the stakeholders (and Red Hat's UX org) were well received. I passed documentation to the designer who would be taking over the project after my internship, and we discussed areas for further focus. Beyond additional flows and edge cases, we noted refinements to be made in the UX copy and research to be done into search & filtering behaviors.

Learnings


Strategies for facing challenges

  1. Tight timeline with limited research resources → Still spend as much effort as possible understanding user perspectives and contexts, then use strategies (e.g. wireframing using design system components) to rapidly get to a stage where assumptions in the designs can be validated.
  2. Ambiguous requirements → Identify MVP requirements upfront and for more nebulous areas, seek frequent feedback from stakeholders using tangible design explorations to push and pull on the boundaries of what is desirable.

Get ready for UX reality

This was my first industry experience in design and an important lesson in working with constraints in every sense: technical, time, and being remote. I will continue to work on being adaptable and judicious about where effort is spent.

The reality of the design process