Please note that certain details have been omitted from this case study due to the confidential nature of the project.

This case study focuses on the phased redesign of a low-code tool, addressing core usability and navigation challenges identified through research.

Before/After

Key issues included poor navigation structure, confusing app builder workflows, and limited customization options, which hindered users from efficiently building complex applications.

With a limited budget, we presented stakeholders with a plan that prioritized small, high-impact changes: first navigation improvements, then simplified builder flows, and finally a set of simple but high-impact customization features.

This iterative approach allowed us to gather continuous feedback and implement changes incrementally, improving the tool’s usability and user satisfaction while aligning with budget constraints.

User Research: Conducted user interviews and usability testing to gather insights, identifying pain points and usability issues to inform design decisions.

UI Design: Developed wireframes and prototypes based on research findings and conducted testing to refine and improve the proposed solutions.

Development Collaboration: Worked closely with developers to ensure design feasibility and facilitate the implementation of user-centered improvements throughout the project.

In the research phase, our goal was to gain an in-depth understanding of user interactions with the low-code tool by employing a structured approach.

We selected the research methods we considered most effective for uncovering user needs and challenges in this particular case, and recruited participants with a range of skill levels and use cases to ensure our insights were comprehensive and relevant.

Preparation involved creating detailed materials for user interviews and usability testing. After data collection, we synthesized the findings, mapped user journeys to identify pain points, and presented our insights to stakeholders, establishing a solid foundation for the redesign process.

Identify navigation and usability barriers
  • Understand where and why users got lost in the app, pinpointing specific navigation issues and identifying areas with confusing workflows.
Evaluate overall user satisfaction
  • Explore user perceptions of the tool to determine specific elements users consider unattractive or difficult to use, providing evidence-based improvements.
Assess needs for complex App Building
  • Investigate the requirements users have for creating complex applications to understand feature gaps, limitations, and which tools they find inadequate or frustrating.
Identify workflow and productivity bottlenecks
  • Map user workflows to reveal any pain points or bottlenecks that hinder productivity, including how users navigate between tasks within the app.

User interviews and contextual inquiry

Why? To reveal frustrations that can’t be seen from metrics alone.

  • We created a semi-structured interview guide with open-ended questions to explore user experiences, expectations, and frustrations. For contextual inquiry, we identified key tasks to observe and prepared probing questions to clarify actions taken during those tasks. We watched how users customized a feature within the tool, noting where they hesitated, navigated back, or expressed confusion.

Usability Testing (Moderated Only)

Why? To reveal task-related pain points.

  • We developed realistic task scenarios for users to complete, focusing on areas where users tend to struggle based on stakeholder feedback. Instructions were written to be clear and neutral to avoid leading users, and we prepared prompts for asking follow-up questions based on their actions during the task.

Journey mapping and task analysis

Why? To identify where specific bottlenecks occurred.

  • We mapped key user journeys by defining specific tasks users commonly undertake, breaking each into steps and identifying potential pain points for each step. We also prepared questions to prompt users to reflect on frustrations or delays they encounter along the way.

Heuristic Evaluation and Expert Review

Why? To prioritize usability fixes that stakeholders have brought to the table and to ensure we focused on issues users struggle with but may not verbalize directly.

  • We used a checklist based on established usability heuristics and categorized tool features to systematically evaluate each area. For each heuristic, we prepared an evaluation guide to ensure consistent assessment and documentation of any issues.

Established Skill Level Criteria

We identified each participant’s familiarity with low-code or similar platforms to cover a spectrum of experience levels. This approach allowed us to capture usability issues affecting both novice and advanced users.

Defined User Profiles Based on Primary Tasks and Tool Usage

We screened participants based on their goals, like building quick prototypes or complex applications, which helped us observe whether the tool effectively supported these diverse needs.

Conducted Pre-Testing Surveys or Interviews

We conducted brief pre-test interviews to confirm each participant aligned with our criteria, ensuring they had relevant experience and goals to provide valuable feedback for our research objectives.

1. Lack of depth and personalization

Issue: Users experience significant limitations when trying to build complex applications due to an over-simplified development process, lacking options for customization and depth.

Frequency & Severity: High frequency and critical severity.

User Impact: This issue causes delays and frustration for users aiming to create more advanced apps, as they’re unable to adjust settings or features to fit specific requirements.

Measurement: Tracked through support ticket analysis and interview notes documenting customization challenges.

  1. Example 1: During an interview, a user expressed frustration that they couldn’t add custom validation rules within form components, which limited their ability to create complex logic for data entry.
  2. Example 2: Some support tickets highlighted several users’ requests for additional customization options within the workflow builder, noting that without this, they couldn’t adapt the tool to match their organization’s unique processes.
  3. Example 3: In usability testing, one user attempted to add specific business rules but discovered the available options didn’t meet their needs, leading them to abandon the task in frustration.

2. Confusing App Builder flow

Issue: The drag-and-drop builder presents multiple ways to achieve similar results, leading to user confusion over the correct or best path to follow. It also feels cluttered and difficult to scan.

Frequency & Severity: Moderate frequency with high severity, highlighted during usability tests where users struggled with redundant steps.

User Impact: Users spend extra time navigating redundant paths, reducing efficiency and leading to frustration when building applications.

Measurement: Recorded through user actions and feedback during moderated usability testing and contextual inquiry.

  1. Example 1: In usability testing, four users spent extra time navigating between different options to add a feature, unsure if they should start in the “Quick Add” or “Custom Flow” area, both of which led to similar results.
  2. Example 2: A contextual inquiry session revealed a user trying to configure an app’s layout. They mistakenly repeated several steps due to similar-looking options, only realizing later that their previous choices had already achieved the intended result.
  3. Example 3: In an interview, a user described feeling uncertain when using the drag-and-drop editor, noting that the availability of multiple ways to reach the same outcome made the process feel cluttered and repetitive.

3. Confusing navigation

Issue: Users often get lost navigating the tool due to a complex, layered architecture that has evolved without clear planning.

Frequency & Severity: High frequency with critical severity, as users reported consistent disorientation during usability tests and interviews.

User Impact: Users find it challenging to locate essential features, often wasting time or abandoning tasks due to navigation difficulties.

Measurement: Observed through user navigation breakdowns in usability testing, supported by interview feedback.

  1. Example 1: During usability testing, one participant repeatedly returned to the home screen while trying to locate the “Reports” feature, later commenting that they “felt lost” within the tool’s layout.
  2. Example 2: A user interviewed mentioned frequently having to backtrack to previous screens, explaining that the structure felt “random”, as if new sections had been added without any overarching organization.
  3. Example 3: In another usability test, three participants took several attempts to find the “Settings” menu.

4. New vs. experienced user patterns

Issue: Experienced low-code users find the tool’s database configuration process unintuitive, despite it being easier for new users. This creates a steep learning curve for advanced users.

Frequency & Severity: Moderate frequency with moderate severity, particularly impacting experienced users noted in usability sessions.

User Impact: Experienced users find essential configuration processes counterintuitive, leading to errors and a negative initial experience with the tool.

Measurement: Documented through the frequency of user errors in database configuration during usability testing.

  1. Example 1: In usability testing, an experienced low-code user expressed frustration with the database configuration process, saying that they expected “standard patterns” but found the layout confusing and inconsistent.
  2. Example 2: An interview with a seasoned low-code developer revealed difficulty in creating relational links between tables, which they found unexpectedly challenging.
  3. Example 3: Another experienced user in a usability test struggled to locate familiar options within the database configuration screen, commenting that the setup felt tailored to beginners and lacked the options they typically expect for complex configurations.

The low-code tool is currently failing to meet the needs of advanced users due to limited customization options, a cluttered app-building flow, and confusing navigation that creates frustration and inefficiency. Key challenges include a lack of depth in customization features, which restricts complex application development; a convoluted drag-and-drop builder that presents redundant paths, leading to user confusion; and a layered navigation structure that disorients users, often resulting in task abandonment. Additionally, while the tool’s database configuration process appears accessible for beginners, it does not align with the expectations of experienced low-code users, creating a steep learning curve and hampering productivity for advanced tasks.

In the design phase, our goal was to address the key issues identified during research by setting clear objectives and implementing targeted solutions:

1. Improve navigation and information architecture – CRITICAL

2. Streamline the App Builder flow and simplify design – CRITICAL

3. Expand customization and personalization options – HIGH

4. Redesign database configuration to accommodate experienced users – MODERATE

5. Enhance workflow customization capabilities – MODERATE

As a business request, we also needed to align the UI design with the new branding.

Revised Information Architecture

We completely rethought the tool’s information structure, addressing previous disorganization caused by a lack of planning during development. This foundational change was straightforward to implement but had a significant positive impact on usability.

Redesigned the App Builder

We overhauled the app builder to reduce screen clutter and enhance usability. By incorporating micro-interactions, we minimized unnecessary visual elements and provided immediate feedback to user actions. We developed consistent building patterns across all widgets to ensure a cohesive experience. For example, while a list and a table have different customization options, the process to add data and configure settings now follows a similar flow. This consistency reduces the learning curve and improves efficiency.
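The shared configuration pattern described above can be sketched, in simplified form, as a common base flow that every widget follows; the class and option names below (`Widget`, `ListWidget`, `TableWidget`) are hypothetical illustrations, not the tool's actual code:

```python
from abc import ABC, abstractmethod


class Widget(ABC):
    """Sketch of a consistent building pattern: every widget is
    configured in the same two steps, even when the available
    options differ per widget type."""

    def __init__(self, name: str):
        self.name = name
        self.data_source = None
        self.settings: dict = {}

    # Step 1: adding data works identically for every widget type.
    def add_data(self, source: str) -> None:
        self.data_source = source

    # Step 2: the entry point is identical; only the accepted
    # options vary, and unsupported ones are rejected early.
    def configure(self, **options) -> None:
        unknown = set(options) - self.allowed_options()
        if unknown:
            raise ValueError(f"unsupported options: {sorted(unknown)}")
        self.settings.update(options)

    @abstractmethod
    def allowed_options(self) -> set:
        ...


class ListWidget(Widget):
    def allowed_options(self) -> set:
        return {"item_template", "sortable"}


class TableWidget(Widget):
    def allowed_options(self) -> set:
        return {"columns", "pagination", "sortable"}
```

Because lists and tables share the same `add_data` → `configure` sequence, a user who has learned one widget already knows the flow for the next, which is the consistency the redesign aimed for.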

Developed a UI customization option using Design Tokens and Figma’s API

Recognizing the demand from designers and developers, we implemented a feature that allows users to apply specific themes directly from Figma’s API, using Figma’s variables. This was very well received by our beta clients, as customizing the UI usually consumed significant development time.
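As a rough illustration of this token pipeline, the sketch below converts a simplified name-to-value map of design-token variables (such as one might derive from Figma's variables data) into CSS custom properties. The payload shape and the function name are assumptions for illustration, not Figma's actual API response format:

```python
def tokens_to_css(variables: dict) -> str:
    """Turn a flat map of design tokens (e.g. "color/primary" ->
    "#0055FF") into a CSS :root block of custom properties.
    The flat name->value shape is a simplification; Figma's real
    variables payload is richer (collections, modes, types)."""
    lines = [":root {"]
    for name, value in sorted(variables.items()):
        # Token names like "color/primary" become CSS-safe
        # identifiers like "--color-primary".
        css_name = "--" + name.replace("/", "-").replace(" ", "-").lower()
        lines.append(f"  {css_name}: {value};")
    lines.append("}")
    return "\n".join(lines)
```

A theme pulled from design variables can then be applied as a stylesheet without hand-translating each value, which is where the time savings for development came from.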

Proposed future iterations

While we made substantial changes within our budget and timeline, we identified additional enhancements based on insights gathered during the research phase, laying the groundwork for ongoing improvements.

In redesigning the information architecture, we deconstructed the previous structure and rebuilt it to address critical issues, prioritizing efficiency and accessibility. One of our main goals was to centralize resources, making databases and widgets accessible across multiple apps. Previously, users had to connect each app to the same database repeatedly, which was both redundant and time-consuming. By creating a centralized Databases section in the main menu, users could now establish connections that could be used globally across all their apps, streamlining the setup process.
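The centralized-resources idea above can be sketched as a registry that apps reference instead of each redefining its own connections; all names here are hypothetical, for illustration only:

```python
class DatabaseRegistry:
    """Sketch of the centralized Databases section: a connection
    is defined once and reused globally by any app."""

    def __init__(self):
        self._connections: dict = {}

    def register(self, name: str, dsn: str) -> None:
        self._connections[name] = dsn

    def get(self, name: str) -> str:
        # Raises KeyError if the connection was never defined.
        return self._connections[name]


class App:
    """An app holds references to shared connections rather than
    its own copies of the connection details."""

    def __init__(self, name: str, registry: DatabaseRegistry):
        self.name = name
        self._registry = registry
        self.databases: list = []

    def attach_database(self, name: str) -> None:
        self._registry.get(name)  # validate it exists globally
        self.databases.append(name)
```

Two apps attaching the same database now share one registry entry, whereas the old structure forced users to repeat the connection setup per app.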

A major point of confusion in the previous architecture was the overlap between the platform’s main menu and each app’s individual menu. In the new design, we introduced an “Applications” section in the main menu. This section provided a clear, organized list of all user apps, allowing for easy access to individual projects. Once inside a selected app, users could manage their pages—editing, deleting, or creating new ones—as well as organize the modules that would make up the app’s main menu structure.

This separation kept the platform’s global navigation distinct from the in-app navigation, giving developers the freedom to build custom menus within each app without interference from the main menu structure.

Similarly, we enhanced accessibility to the modules and widgets Marketplace, placing it within the main menu. This allowed developers to find and configure widgets at a global level, so widgets could be readily accessible across all projects, reducing repetitive configuration work and saving valuable development time.

Micro-interactions within the App Builder played a crucial role in enhancing usability and creating a smoother, more intuitive experience for users. These small, focused interactions provided immediate feedback when users performed actions like dragging and dropping widgets, resizing elements, or adjusting configurations.

Additionally, micro-interactions minimized visual clutter, preventing the screen from becoming overwhelming by animating transitions, such as expanding or collapsing widget categories in the selection bar. These interactions also helped ensure that widgets within each slot responded smoothly to resizing or repositioning, reinforcing a sense of responsiveness and precision.

By incorporating micro-interactions, we were able to guide users seamlessly through complex processes, making the app-building experience more fluid and reducing friction during key tasks.

A first approach to the canvas
A second approach to the canvas…
The canvas wireframe, after many attempts and rounds of testing. We realized we had to drop the floating panels and stick to a more conventional layout, helping users stay focused and configure everything easily, from the simplest to the most complex elements.
Example of data configuration for a dropdown widget.
Advanced configuration of entity relations for the GIS module
Advanced configuration of synthetic lists for the GIS module

For this case, our testing process was structured to evaluate the impact of design changes and identify opportunities for further improvement. We conducted moderated usability testing with goal-oriented tasks using Figma and coded wireframes to observe user interactions and gather in-depth feedback.

Participants were chosen to include both novice and advanced users, allowing us to capture a well-rounded perspective on the app builder’s usability.

Testing outcome
1. Time reduction in database and widget configuration
How we measured: Task completion timing, comparative analysis.
Findings: 40% decrease in time spent on repetitive tasks such as connecting databases and configuring widgets.
Explanation: We timed users as they connected databases and configured widgets, comparing results pre- and post-redesign to gauge time saved due to global configuration improvements.

2. Error rate reduction in widget management
How we measured: Error tracking, usability testing sessions.
Findings: 50% reduction in errors when managing widgets, improving ease and confidence in making adjustments.
Explanation: We tracked error rates during usability tests, logging mistakes and mis-clicks to identify reductions in errors due to micro-interaction refinements.

3. User satisfaction with navigation and menu separation
How we measured: Post-task surveys, qualitative feedback.
Findings: 85% of users reported increased satisfaction with the redesigned navigation, finding it easier to manage app-specific menus.
Explanation: After completing tasks, we gathered feedback through surveys and interviews to assess satisfaction with the new navigation structure and separated menus.

4. Perception of advanced features and need for onboarding support
How we measured: User interviews, Likert-scale surveys.
Findings: 70% of experienced users found the tool’s advanced features sufficient, while 60% of novice users noted onboarding challenges, suggesting a need for guided tutorials.
Explanation: We conducted in-depth interviews with experienced users for advanced feature feedback and used surveys with novice users to gauge their onboarding needs and initial ease of use.
Users found it much easier to access databases and configure widgets globally, reducing repetitive tasks and saving time.
The separation of the main platform menu from each app’s custom menu eliminated confusion, allowing developers to manage their app-specific pages and modules without interference.
Micro-interactions showed a marked improvement in usability: error rates among testers dropped significantly when managing widgets, and task completion times were roughly halved. However, a few micro-interactions needed further refinement to enhance fluidity.
Experienced users who wanted to build more complex applications perceived the redesigned widget configuration as more capable and advanced.
Integrating Figma’s API for UI customization was widely appreciated, as it minimized the back-and-forth between design and development.
A small group of experienced users requested even more advanced builder options for greater flexibility.
Novice users found it challenging to initiate and complete their first project using the tool. Exploring onboarding options, such as guided tutorials or step-by-step wizards, could significantly improve the initial user experience.
Before
After