Our team gets a lot of requests to share insight on our process. I wanted to share something our team is currently testing for a mobile eCommerce project we're working on:
These are some interactions we're prototyping during one of our design sprints to smooth out some of the friction points associated with the checkout process on mobile devices. We wanted to pay particular attention to some of the more mundane tasks that can often lead to user frustration and have a negative impact on conversion. https://dribbble.com/shots/2691615-Mobile-eCommerce-UX
We focused on scenarios like reviewing your order, updating the cart, and removing items. The idea is to assess whether the UI patterns we are using will facilitate the completion of tasks that can lead to cart abandonment during the checkout process.
- Packages = orders coming from multiple sellers
- 78% of orders contain at least 2 packages
- 64% of orders contain at least 5 items in one of the packages
- I want to review an order with 2 packages (orders coming from multiple sellers)
- I want to scan the contents of package 1 (to ensure accuracy and account for all items in the cart)
- I want to increase the quantity of one of the items in package 1
- I want to remove one of the items in package 1 from the shopping cart
For this initial part of the process we used Sketch to build some low fidelity wireframes. The plan is to animate the low fidelity concepts we sketched earlier in the sprint using Adobe After Effects, then test them. We then assess and choose the most viable concepts and build them out as prototypes using Principle.
One of the things we like to do during this phase is manipulate the fidelity to highlight certain details and subdue others. In this case, the items in the shopping cart are low fidelity to help the user focus on how effective the layout is for completing the tasks. We also used high fidelity in certain areas where we needed the content to provide context and the UI elements to communicate things like hierarchy, visual priority, and depth.
If you are wondering whether it's worth the trouble to articulate these micro-interactions to test our hypotheses, you are not alone. However, while we mostly use InVision to build clickable prototypes, we find that there is only so much we can do with InVision's transitions when it comes to certain gestures on mobile devices.
Here are some of the most valuable things we get out of building these animations:
- They help articulate things that would otherwise be missed when working with static prototypes.
- They expose flaws that would not be quite as obvious without the context of the animation.
- They provide a more accurate depiction of the time needed to complete the tasks.