g3-userTesting


Project Brief
We are graphic design students from the York/Sheridan Program in Design working on a website redesign for the Centre for Contemporary Canadian Art (CCCA)'s Canadian Art Database. Our focus is to develop an easier way of accessing and presenting information that caters to the needs of potential visitors and improves their online experience. Our goal is to make the database a platform that facilitates learning, exploration, and discovery of art.

First of all, we would like to thank you for your participation. As a way of testing the website, we are going to show you a prototype, or mock-up, of what we are currently working on for the Canadian Art Database. We will ask you to read certain tasks, which you will then attempt to perform on the website. We encourage you to speak your thoughts and actions out loud during the test so that we may understand your thinking process. Please be assured that we are not testing your abilities; rather, you are helping us test the website so that we may look for ways to improve it.

If you think you have completed a task, or are unable to do so, please let us know by saying you are done. At this point, we may ask you questions to help us further understand your experience. Please be advised that during the test we will, as much as possible, refrain from helping you accomplish the task and instead allow you to do it on your own, as this helps us determine the effectiveness of the design.

We would also like you to keep in mind that the website is still under construction; if you ever come across a link that is not working, you will be notified and directed to proceed with the test. If you feel in any way uncomfortable taking this test, please be aware that you have the right to stop at any time and choose not to participate, no questions asked.

Again, we would like to thank you for helping us evaluate our website. Should you have any questions or concerns, please feel free to let us know.

Back to Top ^

DECIDE Framework


 * ====Annmarie====

|| //Determine overall goals of the evaluation// The purpose of this study is to pinpoint weaknesses of the prototype website concerning usability, information architecture, and visual effectiveness of the interface. It is imperative that the site be intuitive and well organized to facilitate an effective learning and browsing experience. As well, the visual design should be unobtrusive to the main focus of the site—the content.

//Explore the specific questions to be answered//
 * Is ‘drag and drop’ functionality an appropriate and effective way to collect images?
 * Since ‘drag and drop’ is not readily used in websites, users may be unfamiliar with the unique function and may experience some difficulty in learning the process of saving images. We want to ensure that users can adapt to the interface, as well as evaluate the appropriateness of the drag and drop application for the proposed purpose.
 * Is the visual design of the site appropriate to the subject matter and audience?
 * We want to verify that the visual design of the site allows users to focus on the content, as well as gather responses on what they expect the site should look like. It would also be worthwhile to explore the balance between the visual identity of the CCCA and its practicality as a subtle interface.
 * How easy is it to find specific information?
 * We want to determine whether the categories on the site are in line with the user’s mental model, so that information can be located with ease. The effectiveness of hierarchy and layout in supporting this task will also be examined.

//Choose the evaluation paradigm and techniques to answer questions// A structured method of testing would be ideal for this study, since our prototype is fairly specific in the number of functions it can perform. As well, many of the links are not active, and some of the interactive aspects are not functioning at a level that would allow free exploration.

//Identify the practical issues that must be addressed// The testing environment should be a quiet room with a computer. Firefox is the preferred browser, and JavaScript must be enabled to ensure the site works as intended.

//Decide how to deal with the ethical issues// Ethical issues concerning the collection of confidential information from users may arise. To address this matter, a consent form verifying the tester’s agreement of participation will be presented prior to testing. It will be made known that information gathered will only be used for the purposes of our study and that they have the right to terminate their participation at any time.

//Evaluate, interpret and present the data// We will identify and analyze any similarities and differences in the data, taking into consideration the testers' expectations and familiarity with the subject matter. Results will be prioritized in order of importance, which will inform further changes.

//User satisfaction interview//
 * What do you think about the visual design of the interface?
 * Would you use this site? For what?
 * Did you enjoy interacting with the system?
 * What did you like about the system?
 * What didn’t you like about the system?
 * What do you suggest in order to improve the site? ||

 * ====Joanna====

|| //Determine overall goals of the evaluation// This user test is designed to evaluate the strengths and weaknesses of our proposed system for the CCCA in terms of ease of use and clarity. The site must be easy to use so the user does not get frustrated with the interface and lose interest in the site. We would also like to see whether the user truly benefits from the different tools we provide for engaging with the material. While we may feel that these tools will benefit the user, we do not know whether users would find them useful for themselves.

//Explore the questions that need to be answered//
 * Does the interface of the site allow for clear navigation?
 * How useful are the features we have given users to help with using the site? Are they helpful, or a waste of space?
 * Are the features clearly presented on the site and easy to spot?

//Choose the evaluation paradigm and techniques to answer questions// The user will be given a specific task that requires them to navigate through the various pages of the site, as well as to use the contextualizer and other methods to find related artworks.

//Identify the practical issues that must be addressed// We must ensure that our site works on the testing computer we will be using and that there will not be any major problems with it. Also, we must make sure that users understand the instructions given to them while not leading them in any way, to avoid skewing the test results.

//Decide how to deal with the ethical issues// To minimize the number of ethical issues that may arise from our user testing, we have provided a consent form for the user to sign confirming that they agree to the testing. However, they have the full right to terminate the test at any time, and we must abide by that decision.

//Evaluate, interpret and present the data// We will look at the different opinions expressed by our test users to find the major similarities and differences among them. The results will be categorized as either strengths or weaknesses of the system. We will also look at the suggestions given to us by the users and see which ones are feasible to implement on the site. ||

 * ====Edeline====

|| //Determine overall goals of the evaluation// The primary goal of this test is to observe and evaluate the usability of the proposed design solution for the CCCA Canadian Art Database. We want to confirm which areas of the website are working successfully (or as intended) and discover areas that have errors or need improvement. Specific goals include the following:


 * **Efficiency:** We want to know whether the structure of the system is clear, accurate, appropriate, and coherent. We want to determine whether the current design enables users to get to the information they want with the least amount of time/clicks.
 * **User satisfaction:** We want to ensure that the identified needs for the website (along with the corresponding features) match the users' actual needs. We want the website to be a useful tool that not only meets the user's intended purpose, but is also engaging and enhances their overall experience of interacting with the system.
 * **Learnability:** We want to check whether the website is successful in providing relevant information about artworks, artists, and their interrelationships, and whether that information and its presentation (e.g. drag and drop) are easily accessed and understood by the user.


 * → **Usability Goal for the "Curator":** We want to determine whether the Contextualizer is the most intuitive way of finding interrelations between artworks or artists. We also want to know whether users would find it relevant, helpful, or interesting, by either looking at it or considering clicking on it while performing the task. We want to test whether users will prefer clicking on the Flickr-inspired "relevant artworks & artists" list, or whether they would look for metatags instead. Would they immediately read the Contextualizer as metatags, or would having a brief description beside it take away from its tagging nature?

//Explore the questions that need to be answered//
 * Is the interface intuitive (enough)? Did we provide enough elements to hint at certain functions available on the website? Did they use the drag-and-drop function with ease? How long did they take to figure out the drag-and-drop system (that you can drag artworks and timelines and do a comparison with more than two items)? Is it user-friendly enough, or were they intimidated by it?
 * Is this website an improvement over the old CCCA Canadian Art Database? Did it solve some or all of the problems Bill Kirby highlighted with the previous system?
 * Is the structure of the website easily understandable? Is it clear, and does it make sense?
 * Do the users find the contextualizer interesting and/or helpful? Is it necessary?
 * What is their overall impression before and after visiting the website? Was their experience stressful, or did they find it enjoyable/pleasing? Would they visit the website again? Would they recommend the website to other people?

//Choose the evaluation paradigm and techniques to answer questions// As a way to effectively measure progress towards our usability goals, we will perform usability tests with 3 different people (i.e. a curator/art history major, a student, and a casual viewer/someone who is not studying art but has some interest in it), particularly those we have specifically identified as our target audience. All three participants will be asked to perform the same sets of predefined tasks and will be evaluated in terms of (a) how the behaviours/expectations of each audience differ from one another, (b) which features appeal to which audience, and (c) whether or not there is a pattern in how successfully each of them performed specific tasks. Such a structured (i.e. controlled) test would give a more specific result that would be beneficial in directly identifying system flaws and in quantifying the website's capacity to meet its intended purpose.

//Identify the practical issues that must be addressed// The testing environment should be a quiet place that makes the participant(s) feel comfortable. There should be a computer or laptop available (preferably a Mac) with the browser that the prototype was tested on prior to the user testing. One concern is the assumption that the participants are technology-savvy, or that they would be familiar with working in a Mac environment as opposed to a PC. The testing //might be// video recorded, if the participant allows it. At least two of us from the group must be present for the user testing to be valid: one would talk to the participant while the other takes notes and observes.

//Decide how to deal with the ethical issues// Because the methodology includes testing with people, we must ensure that before the test we ask/require the participants to sign a consent form that outlines the following key points:
 * their right to not participate, or terminate the participation at any time without prejudice
 * their right for anonymity and confidentiality
 * their permission and/or knowledge that they may be recorded on tape/video/email for the research
 * their right to ask questions, and to know who to contact (i.e. instructor) should they have further questions about the research or their rights
 * risks and benefits

//Evaluate, interpret and present the data// As mentioned in one of the previous points, having a critical eye for patterns is key to interpreting the data gathered. As a group, we will collectively evaluate the user testing experience: What could be improved? Did it match our expectations? Did we find anything surprising? Were our questions or tasks effective? How were the participants behaving differently, and what could have been the reason? After the evaluation, we will arrange the feedback by priority (i.e. which feature most people had a problem with) and discuss ways to improve it. ||


Test Scripts
 * ====Annmarie====

|| * You are a **student** researching painters in Ontario. There is an artist with the initials P.A. that you would like to look up. Your task is to find his or her most recent work.
 * Your next task is to save the image and the entire timeline. Compare the two. ||

 * ====Joanna====

|| * You are a **casual viewer** who saw a painting (show painting) in a magazine and wish to know more about it by visiting the CCCA. You are also interested in looking for other related artworks (say, by the colour green). ||

 * ====Edeline====

|| * You are a **curator** for a small museum and are considering putting together an exhibition that would include the piece //Theory on Separation// (1970). You are still unsure of which other artists you will include in this exhibition, but are certain that the exhibition theme will relate to this piece in context. You decide to visit the CCCA Canadian Art Database to look for inspiration.
> Your task is to find a potential theme for your exhibition and discover two other artists that you might look further into. You wish to have //Theory on Separation// (1970) bookmarked, as well as the two other artists' timelines, so you can compare all three side by side.
> When you feel you have completed this task, please say so. ||

User Test Report
 * ====Annmarie====

|| **Testing procedure** After the initial briefing and formalities, the testers were asked to complete various tasks on the website while two of the three group members took notes. With multiple note-takers, bringing different views and realizations, there is a better chance that no information is missed. Thoughts spoken out loud, mouse clicks in the right and wrong spots, the time it took to complete a task, and verbal feedback were all observed to identify problematic areas of the site.

 * **Findings**

The first test familiarized the testers with the interface and the various functions of the site. The biggest difficulty that all of the users encountered was the drag and drop. For the most part, testers didn’t realize that items were draggable, and when they did, they determined what was draggable by trial and error. This was expected, since drag and drop is an unusual way to save and manage images.

It was found that the testers generally enjoyed the interactions and visual design of the site. They thought that the main page, related links, and contextualizer offered a great way to browse and learn about artwork. They also found the categories and the search bar with autocomplete helpful in finding the required information. The tasks involving searches were generally completed with ease. Furthermore, the fact that multiple options can be selected in the categories may not have been obvious at first, but was generally understood after the link highlighted instead of leading to a different page.

In addition, all of the testers were unable to drag and drop the artist’s timeline to the comparison panel. The icon to drag the timeline was too discreet and not explicit enough. As a result, the testers tried to drag the timeline title or the header bars of the accordion. The timeline would need to be improved visually, as well as with written cues, in order to make it appear as one unit and to signify the dragging capability.

 * **Reflection**

The testing reaffirmed concerns regarding the usability and design of the site, and uncovered other issues that could be addressed. Testers offered valuable feedback on how to improve the site. In regards to the actual tests, consistency across the different tests would have been ideal, so as not to confuse the tester, and should be kept in mind for further tests. In addition, the individuals often forgot to speak out loud, though the movement of their mouse (whether circling areas or quick and direct) somewhat indicated their thoughts. For next time, it would be best to remind users to speak their thoughts out loud, or even just to describe what they are doing to start them off.

I believe that things went well with the user testing. I expected that participants would not follow the designed route, so I tried to make a test that had breathing room and allowed several possible paths to the same goal. Although in the end the ideal route for my test is still rather linear, I was still able to gain results when a participant deviated from the intended path. The main challenge was that most participants looked for related artworks in ways I did not expect; for example, one participant used the categories to look it up after reading about the artwork in the contextualizer, instead of clicking on a keyword. ||
 * ====Joanna====

|| **Findings**

Overall, participants liked the redesign of the CCCA website. The biggest problem we found was that the drag and drop was not noticeable enough; it was only after exhausting most other options that most participants found that images were draggable. The compare panel, where they are able to drag their images, is also too subtle to notice unless they look really hard. Also, the keywords on the contextualizer might not look different enough to seem clickable, as only one participant figured out that the keywords can be clicked.

 * **Reflection**

The process was very helpful to go through. It provided new insights into our design that we could not have seen ourselves, since we worked so closely with the site. The biggest problem in conducting the user test, and one thing that I learned, is that consistency is a very important factor, especially when using the same participant for all three tests. Since each group member needed their own pages to make their test work, each of us created our own branch of the prototype unique to our test, which caused some of the pages to look slightly different among the three tests. Even with these small differences, all our participants were thrown off by the different-looking pages, which possibly altered the results. For example, the first test had working categories while the other two did not. After going through the first test, participants were wired to believe that the categories worked and tried using them on the remaining two tests, only to find them non-functioning. Likewise, the search bar did not work for the first two tests but did for the last one; the participants did not go to the search bar first because they thought it did not work, even though it did for the third test.

Due to this factor, the one thing I would change for next time is to keep everything consistent and use only one version of the prototype for all the tests, even though different paths need to be taken in each test. I believe that in doing so, our test results would not be skewed by this factor and would more accurately reflect what the participant truly feels about navigating the site, instead of capturing opinions about navigating three different versions of the prototype. ||


 * ====Edeline====

|| **Testing procedure** The user test was documented by note-taking. We had a criterion that all three of us must be present during the test, to allow for two note-takers and one person conducting the test. We first introduced ourselves and got acquainted with the person so they would feel comfortable. As we had already given them a brief summary of what the project is about and what they would be asked to do beforehand (i.e. during first contact), they were asked upon arrival to sign the consent form; then, we proceeded to the test. One of us read the project brief aloud to the participant, giving them some background on what the system is about, the purpose of and instructions for the tests, as well as their rights and roles as participants. Our test was set up in three parts, each geared toward one of our system's intended audiences (i.e. curators, casual viewers, students). We started with what we evaluated as the easiest test, as a way of getting the user familiarized with the system, and ended with what we believed was the most challenging one. Each written task was initially read aloud to the participant, and then given to them as a copy should they need to refer back to it for details. The participant was then allowed to attempt the task. Once one part of the test was done, the next task was read aloud by the next group member, and the same procedure was repeated until all three parts were completed. Finally, the participants were asked about their overall impression and experience of the system.

//What actually occurred during the testing:// We weren't able to get our specified audience, but managed to find something close:
 * 1) Art & Art History major for Curator
 * 2) Illustrator for Student
 * 3) Design student for Casual Viewer (due to time constraints)

The participants found it hard to voice their thoughts out loud. They started out able to do this, but as the test progressed it became harder to multitask while they were busy trying to figure the system out.

//Expectations met:// Generally, I found the test successful. The participants were comfortable with us and the process, and they were able to give us useful feedback on how we could improve the system, as well as on things they appreciated or thought worked well. There were a couple of times when they were unable to complete a task, did not know whether they had completed a task, or tried to complete a task but the "faked" nature of the prototype led to a misdirected result. This probably made the test even more challenging.

//Challenges / surprises:// Each of us developed our tests (i.e. the part of the system concerning our test) individually. This caused a lot of inconsistencies in the system, which sometimes hindered the participant from successfully accomplishing the task.

Here is a summary of my notes from the user tests.
 * User suggestion: For the search field, hitting Enter should take them to the results page (in addition to the Go button functionality). Also, the search area should be clear, not translucent.
 * User suggestion: Perhaps it needs some sort of border so it would feel more like an object.
 * User suggestion: An indicator on hover that they can save the entire timeline might make it more noticeable.
 * User suggestion: Panning of the homepage to display more images (already planned for our concept but left out of the prototype).
 * User suggestion: An "Add to Comparison Panel" button instead might make the process easier. Clicking might require less effort than dragging, hence be easier to use.
 * **Findings**
 * The wording of the tasks was a bit confusing. Users were unsure how they would "save" an image: whether they would do it on the site or onto their local computers (i.e. right click, save as).
 * The search bar is the first thing they go to. Autocomplete was appreciated by all users.
 * One of them appreciated that the navigation is all in one corner. ("Only one corner to worry about as opposed to spread up top.")
 * The contextualizer and related artworks/artists were appreciated by all users.
 * The filtering system of the categories panel is not clear. They only clicked on one link, and hit search right away.
 * Drag and drop is not intuitive enough.
 * The compare bar is not obvious enough. It felt a bit disconnected from the system, and it took the users a while to find it.
 * Comparison mode seems too hidden.
 * Dragging the timeline to the compare bar is confusing; it lacks visual cues.
 * It is not clear that the images on the homepage are clickable.
 * User suggestion: It would be nice if certain elements of an artwork (e.g. typography on an image) were tagged.
 * Hyperlink to a registration page, instead of just saying "Hello Guest!"
 * The homepage might be a bit too much: the navigation tends to get lost. Perhaps enable dimming the background (e.g. a Lightbox effect). Some users commented that they really liked the popping-out effect and how it exposes the work.
 * Results page: images are unclickable. (This was expected, as images need to be unclickable in order to be draggable.)

Overall, the participants really liked the design, especially since one of them (the Art & Art History major) had used the old CCCA website before and was able to compare the experience.

//How did you find this process:// Based on the videos we've seen in class, I thought user testing would feel a bit awkward. However, I was quite surprised (in a good way) that it was not as awkward as I thought it would be. The participants seemed comfortable as the test progressed, and their participation definitely helped us pinpoint details of our system that were overlooked, assumed to be effective but proved difficult to use, or needed to be improved or changed. They were also generous enough to give specific suggestions as to how they expect the website to function.
 * **Reflection**

//Learning outcomes:// Consistency is important even at the prototyping stage. Although the participants were carefully selected with different backgrounds, the results were fairly consistent (e.g. drag and drop was confusing for all). The only difference between the three was the amount of time it took them to complete the test. (The designer, for example, took only 15-20 minutes as opposed to 30.) The problems were fairly similar across all three participants and all three tests.

//Would you change anything next time://
 * 1) The wording of the tasks. Perhaps we should have done a pilot test prior to the actual user testing.
 * 2) Consistency of the system by integrating all 3 together as one unit instead of three.
 * 3) Have them read the task out loud, instead of us doing it. ||

Testing Observation Notes


New Requirements
 * **Issue #** || **Issue Priority** || **Issue** || **Recommendation** ||
 * 1 || high || Drag-drop is confusing, esp. timeline. || Add more visual cues. (Written description of that function on hover, border around the entire timeline) ||
 * 2 || medium || Hitting "Enter" key doesn't work in search field. || Fix code to make it work. ||
 * 3 || low || Navigation is overshadowed/needs more visual impact. || Still keep it on corner, but have it entirely on white background. ||
 * 4 || medium || Compare bar is not obvious enough. || Make it bigger; have an illustrated description of what users can do on the site. Have it open all the time? or at first visit? ||
 * 5 || low || Rotating set of images on homepage. || Add 2-3 sets of "homepage images" to system. Or fix code to have the images randomize / pan to other images. ||
 * 6 || high || Images on the results page are not clickable. || Figure out a way to make them clickable even with the drag/drop code. Otherwise, consider adding an "Add to Comparison Panel" button; clicking might require less effort than dragging, hence easier to use. ||
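
Issues #2 and #6 in the table above are both small scripting changes. Below is a minimal vanilla-JavaScript sketch of how they might work; the element objects and the `panel.add()` API are illustrative assumptions, not the prototype's actual markup or code.

```javascript
// Hypothetical sketch only: none of these names come from the
// actual CCCA prototype.

// Issue #2: make the Enter key trigger the same search as the Go
// button, by reusing the button's existing click handler.
function bindEnterToSearch(field, goButton) {
  field.addEventListener('keydown', function (event) {
    if (event.key === 'Enter') {
      event.preventDefault();   // suppress any default form submission
      goButton.click();         // reuse whatever Go already does
    }
  });
}

// Issue #6: an explicit "Add to Comparison Panel" button as an
// alternative to drag and drop; `panel` is assumed to expose add().
function bindAddToComparison(button, artwork, panel) {
  button.addEventListener('click', function () {
    panel.add(artwork);         // one click instead of a drag gesture
  });
}
```

Both functions would be wired up on page load with whatever element ids the prototype actually uses, e.g. `bindEnterToSearch(document.getElementById('search-field'), document.getElementById('search-go'))` (ids hypothetical).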