
Project Grid Reflection

William Jackson
Northeastern University
Jackson.W@husky.neu.edu

Louis Chin
Northeastern University
chin.l@husky.neu.edu

Nicolas Munoz
Northeastern University
Munoz.ni@husky.neu.edu

ABSTRACT

In this paper we discuss the design, implementation, and evaluation of a user interface meant to allow improved monitoring and analysis of household power consumption.

PROBLEM

Electricity represents a nontrivial portion of the average American household's expenditures. Good information on how and where this power consumption occurs is, however, hard to find. While bills from the power company typically display some high-level statistics, nothing granular enough to map to individual rooms or outlets is provided. This lack of low-level information is problematic for consumers in two key ways. First, an inability to pinpoint inefficiencies (e.g. a child who leaves the television in their room running through the night) hampers the consumer's ability to cut waste in an informed fashion. Choices made with limited knowledge of the problem at hand are, after all, more likely to be bad ones, and significant effort can be wasted on changes that do not ultimately address anything. Second, the current difficulty of self-monitoring power consumption makes auditing a provider's claims problematic. Making this information both automatically tracked and available from the user's personal computer eases this task and facilitates the forging of a less one-sided relationship with the local power company.

To rectify these two issues, we created Grid. Grid is an application that consists of outlet-level overlays in the home, each uniquely identified and uploading data about power consumption through that outlet to a central source. The front end we constructed allows users to create a rough graphical 'blueprint' of their home or other small space, into which the relevant outlets are placed. Both real-time and historical data are tracked; the former is accessible via a view based on the aforementioned blueprint, and the latter is reached through the provided logging and report-generation functionality.

USERS
We envision the average Grid user as a home or small business owner, aged thirty to forty-five, of any gender. Some technological proficiency is assumed; we do not think a user with no prior computer knowledge or tech savvy would be comfortable setting up the hardware associated with the project or using the somewhat complex front end. Users are likely both reasonably dedicated environmentalists and frugal. We further assume that our users are middle to upper-middle class. The main point of differentiation between users will likely fall along the thrifty/environmentally conscious axis, with all users evidencing both characteristics in varying degrees. We imagine that certain members of our base will be motivated first by a desire to save money, with the associated environmental benefit being a happy extra. Other (and we think the majority of) users will be driven first by a desire to live as 'green' a lifestyle as possible and second by a desire to save money. This makes sense when you consider that the return on investment of our application will sometimes be very small or nonexistent, given that anyone who cares enough to wire up their entire home to monitor power data has likely already trimmed most waste from their lifestyle. The simply thrifty user will, realizing this, oftentimes scoff, whereas the more ideologically motivated environmentalist seems more likely to proceed in spite of this knowledge.
USER TASKS

To effectively use our application, we determined that users must be capable of executing the following tasks:

- Build a 'blueprint' of their space. The implementation must be robust enough to support the notion of multiple floors, and must also provide mechanisms for users to name rooms and outlets.
- View and consume real-time power consumption information, organized at the room and outlet levels.
- Configure and generate reports showing power consumption over a specified set of rooms/outlets for a specified period of time.
- Access and view unprocessed log information about their historical power consumption.
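To make the blueprint-building task concrete, the space model it implies could be sketched as follows. This is a minimal illustration in Python; the class and field names (Outlet, Room, Floor, Blueprint) are our own invention and not part of the actual implementation.

```python
# Hypothetical sketch of the blueprint data model described above:
# a space is floors containing named rooms containing named outlets,
# each outlet carrying the unique id reported by its hardware overlay.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Outlet:
    outlet_id: str  # unique identifier from the hardware overlay
    name: str       # user-assigned label, e.g. "TV outlet"


@dataclass
class Room:
    name: str
    outlets: List[Outlet] = field(default_factory=list)


@dataclass
class Floor:
    number: int
    rooms: List[Room] = field(default_factory=list)


@dataclass
class Blueprint:
    floors: List[Floor] = field(default_factory=list)

    def find_outlet(self, outlet_id: str) -> Optional[Outlet]:
        """Locate an outlet anywhere in the space by its unique id."""
        for floor in self.floors:
            for room in floor.rooms:
                for outlet in room.outlets:
                    if outlet.outlet_id == outlet_id:
                        return outlet
        return None
```

A structure like this is enough to support both the real-time home view (click an outlet, look it up, show its output) and the room/floor groupings used by reports.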

DESIGN

The application is separated into a home screen and three other sections: Reports, Logs, and Account. The home screen contains a room view of the user's current blueprint of their space, which is meant to be built when they first use the application. The blueprint allows the user to click on an outlet, room, or floor and see the current output of the selected object. The home screen also includes tabs that direct the user to the three sections mentioned earlier.

The Reports section allows the user to create a report from a built-in template or to create and save their own. Built-in reports let the user find critical information on specific rooms and floors. If the default templates omit something the user needs, they can create their own templates and save them for later use. To create a report, the user clicks the generate report button. A page to input the report's title and description pops up. Once those are filled in, a button on the bottom right labeled 'Continue to Step 2' advances the flow. Step 2 gives the user options to choose the granularity level: outlets, rooms, or floors. After selecting multiple objects of that level and a time span, the user presses the 'Continue to Step 3' button on the bottom right; if something needs to be changed, a back button on the bottom left returns the user to the previous step. The final step involves confirming the settings chosen; as on the previous page, there is an option to go back. Unlike the other pages, the bottom right now has two options: save template and generate report. Clicking 'save template' saves the template to the computer, and clicking 'generate report' generates the report with the options previously selected. Once the report is generated, it can be saved, and the user is given the option to return to the Templates page.
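The three-step report flow just described is essentially a small wizard state machine: each 'Continue' stores that page's inputs and advances, each 'Back' retreats, and generation is only possible from the confirmation step. A hedged sketch in Python (the step and field names are assumptions; the real prototype was built in Axure, not code):

```python
# Illustrative sketch of the three-step report wizard described above.
# Step names and stored fields are our own naming, not the prototype's.

STEPS = ["title_and_description", "configuration", "confirmation"]


class ReportWizard:
    def __init__(self):
        self.step = 0   # index into STEPS
        self.data = {}  # inputs collected so far, keyed by step name

    def next(self, **fields):
        """Store this step's inputs and advance (the 'Continue' button)."""
        self.data[STEPS[self.step]] = fields
        if self.step < len(STEPS) - 1:
            self.step += 1

    def back(self):
        """Return to the previous step (the 'Back' button)."""
        if self.step > 0:
            self.step -= 1

    def generate(self):
        """From the confirmation step, produce the chosen settings."""
        assert STEPS[self.step] == "confirmation"
        return self.data
```

Going back and re-submitting a step simply overwrites that step's stored inputs, which matches the edit-then-confirm behavior described above.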
The Logs section shows the user a graphical output over a time range that the user provides just below the navigation bar. The log view lets the user add objects to the chart to see how much power each section or object is using during the specified time frame; objects can be added and removed according to the user's preferences. The Account section shows the username and password. Users are able to save their account so that they can retrieve their saved settings if an error were to occur on their computer, or if they wished to use the application elsewhere. The Account section also offers the option of resetting the built space for the user's blueprint representation.
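The Logs view's core behavior is a filter: show only the readings for the objects the user has added to the chart, within the time range they supplied. A minimal sketch, assuming a hypothetical per-reading record format of our own design (the prototype itself had no real data pipeline):

```python
# Hypothetical sketch of the Logs view's filtering described above.
# Each reading is a dict; the field names ("object", "time", "watts")
# are illustrative assumptions, not taken from the actual system.

def filter_log(readings, selected_objects, start, end):
    """Return readings for the chosen objects within [start, end]."""
    return [
        r for r in readings
        if r["object"] in selected_objects and start <= r["time"] <= end
    ]


def total_watts(readings):
    """Sum power across a set of filtered readings."""
    return sum(r["watts"] for r in readings)
```

Adding or removing an object from the chart then amounts to updating `selected_objects` and re-filtering.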
DESIGN REVISIONS

The application has gone through many revisions throughout the development process, driven by feedback from heuristic evaluations and user testing, and by limitations of the prototyping tool we were using. Our three main tests for users during paper prototyping were: home representation creation, report generation, and log viewing. When creating our application through paper prototyping, we were able to develop a way to build the blueprint space representation with a drag-and-drop and resize interface. The testers of the paper prototype found the home creation very easy and intuitive to use. However, we were unable to build this easily with our prototyping tool, so we had to create a static page that let users understand this part of the application while testing the rest of it.

Figure 1: Build-your-space representation, later developed as a static page

For the reports testing, testers were asked to generate a report and to create a new template. In the paper prototype it was easy for them to generate a report; however, it was more difficult to create a new template, because it was hard for testers to understand that line items were to be checked off, as were the drop-down menus associated with some of the options. Error messages should also have been included when a form was not fully filled in. The options were a little too cluttered for testers to comprehend all at once, which later led us to space the options out across multiple pages, as shown in figure 2.

Figure 2: Report Details and Configuration

Figure 3: Log page

The log tests asked testers to add and remove two objects from the chart, and to modify the date range. This was slightly difficult because testers could not see the checkbox for adding objects to the chart; some even tried to drag the sensor onto the chart when clicking the checkbox would have sufficed. Testers were able to change the date easily. Many testers complained that the interface was quite cluttered, and it was later given a cleaner design better suited to users' needs.

After the heuristic evaluations, we received very good feedback from our evaluators. Many evaluators seemed to like navigating through the system. We did not have the option for the evaluator to build their own space; this was corrected after the evaluation by creating a demo that testers could use to exercise the final representation of the application. Evaluators wanted to know which items lined up with which outlets, and this was added after the first iteration. Documentation was also a problem for many users because there was no help button. Many people liked the simplicity, but would have liked to see more color in our application. All of the feedback we received was taken into careful consideration and incorporated into the application where possible. As seen in figures 4-6, we kept adding to our application while maintaining the simplicity that many of the users enjoyed.

Figure 6: Final version of home view

IMPLEMENTATION

Figure 4: Home view through paper prototyping

Our front-end prototype was created using a drag-and-drop prototyping tool called Axure, which offers graphical support for assembling application 'pages' from elements similar to those offered by HTML. While our program was envisioned as a desktop application, the page/form-driven website design style supported by Axure largely matched our needs. The final application we created was quite simple: multiple flat pages tied together with simple anchors. Where content needed to appear and disappear in an AJAX-ish fashion without a page reload, we utilized the Axure-provided 'dynamic panel' widget. These dynamic panels can be given multiple states with varying displays, and by tying state changes to user inputs we could, for example, make the information frame associated with Outlet 1 on the home view page appear and disappear. The pagination of the report creation process was accomplished in the same fashion. Axure did not appear to support the notion of page partials or content inclusion across multiple pages. The Command-C-V pattern was therefore heavily utilized for elements that appeared in multiple places, meaning that changes to the header and logo were tedious, if not difficult.
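The dynamic-panel behavior we leaned on amounts to a widget holding a fixed set of named states, with user events switching between them. As a rough code analogy only (Axure itself is GUI-driven; the state names here are invented for illustration):

```python
# Rough code analogy for the 'dynamic panel' pattern described above:
# a panel holds several named states and events switch between them.

class DynamicPanel:
    def __init__(self, states, initial):
        self.states = states   # e.g. ["hidden", "visible"]
        self.current = initial

    def set_state(self, state):
        """Switch the panel's display state (tied to a user event)."""
        if state not in self.states:
            raise ValueError(f"unknown state: {state}")
        self.current = state


# e.g. clicking Outlet 1 on the home view flips its info frame visible:
outlet_info = DynamicPanel(["hidden", "visible"], "hidden")
outlet_info.set_state("visible")
```

The report wizard's pagination is the same idea with one state per step.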

Figure 5: Home view from first creation of application

IMPLEMENTATION PROBLEMS

The relative clunkiness of Axure's support for GUI-driven 'programming' led us to shy away from its scripting capabilities. Our prototype was, therefore, not terribly dynamic. The home view page, for example, should theoretically feature values that update themselves in real time; it does not. Initial attempts at creating mocked-up reports that respected form inputs proved to be tenuous constructs with poor performance, and were scrapped in favor of static final outputs that did not respond to user commands. This forced us to give very specific instruction sets for creating reports. Logs suffered much the same; an inability to create a logs page that responded intelligently to user input essentially forced us to mock out by hand a single set of steps leading to an equally hand-mocked final result. Issues with drag-and-drop support also necessitated a bit of a hack when testing users on their ability to build a blueprint of their space. Given that we could not actually create this functionality with the tool, we decided instead to show users a blueprint that was already completed, giving them the chance to see roughly what the process looked like before clicking the 'done' button. This also gave us the chance to display a success message that provided valuable guidance about what the rest of the application did and how to access it.
EVALUATION METHODS

We did three sets of user testing to improve our interface throughout the design process. Our first test was performed on multiple individuals using a paper prototype. Our second was the set of heuristic evaluations submitted by our classmates. Our third and final test was user testing on a much larger scale with our nearly finished product. The third test was the most controlled, isolating each tester in an enclosed area of the library for 5-10 minutes. We observed and took notes on the user's actions, followed by a survey regarding parts of our design we thought might be controversial, and then a short interview to gather general feedback about the product. The heuristic evaluations were the most useful, as they were conducted by individuals with an eye for design and knowledge of the requirements of a user-friendly interface.

EVALUATION USER BASE

Although our target users were ideally between the ages of 30 and 50, we were unable to find willing participants in that range and were forced to use students between the ages of 18 and 22. However, we made sure that the users we tested all lived off campus and paid their own electricity bills, to ensure that they were conscious of their energy spending habits. Both men and women were tested. Our testers had a variety of computer experience, and we tested students from an assortment of colleges and majors. We would have liked to test an older demographic, but because we were limited to students around campus, we got the most accurate results by not interviewing students who still lived in dorms and had no idea of their energy expenses.

EVALUATION PROCEDURE

We began both paper prototyping and our final evaluation procedure with a short briefing about who we were, the application's purpose, and what we were trying to achieve with the testing process. We made sure to tell our testers to try to complete the tasks to the best of their ability and not to worry if they made a mistake or could not figure something out, since every action would add to our testing results and make them more accurate. We guided our test users in no way once the briefing was complete and simply took notes thereafter. We assured them that none of their personal information would be handed out, that the test results would be completely anonymous, and that their personal information would be viewed only by us, the testers, in order to conduct an appropriate analysis. We gave each user three tasks to complete on their own, and afterwards we conducted a short survey and interview to get general responses to the application's design.

INCIDENTS

Throughout the user testing process we encountered several difficulties. Users with minimal computer background were overwhelmed by the amount of data on certain screens, such as the logs page. They had issues with the task of adding certain outlets to the chart because at one point the buttons were not aligned correctly with the clickable region, something out of our control; we did our best to correct it between user tests and hinted slightly when users got stuck thinking the feature was not working. During the paper prototyping sessions, we did have a chance to test our notion of drag-and-drop blueprint creation. While the low-fidelity nature of the prototype somewhat lessens the significance of these results, it should be noted that users grasped the concept quite quickly and were limited in completion time only by the need to fiddle with many small pieces of paper. This was our only 'real' data collected on that portion of the application's usability, and that it seems to be acceptable is promising. The entire report creation flow changed significantly in response to data collected from our tests. The version presented to users in initial paper prototypes was originally thought of as a 'simplified' representation of the final product, which was not yet complete. When users struggled tremendously with the approximately two pages of configuration information required, however, it became clear that a change was in order. The amount of detail associated with setting up graphical output on a report was curtailed; in the final version of our prototype users can only select whether or not they want visual output, with the rest of the information about that output inferred from their other selections. Inputs were cleaned up and simplified, and some confusing inlined multi-value forms were removed in favor of drop-down menus or single text fields. Finally, the work on making reports more 'expressive' was simply never done. We initially wanted users to be able to toggle between multiple graph types, each with its own configuration and purpose; this was scrapped in favor of line graphs that always show power consumption over time at the granularity level selected by the user.

USABILITY ISSUES

As has been mentioned, whether our notion of how blueprint creation ought to work is sound remains open, given the lack of data we have on the subject. The solution to this problem is, obviously, more data. Were we able to create a better prototype without the limitations of our current one, we would be able to perform proper testing that put the matter to rest. The current header navigation provided by our application could also be improved with the simple addition of working submenus, a feature that we did not discover until there was not enough time left to implement it properly. Reports, for example, could feature a two-item menu off the Reports section of the header, showing templates and the report creation page.
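The submenu improvement described above is just a nested navigation structure. A tiny sketch, with labels that are our own guesses at what the two Reports items would be called:

```python
# Illustrative nested header navigation with the proposed two-item
# Reports submenu. Section and item labels are assumptions.

NAV = {
    "Home": [],
    "Reports": ["Templates", "Create Report"],
    "Logs": [],
    "Account": [],
}


def submenu(section):
    """Items shown when the user opens a header section's submenu."""
    return NAV.get(section, [])
```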
REFLECTION

The set of features we chose to prototype seems to have been reasonable; disregarding the account management functionality that would have to accompany such a tool eliminated boilerplate work that was both irrelevant to the real task at hand and, given the abundance of easily copied examples in the wild, likely very easy to get 'right'. Both our paper prototyping and final round of user testing went relatively well, and we have no particular comments on either, save that finding more appropriate users would have greatly increased the value of our data. Our largest testing-related regret is that we were in fairly poor shape for the heuristic evaluation, with no styling yet implemented and several embarrassing bugs still present in our prototype. Were we to do this again, we would make the effort to be at the ninety-percent-done point before the start of the heuristic evaluation, to maximize the value of the feedback.

The majority of the pain during this project came from working with Axure. While the tool seems fine for mocking up conventional web pages that do not feature complexity above and beyond, say, a search form or a shopping cart, our application had interactivity requirements (e.g. drag and drop, multi-page forms) and display requirements (graph outputs, the dynamic home view page) that were simply not well supported. Were we to do this project again, we would almost certainly code it from scratch. Scripting turned out to be something our prototype seriously needed to feel fully complete, and what Axure provided towards this end was not enough. This project has also been a valuable exercise in relearning the importance of version control. Work was often lost or confused by passing the entirety of the prototype from one team member to another, with the receiving member expected to correctly merge the relevant files and maintain the relevant changes. This is another way in which coding the prototype would have been superior: the entire source could have been put in a GitHub repository and accessed safely by all without serious issue. Besides these tool-related problems, we were largely happy.
