ASSIGNMENT 2: EVALUATION PROJECT AND REPORT


GENERAL INFORMATION

Submission: Canvas Turnitin submission

Assignment weighting: 40% of overall module marks

Word limit: 2500 words (going up to 10% over the limit is fine; tables, figure captions, and appendices do not count towards the word limit). Note that this is a maximum: it is absolutely fine to write significantly fewer words, e.g. 1250.

INSTRUCTIONS

For this assignment, you should carry out an evaluation of an existing app (mobile or desktop app, etc.). This should be a task-based app, i.e. in which a user needs to perform one long task (up to 30 minutes) or a set of short tasks (that may take up to 30 minutes in total). Consider looking into open-source software alternatives (e.g. https://www.osalt.com/).

To evaluate the app, you must use an expert evaluation technique (i.e. Nielsen’s heuristics). The evaluation will involve collecting data, drawing up suggestions for design improvements based on your findings, and submitting a report which summarises your research process (i.e. how you conducted the expert evaluation) and results (i.e. what problems you found and how you propose to fix them).

You must conduct this evaluation with yourself and two other people (who must be your classmates) acting as experts.

The overall objective of the evaluation is to come up with suggestions on how to improve/enhance the usability or experience of the chosen app. You do not need to implement these improvements. You need to identify them in the expert evaluation and describe them clearly in your report. For each issue identified, you need to explain why it is a problem (i.e. justifying it based on the violation of Nielsen’s heuristics) and state what needs to be changed in the app to solve it.

CHOICE OF APP

You can choose any app that you like, with the following constraints:

1. The experts whom you invite to test the app of your choice should not be required to create an account with the app (or provide any personal information) just for the purpose of this evaluation (unless they want to do so), and they should not be required to pay anything, create a subscription, or provide any credit card/bank details. You can satisfy this requirement by creating a dummy account for your invited experts.

2. The app should be single user (or at least have a single user mode).

3. It should be possible to use the app intensively over a short period of time (e.g. 30 minutes), rather than intermittently over a long period, so that an evaluator can observe its use (i.e. don't choose an app that would require someone to use it daily for several days or weeks).

4. When completing the tasks, the app should not require personal information from the users (e.g. if you choose a journal app, you can ask users to navigate to various parts of the app and try different functionalities, but do not ask them to, say, write a personal account of something that happened to them in the past).

The app can be designed for any platform, including desktop, laptop, tablet or smartphone. However, in choosing an appropriate app, please also consider the following points:

1. General hardware/operating system constraints. Since you will be testing with your fellow students, try to choose an app which would be usable/testable by most people, rather than one which requires, say, a particular combination of smartphone model and/or operating system. You can ignore this if an invited expert will be using your device.

2. If you will be communicating with the invited experts remotely, think about the practicalities of doing so (e.g. it would be quite straightforward to ask an expert to test a desktop/laptop-based app while using Zoom and sharing their screen; other configurations may require a bit more thought).

If you are in any doubt whether your chosen app is suitable, please discuss it with your seminar tutor.

PROJECT TASKS AND METHOD

Key project tasks:

• Review three apps which are similar to the one you chose to evaluate, reflecting on their pros and cons compared to the app of your choice, and explain why you think your choice will reveal some usability or experience issues.

• Explain how you used the expert evaluation technique in your project.

• Carry out the evaluation and collect data.

• Analyse the data collected and highlight the key implications for redesign (suggest improvements for the app, i.e. what would need to be changed in a future edition of this app so that it becomes easier to use and/or delivers a better experience?). You should describe the improvement suggestions in the form of text, but you may also find that adding figures to illustrate your proposed improvements makes them clearer to the reader. You do not need to test these improvement suggestions (i.e. you are not required to prototype and test them).

• Write your report based on the above points complying with the word limit (see “General Information” above).
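As an illustration only (not required by the brief), the analysis step can be as simple as tallying the issues each expert logs against Nielsen's heuristics so you can see which heuristics are violated most often and prioritise your redesign suggestions. A minimal sketch in Python, using entirely hypothetical example issues:

```python
from collections import Counter

# Hypothetical issue log from the three experts:
# (heuristic violated, severity 0-4, short description).
issues = [
    ("Visibility of system status", 3, "No progress indicator during export"),
    ("User control and freedom", 2, "No undo after deleting an entry"),
    ("Visibility of system status", 1, "Sync state unclear on start-up"),
]

# Count how often each heuristic is violated, most frequent first.
by_heuristic = Counter(heuristic for heuristic, _, _ in issues)
for heuristic, count in by_heuristic.most_common():
    print(f"{heuristic}: {count} issue(s)")
```

A table built from counts like these (heuristic, number of issues, highest severity) is one straightforward way to structure the Findings section.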

PROJECT REPORT

Reports must include:

• Introduction & Background (~25% of the report): a mini app review, covering a minimum of three apps which are similar to the one you chose to evaluate. In this section, explain why you think the app of your choice will reveal some usability or experience issues.

• Method (~15% of the report): procedure you have followed, what data was collected and how, how it was analysed.

• Findings & Discussion (~50% of the report): presentation of the results and a set of detailed design improvement suggestions, including a discussion of how the results inform a better design of the chosen app.

• Conclusions & References (~10% of the report): summarise your evaluation, describe limitations of the current method, and present future work directions. Also, please include a correctly formatted full list of references. You are welcome to use a referencing style of your choice, but please make sure it remains consistent throughout the report. You should reference all the apps that you describe, as well as the guidance on Nielsen's heuristics.

MARKING CRITERIA

70-100% – Excellent report and evaluation. The explanation of why the chosen app may reveal some issues is excellent, very clear and backed up by the background research. The method is used effectively. Findings are presented clearly, and a detailed analysis has been carried out. Some highly relevant and appropriate implications for re-design are discussed, and limitations are discussed in a way which shows reflection. References are used effectively to back up the method used and the background to the project.

60-69% – Very good report and evaluation. The explanation of why the chosen app may reveal some issues is generally clear. The method is used appropriately, and there is some rigour and detail in the way it is used. Some relevant findings are presented, and an appropriate analysis has been carried out on the data collected. Some convincing re-design implications are drawn from the analysis, and there is a discussion of the limitations of the work carried out. Sources are correctly referenced.

50-59% – Fair report and evaluation. There is some attempt to explain why the chosen app may reveal some issues, but this may be unclear. The evaluation method may not have been used correctly or in sufficient depth. Some findings are presented, but these may lack detail. There is some effort to analyse the data collected, but there are problems with the way this is described. Re-design implications are discussed, but they may be too brief or may not match the findings and data presented. There is little reflection on the limitations of the project. There may be missing references.

40-49% – Poor report and evaluation. There is little discussion on why the chosen app may reveal some issues, or the discussion is very unclear. There are major problems with the use of the evaluation method. There are some findings presented, but these do not match the explanation of the method or are very brief. There may be no evidence of analysis, or a very poor analysis which is not well explained. Re-design implications are either missing or not backed up by any of the findings or analysis. There are major problems with the referencing.

30-39% – Very poor report and evaluation. Key elements of the project are missing or only covered very briefly. There is no convincing overview of what the evaluation set out to achieve, and the method used is not presented in any detail. There is little to no analysis, and findings are unconvincing, or not discussed. Re-design implications are not presented in any meaningful way. There are major problems with the referencing.

Below 30% – Report has very little content of relevance.