COMP2671 Bias in Artificial Intelligence Coursework Assignment

In this coursework, you will analyse a human-centric dataset and develop a fair machine learning system to detect, reduce and ultimately mitigate the different types of bias that can appear in the algorithm's final outcome.

This coursework involves two parts: (1) a 3-page project report and (2) a code implementation, discussed in the following two sections respectively. The submission deadline is May 5th, 2022. You will submit all the deliverables in a single compressed file (preferably in .zip format). If you have any queries regarding this coursework, please do not hesitate to contact me: ehsan.toreini@durham.ac.uk

In this project, you are responsible for designing an automated system to help the HR department of an organisation, ACME. This system shortlists applicants for interview and proposes a decision on whether or not a specific candidate should be given an offer. Typically, such a system is a classifier trained on the historic data accumulated from the organisation's previous hiring decisions. As the ML engineer, you are given the dataset, and you are now responsible for designing an unbiased system (unlike the one Amazon built in 2016, which led to a publicity scandal and was eventually scrapped).
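
For orientation, a minimal sketch of such a baseline classifier is given below. It assumes the recruitment.xls file and the column names described in the dataset section, and its preprocessing is deliberately naive; a real pipeline would handle missing values and categorical encoding far more carefully.

```python
# Minimal baseline sketch, not a complete solution: a classifier trained on
# the historic offer decisions. Assumes recruitment.xls (described below)
# sits next to the notebook; reading legacy .xls files needs the xlrd package.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_excel("recruitment.xls")

# AcceptNY and JoinYN are decided *after* the offer, so they would leak the
# label; drop them together with the target itself.
X = df.drop(columns=["OfferNY", "AcceptNY", "JoinYN"])
X = X.apply(lambda col: pd.factorize(col)[0])  # crude integer encoding
y = pd.factorize(df["OfferNY"])[0]             # binary target, 0/1

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```

Note that this naive baseline uses Gender and BAMEyn directly as features, so it will reproduce whatever historical bias the data contains; detecting and mitigating exactly that is the point of the coursework.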

Please note that the implementation and outcome of each task should be clearly separated in your submission (in both the project report and the source code). You will submit your implementation in a Jupyter notebook (.ipynb format), so clearly segment your notebook and make sure it is in a presentable format. The code should also be sufficiently commented and, obviously, should be your own implementation. Note: you cannot use any “fair AI” Python package in your implementation, e.g. IBM AIF360 or similar packages introduced in the class; you may, however, compare your results with such systems offline to make sure your implementation works correctly.
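
To illustrate what "your own implementation" means in practice, here is a hedged sketch of one standard group-fairness metric, statistical parity difference, written from scratch with NumPy; the function name and interface here are ours, not taken from any fairness package.

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """P(y_hat = 1 | protected) - P(y_hat = 1 | not protected).

    y_pred    : array-like of binary predictions (0/1)
    protected : array-like of booleans, True for protected-group members
    Zero means both groups receive positive outcomes at the same rate;
    the sign shows which group is favoured.
    """
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected, dtype=bool)
    return y_pred[protected].mean() - y_pred[~protected].mean()

# Toy check: offers skewed towards the unprotected group -> negative value
print(statistical_parity_difference([1, 0, 1, 1], [True, True, False, False]))  # -0.5
```

The same pattern, plain NumPy/pandas followed by an offline cross-check against the equivalent AIF360 metric, applies to any other metric or mitigation method you implement.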

You should submit the answers to the questions proposed here in a separate PDF file in your submission. The style of the analysis should be technical rather than verbose, and understandable by someone with a good knowledge of bias mitigation techniques. Be concise and straight to the point. Make sure your answers to these questions do not exceed 3 A4 pages, including citations.

Download the dataset from the Dataset folder on Blackboard; the description of the dataset can also be found in the same folder. Read the relevant documentation and answer the following tasks in your project report document (in the document, clearly specify the answers for each task). Include the implementation tasks in your Jupyter notebook. The data set that we use is recruitment.xls. The applicant data set includes the following information within nine variables:

As the variable and value labels indicate, the data set records the gender (‘Gender’ – variable 2) of each person that sent in an application for the graduate job, as well as whether or not they were Black, Asian or Minority Ethnic (‘BAMEyn’ – variable 3). Importantly, the ‘ShortlistedNY’ variable indicates whether, after an initial review of their application, they were considered to be an appropriate candidate for interview (in other words, considered potentially employable). The ‘Interviewed’ variable indicates whether they were interviewed or not, and the ‘FemaleONPanel’ variable indicates whether a female interviewer was included on the interview panel. A key variable here is whether the applicant was offered a job or not (‘OfferNY’), and the ‘AcceptNY’ variable indicates whether they accepted the offer. Finally, the ‘JoinYN’ variable indicates whether the applicant joined the organization.
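
As a first sanity check on these variables, here is a short sketch, assuming the value labels let pandas read the columns directly, that compares offer outcomes across the two protected attributes:

```python
import pandas as pd

df = pd.read_excel("recruitment.xls")  # legacy .xls files need the xlrd package

# Raw outcome rates per group; large gaps between groups are the first
# signal of potential bias worth investigating formally.
for attr in ["Gender", "BAMEyn"]:
    rates = df.groupby(attr)["OfferNY"].value_counts(normalize=True)
    print(f"\nOffer outcome rates by {attr}:\n{rates}")
```

Using value_counts(normalize=True) avoids assuming how OfferNY is coded (0/1 versus ‘Yes’/‘No’); check the value labels in the dataset documentation before mapping it to a binary target.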