ITEC 1961 Assessment 3 and 4

Assessment 3 and 4 overarching outline:
The goal of these projects is to implement a full data analysis workflow in Python, combined with SQLite database persistence.
You are asked to select a domain of interest to you; you may re-use some of the datasets from the previous assignment. Research what kinds of data sources are available for your selected domain. Subsequently, you are asked to (1) formulate questions that you would like answered, (2) acquire datasets from at least two different sources (at least one source must be dynamic, i.e. web-scraped or retrieved from a web API), (3) wrangle the data into a usable format and perform EDA, (4) integrate the datasets into one, (5) persist the data into a SQLite relational database with a suitable schema, and (6) use group-by queries, pivot tables and cross-tabulations of the data to answer your research questions, together with a rich set of visualisations.
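As a rough illustration of steps (2)-(3), here is a minimal sketch assuming a hypothetical JSON web API endpoint, hypothetical file and column names, and the requests and pandas libraries:

```python
import requests
import pandas as pd

# (2) Acquire a dynamic source from a web API.
# The URL and response shape are hypothetical placeholders.
API_URL = "https://example.com/api/records"
response = requests.get(API_URL, timeout=30)
response.raise_for_status()
api_df = pd.DataFrame(response.json())  # assumes the API returns a JSON list of records

# (2) Acquire a second, static source from disk (hypothetical file name).
static_df = pd.read_csv("static_dataset.csv")

# (3) Wrangle: normalise column names, coerce types, impute missing values.
# 'value' is a placeholder column name.
api_df.columns = api_df.columns.str.strip().str.lower()
api_df["value"] = pd.to_numeric(api_df["value"], errors="coerce")
api_df["value"] = api_df["value"].fillna(api_df["value"].median())

# (3) EDA: quick structural and statistical overview.
api_df.info()
print(api_df.describe())
```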
Links to various dataset and web API repositories are provided on Stream. The analysis workflow you are asked to perform is illustrated in the diagram below:
– Data Acquisition: static dataset, web API, web scraping, other(?)
– Data Wrangling: transform, clean, impute, user-defined functions, EDA, visualisation
– Data Integration: concatenation, merging
– Data Persistence: SQLite, csv, DB schema
– Data Analysis: group-by, pivot tables, cross-tabulation, visualisation
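For the Data Integration stage, a minimal pandas sketch (frame contents and the key column are hypothetical stand-ins for your own data):

```python
import pandas as pd

# Hypothetical wrangled frames; real column names come from your own sources.
df_2023 = pd.DataFrame({"region": ["A", "B"], "value": [1.2, 3.4]})
df_2024 = pd.DataFrame({"region": ["A", "B"], "value": [1.5, 3.1]})
lookup = pd.DataFrame({"region": ["A", "B"], "population": [100, 200]})

# Concatenation: stack records that share the same schema, tagged by year.
stacked = pd.concat([df_2023, df_2024], keys=[2023, 2024], names=["year", None])
stacked = stacked.reset_index(level="year")

# Merging: join on a shared key to integrate the datasets into one.
integrated = stacked.merge(lookup, on="region", how="left")
print(integrated)
```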
Assessment 4 Requirements:
Your research report must be in Jupyter Notebook format and thus executable and repeatable. Clearly introduce your problem domain, articulate your research questions and provide an executive summary at the beginning.
You must document and explain the reasoning behind the coding steps you are taking, and provide explanations of all your graphs and tables as appropriate. Make sure you label all aspects of your graphs.
The activities listed under the five stages in the workflow diagram above are a guide only. This means that operations such as group-by statements and pivot tables can appear in the ‘Data Wrangling’ phase as part of EDA, and not only in the data analysis phase. Finally, please run your report through an external spell checker.
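For example, a pivot table used during EDA together with a fully labelled graph, as a minimal sketch over hypothetical data:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical tidy data standing in for your wrangled dataset.
df = pd.DataFrame({
    "region": ["A", "A", "B", "B"],
    "year":   [2023, 2024, 2023, 2024],
    "value":  [1.2, 1.5, 3.4, 3.1],
})

# Pivot table: one row per region, one column per year.
pivot = df.pivot_table(index="region", columns="year", values="value", aggfunc="mean")

# Labelled bar chart: title, axis labels and legend all set explicitly.
ax = pivot.plot(kind="bar")
ax.set_title("Mean value by region and year")
ax.set_xlabel("Region")
ax.set_ylabel("Mean value (units)")
ax.legend(title="Year")
plt.tight_layout()
plt.show()
```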

Assessment 3 Specific Requirements:
Once you have completed the above components, your task is to design a database (DB) schema that represents, in normalised form, all the data you have acquired from multiple sources, and to populate it using SQLite, thus achieving full data persistence.

The project requirements are as follows:
– create a separate Jupyter Notebook for these tasks
– create a simple DB schema document that shows the tables (aim for around half a dozen), their attributes and the relationships between them, depicting your design;
– create an image file from the DB schema design document and embed it in your notebook
– describe your DB schema at a high level
– write all the database schema code for creating the necessary tables in your SQLite DB (see the first sketch after this list)
– read in all the data that you prepared in the above project and stored in various file formats (.csv and/or .xlsx), and populate your tables from the notebook
– perform some analysis that requires extracting data from your DB: write at least six queries involving various table joins on your DB; these queries can replicate, or be based on, some of the analysis you performed in the above project (see the second sketch after this list). You may also include some visualisations in the notebook.
– create at least two DB views that encapsulate queries from above, and test them
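
A minimal sketch of the schema-creation and population steps; the table names, column names and file names below are hypothetical placeholders, not part of the assignment:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("project.db")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

# DDL for two of the (roughly half a dozen) normalised tables;
# table and column names are hypothetical placeholders.
conn.executescript("""
CREATE TABLE IF NOT EXISTS region (
    region_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL UNIQUE
);
CREATE TABLE IF NOT EXISTS measurement (
    measurement_id INTEGER PRIMARY KEY,
    region_id      INTEGER NOT NULL REFERENCES region(region_id),
    year           INTEGER NOT NULL,
    value          REAL
);
""")

# Populate the tables from the files prepared in the earlier project
# (file names are hypothetical).
pd.read_csv("regions.csv").to_sql("region", conn, if_exists="append", index=False)
pd.read_excel("measurements.xlsx").to_sql("measurement", conn, if_exists="append", index=False)
conn.commit()
```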
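And a sketch of one of the join queries plus a view that encapsulates it, continuing with the same hypothetical schema:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("project.db")

# One of the (at least six) join queries, read back into pandas for display.
query = """
SELECT r.name AS region, m.year, AVG(m.value) AS mean_value
FROM measurement AS m
JOIN region AS r ON r.region_id = m.region_id
GROUP BY r.name, m.year
ORDER BY r.name, m.year
"""
print(pd.read_sql_query(query, conn))

# A view that encapsulates the query above, followed by a quick test run.
conn.execute("""
CREATE VIEW IF NOT EXISTS v_mean_value_by_region_year AS
SELECT r.name AS region, m.year, AVG(m.value) AS mean_value
FROM measurement AS m
JOIN region AS r ON r.region_id = m.region_id
GROUP BY r.name, m.year
""")
print(pd.read_sql_query("SELECT * FROM v_mean_value_by_region_year", conn))
conn.close()
```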