Ekoch/fix lint #37

Closed
wants to merge 12 commits into from
34 changes: 34 additions & 0 deletions .github/workflows/push.yml
@@ -0,0 +1,34 @@
name: Python Package using Conda

on: [push]

jobs:
  build-linux:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 5

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.10
        uses: actions/setup-python@v3
        with:
          python-version: '3.10'
      - name: Add conda to system path
        run: |
          # $CONDA is an environment variable pointing to the root of the miniconda directory
          echo $CONDA/bin >> $GITHUB_PATH
      - name: Install dependencies
        run: |
          conda update conda && conda env update --file environment.yaml --name census-classifier
      - name: Lint with flake8
        run: |
          conda install flake8
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          conda install pytest
          pytest
1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
*/__pycache__/*
33 changes: 33 additions & 0 deletions .ipynb_checkpoints/EDA-checkpoint.ipynb
@@ -0,0 +1,33 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c48ecd25-69ee-4dbe-901f-94e81dfe072d",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
1 change: 0 additions & 1 deletion CODEOWNERS

This file was deleted.

46 changes: 0 additions & 46 deletions LICENSE.txt

This file was deleted.

27 changes: 27 additions & 0 deletions Makefile
@@ -0,0 +1,27 @@

createconda:
	conda create -n census-classifier "python=3.10" scikit-learn pandas numpy pytest jupyter jupyterlab fastapi uvicorn -c conda-forge

activate:
	@echo "Run 'conda activate census-classifier' to activate the environment."

deactivate:
	@echo "Run 'conda deactivate' to deactivate the environment."

install:
	./scripts/run_in_conda.sh census-classifier "conda install -y flake8 pylint pytest pytest-xdist autopep8 black isort"

lint:
	./scripts/run_in_conda.sh census-classifier "flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics && \
	flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics && \
	autopep8 --in-place --aggressive --aggressive -r ./src && \
	pylint ./src"

autolint:
	./scripts/run_in_conda.sh census-classifier "autopep8 --in-place --aggressive --aggressive --recursive . && isort . && black ."

test:
	./scripts/run_in_conda.sh census-classifier "pytest -n 4"

train:
	./scripts/run_in_conda.sh census-classifier "python3 src/train_model.py"

score:
	./scripts/run_in_conda.sh census-classifier "python3 src/score_model.py"
76 changes: 54 additions & 22 deletions README.md
@@ -3,41 +3,73 @@
Working in a command line environment is recommended for ease of use with git and dvc.
# Environment Set up
* Download and install conda if you don’t have it already.
* Use the supplied requirements file to create a new environment, or
* conda create -n [envname] "python=3.8" scikit-learn dvc pandas numpy pytest jupyter jupyterlab fastapi uvicorn -c conda-forge
* Install git either through conda (`conda install git`) or through your CLI, e.g. `sudo apt-get install git`.

## Repositories

* Create a directory for the project and initialize Git and DVC.
* As you work on the code, continually commit changes. Trained models you want to keep must be committed to DVC.
* Connect your local Git repository to GitHub.

## Set up S3

* In your CLI environment install the <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html" target="_blank">AWS CLI tool</a>.
* In the navigation bar in the Udacity classroom select **Open AWS Gateway** and then click **Open AWS Console**. You will not need the AWS Access Key ID or Secret Access Key provided here.
* From the Services drop down select S3 and then click Create bucket.
* Give your bucket a name, the rest of the options can remain at their default.

To use your new S3 bucket from the AWS CLI you will need to create an IAM user with the appropriate permissions. The full instructions can be found <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console" target="_blank">here</a>; what follows is a paraphrase:

* Sign in to the IAM console <a href="https://console.aws.amazon.com/iam/" target="_blank">here</a> or from the Services drop down on the upper navigation bar.
* In the left navigation bar select **Users**, then choose **Add user**.
* Give the user a name and select **Programmatic access**.
* In the permissions selector, search for S3 and give the user **AmazonS3FullAccess**.
* Tags are optional and can be skipped.
* After reviewing your choices, click **Create user**.
* Configure your AWS CLI to use the Access key ID and Secret Access key.
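The last step can be done with the AWS CLI's interactive `configure` command; the values below are placeholders, not real credentials:

```
aws configure
# AWS Access Key ID [None]: AKIA...
# AWS Secret Access Key [None]: ...
# Default region name [None]: us-east-1
# Default output format [None]: json
```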

## GitHub Actions

* Setup GitHub Actions on your repository. You can use one of the pre-made GitHub Actions if at a minimum it runs pytest and flake8 on push and requires both to pass without error.
* Make sure you set up the GitHub Action to have the same version of Python as you used in development.
* Add your <a href="https://github.com/marketplace/actions/configure-aws-credentials-action-for-github-actions" target="_blank">AWS credentials to the Action</a>.
* Set up <a href="https://github.com/iterative/setup-dvc" target="_blank">DVC in the action</a> and specify a command to `dvc pull`.
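One possible shape for those last two steps in the workflow, assuming you store the credentials as repository secrets named `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (the region is a placeholder):

```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Set up DVC
  uses: iterative/setup-dvc@v1
- name: Pull data
  run: dvc pull
```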

## Data

* Download census.csv from the data folder in the starter repository.
* Information on the dataset can be found <a href="https://archive.ics.uci.edu/ml/datasets/census+income" target="_blank">here</a>.
* Create a DVC remote pointing to your S3 bucket and commit the data.
* This data is messy, try to open it in pandas and see what you get.
* To clean it, use your favorite text editor to remove all spaces.
* Commit this modified data to DVC under a new name (we often want to keep the raw data untouched while iterating on the cleaned version).
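A minimal sketch of these DVC steps, assuming a bucket named `my-census-bucket` and the cleaned file saved as `data/census_clean.csv` (both names are placeholders):

```
dvc remote add -d s3remote s3://my-census-bucket/dvcstore
dvc add data/census.csv          # raw data
dvc add data/census_clean.csv    # cleaned copy under a new name
git add data/*.dvc .dvc/config
git commit -m "Track raw and cleaned census data with DVC"
dvc push
```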

## Model

* Using the starter code, write a machine learning model that trains on the clean data and saves the model. Complete any function that has been started.
* Write unit tests for at least 3 functions in the model code.
* Write a function that outputs the performance of the model on slices of the data.
* Suggestion: for simplicity, the function can just output the performance on slices of just the categorical features.
* Write a model card using the provided template.
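One way to sketch the slice-performance function (the function and column names here are illustrative, not from the starter code):

```python
import pandas as pd
from sklearn.metrics import fbeta_score, precision_score, recall_score


def compute_slice_metrics(df, feature, y_true, y_pred):
    """Compute precision/recall/F-beta on each value of one categorical feature.

    df      : DataFrame holding the feature columns
    feature : name of the categorical column to slice on
    y_true  : array of true binary labels
    y_pred  : array of predicted binary labels
    """
    results = {}
    for value in df[feature].unique():
        # Boolean mask selecting only the rows belonging to this slice
        mask = (df[feature] == value).to_numpy()
        results[value] = {
            "precision": precision_score(y_true[mask], y_pred[mask], zero_division=1),
            "recall": recall_score(y_true[mask], y_pred[mask], zero_division=1),
            "fbeta": fbeta_score(y_true[mask], y_pred[mask], beta=1, zero_division=1),
        }
    return results
```

Logging one line per slice value (as in `app.log` below) makes regressions on small subgroups easy to spot in CI output.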

## API Creation

* Create a RESTful API using FastAPI; it must implement:
* GET on the root giving a welcome message.
* POST that does model inference.
* Type hinting must be used.
* Use a Pydantic model to ingest the body from POST. This model should contain an example.
* Hint: the data has column names with hyphens, and Python does not allow those as variable names. Do not modify the column names in the CSV; instead use the functionality of FastAPI/Pydantic/etc. to deal with this.
* Write 3 unit tests to test the API (one for the GET and two for POST, one that tests each prediction).
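One way to handle the hyphenated column names is a Pydantic alias (Pydantic v2 syntax assumed; the fields shown are a small illustrative subset of the census schema, not the full feature set):

```python
from pydantic import BaseModel, ConfigDict, Field


class CensusRecord(BaseModel):
    """Request body for the inference POST (illustrative subset of columns)."""

    model_config = ConfigDict(
        populate_by_name=True,  # accept both the alias and the Python name
        json_schema_extra={
            "example": {
                "age": 39,
                "marital-status": "Never-married",
                "hours-per-week": 40,
            }
        },
    )

    age: int
    # Hyphenated CSV headers map onto valid Python names via aliases
    marital_status: str = Field(alias="marital-status")
    hours_per_week: int = Field(alias="hours-per-week")


# The hyphenated keys from the raw data validate through the aliases
record = CensusRecord.model_validate(
    {"age": 39, "marital-status": "Never-married", "hours-per-week": 40}
)
```

FastAPI will render the `example` from the model config in the interactive docs, which covers the "model should contain an example" requirement.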

## API Deployment

* Create a free Heroku account (for the next steps you can either use the web GUI or download the Heroku CLI).
* Create a new app and have it deployed from your GitHub repository.
* Enable automatic deployments that only deploy if your continuous integration passes.
* Hint: think about how paths will differ in your local environment vs. on Heroku.
* Hint: development in Python is fast! But how fast you can iterate slows down if you rely on your CI/CD to fail before fixing an issue. I like to run flake8 locally before I commit changes.
* Set up DVC on Heroku using the instructions contained in the starter directory.
* Set up access to AWS on Heroku, if using the CLI: `heroku config:set AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy`
* Write a script that uses the requests module to do one POST on your live API.
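A minimal sketch of such a script; the URL and the payload keys are placeholders for your own app name and feature set:

```python
import json

import requests

# Hypothetical deployed endpoint; substitute your Heroku app name and route
API_URL = "https://your-app-name.herokuapp.com/predict"

# Keys mirror the raw CSV headers, hyphens included (illustrative subset)
payload = {
    "age": 39,
    "workclass": "State-gov",
    "marital-status": "Never-married",
    "hours-per-week": 40,
}

if __name__ == "__main__":
    # One POST against the live API; print status code and model output
    response = requests.post(API_URL, data=json.dumps(payload))
    print(response.status_code, response.json())
```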
11 changes: 11 additions & 0 deletions app.log
@@ -0,0 +1,11 @@
2024-05-30 23:30:54,502 - root - INFO - Loading census data...
2024-05-30 23:30:54,535 - root - INFO - Census data loaded successfully.
2024-05-30 23:30:54,535 - root - INFO - Splitting data into train and test sets...
2024-05-30 23:30:54,541 - root - INFO - Data split into train and test sets successfully.
2024-05-30 23:30:54,602 - root - INFO - Processing test data...
2024-05-30 23:30:54,614 - root - INFO - Test data processed successfully.
2024-05-30 23:30:54,614 - root - INFO - Running model inference...
2024-05-30 23:30:54,732 - root - INFO - Model inference completed successfully.
2024-05-30 23:30:54,732 - root - INFO - Computing model metrics...
2024-05-30 23:30:54,736 - root - INFO - Model metrics - Precision: 0.8940092165898618, Recall: 0.7376425855513308, F-beta: 0.8083333333333333
2024-05-30 23:30:54,736 - root - INFO - Computing model performance on slices...