feat: Migrate sample and sample tests - Batch 2 of 5 #12
leahecole merged 15 commits into sample-final-stage
Conversation
Force-pushed 338b99d to 043ae58
leahecole
left a comment
I didn't go super in depth into each of them, but I looked at the first three and found some common changes that I suspect will apply throughout:
- Avoid using globals; instead, emulate the pattern found in this dialogflow test for passing the variable to the fixture.
- Avoid testing the output of a fixture - keep your test logic in your tests. Instead, in the fixture, if you're worried about an operation failing or not being able to occur, use try/except (e.g., try to delete an endpoint, and on NotFound print that it was already deleted).
- Make sure you do have an assertion based on the output in the test itself.
- Do there need to be as many print statements as there are in the samples, or is there a prettier way to get the user the info they need (either less info, or writing it to a file)? This is the question I feel least strongly about, and I'm inclined to trust you uCAIP folks.
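The first two bullets can be sketched together. This is a minimal, hypothetical illustration (not the actual sample code): a `shared_state` dict stands in for the pytest fixture that replaces module-level globals, `NotFound` stands in for `google.api_core.exceptions.NotFound`, and the delete call is stubbed out with a plain set.

```python
from typing import Dict, Set


class NotFound(Exception):
    """Stands in for google.api_core.exceptions.NotFound."""


def delete_endpoint(name: str, live_endpoints: Set[str]) -> None:
    # Hypothetical delete call; the real sample would use the AI Platform client.
    if name not in live_endpoints:
        raise NotFound(name)
    live_endpoints.remove(name)


def teardown_endpoint(shared_state: Dict[str, str], live_endpoints: Set[str]) -> None:
    """Fixture-style cleanup: tolerate 'already deleted' instead of asserting."""
    try:
        delete_endpoint(shared_state["endpoint_name"], live_endpoints)
    except NotFound:
        print(f"Endpoint {shared_state['endpoint_name']} already deleted")


live = {"ep-1"}
state = {"endpoint_name": "ep-1"}   # shared_state replaces a module-level global
teardown_endpoint(state, live)      # deletes the endpoint
teardown_endpoint(state, live)      # second call is a no-op, not a test failure
```

The key point is that the fixture never asserts: cleanup failure modes are swallowed and reported, while the real assertions live in the test body.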
Review threads (outdated, resolved) on:
- samples/tests/create_training_pipeline_image_classification_test.py
- samples/create_training_pipeline_image_classification_sample.py
Force-pushed 043ae58 to 2f373d9
leahecole
left a comment
Can you ensure that the sample names and the test names match? I'm guessing that create_training_pipeline_tabular_regression_sample and create_training_pipeline_tables_regression_test are related, as are create_training_pipeline_tabular_classification_sample and create_training_pipeline_tables_classification_test.
Unless I'm wrong and those are different, in which case it seems like the PRs might be a little mixed up.
Additionally, your tests are missing assertions entirely. You don't want them in the fixture, but you do want to make sure you are making some assertions in the tests themselves.
Hi @leahecole - I've addressed the requested changes:
#11 has been merged, which contains
leahecole
left a comment
nit for all - the directory reorg doesn't have to happen yet, but test names should match sample names, just with _test on the end :)
With some of the assertions, you are only looking for response - I want to double check that they won't give you a false positive if the response is blank or an error. Double check that response will only appear if the sample for realz passes, and if not, add another assertion to make the test stronger.
In some samples, you declare a specific number of node hours - will this be clear to the user? Are these numbers arbitrary? If they're not going to be specified in accompanying docs, it might be worth adding comments, especially if these are decisions that will affect billing.
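The false-positive concern above can be sketched as follows. All names and output strings here are hypothetical; the idea is simply that asserting on a substring like `response` is a weak signal, while asserting on a marker that only appears on success (such as the created resource name) is robust against blank or error output.

```python
def looks_successful(captured_output: str) -> bool:
    """Hypothetical check on captured sample output (e.g. from pytest's capsys).

    A bare 'response:' substring would also match error output, so we
    additionally require a success-only marker: the resource name path.
    """
    return "response:" in captured_output and "/trainingPipelines/" in captured_output


ok = "response:\nname: projects/123/locations/us-central1/trainingPipelines/456"
blank = ""                           # empty output must not pass
error = "response: InternalError"    # error text containing 'response:' must not pass

assert looks_successful(ok)
assert not looks_successful(blank)
assert not looks_successful(error)
```

In the real tests, the second condition would assert on whatever field the sample only prints after the operation genuinely succeeds.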
approving and merging to next stage despite test failures - no test failures when we go to master though :)
Fixes #9 (Batch 2 of 5) 🦕