@ItsDrike (Contributor) commented Aug 6, 2021

The project has grown large enough that testing it manually is no longer practical. Simply running the bot and confirming that a newly introduced feature works is not sufficient: in the process of introducing that feature, other parts of the bot may have been affected and are now failing. Not all such failures show up at compile time, and since manual testing doesn't exercise code the feature wasn't supposed to touch, we won't notice the breakage. This leads to common bugs going undiscovered, which could easily be prevented by running automated unit tests as a GitHub workflow.
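To illustrate the kind of regression an automated suite catches, here is a minimal sketch of a unit test for a command-prefix parser. The `parse_prefix` helper is hypothetical, not the bot's actual code; it just shows the shape of the tests this PR introduces.

```python
import unittest
from typing import Optional


def parse_prefix(message: str, prefix: str = "!") -> Optional[str]:
    """Return the command name if the message starts with the prefix, else None.

    Hypothetical helper used only to demonstrate the test structure.
    """
    if not message.startswith(prefix):
        return None
    rest = message[len(prefix):]
    # A bare prefix ("!") carries no command name.
    return rest.split()[0] if rest else None


class ParsePrefixTests(unittest.TestCase):
    def test_command_extracted(self):
        self.assertEqual(parse_prefix("!ping arg"), "ping")

    def test_non_command_ignored(self):
        self.assertIsNone(parse_prefix("hello"))

    def test_bare_prefix(self):
        self.assertIsNone(parse_prefix("!"))


if __name__ == "__main__":
    unittest.main()
```

A CI workflow would run such tests on every push, so a feature that accidentally breaks `parse_prefix` fails the build even if the author never touched prefix handling manually.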

Ideally, the tests would reach 100% code coverage, with every line of the bot's code executed and tested. However, that is unrealistic for a codebase of this size and would take far too long for a single pull request, so this PR only introduces an initial set of tests rather than covering everything.

To keep track of how much of the code is actually being tested, we use coverage.py.

To track the current unit-test coverage, we will use the coveralls.io service, which provides a status badge that can be included in README.md:
[Coverage Status badge]
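The badge embed in README.md would look something like the following sketch, using the standard coveralls.io badge URL pattern; `OWNER`, `REPO`, and the branch name are placeholders, not this project's actual values.

```markdown
[![Coverage Status](https://coveralls.io/repos/github/OWNER/REPO/badge.svg?branch=main)](https://coveralls.io/github/OWNER/REPO?branch=main)
```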

@ItsDrike added labels on Aug 6, 2021: area: testing (Unit Testing related changes), priority: 0 - critical (Needs to be addressed ASAP), status: WIP (Work in progress), type: feature (New feature or enhancement)
