Blender Tests
Testing pyddg’s Blender API
Blender tests are located in
- testing/tests/ddg/visualization/blender
- testing/tests/ddg/conversion/blender
- testing/tests/examples
- ddg/visualization/blender (for doctests)
- ddg/conversion/blender (for doctests)
The default Blender test configuration is testing/utils/pytest-conf-blender.ini.
Every convention from Python Tests also applies to Blender tests. In addition:
Blender functions are usually meant to create, mutate or delete Blender data.
Some also create intermediate data for internal use.
Make sure that tests assert that bpy.data.objects, bpy.data.meshes, bpy.data.curves and so forth are in the expected states, for example:

```python
import bpy

def f():
    mesh = ...
    bobj = ...

def test_f():
    f()
    assert list(bpy.data.objects) == ...
    assert list(bpy.data.meshes) == ...
```
In particular, make sure that all intermediate data is cleaned up. Because Blender data is cleared before and after each test, leftover intermediate data will not make the tests fail on its own — you have to assert its absence explicitly!
By default, Blender creates at least a collection, a cube, a camera and a light on startup. These are also cleared before any tests run, so you don’t need to worry about them.
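The clear-before-and-after behavior described above is typically implemented as an autouse pytest fixture. Here is a minimal sketch of that pattern, using a plain list as a stand-in for the bpy.data collections (which only exist inside a running Blender process); pyddg's actual fixture may differ:

```python
import pytest

# Stand-in for Blender's bpy.data collections, which are only
# available inside a running Blender process.
fake_data = []

@pytest.fixture(autouse=True)
def clean_blender_data():
    """Clear all (fake) Blender data before and after each test."""
    fake_data.clear()  # clear leftovers before the test runs
    yield              # the test body runs here
    fake_data.clear()  # clean up whatever the test created
```

Because the fixture is autouse, every test in the session gets a clean slate without requesting it explicitly.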
Testing Blender examples
Any Python script in examples/blender is picked up by testing/tests/examples/blender/test_blender_examples.py and executed in a separate Blender process.
Each Python script corresponds to a separate test.
If the script raises an exception, then the test fails.
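A script-per-test setup like this can be sketched with subprocess: launch blender --background --python <script> and let a non-zero exit status fail the test. The helper below is an illustration, not pyddg's actual implementation; command construction is separated from execution so the sketch can be checked without Blender installed:

```python
import subprocess

def blender_command(script_path, blender_bin="blender"):
    """Build the command line for running a script headlessly."""
    return [blender_bin, "--background", "--python", script_path]

def run_example(script_path, blender_bin="blender"):
    """Run a Blender example script in a separate process.

    If the script raises an exception, Blender exits with a
    non-zero status and check=True raises CalledProcessError,
    which makes the pytest test fail.
    """
    subprocess.run(blender_command(script_path, blender_bin), check=True)
```

--background and --python are standard Blender command-line options for headless script execution.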
Snapshot tests
Additionally, it is possible to render an image and compare it to a reference image tracked in the repository. Comparing the test output to a reference file is called snapshot testing. We use syrupy, a pytest plugin for snapshot testing. Check out the basic usage section in syrupy’s documentation to see how it works.
Adding a snapshot test
Snapshot tests are opt-in: at the end of the script, add

```python
from testing.tests.examples.blender.snapshot import opt_in  # noqa: E402

opt_in()
```
Then, run
```shell
python3 test.py --blender-examples -- testing/tests/examples/blender/test_blender_examples.py::test_example[path-to-script] --snapshot-update -s
```
Commit the (new) image in testing/tests/examples/blender/__snapshots__/test_blender_examples.
Then check:
- The image renders in a reasonable time even on the (rather slow) CI servers. Prefer Eevee over Cycles for this reason.
- After possibly lowering the sample count and resolution, make sure that the image's quality is still acceptable.
- The image file must be small, preferably 100 kB or less. Prefer JPEG over PNG for this reason.
AMEND THE COMMIT until every point on the checklist is met.
Remember that once an image makes it into develop, Git will track all versions of it, now and forever, even if the image is large.
This is why you need to amend the commit.
Note
If you get RuntimeError: Error: Cannot render, no camera, then you need to set bpy.context.scene.camera = your_camera_object.
What to do when a snapshot test fails
The reference images are located in testing/tests/examples/blender/__snapshots__/test_blender_examples.
The images rendered by the tests are located in var and so are the blend files.
Rerunning the tests will overwrite the files in var!
The CI jobs retain var as well and you can download the job artifacts or even view the pictures on GitLab itself.
If the images look different to the human eye, then you’ve either introduced a bug or you need to update the snapshot as shown above.
The assertion should fail with
```
E       AssertionError:
E       0 <= mean_structural_similarity_index = 0.9392680081736323 <= 1
E       higher index => more similar
E       the minimum similarity is 1.0
```
If they look indistinguishable to the human eye, perhaps the minimum similarity index in the tests needs to be lowered. You can also compute the mean structural similarity index manually with
```shell
$ python3 testing/utils/similarity.py path-to-image-1 path-to-image-2
0 <= mean_structural_similarity_index = 0.9984706916942785 <= 1
higher index => more similar
```
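For intuition about what the index measures: structural similarity compares two images via their means, variances, and covariance. Below is a pure-Python sketch of the global (unwindowed) SSIM formula; the actual similarity.py likely uses a windowed implementation (such as skimage's), so its numbers will differ:

```python
def ssim(x, y, data_range=255.0):
    """Global structural similarity of two equal-sized grayscale
    images, each given as a flat list of pixel values."""
    n = len(x)
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((p - mx) ** 2 for p in x) / n
    vy = sum((p - my) ** 2 for p in y) / n
    cov = sum((p - mx) * (q - my) for p, q in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2)
    )

img = [0, 64, 128, 255] * 4
print(ssim(img, img))  # identical images -> 1.0
```

Identical images score exactly 1.0; any difference in brightness, contrast, or structure pulls the index below 1, which is why the tests compare it against a minimum threshold.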