Blender Tests
Testing pyddg’s Blender API
Blender tests are located in
testing/tests/ddg/blender
testing/tests/ddg/conversion/blender
testing/tests/examples
ddg/blender (for doctests)
ddg/conversion/blender (for doctests)
The default Blender test configuration is testing/utils/pytest-conf-blender.ini.
Every convention in Python Tests also applies to Blender tests, but there is more:
Blender functions are usually meant to create, mutate or delete Blender data.
Some also create intermediate data for internal use.
Make sure that tests assert that bpy.data.objects, bpy.data.meshes, bpy.data.curves and so forth have the expected states, for example
def f():
    mesh = ...
    bobj = ...

def test_f():
    f()
    assert list(bpy.data.objects) == ...
    assert list(bpy.data.meshes) == ...
In particular, make sure that all intermediate data is cleaned up. Blender data is cleared before and after each test, so the tests won’t fail if there is leftover intermediate data!
By default, Blender creates at least a collection, a cube, a camera and a light on startup. These are also cleared before any tests run, so you don’t need to worry about them.
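To make the cleanup requirement concrete, here is a minimal sketch, assuming a hypothetical helper that builds an object via a temporary mesh:

import bpy

def make_object_via_mesh():
    # Intermediate data for internal use only.
    mesh = bpy.data.meshes.new("tmp-mesh")
    # ... build the final object from the mesh ...
    bpy.data.meshes.remove(mesh)  # clean up the intermediate mesh

def test_make_object_via_mesh():
    make_object_via_mesh()
    # The intermediate mesh must not linger in bpy.data.meshes.
    assert list(bpy.data.meshes) == []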
Testing Blender examples
Snapshot tests
Blender examples are located in examples/blender.
They are usually scripts or even small libraries whose output can be rendered to obtain pretty pictures.
The Blender example tests render and compare these images to reference images tracked in the repository.
Comparing the test output to a reference file is called snapshot testing.
We use syrupy, a pytest plugin for snapshot testing.
Check out the basic usage section in syrupy’s documentation to see how it works.
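In its simplest form, plain syrupy (independent of the Blender helpers) compares a value to a stored snapshot with ==; a minimal sketch:

def test_greeting(snapshot):
    # The first run with --snapshot-update stores "hello" under __snapshots__/;
    # subsequent runs compare against that stored value.
    assert "hello" == snapshot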
Adding a snapshot test
For a single snapshot, it is enough to add a snapshot parameter to the test, run the script, and call testing.tests.examples.blender.test_examples.assert_similar_snapshot().
Syrupy will provide the test function with a snapshot.
For example:
def test_rendering(snapshot: SnapshotAssertion) -> None:
    # Import the example script for the side effects.
    import examples.blender.docs.rendering as _

    # Set reasonable image settings, save "var/rendering.blend", render
    # "var/rendering.jpg" and compare this image to the snapshot.
    assert_similar_snapshot(snapshot, "rendering")
Then, run
python3 test.py --blender-examples -- testing/tests/examples/blender/test_examples.py::test_rendering --snapshot-update -s
Commit the (new) image in testing/tests/examples/blender/__snapshots__/test_examples.
Then check:
The image renders in a reasonable time even on the (rather slow) CI servers. Use Eevee, or Cycles with few samples and denoising enabled; fewer than 16 samples are likely enough (see the render-settings sketch after this checklist).
After possibly lowering the sample size and resolution, make sure that the image’s quality is still acceptable.
The image must be small with regard to file size, preferably 100 kB or less. Prefer JPEG over PNG for this reason.
Changing the script actually causes the test to fail when you run it without --snapshot-update. If it doesn't fail, make sure to set the min_similarity parameter of testing.tests.examples.blender.test_examples.assert_similar_snapshot() to a value that causes the test to fail as expected.
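A minimal sketch of render settings that address the first three points, assuming the example script configures the scene through bpy directly (the engine choice and values here are illustrative, not project policy):

import bpy

scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.cycles.samples = 16          # few samples keep CI render times short
scene.cycles.use_denoising = True  # denoising compensates for the low sample count
scene.render.image_settings.file_format = "JPEG"  # JPEG keeps the file size small
scene.render.image_settings.quality = 90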
AMEND THE COMMIT until every point on the checklist is met.
Remember that once an image makes it into develop, Git will track all versions of it, now and forever, even if the image is large.
This is why you need to amend the commit.
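For example, to fold an updated snapshot into the previous commit (the path is the snapshot directory from above):

git add testing/tests/examples/blender/__snapshots__/test_examples/
git commit --amend --no-edit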
Note
If you get RuntimeError: Error: Cannot render, no camera, then you need to set bpy.context.scene.camera = your_camera_object.
Sometimes the snapshot test requires changes to the script, e.g. different render settings. You can run arbitrary code before rendering:
def test_caustics(snapshot: SnapshotAssertion) -> None:
    # Import the example script for the side effects.
    from examples.blender.docs.examples.caustics import (
        camera_theoretical,
        collection_mathematical,
        collection_physical,
    )

    # Use Eevee for rendering.
    ddg.blender.render.setup_eevee_renderer()
    # Snapshot only one construction.
    collection_physical.hide_render = True
    collection_mathematical.children["nephroid"].hide_render = True
    # Choose the camera for snapshot testing.
    bpy.context.scene.camera = camera_theoretical

    assert_similar_snapshot(snapshot, "caustics")
What to do when a snapshot test fails
The reference images are located in testing/tests/examples/blender/__snapshots__/test_examples.
The images rendered by the tests are located in var and so are the blend files.
Rerunning the tests will overwrite the files in var!
The CI jobs retain var as well; you can download the job artifacts or even view the pictures on GitLab itself.
If the images look different to the human eye, then you’ve either introduced a bug or you need to update the snapshot as shown above.
The assertion should fail with
E AssertionError:
E 0 <= mean_structural_similarity_index = 0.9392680081736323 <= 1
E higher index => more similar
E the minimum similarity is 1.0
If they look indistinguishable to the human eye, perhaps the minimum similarity index in the tests needs to be adjusted to a lower value.
This is the purpose of the min_similarity parameter of testing.tests.examples.blender.snapshots.assert_similar_snapshot().
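For instance, assuming min_similarity is passed as a keyword argument (the threshold here is illustrative):

# Accept renders that are at least 98% similar to the stored snapshot.
assert_similar_snapshot(snapshot, "caustics", min_similarity=0.98)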
You can also compute the mean structural similarity index manually with
$ python3 testing/utils/similarity.py path-to-image-1 path-to-image-2
0 <= mean_structural_similarity_index = 0.9984706916942785 <= 1
higher index => more similar
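The index is presumably scikit-image's mean structural similarity; a standalone sketch, assuming scikit-image is installed (similarity.py itself may differ):

from skimage.io import imread
from skimage.metrics import structural_similarity

image_1 = imread("path-to-image-1")
image_2 = imread("path-to-image-2")
# channel_axis=-1 treats the last axis as colour channels (RGB).
index = structural_similarity(image_1, image_2, channel_axis=-1)
print(f"0 <= mean_structural_similarity_index = {index} <= 1")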
Multiple snapshot tests for the same example
Use a combination of
testing.tests.examples.blender.snapshots.save_blend_file()
testing.tests.examples.blender.snapshots.default_image_settings()
testing.tests.examples.blender.snapshots.render_and_compare_with_snapshot()
For example,
def test_pascal(snapshot: SnapshotAssertion) -> None:
    # Import the example script for the side effects.
    from examples.blender.geometry.pascal import brianchon_collection, pascal_collection

    # Saves "var/pascal.blend".
    save_blend_file("pascal")

    pascal_collection.hide_render = False
    brianchon_collection.hide_render = True
    # This renders an image "var/pascal.jpg". Syrupy creates/compares to a
    # snapshot named "test_pascal.jpg".
    pascal_image_path = default_image_settings("pascal")
    render_and_compare_with_snapshot(snapshot, pascal_image_path)

    pascal_collection.hide_render = True
    brianchon_collection.hide_render = False
    # This renders an image "var/pascal.1.jpg". It's possible to choose a more
    # descriptive name such as "brianchon". However, Syrupy creates/compares to
    # a new snapshot named "test_pascal.1.jpg", so we add ".1" for the names to
    # match and to emphasise that these pictures belong to the same test.
    brianchon_image_path = default_image_settings("pascal.1")
    render_and_compare_with_snapshot(snapshot, brianchon_image_path)
It would also be possible to write multiple separate tests.
For sufficiently complex examples, for instance if they can be parametrized, consider writing a function f that creates the example from these parameters.
The script can guard execution of this function with
def f(a, b, c, d):
    pass

if __name__ == "__main__":
    f(0, 1, 2, 3)
and the test(s) can import and call f as many times as needed.
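A sketch of such a test, assuming a hypothetical module path for the example and pytest parametrization (each parameter set gets its own snapshot):

import pytest

# Hypothetical path; adjust to wherever the example script lives.
from examples.blender.docs.example import f

@pytest.mark.parametrize("a, b, c, d", [(0, 1, 2, 3), (4, 5, 6, 7)])
def test_f(snapshot: SnapshotAssertion, a, b, c, d) -> None:
    f(a, b, c, d)
    assert_similar_snapshot(snapshot, f"f-{a}-{b}-{c}-{d}")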
Why test for similarity rather than equality?
Cycles isn’t deterministic. Eevee isn’t either.
If you need transparency
Eevee doesn’t seem to handle transparent materials very well and appears to be non-deterministic in this case. If you really need transparency, use Cycles with few samples and denoising enabled.