Blender Tests
=============

.. contents:: Table of contents
   :local:
   :backlinks: none

Testing pyddg's Blender API
---------------------------

Blender tests are located in

- ``testing/tests/ddg/blender``
- ``testing/tests/ddg/conversion/blender``
- ``testing/tests/examples``
- ``ddg/blender`` (for doctests)
- ``ddg/conversion/blender`` (for doctests)

The default Blender test configuration is ``testing/utils/pytest-conf-blender.ini``.

Every convention in :doc:`python_tests` also applies to Blender tests, but there is more:
Blender functions are usually meant to create, mutate or delete Blender data. Some also
create intermediate data for internal use. Make sure that tests assert that
``bpy.data.objects``, ``bpy.data.meshes``, ``bpy.data.curves`` and so forth have the
expected states, for example

.. code-block:: python

   def f():
       mesh = ...
       bobj = ...

   def test_f():
       f()
       assert list(bpy.data.objects) == ...
       assert list(bpy.data.meshes) == ...

In particular, make sure that all intermediate data is cleaned up.
:ref:`Blender data is cleared before and after each test `, so the tests won't fail if
there is leftover intermediate data!

By default, Blender creates at least a collection, a cube, a camera and a light on
startup. These are also cleared before any tests run, so you don't need to worry about
them.
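Put together, such a test might look like the sketch below. The helper
``make_triangle_object`` and its test are made up for illustration and are not part of
the code base; they only show the pattern of asserting the state of ``bpy.data`` after
the call, assuming Blender data has been cleared before the test as described above.

.. code-block:: python

   import bpy


   def make_triangle_object():
       # Hypothetical helper: create a mesh, wrap it in an object and link the
       # object to the current collection.
       mesh = bpy.data.meshes.new("triangle")
       mesh.from_pydata([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [], [(0, 1, 2)])
       mesh.update()
       obj = bpy.data.objects.new("triangle", mesh)
       bpy.context.collection.objects.link(obj)
       return obj


   def test_make_triangle_object():
       obj = make_triangle_object()
       # Exactly one object and one mesh should exist afterwards, and no
       # intermediate data may be left behind.
       assert list(bpy.data.objects) == [obj]
       assert list(bpy.data.meshes) == [obj.data]
       assert list(bpy.data.curves) == []

If the helper created temporary meshes or curves along the way, these final assertions
are where their removal would be verified.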
Testing Blender examples
------------------------

Snapshot tests
~~~~~~~~~~~~~~

Blender examples are located in ``examples/blender``. They are usually scripts or even
small libraries whose output can be rendered to obtain pretty pictures. The *Blender
example tests* render these images and compare them to reference images tracked in the
repository.

Comparing the test output to a reference file is called *snapshot testing*. We use
`syrupy `__, a pytest plugin for snapshot testing. Check out the
`basic usage section in syrupy's documentation `__ to see how it works.

Adding a snapshot test
++++++++++++++++++++++

For a single snapshot, it is enough to add a ``snapshot`` parameter to the test, run the
script and call :py:func:`testing.tests.examples.blender.test_examples.assert_similar_snapshot`.
Syrupy will provide the test function with a snapshot. For example:

.. literalinclude:: ../../../../testing/tests/examples/blender/test_examples.py
   :language: python
   :pyobject: test_rendering

Then, run

.. code-block:: bash

   python3 test.py --blender-examples -- testing/tests/examples/blender/test_examples.py::test_rendering --snapshot-update -s

Commit the (new) image in ``testing/tests/examples/blender/__snapshots__/test_blender_examples``.
Then check:

- The image renders in a reasonable time even on the (rather slow) CI servers. Use Eevee,
  or Cycles with *few* samples and denoising enabled; fewer than 16 samples are likely
  enough.
- After possibly lowering the sample count and resolution, make sure that the image
  quality is still acceptable.
- The image file must be small, preferably 100 kB or less. Prefer JPEG over PNG for this
  reason.
- Changing the script actually causes the test to fail when you run it *without*
  ``--snapshot-update``. If it doesn't fail, set the ``min_similarity`` parameter of
  :py:func:`testing.tests.examples.blender.test_examples.assert_similar_snapshot` to a
  value that causes the test to fail as expected.

*AMEND THE COMMIT* until every point on the checklist is met. Remember that once an image
makes it into ``develop``, Git will track all versions of it, now and forever, even if
the image is large. This is why you need to amend the commit.

.. note::

   If you get ``RuntimeError: Error: Cannot render, no camera``, then you need to set
   ``bpy.context.scene.camera = your_camera_object``.

Sometimes the snapshot test requires changes to the script, e.g. different render
settings. You can run arbitrary code before rendering:

.. literalinclude:: ../../../../testing/tests/examples/blender/test_examples.py
   :language: python
   :pyobject: test_caustics

What to do when a snapshot test fails
+++++++++++++++++++++++++++++++++++++

The reference images are located in ``testing/tests/examples/blender/__snapshots__/test_examples``.
The images rendered by the tests are located in ``var``, and so are the ``.blend`` files.
Rerunning the tests will overwrite the files in ``var``! The CI jobs retain ``var`` as
well, and you can download the job artifacts or even view the pictures on GitLab itself.

If the images look different to the human eye, then you've either introduced a bug or you
need to update the snapshot as shown above. The assertion should fail with

.. code-block:: bash

   E   AssertionError:
   E   0 <= mean_structural_similarity_index = 0.9392680081736323 <= 1
   E   higher index => more similar
   E   the minimum similarity is 1.0

If they look indistinguishable to the human eye, perhaps the minimum similarity index in
the tests needs to be lowered. This is the purpose of the ``min_similarity`` parameter of
:py:func:`testing.tests.examples.blender.snapshots.assert_similar_snapshot`.

You can also compute the `mean structural similarity index `__ manually with

.. code-block:: bash

   $ python3 testing/utils/similarity.py path-to-image-1 path-to-image-2
   0 <= mean_structural_similarity_index = 0.9984706916942785 <= 1
   higher index => more similar

Multiple snapshot tests for the same example
++++++++++++++++++++++++++++++++++++++++++++

Use a combination of

- :py:func:`testing.tests.examples.blender.snapshots.save_blend_file`
- :py:func:`testing.tests.examples.blender.snapshots.default_image_settings`
- :py:func:`testing.tests.examples.blender.snapshots.render_and_compare_with_snapshot`

For example,

.. literalinclude:: ../../../../testing/tests/examples/blender/test_examples.py
   :language: python
   :pyobject: test_pascal

It would also be possible to write multiple separate tests.

For sufficiently complex examples, for instance ones that can be parametrized, consider
writing a function ``f`` that takes these parameters and creates the example. The script
can guard execution of this function with

.. code-block:: python

   def f(a, b, c, d):
       pass


   if __name__ == "__main__":
       f(0, 1, 2, 3)

and the test(s) can import and call ``f`` as many times as needed.

Why test for similarity rather than equality?
+++++++++++++++++++++++++++++++++++++++++++++

`Cycles isn't deterministic `__. Eevee isn't either.

If you need transparency
++++++++++++++++++++++++

Eevee doesn't seem to handle transparent materials very well and appears to be
non-deterministic in this case. If you really need transparency, use Cycles with *few*
samples and denoising enabled.
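As a rough sketch (not taken from the test suite, and the exact property paths can differ
between Blender versions, e.g. Cycles denoising used to be a per-view-layer setting), the
render settings for such a case might be configured like this:

.. code-block:: python

   import bpy

   scene = bpy.context.scene
   scene.render.engine = "CYCLES"
   # Few samples keep render times acceptable on the CI servers ...
   scene.cycles.samples = 8
   # ... and denoising compensates for the resulting noise.
   scene.cycles.use_denoising = True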