Blender Tests
=============

.. contents:: Table of contents
   :local:
   :backlinks: none

Testing pyddg's Blender API
---------------------------

Blender tests are located in

- ``testing/tests/ddg/visualization/blender``
- ``testing/tests/ddg/conversion/blender``
- ``testing/tests/examples``
- ``ddg/visualization/blender`` (for doctests)
- ``ddg/conversion/blender`` (for doctests)

The default Blender test configuration is
``testing/utils/pytest-conf-blender.ini``.

Every convention in :doc:`python_tests` also applies to Blender tests, but
there is more: Blender functions are usually meant to create, mutate or
delete Blender data. Some also create intermediate data for internal use.
Make sure that tests assert that ``bpy.data.objects``, ``bpy.data.meshes``,
``bpy.data.curves`` and so forth have the expected states, for example

.. code-block:: python

   def f():
       mesh = ...
       bobj = ...

   def test_f():
       f()
       assert list(bpy.data.objects) == ...
       assert list(bpy.data.meshes) == ...

In particular, make sure that all intermediate data is cleaned up.
:ref:`Blender data is cleared before and after each test `, so the tests
won't fail if there is leftover intermediate data!

By default, Blender creates at least a collection, a cube, a camera and a
light on startup. These are also cleared before any tests run, so you don't
need to worry about them.

Testing Blender examples
------------------------

Any Python script in ``examples/blender`` is picked up by
``testing/tests/examples/blender/test_blender_examples.py`` and executed in
a separate Blender process. Each Python script corresponds to a separate
test. If the script raises an exception, then the test fails.

Snapshot tests
~~~~~~~~~~~~~~

Additionally, it is possible to render an image and compare it to a
reference image tracked in the repository. Comparing the test output to a
reference file is called *snapshot testing*. We use `syrupy `__, a pytest
plugin for snapshot testing.
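In its simplest form, syrupy injects a ``snapshot`` fixture into a test and
compares values against files stored under ``__snapshots__``. The following
is a generic illustration of that pattern, not a test taken from this
repository:

.. code-block:: python

   # Generic syrupy illustration (not from the pyddg test suite).
   # The ``snapshot`` fixture is injected by the syrupy pytest plugin.
   def test_greeting(snapshot):
       # With ``pytest --snapshot-update`` the value is written to a file
       # under __snapshots__; on later runs it is compared against it.
       assert "Hello, snapshot!" == snapshot

Without ``--snapshot-update``, the assertion fails whenever the value
differs from the stored snapshot.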
Check out the `basic usage section in syrupy's documentation `__ to see how
it works.

Adding a snapshot test
++++++++++++++++++++++

It is necessary to *opt into* snapshot tests: at the end of the script, add

.. literalinclude:: ../../../../examples/blender/docs/rendering.py
   :start-after: [snapshot]
   :end-before: [snapshot]

Then, run

.. code-block:: bash

   python3 test.py --blender-examples -- testing/tests/examples/blender/test_blender_examples.py::test_example[path-to-script] --snapshot-update -s

Commit the (new) image in
``testing/tests/examples/blender/__snapshots__/test_blender_examples``.
Then check:

- The image renders in a reasonable time even on the (rather slow) CI
  servers. Prefer Eevee over Cycles for this reason.
- After possibly lowering the sample size and resolution, make sure that
  the image's quality is still acceptable.
- The image must be small with regard to file size, preferably 100 kB or
  less. Prefer JPEG over PNG for this reason.

*AMEND THE COMMIT* until every point on the checklist is met. Remember that
once an image makes it into ``develop``, Git will track all versions of it,
now and forever, even if the image is large. This is why you need to amend
the commit.

.. note::

   If you get ``RuntimeError: Error: Cannot render, no camera``, then you
   need to set ``bpy.context.scene.camera = your_camera_object``.

What to do when a snapshot test fails
+++++++++++++++++++++++++++++++++++++

The reference images are located in
``testing/tests/examples/blender/__snapshots__/test_blender_examples``. The
images rendered by the tests are located in ``var``, and so are the
``.blend`` files. Rerunning the tests will overwrite the files in ``var``!
The CI jobs retain ``var`` as well, and you can download the job artifacts
or even view the pictures on GitLab itself.

If the images look different to the human eye, then you've either
introduced a bug or you need to update the snapshot as shown above. The
assertion should fail with

.. code-block:: bash

   E   AssertionError:
   E   0 <= mean_structural_similarity_index = 0.9392680081736323 <= 1
   E   higher index => more similar
   E   the minimum similarity is 1.0

If they look indistinguishable to the human eye, perhaps the minimum
similarity index in the tests needs to be adjusted to a lower value. You
can also compute the `mean structural similarity index `__ manually with

.. code-block:: bash

   $ python3 testing/utils/similarity.py path-to-image-1 path-to-image-2
   0 <= mean_structural_similarity_index = 0.9984706916942785 <= 1
   higher index => more similar

Why test for similarity rather than equality?
+++++++++++++++++++++++++++++++++++++++++++++

`Cycles isn't deterministic `__. Two renders of the same scene can differ
slightly at the pixel level, so a byte-for-byte comparison would fail
spuriously; comparing similarity against a threshold tolerates this noise.
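For intuition, the structural similarity index can be sketched as a
single-window ("global") SSIM in pure Python. This is an illustration only:
``testing/utils/similarity.py`` presumably computes the mean of *windowed*
SSIM values with a proper imaging library, and the constants below are the
usual defaults from the standard SSIM formulation.

.. code-block:: python

   from statistics import mean

   def global_ssim(x, y, data_range=255.0):
       """Single-window SSIM over two flat pixel sequences of equal length.

       A real implementation slides a window over the images and averages
       the per-window SSIM values; this global variant is for intuition.
       """
       # Stabilising constants from the standard SSIM formulation.
       c1 = (0.01 * data_range) ** 2
       c2 = (0.03 * data_range) ** 2
       mx, my = mean(x), mean(y)
       vx = mean((px - mx) ** 2 for px in x)  # variance of x
       vy = mean((py - my) ** 2 for py in y)  # variance of y
       cov = mean((px - mx) * (py - my) for px, py in zip(x, y))
       return ((2 * mx * my + c1) * (2 * cov + c2)) / (
           (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
       )

   pixels = [10.0, 40.0, 90.0, 160.0, 220.0]
   noisy = [p + 1.0 for p in pixels]
   print(global_ssim(pixels, pixels))  # identical inputs score 1.0
   print(global_ssim(pixels, noisy))   # slightly below 1.0

Identical images score exactly 1, and a slightly perturbed render scores
just below 1, which is why the tests compare against a minimum similarity
threshold rather than demanding equality.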