
Gradient operator does not work correctly

The gradient operator does not play well with the current ComputationalGraph representation of operators. Graph dependencies are currently expressed in terms of DiscreteFields, which may contain one or more components. All components of a given DiscreteField share a single topology state at any given time, which blocks operators that need different topology states for a subset of the components.

The idea is to break every DiscreteField into DiscreteScalarFields, embedded into a numpy-array-like structure (DiscreteTensorField) that will replace the current DiscreteFields while keeping their current interface whenever possible.

ComputationalGraphs will then be built from those scalar fields.

This will allow all sorts of gains:

  • General tensor-shaped fields (example: the gradient of a 3D velocity field has 3x3=9 components).
  • Numpy-like views on fields with a reduced number of components: V[0], V[2:3], ... (see the sketch below).
  • Tensor-shaped views built from components of different fields, which may greatly simplify operator implementations.
  • Operator graphs with fine-grained dependencies: more parallelism is exposed and there is no need to transpose every component of a field, ...
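
As a quick illustration of the numpy-like view idea, here is a minimal, hypothetical sketch in Python; the class names mirror the ones described in this issue, but the constructors and the object-array backing are assumptions for illustration only, not the actual hysop interface:

```python
import numpy as np

class DiscreteScalarField(object):
    """Hypothetical stand-in for a single-component discrete field."""
    def __init__(self, name):
        self.name = name

class DiscreteTensorField(object):
    """Numpy-array-like container of DiscreteScalarField objects (sketch)."""
    def __init__(self, fields, shape):
        # store scalar fields in a numpy object array to inherit numpy indexing
        self._fields = np.asarray(fields, dtype=object).reshape(shape)
        self.shape = self._fields.shape

    def __getitem__(self, key):
        sub = self._fields[key]
        if isinstance(sub, DiscreteScalarField):
            return sub                                   # single component => scalar field
        return DiscreteTensorField(sub, sub.shape)       # slice => tensor-shaped view

# 3D velocity (3 components) and its gradient (3x3 components)
V     = DiscreteTensorField([DiscreteScalarField('V%d' % i) for i in range(3)], (3,))
gradV = DiscreteTensorField([DiscreteScalarField('dV%d_d%d' % (i, j))
                             for i in range(3) for j in range(3)], (3, 3))

print(V[0].name)            # single scalar component
print(gradV[1:3, :].shape)  # reduced tensor view, numpy-style slicing
```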

This is a huge change to the library codebase:

  • Implementation of a new hierarchy of continuous fields: ScalarField and TensorField, on top of an abstract field container interface FieldContainerI. A ScalarField is just like the current Field type with nb_components=1. Field will still be accessible as a type alias for backward compatibility, yielding either a ScalarField or a TensorField depending on input parameters. Instance checking will also work on this new type.

(attached diagram: continuous_fields)
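
To make the Field alias behavior concrete, here is a minimal sketch of the intended construction dispatch and instance checking, assuming a metaclass-based implementation; the class bodies are placeholders and the real hysop implementation may differ:

```python
class FieldContainerI(object):
    """Abstract container interface shared by ScalarField and TensorField (sketch)."""

class ScalarField(FieldContainerI):
    """Continuous field with exactly one component (sketch)."""
    def __init__(self, name):
        self.name = name
        self.nb_components = 1

class TensorField(FieldContainerI):
    """Container of ScalarField components with an arbitrary shape (sketch)."""
    def __init__(self, name, shape):
        self.name, self.shape = name, shape
        n = 1
        for s in shape:
            n *= s
        self.fields = tuple(ScalarField('%s_%d' % (name, i)) for i in range(n))

class _FieldMeta(type):
    """Makes Field(...) yield a ScalarField or a TensorField and keeps isinstance() working."""
    def __call__(cls, name, nb_components=1):
        if nb_components == 1:
            return ScalarField(name)
        return TensorField(name, shape=(nb_components,))
    def __instancecheck__(cls, obj):
        return isinstance(obj, (ScalarField, TensorField))

Field = _FieldMeta('Field', (object,), {})   # backward-compatible alias

U = Field('U', nb_components=3)              # yields a TensorField
assert isinstance(U, Field) and isinstance(U, TensorField)
```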

  • Implementation of a new hierarchy of discrete fields: DiscreteScalarField and DiscreteScalarFieldView (i.e. a scalar field together with a topology state) replace the old DiscreteField and DiscreteFieldView with nb_components=1. A DiscreteTensorField is an n-dimensional container of DiscreteScalarFieldView. Like the continuous fields, DiscreteScalarFieldView and DiscreteTensorField inherit from a common base DiscreteScalarFieldViewContainerI. Topology-specific discrete fields follow the same idea, and the new cartesian discrete field types become CartesianDiscreteScalarField, CartesianDiscreteTensorField and CartesianDiscreteScalarViewContainerI. DiscreteField is redefined as (DiscreteScalarField, DiscreteTensorField) to keep type checking backward compatible.

(attached diagram: discrete_fields)
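
A minimal sketch of the backward-compatible DiscreteField redefinition; the inheritance between DiscreteScalarField and DiscreteScalarFieldView shown below is an assumption, the point is only the tuple trick that keeps isinstance() checks working:

```python
class DiscreteScalarFieldViewContainerI(object):
    """Common base of DiscreteScalarFieldView and DiscreteTensorField (sketch)."""

class DiscreteScalarFieldView(DiscreteScalarFieldViewContainerI):
    """A discrete scalar field together with a topology state (sketch)."""

class DiscreteScalarField(DiscreteScalarFieldView):
    """Single-component discrete field; this inheritance is an assumption."""

class DiscreteTensorField(DiscreteScalarFieldViewContainerI):
    """n-dimensional container of DiscreteScalarFieldView objects (sketch)."""

# isinstance() accepts a tuple of classes, so redefining DiscreteField as a
# tuple keeps legacy `isinstance(df, DiscreteField)` checks working.
DiscreteField = (DiscreteScalarField, DiscreteTensorField)

assert isinstance(DiscreteScalarField(), DiscreteField)
assert isinstance(DiscreteTensorField(), DiscreteField)
```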

  • Transparent tensor-to-scalar expansion: user-supplied tensors should automatically be expanded into the scalar fields they contain, together with the corresponding topology descriptors passed in variables (see the sketch after this list). Each computational graph node (operator or graph) will now only have ScalarField keys in input_fields and output_fields. Following the same idea, input_discrete_fields and output_discrete_fields will only contain DiscreteScalarField keys.
  • Discrete tensor field reconstruction: all user-supplied tensor fields should be rebuilt after the discretization step. To avoid interfering with the current input and output field logic, tensor fields are filtered out into dedicated attributes, input_tensor_fields and output_tensor_fields. Their discretizations are stored in input_discrete_tensor_fields and output_discrete_tensor_fields, mirroring their scalar counterparts. Specific care should be taken for OperatorGenerator-like types (Graph, OperatorGenerator, OperatorFrontend): as scalars may be scattered across multiple contained or generated operators, the input or output status of a tensor is uncertain. For those types, candidate tensors are stored in dedicated attributes, candidate_input_tensors and candidate_output_tensors, and are built at the discretization step if and only if all contained fields have been discretized and are real inputs (resp. outputs).
  • Hide the distinction between scalar and tensor fields: access to a Field discretization (ScalarField => DiscreteScalarField and TensorField => DiscreteTensorField) is unified by two new methods, op.get_input_discrete_field(field) and op.get_output_discrete_field(field). This replaces the current direct access to op.input_discrete_fields[field] and op.output_discrete_fields[field] and allows tensors to be passed. A similar function is added to get the topology descriptor of a scalar or tensor field: op.get_topo_descriptor(variables, field), which replaces direct access to op.variables[field] (illustrated in the sketch after this list).
  • Deprecate some field methods: the field shape and size methods are replaced by resolution and npoints so that they cannot be confused with their tensor counterparts. Likewise, field.__getitem__(i) and field.__call__(i) are removed in favor of explicit data accesses field.data[i] and field.buffers[i]. As a reminder, data are hysop-wrapped, backend-dependent numpy.ndarray-like structures based on the hysop.core.array.Array interface (HostArray and OpenClArray), while buffers are raw memory objects directly usable by external libraries (numpy.ndarray for HostArrayBackend and pyopencl.MemoryObject for OpenClArrayBackend).
  • Enforce topology state coherency: as each scalar component is now able to choose its own topology state (such as its transposition state), topology state clashes become problematic. We need a way to enforce a single common topology state for all fields of a given operator in the graph builder.
  • Improved cartesian topology states: add support for Fortran- or C-contiguous topology states, remove the unused basis topology state, and add checks after topologies have been built.
  • MemoryReordering operator: implement a memory reordering operator, which is basically a noop; it is required by the graph builder. Add memory-ordering checks to all memory-ordering-critical operators (Fortran and OpenCL).
  • Enforce state for all fields at the operator level: in particular, transposition state and topology shape should be enforced by default (with the option to relax those constraints before discretization).
  • Improved interface for scalar and tensor containers: add iterators on compute_data and compute_buffers. Add simplified access to scalar data and buffers via sdata and sbuffer, which are shortcuts for field.data[0] and field.buffers[0] on DiscreteScalarFields. Add dfields as an alias of discrete_field_views(), like its continuous field counterpart.
  • Better logs: merge scalars back into tensors when possible in graph reports, and allow multiline reports.
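
To make the expansion and the unified access ideas above more concrete, here is a minimal, hypothetical sketch; the helper name get_topo_descriptor echoes the method described above, but the toy classes and the dictionary-based expansion are assumptions and do not reflect the actual hysop implementation:

```python
class ScalarField(object):
    def __init__(self, name):
        self.name = name

class TensorField(object):
    """Container of ScalarField components (sketch)."""
    def __init__(self, name, fields):
        self.name, self.fields = name, tuple(fields)

def expand_variables(variables):
    """Transparent tensor-to-scalar expansion of a user-supplied
    {field: topology_descriptor} dictionary (sketch)."""
    expanded = {}
    for (field, topo) in variables.items():
        if isinstance(field, TensorField):
            for f in field.fields:
                expanded[f] = topo
        else:
            expanded[field] = topo
    return expanded

def get_topo_descriptor(variables, field):
    """Get the topology descriptor of a scalar or tensor field,
    hiding the scalar/tensor distinction (sketch)."""
    if isinstance(field, TensorField):
        descriptors = set(variables[f] for f in field.fields)
        assert len(descriptors) == 1, 'components live on different topologies'
        return descriptors.pop()
    return variables[field]

# the user supplies a tensor field, the graph node only sees its scalar components
Vx, Vy, Vz = (ScalarField(n) for n in ('Vx', 'Vy', 'Vz'))
V = TensorField('V', (Vx, Vy, Vz))
variables = expand_variables({V: 'topo0'})
assert set(variables) == {Vx, Vy, Vz}
assert get_topo_descriptor(variables, V) == 'topo0'
```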

TODO list:

  • Merge and patch master
  • Patch all C ordered operators
  • Patch all fortran ordered operators
  • Patch all tests
  • Patch all examples
  • Patch all notebooks

MISCELLANEOUS:

  • Changed licence to Apache v2
  • Better README.md and badges.
  • Add gradient criteria to Taylor-Green
  • Add all timestep criteria to Taylor-Green for comparison purposes.
  • Gradient and MultiDerivative as a directional operator generator
  • Convert DirectionalSplitting operators from ComputationalGraph to ComputationalNodeGenerator.
  • Deterministic topology creation
  • Fixed symbolic codegen kernel caching
  • Fixed OpenCL select promotion bug
  • Fixed bilevel OpenCL advection.

Features that may be implemented later:

  • Find a better way to initialize multi-topology tensors; the current approach is a bit of a mess.
  • Add iterators and __getitem__ to scalar containers if required to ease scalar/tensor agnostic implementations.