coremltools 5.0

TobyRoseman released this 04 Oct 19:17
· 281 commits to main since this release
89d058f

What’s New

  • Added a new Core ML model type called ML Program. TensorFlow and PyTorch models can now be converted to ML Programs.
    • To learn about ML Programs, how they differ from the classic Core ML neural network types, and what they offer, please see the documentation here
    • Use the convert_to argument with the unified converter API to indicate the model type of the Core ML model.
      • coremltools.convert(..., convert_to="mlprogram") converts to a Core ML model of type ML Program.
      • coremltools.convert(..., convert_to="neuralnetwork") converts to a Core ML model of type neural network. “Neural network” is the older Core ML format and continues to be supported. Calling coremltools.convert(...) without convert_to defaults to producing a neural network Core ML model.
    • When targeting ML Program, an additional option sets the compute precision of the Core ML model to either float32 or float16. The default is float16. Usage example:
      • ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT32) or ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT16)
      • To learn more about how this affects the runtime, see the documentation on Typed execution.
  • You can save to the new Model Package format through the usual coremltools save method. Simply use model.save("<model_name>.mlpackage") instead of the usual model.save("<model_name>.mlmodel")
    • Core ML is introducing a new model format called model packages. It’s a container that stores each of a model’s components in its own file, separating out its architecture, weights, and metadata. By separating these components, model packages allow you to easily edit metadata and track changes with source control. They also compile more efficiently, and provide more flexibility for tools which read and write models.
    • ML Programs can only be saved in the model package format.
  • Added the compute_units parameter to MLModel and coremltools.convert. It matches MLComputeUnits in Swift and Objective-C. Use this parameter to specify where your models can run:
    • ALL - use all compute units available, including the neural engine.
    • CPU_ONLY - limit the model to only use the CPU.
    • CPU_AND_GPU - use both the CPU and GPU, but not the neural engine.
  • Python 3.9 Support
  • Native M1 support for Python 3.8 and 3.9
  • Support for TensorFlow 2.5
  • Support Torch 1.9.0
  • New Torch ops: affine_grid_generator, einsum, expand, grid_sampler, GRU, linear, index_put, maximum, minimum, SiLU, sort, torch_tensor_assign, zeros_like.
  • Added a flag to skip loading a model during conversion. This is useful when converting for a new macOS version on an older macOS:
    ct.convert(..., skip_model_load=True)
  • Various bug fixes, optimizations and additional testing.

Deprecations and Removals

  • Caffe converter has been removed. If you are still using the Caffe converter, please use coremltools 4.
  • The Keras.io and ONNX converters will be deprecated in coremltools 6. We recommend transitioning to TensorFlow/PyTorch conversion via the unified converter API.
  • Methods such as convert_neural_network_weights_to_fp16() and convert_neural_network_spec_weights_to_fp16(), which were deprecated in coremltools 4, have been removed.
  • The useCPUOnly parameter for MLModel and MLModel.predict has been deprecated. Instead, use the compute_units parameter for MLModel and coremltools.convert.