Class GraphModel<ModelURL>

A tf.GraphModel is a directed, acyclic graph built from a SavedModel GraphDef and allows inference execution.

A tf.GraphModel can only be created by loading a model that has been converted from a TensorFlow SavedModel with the command line converter tool; such a converted model is loaded via tf.loadGraphModel.
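
A minimal loading sketch, assuming a model already converted with the command line converter tool and hosted at a hypothetical URL (tf is the TensorFlow.js namespace, as in the save() example further down):

    const model = await tf.loadGraphModel('https://example.com/converted_model/model.json');
    console.log(model.inputNodes);   // names of the graph's input nodes
    console.log(model.outputNodes);  // names of the graph's default output nodes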

Type Parameters

  • ModelURL extends Url = string | io.IOHandler

Implements

  • InferenceModel

Constructors

  • Type Parameters

    • ModelURL extends Url = string | IOHandler

    Parameters

    • modelUrl: ModelURL

      url for the model, or an io.IOHandler.

    • Optional loadOptions: LoadOptions
    • Optional tfio: __module

    Returns GraphModel<ModelURL>

Accessors

  • get inputNodes(): string[]
  • Returns string[]

  • get inputs(): TensorInfo[]
  • Return the array of input tensor info.

    Returns TensorInfo[]

  • get metadata(): {}
  • Returns {}

  • get modelSignature(): {}
  • Returns {}

  • get modelStructuredOutputKeys(): {}
  • Returns {}

  • get modelVersion(): string
  • Returns string

  • get outputNodes(): string[]
  • Returns string[]

  • get outputs(): TensorInfo[]
  • Return the array of output tensor info.

    Returns TensorInfo[]

  • get weights(): NamedTensorsMap
  • Returns NamedTensorsMap

Methods

  • dispose()

    Releases the memory used by the weight tensors and resourceManager.

    Returns void

  • disposeIntermediateTensors()

    Disposes intermediate tensors for model debugging mode (flag KEEP_INTERMEDIATE_TENSORS is true).

    Returns void

  • execute(inputs, outputs?)

    Executes inference for the model for the given input tensors.

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap

      Tensor, tensor array or tensor map of the inputs for the model, keyed by the input node names.

    • Optional outputs: string | string[]

      Output node name(s) from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.

    Returns Tensor<Rank> | Tensor<Rank>[]

    A single tensor if a single output is requested, or if no outputs are specified and the model has only one default output; otherwise a tensor array. The order of the tensor array matches the outputs argument if provided, otherwise the order of the model's outputNodes attribute.
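
    A short sketch, assuming model has been loaded with tf.loadGraphModel and that the node names 'input_tensor', 'Identity:0' and 'embedding_node' are hypothetical placeholders for names found in the converted graph:

      // Single default output: execute() returns one tf.Tensor.
      const scores = model.execute(tf.zeros([1, 224, 224, 3]));

      // Requesting named (possibly intermediate) nodes returns a Tensor[]
      // in the same order as the requested outputs.
      const [probs, embedding] = model.execute(
          {'input_tensor': tf.zeros([1, 224, 224, 3])},
          ['Identity:0', 'embedding_node']);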

  • executeAsync(inputs, outputs?)

    Executes inference for the model for the given input tensors in an async fashion. Use this method when your model contains control flow ops.

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap

      Tensor, tensor array or tensor map of the inputs for the model, keyed by the input node names.

    • Optional outputs: string | string[]

      Output node name(s) from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.

    Returns Promise<Tensor<Rank> | Tensor<Rank>[]>

    A Promise of a single tensor if a single output is requested, or if no outputs are specified and the model has only one default output; otherwise a Promise of a tensor array.
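
    A short sketch, assuming a single-input, single-output model (so the resolved value is one tensor) that has already been loaded as model:

      // Models containing control flow ops (e.g. loops generated by
      // tf.dynamic_rnn) cannot run synchronously; await executeAsync() instead.
      const result = await model.executeAsync(tf.zeros([1, 224, 224, 3]));
      result.print();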

  • getIntermediateTensors()

    Gets intermediate tensors for model debugging mode (flag KEEP_INTERMEDIATE_TENSORS is true).

    Returns NamedTensorsMap
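
    A debugging sketch, assuming the flag is enabled through tf.env() before inference and that 'path/to/model.json' is a hypothetical model location:

      tf.env().set('KEEP_INTERMEDIATE_TENSORS', true);
      const model = await tf.loadGraphModel('path/to/model.json');  // hypothetical path
      model.predict(tf.zeros([1, 224, 224, 3]));
      const intermediates = model.getIntermediateTensors();  // NamedTensorsMap keyed by node name
      console.log(Object.keys(intermediates));
      model.disposeIntermediateTensors();  // release them once inspected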

  • load()

    Loads the model and weight files, constructs the in-memory weight map and compiles the inference graph.

    Returns UrlIOHandler<ModelURL> extends IOHandlerSync
        ? boolean
        : Promise<boolean>

  • loadSync(artifacts)

    Synchronously constructs the in-memory weight map and compiles the inference graph.

    Parameters

    • artifacts: ModelArtifacts

    Returns boolean

  • predict(inputs, config?)

    Executes inference for the given input tensors.

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap
    • Optional config: ModelPredictConfig

      Prediction configuration for specifying the batch size. Currently the batch size option is ignored for graph models.

    Returns Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap

    Inference result tensors. If the model was converted and it originally had structured_outputs in TensorFlow, a NamedTensorMap matching the structured_outputs is returned. If no structured_outputs are present, the output is a single tf.Tensor if the model has a single output node, otherwise a Tensor[].

    See also GraphModel.inputNodes.

    You can also feed any intermediate nodes using the NamedTensorMap as the input type. For example, given the graph InputNode => Intermediate => OutputNode, you can execute the subgraph Intermediate => OutputNode by calling model.execute({'IntermediateNode': tf.tensor(...)});

    This is useful for models that use tf.dynamic_rnn, where the intermediate state needs to be fed manually.

    For batch inference execution, the tensors for each input need to be concatenated together. For example, with MobileNet the required input shape is [1, 224, 224, 3], which represents [batch, height, width, channel]. If we provide batched data of 100 images, the input tensor should have the shape [100, 224, 224, 3].
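
    A minimal sketch of single versus batched prediction, reusing the MobileNet model URL from the save() example below (zero-valued inputs are used only to illustrate the shapes):

      const model = await tf.loadGraphModel(
          'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json');

      // Single image: [batch, height, width, channel] = [1, 224, 224, 3].
      model.predict(tf.zeros([1, 224, 224, 3])).print();

      // Batch of 100 images concatenated along the batch dimension.
      const batched = model.predict(tf.zeros([100, 224, 224, 3]));
      console.log(batched.shape);  // leading dimension is the batch size, 100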

  • predictAsync(inputs, config?)

    Executes inference for the given input tensors in an async fashion. Use this method when your model contains control flow ops.

    Parameters

    • inputs: Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap
    • Optional config: ModelPredictConfig

      Prediction configuration for specifying the batch size. Currently the batch size option is ignored for graph models.

    Returns Promise<Tensor<Rank> | Tensor<Rank>[] | NamedTensorMap>

    A Promise of the inference result tensors. If the model was converted and it originally had structured_outputs in TensorFlow, a NamedTensorMap matching the structured_outputs is returned. If no structured_outputs are present, the output is a single tf.Tensor if the model has a single output node, otherwise a Tensor[].

    See also GraphModel.inputNodes.

    You can also feed any intermediate nodes using the NamedTensorMap as the input type. For example, given the graph InputNode => Intermediate => OutputNode, you can execute the subgraph Intermediate => OutputNode by calling model.execute({'IntermediateNode': tf.tensor(...)});

    This is useful for models that use tf.dynamic_rnn, where the intermediate state needs to be fed manually.

    For batch inference execution, the tensors for each input need to be concatenated together. For example, with MobileNet the required input shape is [1, 224, 224, 3], which represents [batch, height, width, channel]. If we provide batched data of 100 images, the input tensor should have the shape [100, 224, 224, 3].

  • save(handlerOrURL, config?)

    Saves the configuration and/or weights of the GraphModel.

    An IOHandler is an object that has a save method of the proper signature defined. The save method manages the storing or transmission of serialized data ("artifacts") that represent the model's topology and weights onto or via a specific medium, such as file downloads, local storage, IndexedDB in the web browser and HTTP requests to a server. TensorFlow.js provides IOHandler implementations for a number of frequently used saving mediums, such as tf.io.browserDownloads and tf.io.browserLocalStorage. See tf.io for more details.

    This method also allows you to refer to certain types of IOHandlers as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.

    Example 1: Save the model's topology and weights to browser local storage; then load it back.

      const modelUrl =
          'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
      const model = await tf.loadGraphModel(modelUrl);
      const zeros = tf.zeros([1, 224, 224, 3]);
      model.predict(zeros).print();

      const saveResults = await model.save('localstorage://my-model-1');

      const loadedModel = await tf.loadGraphModel('localstorage://my-model-1');
      console.log('Prediction from loaded model:');
      loadedModel.predict(zeros).print();

    Parameters

    • handlerOrURL: string | IOHandler

      An instance of IOHandler or a URL-like, scheme-based string shortcut for an IOHandler.

    • Optional config: SaveConfig

      Options for saving the model.

    Returns Promise<SaveResult>

    A Promise of SaveResult, which summarizes the result of the saving, such as byte sizes of the saved artifacts for the model's topology and weight values.