Human library main class

All methods and properties are available only as members of the Human class

  • Configuration object definition: Config
  • Results object definition: Result
  • Possible inputs: Input


Constructors

  • Constructor for the Human library that is further used for all operations

    Parameters

    • Optional userConfig: Partial<Config>

      user configuration object Config

    Returns Human
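As a usage sketch (the import path and the config keys shown are assumptions based on common Human setups; consult Config for the authoritative schema):

```typescript
// Hedged sketch: construct Human with a partial configuration.
// The keys backend and modelBasePath are illustrative assumptions,
// not a complete or authoritative Config.
import { Human } from '@vladmandic/human';

const userConfig = {
  backend: 'webgl',            // assumed key: which TFJS backend to use
  modelBasePath: '../models/', // assumed key: where model files are loaded from
};
const human = new Human(userConfig); // userConfig is optional; defaults apply when omitted
```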

Properties

config: Config

Current configuration

draw: draw = draw

Draw helper classes that can draw detected objects on canvas using specified draw methods

  • canvas: draws input to canvas
  • options: global settings for all draw operations, can be overridden for each draw method DrawOptions
  • face, body, hand, gesture, object, person: draws detected results as overlays on canvas

env: Env = env

Object containing environment information used for diagnostics

events: undefined | EventTarget

Container for events dispatched by Human. Possible events:

  • create: triggered when Human object is instantiated
  • load: triggered when models are loaded (explicitly or on-demand)
  • image: triggered when input image is processed
  • result: triggered when detection is complete
  • warmup: triggered when warmup is complete
  • error: triggered on some errors
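Since events is a standard EventTarget (when available), listeners can be attached with addEventListener. A self-contained sketch, using a plain EventTarget in place of human.events, with the event names listed above:

```typescript
// Sketch: subscribe to lifecycle events; a plain EventTarget stands in for
// human.events (which may be undefined when EventTarget is unavailable).
const events = new EventTarget();
const seen: string[] = [];
for (const name of ['create', 'load', 'image', 'result', 'warmup', 'error']) {
  events.addEventListener(name, () => seen.push(name));
}
events.dispatchEvent(new Event('load'));
events.dispatchEvent(new Event('result'));
console.log(seen); // ['load', 'result']
```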

faceTriangulation: number[]

Reference face triangulation array of 468 points, used for triangle references between points

faceUVMap: [number, number][]

Reference UV map of 468 values, used for 3D mapping of the face mesh

match: match = match

Face Matching

  • similarity: compare two face descriptors and return similarity index
  • distance: compare two face descriptors and return raw calculated differences
  • find: compare face descriptor to array of face descriptors and return best match
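The underlying idea can be sketched with a plain euclidean distance between descriptor vectors (illustrative only; Human's actual match functions support configurable metrics and normalization):

```typescript
// Illustrative only: euclidean distance between two face descriptors and a
// similarity score derived from it. Not Human's exact normalization.
function distance(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
  return Math.sqrt(sum);
}

function similarity(a: number[], b: number[]): number {
  return 1 / (1 + distance(a, b)); // identical descriptors -> 1
}

console.log(similarity([0.1, 0.2], [0.1, 0.2])); // 1
console.log(distance([0, 0], [3, 4]));           // 5
```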

performance: Record<string, number>

Performance object that contains values for all recently performed operations

process: {
    canvas: null | AnyCanvas;
    tensor: null | Tensor<Rank>;
}

Currently processed image tensor and canvas

result: Result

Last known result of detect run

  • Can be accessed anytime after initial detection

state: string

Current state of Human library

  • Can be polled to determine operations that are currently executed
  • Progresses through: 'config', 'check', 'backend', 'load', 'run:', 'idle'

tf: any

Instance of TensorFlow/JS used by Human

  • Can be embedded or externally provided TFJS API

version: string

Current version of Human library in semver format

webcam: WebCam = ...

WebCam helper methods

Methods

  • Internal function to measure tensor leaks

    Parameters

    • Rest ...msg: string[]

    Returns void

  • Compare two input tensors for pixel similarity

    • use human.image to process any valid input and get a tensor that can be used for compare
    • when passing manually generated tensors:
      • both input tensors must be in format [1, height, width, 3]
      • if the resolution of the tensors does not match, the second tensor will be resized to match the resolution of the first
    • return value is a pixel similarity score normalized by input resolution and RGB channels

    Parameters

    Returns Promise<number>

  • emit event

    Parameters

    • event: string

    Returns void

  • Process input and return canvas and tensor

    Parameters

    • input: Input

      any input Input

    • getTensor: boolean = false

      should image processing also return a tensor or just a canvas

    Returns Promise<{
        canvas: null | AnyCanvas;
        tensor: null | Tensor4D;
    }>

  • Explicit backend initialization

    • Normally done implicitly during initial load phase
    • Call to explicitly register and initialize TFJS backend without any other operations
    • Use when changing backend during runtime

    Returns Promise<void>

  • Load method preloads all configured models on-demand

    • Not explicitly required as any required model is loaded implicitly on its first run

    Parameters

    Returns Promise<void>

  • Runs interpolation using the last known result and returns a smoothed result. Interpolation is based on time since the last known result, so it can be called independently

    Parameters

    • result: Result = ...

      optional: use a specific result set to run interpolation on

    Returns Result

    result - Result
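The time-based idea behind this can be sketched as a simple blend (Human interpolates whole Result structures; this reduces it to one number, and the 100 ms smoothing window is an arbitrary assumption for the sketch):

```typescript
// Simplified illustration of time-based interpolation: blend the previous and
// current values by elapsed time within a smoothing window.
function interpolate(prev: number, curr: number, elapsedMs: number, windowMs = 100): number {
  const t = Math.min(elapsedMs / windowMs, 1); // blend factor in [0, 1]
  return prev + (curr - prev) * t;
}

console.log(interpolate(0, 10, 50));  // 5 (halfway through the window)
console.log(interpolate(0, 10, 200)); // 10 (window elapsed, fully caught up)
```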

  • Utility wrapper for performance.now()

    Returns number

  • Run detect with tensorflow profiling

    • result object will contain total execution time information for top-20 kernels
    • actual detection object can be accessed via human.result

    Parameters

    Returns Promise<{
        kernel: string;
        perc: number;
        time: number;
    }[]>

  • Reset configuration to default values

    Returns void

  • Segmentation method takes any input and returns an RGBA tensor. Note: Segmentation is not triggered as part of the detect process

    Parameters

    • input: Input

      any input Input

    • Optional userConfig: Partial<Config>

    Returns Promise<null | Tensor<Rank>>

  • Helper function that sleeps for a specified time

    Parameters

    • ms: number

      sleep time in milliseconds

    Returns Promise<void>
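A functionally equivalent helper is a one-liner (a sketch, not Human's exact implementation):

```typescript
// Promise-based sleep: resolves after roughly the given number of milliseconds.
const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

async function demo(): Promise<number> {
  const start = Date.now();
  await sleep(20);
  return Date.now() - start; // typically >= 20
}
demo().then((elapsed) => console.log(elapsed));
```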

  • Validate current configuration schema

    Parameters

    • Optional userConfig: Partial<Config>

    Returns {
        expected?: string;
        reason: string;
        where: string;
    }[]

  • Continuously detect video frames

    Parameters

    • element: HTMLVideoElement

      HTMLVideoElement input

    • run: boolean = true

      run continuously, or stop if already running; default true

    • delay: number = 0

      delay detection between frames by the given number of milliseconds; default 0

    Returns Promise<void>
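The loop pattern this method implements can be sketched as follows. Here detectFrame is a hypothetical stand-in for the per-frame detect call, and the loop is bounded for the sketch where the real one runs until stopped:

```typescript
// Sketch of a continuous detect loop with an optional inter-frame delay.
// detectFrame is a placeholder; Human's video() runs detect on the video element.
async function detectLoop(
  detectFrame: () => Promise<void>,
  frames: number, // bounded for the sketch; the real loop runs until stopped
  delay = 0,      // milliseconds to wait between frames
): Promise<number> {
  let processed = 0;
  while (processed < frames) {
    await detectFrame();
    processed++;
    if (delay > 0) await new Promise<void>((resolve) => setTimeout(resolve, delay));
  }
  return processed;
}

detectLoop(async () => { /* detect one frame */ }, 3).then((n) => console.log(n)); // 3
```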

  • Warmup method pre-initializes all configured models for faster inference

    • can take significant time on startup
    • only used for webgl and humangl backends

    Parameters

    Returns Promise<undefined | Result>

    result - Result