
Human library main class

All methods and properties are available only as members of Human class

  • Configuration object definition: Config
  • Results object definition: Result
  • Possible inputs: Input
param userConfig

Config

returns

instance of Human

Hierarchy

  • Human

Index

Constructors

constructor

  • Constructor for Human library that is further used for all operations

    Parameters

    • Optional userConfig: Partial<Config>

      user configuration object Config

    Returns Human

Properties

config

config: Config

Current configuration

distance

distance: (descriptor1: Descriptor, descriptor2: Descriptor, options?: MatchOptions) => number = match.distance

Type declaration

    • Calculates distance between two descriptors

      Parameters

      • descriptor1: Descriptor
      • descriptor2: Descriptor
      • options: MatchOptions = ...

        calculation options

        • order - which distance algorithm to use: Euclidean distance if order is 2 (default), Minkowski distance of nth order if order is higher than 2
        • multiplier - how much to enhance difference analysis, in range of 1..100; default is 20, which normalizes results so that similarity above 0.5 can be considered a match

      Returns number
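The order and multiplier options described above can be sketched in a few lines. This is a minimal illustration of the documented semantics, assuming multiplier scales the accumulated difference; descriptorDistance and its defaults are illustrative names, not the library's internal implementation:

```typescript
// Hedged sketch of the documented distance semantics: squared Euclidean
// difference when order is 2, |a-b|^order (Minkowski-style) otherwise,
// with multiplier amplifying the accumulated difference.
type Descriptor = number[];

function descriptorDistance(d1: Descriptor, d2: Descriptor, order = 2, multiplier = 20): number {
  let sum = 0;
  for (let i = 0; i < d1.length; i++) {
    sum += order === 2
      ? (d1[i] - d2[i]) * (d1[i] - d2[i]) // squared difference for Euclidean
      : Math.abs(d1[i] - d2[i]) ** order; // nth-power difference for Minkowski
  }
  return multiplier * sum; // multiplier enhances the difference analysis
}
```

With multiplier at its default of 20, even small per-element differences are amplified, which is what pushes dissimilar descriptors well past the 0.5 similarity threshold mentioned above.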

draw

draw: { all: (inCanvas: AnyCanvas, result: Result, drawOptions?: Partial<DrawOptions>) => Promise<null | [void, void, void, void, void]>; body: (inCanvas: AnyCanvas, result: BodyResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; canvas: (input: AnyCanvas | HTMLImageElement | HTMLMediaElement | HTMLVideoElement, output: AnyCanvas) => Promise<void>; face: (inCanvas: AnyCanvas, result: FaceResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; gesture: (inCanvas: AnyCanvas, result: GestureResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; hand: (inCanvas: AnyCanvas, result: HandResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; object: (inCanvas: AnyCanvas, result: ObjectResult[], drawOptions?: Partial<DrawOptions>) => Promise<void>; options: DrawOptions; person: (inCanvas: AnyCanvas, result: PersonResult[], drawOptions?: Partial<DrawOptions>) => Promise<void> }

Draw helper classes that can draw detected objects on canvas using specified draw options

  • canvas: draws input to canvas
  • options: global settings for all draw operations; can be overridden for each draw method DrawOptions
  • face, body, hand, gesture, object, person: draws detected results as overlays on canvas

Type declaration

env

env: Env

Object containing environment information used for diagnostics

events

events: undefined | EventTarget

Container for events dispatched by Human. Possible events:

  • create: triggered when Human object is instantiated
  • load: triggered when models are loaded (explicitly or on-demand)
  • image: triggered when input image is processed
  • result: triggered when detection is complete
  • warmup: triggered when warmup is complete
  • error: triggered on some errors
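Since events is a standard EventTarget, the events listed above can be consumed with addEventListener. In this sketch a plain EventTarget stands in for human.events (which may be undefined in environments without EventTarget support):

```typescript
// Sketch of consuming Human's lifecycle events; the EventTarget created here
// stands in for human.events, and the event names are the documented ones.
const events: EventTarget | undefined = new EventTarget();

const seen: string[] = [];
for (const name of ['create', 'load', 'image', 'result', 'warmup', 'error']) {
  // record each event type as it fires; optional chaining guards the undefined case
  events?.addEventListener(name, (evt) => seen.push(evt.type));
}

// simulate the library signaling that detection is complete
events?.dispatchEvent(new Event('result'));
```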

faceTriangulation

faceTriangulation: number[]

Reference face triangulation array of 468 points, used for triangle references between points

faceUVMap

faceUVMap: [number, number][]

Reference UV map of 468 values, used for 3D mapping of the face mesh

gl

gl: Record<string, unknown>

WebGL debug info

match

match: (descriptor: Descriptor, descriptors: Descriptor[], options?: MatchOptions) => { distance: number; index: number; similarity: number } = match.match

Type declaration

    • (descriptor: Descriptor, descriptors: Descriptor[], options?: MatchOptions): { distance: number; index: number; similarity: number }
    • Matches given descriptor to a closest entry in array of descriptors

      Parameters

      • descriptor: Descriptor

        face descriptor

      • descriptors: Descriptor[]

        array of face descriptors to compare given descriptor to

      • options: MatchOptions = ...

        match options; see similarity

        Returns:

        • index - array index where the best match was found, or -1 if there are no matches
        • distance - calculated distance of given descriptor to the best match
        • similarity - calculated normalized similarity of given descriptor to the best match

      Returns { distance: number; index: number; similarity: number }

      • distance: number
      • index: number
      • similarity: number
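The lookup described above can be sketched as a linear scan over the candidate array. matchDescriptor, its squared-Euclidean distance, and its 1/(1+distance) similarity are simplified stand-ins for illustration, not the library's exact math; only the return shape and the -1 convention follow the documentation:

```typescript
// Hedged sketch of matching one descriptor against an array, following the
// documented return shape { index, distance, similarity } and the -1 index
// convention when no match exists.
type Descriptor = number[];

function matchDescriptor(descriptor: Descriptor, descriptors: Descriptor[]) {
  let best = { index: -1, distance: Number.POSITIVE_INFINITY, similarity: 0 };
  for (let i = 0; i < descriptors.length; i++) {
    let dist = 0; // squared Euclidean distance between probe and candidate i
    for (let j = 0; j < descriptor.length; j++) dist += (descriptor[j] - descriptors[i][j]) ** 2;
    if (dist < best.distance) best = { index: i, distance: dist, similarity: 1 / (1 + dist) };
  }
  return best;
}
```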

performance

performance: Record<string, number>

Performance object that contains values for all recently performed operations

process

process: { canvas: null | AnyCanvas; tensor: null | Tensor<Rank> }

Currently processed image tensor and canvas

Type declaration

result

result: Result

Last known result of detect run

  • Can be accessed anytime after initial detection

similarity

similarity: (descriptor1: Descriptor, descriptor2: Descriptor, options?: MatchOptions) => number = match.similarity

Type declaration

    • Calculates normalized similarity between two face descriptors based on their distance

      Parameters

      • descriptor1: Descriptor
      • descriptor2: Descriptor
      • options: MatchOptions = ...

        calculation options

        • order - which distance algorithm to use: Euclidean distance if order is 2 (default), Minkowski distance of nth order if order is higher than 2
        • multiplier - how much to enhance difference analysis, in range of 1..100; default is 20, which normalizes results so that similarity above 0.5 can be considered a match
        • min - normalize similarity result to a given range
        • max - normalize similarity result to a given range; default is 0.2...0.8

        Returns similarity between two face descriptors normalized to 0..1 range where 0 is no similarity and 1 is perfect similarity

      Returns number
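The min/max normalization described above can be illustrated with a small helper. normalizeSimilarity and its linear mapping are assumptions for illustration, not the library's exact formula; only the 0..1 output range and the 0.2...0.8 default band come from the documentation:

```typescript
// Hedged sketch of mapping a distance to a 0..1 similarity and rescaling it
// with the documented min/max range (default 0.2...0.8): values at or below
// min clamp to 0, values at or above max clamp to 1.
function normalizeSimilarity(distance: number, min = 0.2, max = 0.8): number {
  const raw = Math.max(0, 1 - distance);    // larger distance -> lower raw similarity
  const scaled = (raw - min) / (max - min); // stretch the min..max band across 0..1
  return Math.min(1, Math.max(0, scaled));  // clamp final similarity to 0..1
}
```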

state

state: string

Current state of Human library

  • Can be polled to determine operations that are currently executed
  • Progresses through: 'config', 'check', 'backend', 'load', 'run:', 'idle'

tf

tf: any

Instance of TensorFlow/JS used by Human

version

version: string

Current version of Human library in semver format

Methods

analyze

  • analyze(...msg: string[]): void
  • Internal function to measure tensor leaks

    Parameters

    • Rest ...msg: string[]

    Returns void

compare

  • Compare two input tensors for pixel similarity

    • use human.image to process any valid input and get a tensor that can be used for compare
    • when passing manually generated tensors:
    • both input tensors must be in format [1, height, width, 3]
    • if resolution of tensors does not match, second tensor will be resized to match resolution of the first tensor
    • return value is pixel similarity score normalized by input resolution and rgb channels

    Parameters

    Returns Promise<number>
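The normalization described above can be sketched with plain nested arrays standing in for [1, height, width, 3] tensors. pixelSimilarity and its mean-absolute-difference scoring are illustrative assumptions; the actual compare method is async and operates on TFJS tensors:

```typescript
// Hedged sketch of a pixel difference score normalized by input resolution
// and the three rgb channels, per the documented semantics; 0 means the two
// inputs are pixel-identical.
type ImageTensor = number[][][][]; // [1, height, width, 3]

function pixelSimilarity(a: ImageTensor, b: ImageTensor): number {
  const height = a[0].length;
  const width = a[0][0].length;
  let diff = 0;
  for (let y = 0; y < height; y++)
    for (let x = 0; x < width; x++)
      for (let c = 0; c < 3; c++) diff += Math.abs(a[0][y][x][c] - b[0][y][x][c]);
  return diff / (height * width * 3); // normalize by resolution and rgb channels
}
```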

detect

emit

  • emit(event: string): void

enhance

  • Enhance method performs additional enhancements to previously detected face image for further processing

    Parameters

    • input: Tensor<Rank>

      Tensor as provided in human.result.face[n].tensor

    Returns null | Tensor<Rank>

    Tensor

image

  • Process input and return canvas and tensor

    Parameters

    • input: Input

      any input Input

    • getTensor: boolean = true

      should image processing also return tensor or just canvas

      Returns object with tensor and canvas

    Returns Promise<{ canvas: null | AnyCanvas; tensor: null | Tensor<Rank> }>

init

  • init(): Promise<void>
  • Explicit backend initialization

    • Normally done implicitly during initial load phase
    • Call to explicitly register and initialize TFJS backend without any other operations
    • Use when changing backend during runtime

    Returns Promise<void>

load

  • load(userConfig?: Partial<Config>): Promise<void>
  • Load method preloads all configured models on-demand

    • Not explicitly required as any required model is loaded implicitly on its first run

    Parameters

    Returns Promise<void>

next

  • Runs interpolation using last known result and returns smoothened result. Interpolation is based on time since last known result, so it can be called independently

    Parameters

    • result: Result = ...

      optional; use a specific Result set to run interpolation on

    Returns Result

    result - Result

now

  • now(): number
  • Utility wrapper for performance.now()

    Returns number

profile

  • profile(input: Input, userConfig?: Partial<Config>): Promise<Record<string, number>>
  • Run detect with tensorflow profiling

    • result object will contain total execution time information for top-20 kernels
    • actual detection object can be accessed via human.result

    Parameters

    Returns Promise<Record<string, number>>

reset

  • reset(): void

segmentation

  • Segmentation method takes any input and returns processed canvas with body segmentation

    • Segmentation is not triggered as part of detect process

    Parameters

    • input: Input
    • Optional background: Input

      Input

      • optional parameter background is used to fill the background with specific input

      Returns:

      • data as raw data array with per-pixel segmentation values
      • canvas as canvas which is input image filtered with segmentation data and optionally merged with background image; canvas alpha values are set to segmentation values for easy merging
      • alpha as grayscale canvas that represents segmentation alpha values

    Returns Promise<{ alpha: null | AnyCanvas; canvas: null | AnyCanvas; data: Tensor<Rank> | number[] }>

validate

  • validate(userConfig?: Partial<Config>): { expected?: string; reason: string; where: string }[]
  • Validate current configuration schema

    Parameters

    • Optional userConfig: Partial<Config>

    Returns { expected?: string; reason: string; where: string }[]

warmup

  • Warmup method pre-initializes all configured models for faster inference

    • can take significant time on startup
    • only used for webgl and humangl backends

    Parameters

    Returns Promise<Result>

    result - Result