Perform model loading and inference concurrently or sequentially
default: true
Backend used for TFJS operations. Valid built-in backends are:
- browser: cpu, wasm, webgl, humangl, webgpu
- nodejs: cpu, wasm, tensorflow
default: webgl for browser and tensorflow for nodejs
Body config BodyConfig
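The per-environment backend defaults described above (webgl for browsers, tensorflow for nodejs) can be sketched with a small helper. This is an illustrative function, not part of the library's API:

```typescript
// Valid built-in backend names as listed in the documentation
type Backend = 'cpu' | 'wasm' | 'webgl' | 'humangl' | 'webgpu' | 'tensorflow';

// Hypothetical helper: picks the documented default backend
// when none is configured explicitly
function defaultBackend(isBrowser: boolean): Backend {
  // default: webgl for browser and tensorflow for nodejs
  return isBrowser ? 'webgl' : 'tensorflow';
}
```

A config that leaves the backend unset would effectively resolve to `defaultBackend(...)` for the current environment.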
Cache models in IndexedDB on first successful load. default: true if IndexedDB is available (browsers), false if it is not (nodejs)
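The environment-dependent default above can be sketched as a feature check. This is an assumption about how such a default could be derived, not the library's internal code:

```typescript
// Sketch (assumption): derive the model-caching default from
// IndexedDB availability: present in browsers, absent in plain Node.js
const hasIndexedDB: boolean =
  (globalThis as { indexedDB?: unknown }).indexedDB !== undefined;

// Mirrors the documented default: true in browsers, false in nodejs
const cacheModelsDefault: boolean = hasIndexedDB;
```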
Cache sensitivity
default: 0.7
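One way to read the cache sensitivity value, sketched below, is as a change threshold in the 0..1 range: when the measured difference between consecutive inputs exceeds it, cached results are discarded. This interpretation and the helper are assumptions for illustration, not the library's implementation:

```typescript
// Hypothetical helper: decide whether cached results should be discarded,
// treating cacheSensitivity as a normalized change threshold (0..1)
function shouldInvalidateCache(inputChange: number, cacheSensitivity = 0.7): boolean {
  if (cacheSensitivity === 0) return true; // a sensitivity of 0 would disable caching entirely
  return inputChange > cacheSensitivity;
}
```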
Perform immediate garbage collection on deallocated tensors instead of caching them
Print debug statements to console
default: true
Face config FaceConfig
Filter config FilterConfig
Explicit flags passed to initialize TFJS
Gesture config GestureConfig
Hand config HandConfig
Base model path (typically starting with file://, http:// or https://) for all models
default: ../models/ for browsers and file://models/ for nodejs
Object config ObjectConfig
Segmentation config SegmentationConfig
Internal Variable
Software Kernels: registers software kernel ops running on CPU when an accelerated version of a kernel is not found in the current backend
Validate kernel ops used in model during model load. Any errors are printed to the console but treated as non-fatal.
default: true
What to use for human.warmup(). Used by webgl, humangl and webgpu backends.
default: full
Path to *.wasm files if backend is set to wasm
default: auto-detects to link to the jsdelivr CDN when running in browser
Force WASM loader to use platform fetch
default: false
Configuration interface definition for the Human library. Contains all configurable parameters.
Defaults: config
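To show how the options above combine, here is a sketch of a partial configuration object. The interface below is a simplified stand-in for illustration, not the library's exact typings, and the wasmPath URL is only an example of a jsdelivr-hosted location:

```typescript
// Simplified stand-in for a subset of the configuration options
// described above (not the library's exact Config interface)
interface PartialConfig {
  backend?: string;          // TFJS backend, e.g. 'webgl' or 'wasm'
  debug?: boolean;           // print debug statements to console
  cacheSensitivity?: number; // cache sensitivity, default 0.7
  modelBasePath?: string;    // base model path for all models
  wasmPath?: string;         // path to *.wasm files when backend is 'wasm'
}

// Example partial config: explicit wasm backend with an explicit
// wasm binary path instead of the auto-detected CDN link
const config: PartialConfig = {
  backend: 'wasm',
  wasmPath: 'https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-wasm/dist/',
  debug: true,
  cacheSensitivity: 0.7,
};
```

Options left out of such a partial config would fall back to the documented defaults.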