dlpy.image_captioning.create_embeddings_from_object_detection

dlpy.image_captioning.create_embeddings_from_object_detection(conn, image_table, detection_model, word_embeddings_file, n_threads=None, gpu=None, max_objects=5, word_delimiter='\t')

Builds a CASTable containing the objects detected in images as numeric data.

Parameters
conn : CAS

Specifies the CAS connection object.

image_table : ImageTable

Specifies the name of the CASTable that contains the images to be used for training.

detection_model : CASTable or string

Specifies the CASTable containing the model parameters for the object detection model.

word_embeddings_file : string

Specifies the full path to the file containing pre-trained word vectors to be used for text generation. This file should be accessible from the client. See the sketch following the parameter list for the expected file layout.

n_threads : int, optional

Specifies the number of threads to use when scoring the table. All available cores are used when nothing is set. Default : None

gpu : Gpu, optional

Specifies which GPU to use when scoring the table. GPU=1 uses all available GPU devices and default parameters. Default : None

max_objects : int, optional

Specifies the maximum number of detected objects to use, if less than five. Default : 5

word_delimiter : string, optional

Specifies the delimiter used in the word_embeddings_file. Default : '\t'
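
The word_embeddings_file is generally a plain-text file with one word per line, followed by that word's vector components separated by word_delimiter (for example, GloVe-style vectors exported with tab delimiters). The sketch below is illustrative only; the toy vocabulary, vector length, and file name are assumptions, not values required by DLPy.

# Illustrative only: write a tiny tab-delimited word vectors file in the
# assumed layout (word, then delimiter-separated vector components per line).
toy_vectors = {
    'dog':  [0.11, -0.38, 0.27],
    'ball': [0.42, 0.05, -0.19],
}
with open('toy_word_vectors.txt', 'w') as f:   # hypothetical file name
    for word, vector in toy_vectors.items():
        f.write(word + '\t' + '\t'.join(str(v) for v in vector) + '\n')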

Returns

CASTable
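
A minimal usage sketch, assuming an active CAS session, a directory of images, a pre-loaded object detection model table named 'yolo_model', and a word vectors file at '/data/glove_word_vectors.txt'; the host name, port, paths, and table names here are illustrative, not part of the API.

from swat import CAS
from dlpy.images import ImageTable
from dlpy.image_captioning import create_embeddings_from_object_detection

# Connect to CAS (hypothetical host and port).
conn = CAS('cas-server.example.com', 5570)

# Load the images to score into an ImageTable (hypothetical path).
img_tbl = ImageTable.load_files(conn, path='/data/images')

# Build numeric embeddings from the objects detected in each image.
object_embeddings = create_embeddings_from_object_detection(
    conn,
    image_table=img_tbl,
    detection_model=conn.CASTable('yolo_model'),           # hypothetical model table
    word_embeddings_file='/data/glove_word_vectors.txt',   # hypothetical file
    max_objects=5,
    word_delimiter='\t')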