dlpy.image_captioning.ImageCaptioning

dlpy.image_captioning.ImageCaptioning(conn, model_name='image_captioning', num_blocks=3, neurons=50, rnn_type='LSTM', max_output_len=15)

Builds an RNN to be used for image captioning.

Parameters
conn : CAS

Specifies the CAS connection object.

model_name : string, optional

Specifies the output name of the model. Default: ‘image_captioning’

num_blocks : int, optional

Specifies the number of same-length recurrent layers. Default: 3

neurons : int, optional

Specifies the number of neurons in each layer. Default: 50

rnn_type : string, optional

Specifies the type of the RNN layer. Possible values: RNN, LSTM, GRU. Default: LSTM

max_output_len : int, optional

Specifies the maximum number of tokens to generate in the final layer (i.e., the maximum caption length). Default: 15

Returns
CASTable
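
Examples

A minimal usage sketch, assuming an active CAS server; the host name, port, and parameter values below are illustrative placeholders rather than part of DLPy:

>>> import swat
>>> from dlpy.image_captioning import ImageCaptioning
>>> # Connect to a CAS server (hypothetical host and port)
>>> conn = swat.CAS('cas-server.example.com', 5570)
>>> # Build a captioning RNN with three GRU layers of 100 neurons each,
>>> # generating captions of at most 20 tokens
>>> captioner = ImageCaptioning(conn,
...                             model_name='my_captioner',
...                             num_blocks=3,
...                             neurons=100,
...                             rnn_type='GRU',
...                             max_output_len=20)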