When I implement a deep belief network for representation learning, I'm confused about how to obtain the hidden-layer representation of the original data matrix.
Using sigmoid_layers[-1].output doesn't seem to work: the representation I get for the matrix is all zeros.
Has anybody run into this?
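For context, here is a plain-numpy sketch (with hypothetical layer sizes and random weights, just to illustrate) of what I expect sigmoid_layers[-1].output to encode: the input matrix forward-propagated through the stacked sigmoid layers, giving the top-layer activations as the learned representation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_representation(X, weights, biases):
    """Forward-propagate X through stacked sigmoid layers.

    The final activation matrix is what I expect
    sigmoid_layers[-1].output to represent for the input X.
    """
    h = X
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)
    return h

# toy example: 4 samples with 5 features, two hidden layers (5 -> 3 -> 2)
rng = np.random.RandomState(0)
X = rng.rand(4, 5)
weights = [rng.randn(5, 3), rng.randn(3, 2)]
biases = [np.zeros(3), np.zeros(2)]

rep = hidden_representation(X, weights, biases)
print(rep.shape)  # one 2-dimensional representation per sample: (4, 2)
```

With random weights the entries of rep lie strictly between 0 and 1, so I would not expect an all-zero matrix from a trained network either.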