This bug plagued me for a whole day. I finally found the fix on Stack Overflow and it worked, so I'm writing it down here.
The root cause: the data I was feeding had shape (256, 18), but the placeholder expected shape (?, 1). The shapes don't match and have to be made consistent.
I modified my code along those lines.
First, the original code that raised the error:
saver = tf.train.Saver()
for epoch_i in range(num_epochs):
    train_X, test_X, train_y, test_y = train_test_split(features,
                                                        targets_values,
                                                        test_size=0.2,
                                                        random_state=0)
    train_batches = get_batches(train_X, train_y, batch_size)
    test_batches = get_batches(test_X, test_y, batch_size)
    for batch_i in range(len(train_X) // batch_size):
        x, y = next(train_batches)
        categories = np.zeros([batch_size, 18])
        for i in range(batch_size):
            categories[i] = x.take(6, 1)[i]
        titles = np.zeros([batch_size, sentences_size])
        for i in range(batch_size):
            titles[i] = x.take(5, 1)[i]
        feed = {
            uid: np.reshape(x.take(0, 1), [batch_size, 1]),
            user_gender: np.reshape(x.take(2, 1), [batch_size, 1]),
            user_age: np.reshape(x.take(3, 1), [batch_size, 1]),
            user_job: np.reshape(x.take(4, 1), [batch_size, 1]),
            movie_id: np.reshape(x.take(1, 1), [batch_size, 1]),
            movie_categories: categories,  # shape (256, 18) -- the mismatch
            movie_titles: titles,          # shape (256, sentences_size) -- same problem
            targets: np.reshape(y, [batch_size, 1]),
            dropout_keep_prob: dropout_keep,
            lr: learning_rate
        }
        step, train_loss, summaries, _ = sess.run([global_step, loss, train_summary_op, train_op], feed)
        losses['train'].append(train_loss)
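As a side note on the x.take(i, 1) calls above: take with axis=1 pulls one column out of the batch array, and the surrounding np.reshape(..., [batch_size, 1]) turns that 1-D column into the 2-D column vector a (?, 1) placeholder expects. A minimal sketch with a made-up 4-row, 7-column batch (the column layout is hypothetical, just standing in for uid, movie_id, etc.):

```python
import numpy as np

# Hypothetical mini-batch: 4 rows, 7 feature columns, standing in for
# the uid / movie_id / gender / ... columns in the real data.
x = np.arange(28).reshape(4, 7)

# x.take(0, 1) pulls column 0 across all rows -- a 1-D array of shape (4,)
uid_col = x.take(0, 1)
print(uid_col.shape)   # (4,)

# A placeholder of shape (?, 1) needs a 2-D column vector, hence the
# np.reshape(..., [batch_size, 1]) wrapping each column in the feed dict.
uid_feed = np.reshape(uid_col, [4, 1])
print(uid_feed.shape)  # (4, 1)
```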
The error was raised on the sess.run([...], feed) line.
After the change (only the feed dict was modified; here are the changed lines):
            movie_categories: categories.reshape(-1, 1),
            movie_titles: titles.reshape(-1, 1),
!! I actually changed this twice. My first attempt was .reshape(batch_size, 1) (with batch_size = 256), which raised:
cannot reshape array of size 4608 into shape (256,1)
Looking into it, reshape requires that the product of the new dimensions equal the total number of elements in the original array, and here 4608 ≠ 256 * 1.
So I changed it to -1. Why -1? Because passing -1 tells NumPy to infer that dimension on its own. I could also have written 4608 by hand, but -1 is more convenient when several places change. Note that at most one dimension per reshape call may be -1.
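The reshape behavior described above can be verified in a few lines of NumPy, using the same (256, 18) shape from the post:

```python
import numpy as np

categories = np.zeros([256, 18])   # 256 * 18 = 4608 elements

# reshape(256, 1) fails: the new shape must hold exactly 4608 elements,
# but 256 * 1 = 256.
try:
    categories.reshape(256, 1)
except ValueError as e:
    print(e)   # cannot reshape array of size 4608 into shape (256,1)

# -1 lets NumPy infer the dimension: 4608 / 1 = 4608 rows.
flat = categories.reshape(-1, 1)
print(flat.shape)   # (4608, 1)

# Only one dimension per reshape call may be -1.
try:
    categories.reshape(-1, -1)
except ValueError as e:
    print(e)   # can only specify one unknown dimension
```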