Hi, thanks for your great work! We're just getting started with the code and were wondering how exactly you generated the templates for the GYAFC dataset. We have all the data labeled train.0, train.1, ..., and we tried to run the get_template_based_result.py command in the terminal, but we keep getting this error:
Starting search for file: ../data/GYAFC/template_result/replace_result.test.0-1.tsf
FileNotFoundError: [Errno 2] No such file or directory: '../data/GYAFC/template_result/replace_result.test.0-1.tsf'
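For what it's worth, here is a small sketch of how one might check up front which of the expected template-result files are present before running the script. The file name is taken from the error message above; `check_template_files` is a hypothetical helper, not part of the repo, and the path list should be adjusted to your data layout.

```python
import os

def check_template_files(paths):
    """Return the subset of expected template-result files that are missing."""
    return [p for p in paths if not os.path.exists(p)]

# File name copied from the error message above; extend for other splits/styles.
expected = ["../data/GYAFC/template_result/replace_result.test.0-1.tsf"]

for p in check_template_files(expected):
    print(f"missing: {p} -- the template step has to produce this before training")
```

This at least confirms that the error is about missing preprocessing output rather than a wrong working directory.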
Also, when we train the DualRL model, will it automatically use the tsf files, and are they optional? Is there a simpler way to preprocess the data that you would recommend? We were also a little confused about the role of the tsf files / pseudo-parallel data in the README and when they come into play: aren't we trying to do this entirely without parallel data?
Sorry for all the questions, looking forward to hearing back!
Follow-up: are we supposed to use https://github.com/lijuncen/Sentiment-and-Style-Transfer to generate the templates? I could not see in the bash script where the tsf files were generated.