Hi, I re-ran your code without changing any hyper-parameters, but I got this result: 0-1_Test(Batch:1600) Senti:77.300 BLEU(4ref):61.125(A:55.520+B:66.730) G-score:68.738 H-score:68.267 Cost time:2.57.
Could you share some experience on how to tune the hyper-parameters so that I can balance sentiment accuracy against the BLEU score? Thank you so much!
Note: The printed logs only show the results on one test set. 0-1_Test... is the performance on test.0, and 1-0_Test... is the performance on test.1.
I thought that in the result log, A:55.520+B:66.730 corresponds to the two directions (A: 0->1 and B: 1->0) and the BLEU score is their average. Am I right?
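As a quick sanity check, the numbers in my log seem consistent with that reading. This is just my own sketch, not code from this repo, and it assumes BLEU is the mean of the two directions while G-score and H-score are the geometric and harmonic means of sentiment accuracy and BLEU:

```python
# Sanity check (my own sketch, not the repo's code): assumes BLEU is the mean of the
# two transfer directions, G-score is the geometric mean of sentiment accuracy and BLEU,
# and H-score is their harmonic mean.
import math

senti = 77.300   # sentiment accuracy from the posted log
bleu_a = 55.520  # direction A (0 -> 1)
bleu_b = 66.730  # direction B (1 -> 0)

bleu = (bleu_a + bleu_b) / 2                 # 61.125, matches BLEU(4ref)
g_score = math.sqrt(senti * bleu)            # ~68.738, matches the logged G-score
h_score = 2 * senti * bleu / (senti + bleu)  # ~68.267, matches the logged H-score

print(f"BLEU: {bleu:.3f}  G-score: {g_score:.3f}  H-score: {h_score:.3f}")
```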
Hi Fuli, I have tried increasing the context reward coefficient from 0.25 to 1.0, and the highest sentiment accuracy I got is 78.7%. Should I increase this coefficient further, or should I tune other hyper-parameters to reach the sentiment accuracy reported in the paper? Thank you so much!