Traceback (most recent call last):
  File "preprocess.py", line 74, in <module>
    eval('data_builder.' + args.mode + '(args)')
  File "<string>", line 1, in <module>
  File "D:\Text-Summarization-with-Pretrained-Encoders\PreSumm-master\src\prepro\data_builder.py", line 133, in tokenize
    subprocess.call(command)
  File "D:\ProgramData\Anaconda3\envs\py37-11-2\lib\subprocess.py", line 339, in call
    with Popen(*popenargs, **kwargs) as p:
  File "D:\ProgramData\Anaconda3\envs\py37-11-2\lib\subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "D:\ProgramData\Anaconda3\envs\py37-11-2\lib\subprocess.py", line 1207, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
When I try to run step 3 on Windows, it raises FileNotFoundError. My command is: python preprocess.py -mode tokenize -raw_path ../raw_stories -save_path ../merged_stories_tokenized
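For context: [WinError 2] from subprocess on Windows means the executable itself could not be found, not the input files. Per the traceback, data_builder.py's tokenize step (line 133) shells out to an external program, which in PreSumm is Stanford CoreNLP launched via java. A minimal diagnostic sketch, assuming that is the command being built (the exact arguments below are illustrative, not PreSumm's literal ones):

import shutil
import subprocess

# WinError 2 means the program being launched was not found on PATH.
# The tokenize step launches Stanford CoreNLP via `java`, so check that
# java resolves before calling subprocess.
if shutil.which("java") is None:
    raise RuntimeError("`java` not found on PATH; install a JDK/JRE and reopen the shell")

# Illustrative command; assumes CLASSPATH points at the CoreNLP jar.
# PreSumm's actual argument list lives in data_builder.py, line 133.
command = [
    "java",
    "edu.stanford.nlp.pipeline.StanfordCoreNLP",
    "-annotators", "tokenize,ssplit",
    "-outputFormat", "json",
]
subprocess.call(command)

If shutil.which("java") returns None, installing Java and restarting the terminal (so PATH changes take effect) should resolve the WinError 2 before the CoreNLP arguments even matter.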