[zhouhh@Hadoop48 examples]$ dumbo start wordcount.py -hadoop /home/zhouhh/hadoop-1.0.3 -input input1 -output output1
zhh parse argv: ['/usr/local/bin/dumbo', 'start', 'wordcount.py', '-hadoop', '/home/zhouhh/hadoop-1.0.3', '-input', 'input1', '-output', 'output1']
zhh sysargv: ['wordcount.py', '-prog', 'wordcount.py', '-input', 'input1', '-hadoop', '/home/zhouhh/hadoop-1.0.3', '-output', 'output1']
zhh parse argv: ['wordcount.py', '-prog', 'wordcount.py', '-input', 'input1', '-hadoop', '/home/zhouhh/hadoop-1.0.3', '-output', 'output1']
/home/zhouhh/hadoop-1.0.3/contrib/streaming/hadoop-streaming-1.0.3.jar -mapper 'python -m wordcount map 0 262144000' -outputformat 'org.apache.hadoop.mapred.SequenceFileOutputFormat' -inputformat 'org.apache.hadoop.streaming.AutoInputFormat' -reducer 'python -m wordcount red 0 262144000' -file '/home/zhouhh/dumbo/examples/wordcount.py' -file '/usr/local/lib/python2.7/site-packages/dumbo-0.21.33-py2.7.egg' -file '/usr/local/lib/python2.7/site-packages/typedbytes-0.3.8-py2.7.egg' -output 'output1' -jobconf 'mapred.job.name=wordcount.py (1/1)' -jobconf 'stream.map.input=typedbytes' -jobconf 'stream.map.output=typedbytes' -jobconf 'stream.reduce.input=typedbytes' -jobconf 'stream.reduce.output=typedbytes' -input 'input1' -cmdenv 'PYTHONPATH=typedbytes-0.3.8-py2.7.egg:dumbo-0.21.33-py2.7.egg' -cmdenv 'dumbo_jk_class=dumbo.backends.common.JoinKey' -cmdenv 'dumbo_mrbase_class=dumbo.backends.common.MapRedBase' -cmdenv 'dumbo_runinfo_class=dumbo.backends.streaming.StreamingRunInfo'
12/06/04 11:05:50 WARN streaming.StreamJob: -jobconf option is deprecated, please use -D instead.
packageJobJar: [/home/zhouhh/dumbo/examples/wordcount.py, /usr/local/lib/python2.7/site-packages/dumbo-0.21.33-py2.7.egg, /usr/local/lib/python2.7/site-packages/typedbytes-0.3.8-py2.7.egg, /tmp/hadoop-zhouhh/hadoop-unjar3854141277187450123/] [] /tmp/streamjob1411647651951820963.jar tmpDir=null
12/06/04 11:05:51 INFO mapred.FileInputFormat: Total input paths to process : 1
12/06/04 11:05:51 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-zhouhh/mapred/local]
12/06/04 11:05:51 INFO streaming.StreamJob: Running job: job_201205231824_0007
12/06/04 11:05:51 INFO streaming.StreamJob: To kill this job, run:
12/06/04 11:05:51 INFO streaming.StreamJob: /home/zhouhh/hadoop-1.0.3/libexec/../bin/hadoop job -Dmapred.job.tracker=Hadoop48:54311 -kill job_201205231824_0007
12/06/04 11:05:51 INFO streaming.StreamJob: Tracking URL: http://Hadoop48:50030/jobdetails.jsp?jobid=job_201205231824_0007
12/06/04 11:05:52 INFO streaming.StreamJob: map 0% reduce 0%
12/06/04 11:06:34 INFO streaming.StreamJob: map 100% reduce 100%
12/06/04 11:06:34 INFO streaming.StreamJob: To kill this job, run:
12/06/04 11:06:34 INFO streaming.StreamJob: /home/zhouhh/hadoop-1.0.3/libexec/../bin/hadoop job -Dmapred.job.tracker=Hadoop48:54311 -kill job_201205231824_0007
12/06/04 11:06:34 INFO streaming.StreamJob: Tracking URL: http://Hadoop48:50030/jobdetails.jsp?jobid=job_201205231824_0007
12/06/04 11:06:34 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201205231824_0007_m_000000
12/06/04 11:06:34 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
However, if I run the word count example that ships with Hadoop (the hadoop-examples jar), it completes successfully.
[zhouhh@Hadoop48 examples]$ jps
26239 JobTracker
25949 NameNode
26144 SecondaryNameNode
18314 HMaster
24456 Jps
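Since the stock Hadoop example succeeds, the failure is likely in how the Python mapper/reducer runs on the task nodes rather than in the word count logic itself. As a sanity check that is independent of Hadoop, the map/reduce logic of the dumbo wordcount example can be exercised in plain Python. This is a minimal sketch: `mapper` and `reducer` follow the dumbo example's shape, while `run_local` is a hypothetical helper (not part of dumbo) that emulates the map, sort/shuffle, reduce pipeline in-process.

```python
from itertools import groupby

def mapper(key, value):
    # Emit (word, 1) for each whitespace-separated token in the line.
    for word in value.split():
        yield word, 1

def reducer(key, values):
    # Sum the counts accumulated for one word.
    yield key, sum(values)

def run_local(lines):
    # Hypothetical local driver: map every line, sort the pairs to
    # emulate the shuffle, then group by word and reduce each group.
    mapped = sorted(kv for i, line in enumerate(lines)
                       for kv in mapper(i, line))
    counts = {}
    for word, group in groupby(mapped, key=lambda kv: kv[0]):
        for k, total in reducer(word, (v for _, v in group)):
            counts[k] = total
    return counts

print(run_local(["hello world", "hello hadoop"]))
```

If the logic checks out locally, the next place to look is the stderr of the failed attempt (task_201205231824_0007_m_000000) in the TaskTracker's userlogs, since a failing `python -m wordcount` command on a task node usually indicates a missing interpreter, wrong Python version, or eggs not on PYTHONPATH there.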
[zhouhh@Hadoop48 examples]$ pwd
/home/zhouhh/dumbo/examples