cutebaby20bk
I ran:

[code]
bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar \
  -inputreader "StreamXmlRecordReader, begin=<metaData>,end=</metaData>" \
  -input /user/root/xmlpytext/metaData.xml \
  -mapper /Users/amrita/desktop/hadoop/pythonpractise/mapperxml.py \
  -file /Users/amrita/desktop/hadoop/pythonpractise/mapperxml.py \
  -reducer /Users/amrita/desktop/hadoop/pythonpractise/reducerxml.py \
  -file /Users/amrita/desktop/hadoop/pythonpractise/mapperxml.py \
  -output /user/root/xmlpytext-output1 \
  -numReduceTasks 1
[/code]

but it shows:

[code]
13/03/22 09:38:48 INFO mapred.FileInputFormat: Total input paths to process : 1
13/03/22 09:38:49 INFO streaming.StreamJob: getLocalDirs(): [/Users/amrita/desktop/hadoop/temp/mapred/local]
13/03/22 09:38:49 INFO streaming.StreamJob: Running job: job_201303220919_0001
13/03/22 09:38:49 INFO streaming.StreamJob: To kill this job, run:
13/03/22 09:38:49 INFO streaming.StreamJob: /private/var/root/hadoop-1.0.4/libexec/../bin/hadoop job -Dmapred.job.tracker=-kill job_201303220919_0001
13/03/22 09:38:49 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201303220919_0001
13/03/22 09:38:50 INFO streaming.StreamJob: map 0% reduce 0%
13/03/22 09:39:26 INFO streaming.StreamJob: map 100% reduce 100%
13/03/22 09:39:26 INFO streaming.StreamJob: To kill this job, run:
13/03/22 09:39:26 INFO streaming.StreamJob: /private/var/root/hadoop-1.0.4/libexec/../bin/hadoop job -Dmapred.job.tracker=-kill job_201303220919_0001
13/03/22 09:39:26 INFO streaming.StreamJob: Tracking URL: http:///jobdetails.jsp?jobid=job_201303220919_0001
13/03/22 09:39:26 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1.
LastFailedTask: task_201303220919_0001_m_000000
13/03/22 09:39:26 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
[/code]

When I went through jobdetails.jsp, it shows:

[code]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at org.apache.hadoop.streaming.StreamInputFormat.getRecordReader(StreamInputFormat.java:77)
	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:197)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.streaming.StreamInputFormat.getRecordReader(StreamInputFormat.java:74)
	... 8 more
Caused by: java.io.IOException: JobConf: missing required property: stream.recordreader.begin
	at org.apache.hadoop.streaming.StreamXmlRecordReader.checkJobGet(StreamXmlRecordReader.java:278)
	at org.apache.hadoop.streaming.StreamXmlRecordReader.<init>(StreamXmlRecordReader.java:52)
	...
13 more
[/code]

My mapper:

[code]
#!/usr/bin/env python
import sys
import cStringIO
import xml.etree.ElementTree as xml

def cleanResult(element):
    result = None
    if element is not None:
        result = element.text
        result = result.strip()
    else:
        result = ""
    return result

def process(val):
    root = xml.fromstring(val)
    sceneID = cleanResult(root.find('sceneID'))
    cc = cleanResult(root.find('cloudCover'))
    returnval = ("%s,%s") % (sceneID, cc)
    return returnval.strip()

if __name__ == '__main__':
    buff = None
    intext = False
    for line in sys.stdin:
        line = line.strip()
        if line.find("<metaData>") != -1:
            intext = True
            buff = cStringIO.StringIO()
            buff.write(line)
        elif line.find("</metaData>") != -1:
            intext = False
            buff.write(line)
            val = buff.getvalue()
            buff.close()
            buff = None
            print process(val)
        else:
            if intext:
                buff.write(line)
[/code]

and my reducer:

[code]
#!/usr/bin/env python
import sys

if __name__ == '__main__':
    for line in sys.stdin:
        print line.strip()
[/code]

Can anyone tell me why this happens? I am using hadoop-1.0.4 on a Mac. Is there anything wrong? Should I change anything? Please help me out.
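For what it's worth, the mapper's record-buffering logic can be exercised locally without Hadoop by feeding it a sample record. Below is a Python 3 re-sketch of that logic (`cStringIO` and the `print` statement replaced with Python 3 equivalents; the sample `<metaData>` record and its values are made up for illustration):

```python
# Python 3 re-sketch of the mapper's buffering/parsing logic for local testing.
# The sample record below is invented for illustration only.
import io
import xml.etree.ElementTree as xml

def clean_result(element):
    # Mirror of cleanResult(): empty string when the tag is missing or empty.
    return element.text.strip() if element is not None and element.text else ""

def process(val):
    # Parse one complete <metaData> record and emit "sceneID,cloudCover".
    root = xml.fromstring(val)
    scene_id = clean_result(root.find('sceneID'))
    cc = clean_result(root.find('cloudCover'))
    return ("%s,%s" % (scene_id, cc)).strip()

def run_mapper(lines):
    # Same state machine as the mapper: buffer lines between <metaData> and
    # </metaData>, parse each complete record, return one CSV line per record.
    out, buff, intext = [], None, False
    for line in lines:
        line = line.strip()
        if "<metaData>" in line:
            intext = True
            buff = io.StringIO()
            buff.write(line)
        elif "</metaData>" in line:
            intext = False
            buff.write(line)
            out.append(process(buff.getvalue()))
            buff = None
        elif intext:
            buff.write(line)
    return out

sample = [
    "<metaData>",
    "  <sceneID>LT50070562000</sceneID>",
    "  <cloudCover>23</cloudCover>",
    "</metaData>",
]
print(run_mapper(sample))  # -> ['LT50070562000,23']
```

If this prints the expected CSV line, the mapper logic itself is sound, which points the failure at the job configuration rather than the Python code.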