
Error: java.io.IOException: wrong key class: hadoop_join.TextTuple is not class org.apache.hadoop.io.Text #6

Open
kitianFresh opened this issue Jan 19, 2018 · 0 comments

kitianFresh commented Jan 19, 2018

I copied your Hadoop join example and ran it, but it raises the error below and I cannot figure out why. Here are some logs; my environment is Hadoop 2.7.5.

18/01/19 22:54:28 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.1.13:8032
18/01/19 22:54:29 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/01/19 22:54:29 INFO input.FileInputFormat: Total input paths to process : 1
18/01/19 22:54:29 INFO input.FileInputFormat: Total input paths to process : 1
18/01/19 22:54:29 INFO mapreduce.JobSubmitter: number of splits:2
18/01/19 22:54:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1516287880873_0006
18/01/19 22:54:30 INFO impl.YarnClientImpl: Submitted application application_1516287880873_0006
18/01/19 22:54:30 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1516287880873_0006/
18/01/19 22:54:30 INFO mapreduce.Job: Running job: job_1516287880873_0006
18/01/19 22:54:38 INFO mapreduce.Job: Job job_1516287880873_0006 running in uber mode : false
18/01/19 22:54:38 INFO mapreduce.Job:  map 0% reduce 0%
18/01/19 22:54:47 INFO mapreduce.Job:  map 100% reduce 0%
18/01/19 22:54:52 INFO mapreduce.Job: Task Id : attempt_1516287880873_0006_r_000000_0, Status : FAILED
Error: java.io.IOException: wrong key class: hadoop_join.TextTuple is not class org.apache.hadoop.io.Text
	at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1375)
	at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:83)
	at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
	at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
	at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
	at org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:150)
	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
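A hint from the trace: the failing `reduce` call is `org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:150)`, i.e. the base-class identity reducer, so the `hadoop_join.TextTuple` map-output keys are being passed straight through to `SequenceFileOutputFormat`, whose writer was created for the job's declared output key class (`Text`). That usually means the driver never registered the custom reducer, or the map-output and final-output key classes were not declared separately. Below is a minimal driver sketch showing the consistent configuration; `JoinDriver`, `JoinMapper`, and `JoinReducer` are hypothetical names (only `TextTuple` appears in the trace), so adapt them to the classes in this repository.

```java
// Hypothetical driver sketch, not this repository's actual code.
// TextTuple is taken from the stack trace; the other class names are assumed.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class JoinDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "reduce-side join");
        job.setJarByClass(JoinDriver.class);

        job.setMapperClass(JoinMapper.class);   // emits <TextTuple, Text>
        job.setReducerClass(JoinReducer.class); // must be set; otherwise the identity
                                                // Reducer forwards TextTuple keys
                                                // straight to the output format

        // Intermediate (map output) types: the composite join key.
        job.setMapOutputKeyClass(TextTuple.class);
        job.setMapOutputValueClass(Text.class);

        // Final (reduce output) types: what SequenceFileOutputFormat checks on write.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With this split, the `TextTuple`/`Text` pair only exists between map and reduce, and the reducer must emit `Text` keys to satisfy the `SequenceFile` writer.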
