Asked by: 小点点

Scalding job fails with VerifyError on EMR release 4.2.0


We have a Scalding job that I would like to run on AWS Elastic MapReduce using release label 4.2.0.

The job ran successfully on AMI 2.4.2. When we upgraded to AMI 3.7.0 we hit a java.lang.VerifyError caused by incompatible jars: our project uses version 1.5 of the commons-codec library, but earlier, incompatible versions ship with the AMI. Similarly, our project uses Scala 2.10, but Scala 2.11 ships with the AMI. We worked around this by adding a bootstrap script that removes every file matching commons-codec-1.[234].jar and scala-library-2.11.*.jar from the cluster (a sketch of such a script is shown below).
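
A minimal sketch of that cleanup bootstrap script; the exact paths and patterns in our real script may differ, so treat this as illustrative only:

```
#!/bin/bash
# Bootstrap action (sketch): remove commons-codec and scala-library jars that
# conflict with the versions bundled in our job jar. Runs on every node.
sudo find / -name 'commons-codec-1.[234].jar' -delete 2>/dev/null || true
sudo find / -name 'scala-library-2.11.*.jar' -delete 2>/dev/null || true
```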

Now we want to upgrade again, this time to release 4.2.0, and we are getting a VerifyError once more:

```
Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.twitter.scalding.Job$.apply(Job.scala:47)
    at com.twitter.scalding.Tool.getJob(Tool.scala:48)
    at com.twitter.scalding.Tool.run(Tool.scala:68)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at com.snowplowanalytics.snowplow.enrich.hadoop.JobRunner$.main(JobRunner.scala:33)
    at com.snowplowanalytics.snowplow.enrich.hadoop.JobRunner.main(JobRunner.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    com/snowplowanalytics/snowplow/enrich/common/utils/ConversionUtils$.decodeBase64Url(Ljava/lang/String;Ljava/lang/String;)Lscalaz/Validation; @5: invokevirtual
  Reason:
    Type 'org/apache/commons/codec/binary/Base64' (current frame, stack[0]) is not assignable to 'org/apache/commons/codec/binary/BaseNCodec'
  Current Frame:
    bci: @5
    flags: { }
    locals: { 'com/snowplowanalytics/snowplow/enrich/common/utils/ConversionUtils$', 'java/lang/String', 'java/lang/String' }
    stack: { 'org/apache/commons/codec/binary/Base64', 'java/lang/String' }
  Bytecode:
    0000000: 2ab7 008a 2cb6 0090 3a04 bb00 5459 1904
    0000010: b200 96b7 0099 3a05 b200 9e19 05b9 00a4
    0000020: 0200 b900 aa01 00a7 003e 4eb2 009e bb00
    0000030: ac59 b200 4112 aeb6 00b1 b700 b4b2 0041
    0000040: 06bd 0004 5903 2b53 5904 2c53 5905 2db6
    0000050: 00b9 53b6 00bf b900 c502 00b9 00a4 0200
    0000060: b900 c801 00b0                         
  Exception Handler Table:
    bci [0, 42] => handler: 42
  Stackmap Table:
    same_locals_1_stack_item_frame(@42,Object[#182])
    same_locals_1_stack_item_frame(@101,Object[#206])

    at com.snowplowanalytics.snowplow.enrich.hadoop.EtlJobConfig$.com$snowplowanalytics$snowplow$enrich$hadoop$EtlJobConfig$$base64ToJsonNode(EtlJobConfig.scala:224)
    at com.snowplowanalytics.snowplow.enrich.hadoop.EtlJobConfig$.loadConfigAndFilesToCache(EtlJobConfig.scala:126)
    at com.snowplowanalytics.snowplow.enrich.hadoop.EtlJob.<init>(EtlJob.scala:139)
    ... 16 more
```
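
For context on why the verifier complains: the class was compiled against commons-codec 1.5, where Base64 extends BaseNCodec and decode(String) is declared on BaseNCodec, so the compiled bytecode references BaseNCodec; in older commons-codec releases Base64 has no such superclass, so if one of those jars is loaded first at runtime, verification fails. One way to confirm which class the bytecode actually references is to disassemble the compiled class; a sketch, where the jar name is a placeholder for our job artifact:

```
# Sketch: inspect the failing class and look for references to commons-codec
# types (the jar name here is hypothetical; the class name is from the trace).
javap -classpath snowplow-hadoop-etl.jar -c -p \
  'com.snowplowanalytics.snowplow.enrich.common.utils.ConversionUtils$' \
  | grep -i 'codec'
```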

Exploring which jars remain on the cluster after this cleanup:

```
$ sudo find / -name "*scala-*"
/usr/share/aws/emr/emrfs/cli/lib/scala-library-2.10.5.jar
/usr/share/aws/emr/emrfs/cli/lib/scala-reflect-2.10.4.jar
/usr/share/aws/emr/emrfs/cli/lib/scala-logging-api_2.10-2.1.2.jar
/usr/share/aws/emr/emrfs/cli/lib/nscala-time_2.10-1.2.0.jar
/usr/share/aws/emr/emrfs/cli/lib/scala-logging-slf4j_2.10-2.1.2.jar
$ sudo find / -name "*commons-codec*"
/usr/share/aws/emr/node-provisioner/lib/commons-codec-1.9.jar
/usr/share/aws/emr/emr-metrics/lib/commons-codec-1.6.jar
/usr/share/aws/emr/emr-metrics-client/lib/commons-codec-1.6.jar
/usr/share/aws/emr/emrfs/lib/commons-codec-1.9.jar
/usr/share/aws/emr/hadoop-state-pusher/lib/commons-codec-1.8.jar
/usr/lib/hbase/lib/commons-codec-1.7.jar
/usr/lib/mahout/lib/commons-codec-1.7.jar
```
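
To see which of these jars actually end up on a job step's classpath (as opposed to merely being on disk), one option is to inspect the resolved Hadoop classpath on the master node; a sketch, assuming SSH access and that the bundled Hadoop supports the --glob option:

```
# Expand the Hadoop classpath and list any commons-codec entries on it.
hadoop classpath --glob | tr ':' '\n' | grep -i 'commons-codec' | sort -u
```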

The same error occurs with AMI 4.1.0. What changed between 3.7.0 and 4.x.x to cause this problem, and what can I do to fix it?


1 Answer

Anonymous user

In the end I added the following logic to a bootstrap step:

```
wget 'http://central.maven.org/maven2/commons-codec/commons-codec/1.5/commons-codec-1.5.jar'
sudo mkdir -p /usr/lib/hadoop/lib
sudo cp commons-codec-1.5.jar /usr/lib/hadoop/lib/remedial-commons-codec-1.5.jar
rm commons-codec-1.5.jar
```

This downloads the correct version of the jar from Maven and places it at the front of the classpath for the failing job step, where it takes precedence over the other versions of the jar.
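
For completeness, a sketch of how such a script can be attached as a bootstrap action when launching the cluster with the AWS CLI; the bucket, script name, and instance settings here are placeholders, not our actual configuration:

```
aws emr create-cluster \
  --release-label emr-4.2.0 \
  --applications Name=Hadoop \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --bootstrap-actions Path=s3://my-bucket/bootstrap/add-commons-codec-1.5.sh,Name=AddCommonsCodec
```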

Is there a cleaner solution?