Asked by: 小点点

Apache Beam Dataflow: 'NoneType' object has no attribute 'parts'


I am trying to write a pipeline that reads a stream from Pub/Sub and writes it to BigQuery using Google Cloud Dataflow with Apache Beam. I have this code:

import apache_beam as beam
from apache_beam.transforms.window import FixedWindows

topic = 'projects/???/topics/???'
table = '???.???'

gcs_path = "gs://???"

with beam.Pipeline(runner="DataflowRunner", argv=[
        "--project", "???",
        "--staging_location", ("%s/staging_location" % gcs_path),
        "--temp_location", ("%s/temp" % gcs_path),
        "--output", ("%s/output" % gcs_path)
    ]) as p:
    (p 
    | 'winderow' >> beam.WindowInto(FixedWindows(60))
    | 'hello' >> beam.io.gcp.pubsub.ReadStringsFromPubSub(topic) 
    | 'hello2' >> beam.io.Write(beam.io.gcp.bigquery.BigQuerySink(table))
    )
    p.run().wait_until_finish()

But I get this error when I run it:

No handlers could be found for logger "oauth2client.contrib.multistore_file"
ERROR:root:Error while visiting winderow
Traceback (most recent call last):
  File ".\main.py", line 20, in <module>
    p.run().wait_until_finish()
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\pipeline.py", line 339, in run
    return self.runner.run(self)
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\runners\dataflow\dataflow_runner.py", line 296, in run
    super(DataflowRunner, self).run(pipeline)
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\runners\runner.py", line 138, in run
    pipeline.visit(RunVisitor(self))
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\pipeline.py", line 367, in visit
    self._root_transform().visit(visitor, self, visited)
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\pipeline.py", line 710, in visit
    part.visit(visitor, pipeline, visited)
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\pipeline.py", line 713, in visit
    visitor.visit_transform(self)
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\runners\runner.py", line 133, in visit_transform
    self.runner.run_transform(transform_node)
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\runners\runner.py", line 176, in run_transform
    return m(transform_node)
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\runners\dataflow\dataflow_runner.py", line 526, in run_ParDo
    input_step = self._cache.get_pvalue(transform_node.inputs[0])
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\runners\runner.py", line 252, in get_pvalue
    self._ensure_pvalue_has_real_producer(pvalue)
  File "C:\ProgramData\Anaconda2\lib\site-packages\apache_beam\runners\runner.py", line 226, in _ensure_pvalue_has_real_producer
    while real_producer.parts:
AttributeError: 'NoneType' object has no attribute 'parts'

Is this a problem with the code or with the configuration? How can I get it to work?


1 Answer

Anonymous user

I don't have experience with windowed pipelines yet, but as far as I understand the concept, windowing should be applied to your input data, not configured as a pipeline-level step before the source. In your pipeline, `WindowInto` is applied directly to the pipeline object before anything has been read, so the transform has no real input PCollection, which is what triggers the `'NoneType' object has no attribute 'parts'` error.

In that case, your code should probably be:

with beam.Pipeline(runner="DataflowRunner", argv=[
        "--project", "???",
        "--staging_location", ("%s/staging_location" % gcs_path),
        "--temp_location", ("%s/temp" % gcs_path),
        "--output", ("%s/output" % gcs_path)
    ]) as p:
    (p 
    | 'hello' >> beam.io.gcp.pubsub.ReadStringsFromPubSub(topic) 
    | 'winderow' >> beam.WindowInto(FixedWindows(60))
    | 'hello2' >> beam.io.Write(beam.io.gcp.bigquery.BigQuerySink(table))
    )
    p.run().wait_until_finish()

The official Beam repository also has some examples of windowing operations.
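To build some intuition for why windowing belongs after the read, here is a minimal pure-Python sketch (no Beam dependency) of what `FixedWindows(60)` conceptually does to each element's timestamp; the function name `assign_fixed_window` is hypothetical, chosen just for illustration:

```python
def assign_fixed_window(timestamp, size=60):
    """Map an element timestamp (in seconds) to its fixed window.

    Mirrors the idea behind Beam's FixedWindows: the timeline is cut
    into non-overlapping intervals [start, start + size), and every
    element is assigned to the interval containing its timestamp.
    """
    start = timestamp - (timestamp % size)
    return (start, start + size)

# Elements at t=5 and t=59 land in window (0, 60);
# an element at t=61 lands in the next window (60, 120).
print(assign_fixed_window(5))
print(assign_fixed_window(59))
print(assign_fixed_window(61))
```

Because the window is a function of each element's timestamp, `WindowInto` needs an actual PCollection of elements to operate on, which is why it must come after `ReadStringsFromPubSub` in the pipeline.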