
Build fails when importing external packages with Golang and Docker


Go
万千封印 2022-06-01 10:04:42
I can't build this simple Confluent Kafka example with Docker. It's probably some trick with the Go path or a special build argument, but I can't find it; I have tried all the default folders in Go with no success.

Dockerfile

FROM golang:alpine AS builder

# Set necessary environment variables needed for our image
ENV GO111MODULE=on \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64

ADD . /go/app

# Install librdkafka
RUN apk add librdkafka-dev pkgconf

# Move to working directory /build
WORKDIR /go/app

# Copy and download dependencies using go mod
COPY go.mod .
RUN go mod download

# Copy the code into the container
COPY . .

# Build the application
RUN go build -o main .

# Run tests
RUN go test ./... -v

# Move to /dist directory as the place for the resulting binary
WORKDIR /dist

# Copy binary from build to main folder
RUN cp /go/app/main .

############################
# STEP 2 build a small image
############################
FROM scratch
COPY --from=builder /dist/main /

# Command to run the executable
ENTRYPOINT ["/main"]

Errors

./producer_example.go:37:12: undefined: kafka.NewProducer
./producer_example.go:37:31: undefined: kafka.ConfigMap
./producer_example.go:48:28: undefined: kafka.Event
./producer_example.go:51:19: undefined: kafka.Message
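For context on the errors themselves: confluent-kafka-go is a cgo binding to librdkafka, so building with `CGO_ENABLED=0` excludes the cgo-backed source files from the package, which leaves every `kafka.*` identifier undefined. A sketch of the effect, run against the module above (the flags are standard Go toolchain settings; the exact error lines are from the question):

```
# cgo disabled: the kafka bindings drop out of the build
CGO_ENABLED=0 go build .
# ./producer_example.go:37:12: undefined: kafka.NewProducer ...

# cgo enabled (gcc, musl-dev and librdkafka-dev installed on Alpine)
CGO_ENABLED=1 go build -tags musl .
```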

1 Answer

米脂


Edit

I can confirm that building with the musl build tag works:


FROM golang:alpine as build

WORKDIR /go/src/app

# Set necessary environment variables needed for our image
ENV GOOS=linux GOARCH=amd64

COPY . .

RUN apk update && apk add gcc librdkafka-dev openssl-libs-static zlib-static zstd-libs libsasl librdkafka-static lz4-dev lz4-static zstd-static libc-dev musl-dev

RUN go build -tags musl -ldflags '-w -extldflags "-static"' -o main


FROM scratch

COPY --from=build /go/src/app/main /

# Command to run the executable
ENTRYPOINT ["/main"]
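If you want to confirm the binary really is fully static (which is what lets it run in a `FROM scratch` image), one way is to extract it from the built image and inspect it. This is a sketch; the image and container names below are placeholders:

```
docker build -t kafka-static .
docker create --name tmp kafka-static       # scratch has no shell, so copy the file out
docker cp tmp:/main ./main && docker rm tmp
file main                                   # a fully static binary reports "statically linked"
```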

This works together with the test setup shown below.


Well, version 1.4.0 of github.com/confluentinc/confluent-kafka-go/kafka seems to be generally incompatible with the current state of Alpine 3.11, at the very least. Also, despite my best efforts, I was unable to build a statically compiled binary suitable for use with FROM scratch.


However, I was able to get your code running against a current version of Kafka. The image is a bit larger, but I suppose working and a bit bigger beats not working and elegant.


What to do

1. Downgrade to confluent-kafka-go@v1.1.0

This is as simple as


$ go get -u -v github.com/confluentinc/confluent-kafka-go@v1.1.0
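After the downgrade, the require directive in go.mod should pin the older version; the module path below is a placeholder for your own:

```
module example.com/kafka-main  // hypothetical module path

go 1.14

require github.com/confluentinc/confluent-kafka-go v1.1.0
```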

2. Modify your Dockerfile

You were missing some build dependencies from the start. And obviously, we need a runtime dependency as well, since we are no longer using FROM scratch. Note that I also tried to simplify the file, and I left in jwilder/dockerize, which I used so that I would not have to time my test setup:


FROM golang:alpine as build

# The default location is /go/src
WORKDIR /go/src/app

ENV GOOS=linux \
    GOARCH=amd64

# We simply copy everything to /go/src/app
COPY . .

# Add the required build libraries
RUN apk update && apk add gcc librdkafka-dev zstd-libs libsasl lz4-dev libc-dev musl-dev

# Run the build
RUN go build -o main


FROM alpine

# We use dockerize to make sure the kafka server is up and running before the command starts.
ENV DOCKERIZE_VERSION v0.6.1
ENV KAFKA kafka

# Add dockerize
RUN apk --no-cache upgrade && apk --no-cache --virtual .get add curl \
 && curl -L -O https://github.com/jwilder/dockerize/releases/download/${DOCKERIZE_VERSION}/dockerize-linux-amd64-${DOCKERIZE_VERSION}.tar.gz \
 && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
 && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
 && apk del .get \
 # Add the runtime dependency.
 && apk add --no-cache librdkafka

# Fetch the binary
COPY --from=build /go/src/app/main /

# Wait for kafka to come up, only then start /main
ENTRYPOINT ["sh","-c","/usr/local/bin/dockerize -wait tcp://${KAFKA}:9092 /main kafka test"]

3. Test it

I created a docker-compose.yaml to check that everything works:


version: "3.7"

services:
  zookeeper:
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
    volumes:
      - 'kafka_data:/bitnami'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  server:
    image: fals/kafka-main
    build: .
    command: "kafka test"

volumes:
  zookeeper_data:
  kafka_data:

You can check that the setup works with:


$ docker-compose build && docker-compose up -d && docker-compose logs -f server
[...]
server_1     | 2020/04/18 18:37:33 Problem with dial: dial tcp 172.24.0.4:9092: connect: connection refused. Sleeping 1s
server_1     | 2020/04/18 18:37:34 Connected to tcp://kafka:9092
server_1     | Created Producer rdkafka#producer-1
server_1     | Delivered message to topic test [0] at offset 0
server_1     | 2020/04/18 18:37:36 Command finished successfully.
kfka_server_1 exited with code 0

