Table of Contents
- Preparation
- What is Harbor
- Harbor architecture overview
- Prerequisites for installing Harbor
- Hardware resources
- Software dependencies
- Port requirements
- Harbor high availability on k8s
- Deploying Harbor
- Helm orchestration
- YAML orchestration
- Create the namespace
- Import images
- Deploy Redis
- Deploy PostgreSQL
- Deploy Harbor core
- Deploy Harbor trivy
- Deploy Harbor jobservice
- Deploy Harbor registry
- Deploy Harbor portal
- Deploy Harbor exporter
- Harbor configuration and verification
- Create a user
- Create a project
- Assign project members
- docker login configuration
- containerd configuration
- Verify image push
- docker verification
- containerd verification
- Verify image pull
- docker verification
- containerd verification
- Remaining issues
Preparation
- To avoid image-pull timeouts, download the Harbor offline bundle from GitHub ahead of time; it contains all the images, so they can be imported in advance. Download: harbor-offline-installer-v2.11.1.tgz
- My lab environment was set up following the posts below
- For the k8s deployment, see my earlier post: deploying a highly available k8s v1.28.2 cluster with kubeadm on openEuler 22.03 LTS SP4
- For the ingress deployment, see my earlier post: deploying ingress 1.11.1 (including admission-webhook) on a k8s 1.28.2 cluster
- For the MinIO deployment, see my earlier post: deploying a distributed MinIO cluster on k8s 1.28.2
What is Harbor
- Harbor website
- Harbor Github
- Harbor is an open-source artifact registry
- Compared with docker registry, it can secure images with policies and role-based access control
- Harbor is a CNCF graduated project that delivers compliance, performance, and interoperability, helping you consistently and securely manage images across cloud-native platforms such as Kubernetes and Docker
Harbor architecture overview
- Architecture Overview of Harbor
Proxy
- A reverse proxy built on Nginx that provides API routing
- Harbor components such as core, registry, web portal, and token services all sit behind this reverse proxy
Core
- Harbor's core service, which mainly provides the following functions
- API Server: an HTTP server that accepts REST API requests and responds to them, relying on submodules such as "authentication and authorization", "middleware", and "API handlers"
- Config Manager: covers the management of all system configuration, such as authentication type, email settings, and certificates
- Project Management: manages the base data and corresponding metadata of projects; projects are created to isolate the artifacts they host
- Quota Manager: manages project quota settings and performs quota validation when new pushes occur
- Chart Controller: proxies chart-related requests to the backend chartmuseum and provides several extensions to improve the chart-management experience
- Retention Manager: manages tag retention policies, and executes and monitors tag retention processes
- Content Trust: adds extensions to the trust capability provided by the backend Notary to support a smooth content-trust workflow; currently only container-image signing is supported
- Replication Controller: manages replication policies and registry adapters, and triggers and monitors concurrent replication processes
- Scan Manager: manages multiple configured scanners adapted from different providers, and provides scan summaries and reports for the specified objects
- Notification Manager (webhook): a mechanism configured in Harbor so that artifact status changes in Harbor can be propagated to the webhook endpoints configured in Harbor; interested parties can trigger follow-up actions by listening to the relevant webhook events
- OCI Artifact Manager: the core component managing the lifecycle of all OCI artifacts across the Harbor registry; it provides CRUD operations to manage artifact metadata and related additions, such as scan reports, build history of container images and READMEs, dependencies, and the values.yaml of helm charts, and it also supports managing artifact tags and other useful operations
- Registry Driver: implemented as a registry client SDK used to communicate with the underlying registry (currently docker distribution); the "OCI Artifact Manager" relies on this driver to fetch extra information from the manifest, or even from the config JSON of the specified object in the underlying registry
Job Service
- A general job execution queue service that allows other components/services to submit requests for running asynchronous tasks concurrently via a simple restful API
Log Collector
- The log collector, responsible for gathering the logs of the other modules into one place
GC Controller
- Manages the online GC schedule settings, and starts and tracks GC progress
Chart Museum
- A third-party chart repository server that provides chart-management and access APIs
Docker Registry
- A third-party registry server responsible for storing Docker images and handling docker push/pull commands; since Harbor needs to enforce access control on images, the Registry directs clients to the token service to obtain a valid token for each pull or push request
Notary
- A third-party content-trust server responsible for securely publishing and verifying content
Web Portal
- A graphical user interface that helps users manage images on the Registry
Data stores
- k-v storage: made up of Redis; provides data caching and supports temporarily persisting job metadata for the job service
- data storage: multiple storage backends are supported for data persistence as the backend store of the Registry and Chart Museum (e.g. s3-compatible MinIO)
- Database: stores metadata of the Harbor models, such as projects, users, roles, replication policies, tag retention policies, scanners, charts, and images; PostgreSQL is used
- The table below lists the component versions that ship with Harbor 2.11.1

| Component | Version |
| --- | --- |
| Postgresql | 14.10 |
| Redis | 7.2.2 |
| Beego | 2.0.6 |
| Distribution/Distribution | 2.8.3 |
| Helm | 2.9.1 |
| Swagger-ui | 5.9.1 |
Prerequisites for installing Harbor
Hardware resources

| Hardware | Minimum | Recommended |
| --- | --- | --- |
| CPU | 2 CPU | 4 CPU |
| Memory | 4 GB | 8 GB |
| Disk storage | 40 GB | 160 GB |
Software dependencies

| Software | Version | Notes |
| --- | --- | --- |
| Docker | 20.10.10-ce+ | Install guide: Docker Engine documentation |
| Docker Compose | v1.18.0+, or docker compose v2 (docker-compose-plugin) | Install guide: Docker Compose documentation |
| OpenSSL | the newer the better | Used to generate certificates and keys for Harbor |
Port requirements
The ports can be changed in the configuration file.

| Port | Protocol | Description |
| --- | --- | --- |
| 443 | HTTPS | HTTPS requests to the UI and API |
| 4443 | HTTPS | Connections to Harbor's Docker Content Trust service |
| 80 | HTTP | HTTP requests to the UI and API |
Harbor high availability on k8s
- Most Harbor components are stateless these days, so we can simply increase the number of Pod replicas to spread the components across multiple worker nodes, and rely on the K8s "Service" mechanism to keep the Pods connected
- As for the storage layer, users are expected to provide a highly available PostgreSQL and Redis cluster for application data, plus PVCs or object storage for storing images and charts
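As a sketch of the replica approach above: for a stateless component you only need to raise its replica count, and the matching Service load-balances across the Pods. This is a hypothetical override (the Deployment name and namespace are taken from the manifests later in this post), not helm output:

```yaml
# Hypothetical sketch: run two replicas of the stateless core component;
# the harbor-core Service then spreads traffic across both Pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-core
  namespace: registry
spec:
  replicas: 2
```

The stateful pieces (PostgreSQL, Redis, storage) cannot be scaled this way, which is exactly why the docs expect external HA clusters for them.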
Deploying Harbor
Helm orchestration
For this, you can follow the official docs directly; I won't cover the details here: Deploying Harbor with High Availability via Helm
YAML orchestration
Because my PVCs are provided by MinIO, using helm directly causes a lot of problems, so the only way was to tune things gradually with plain YAML. All the YAML files below were generated with helm template and then modified.
Create the namespace
You can name the namespace whatever you like; nothing requires a specific name.
kubectl create ns registry
Import images
If you haven't planned which nodes will run what, you can simply import the bundle on every node first.
ctr -n k8s.io image import harbor.v2.11.1.tar.gz
At this point you will hit the following error:
ctr: archive/tar: invalid tar header
Inspect the archive with the file command:
file harbor.v2.11.1.tar.gz
As shown below, it is gzip-compressed, which is not a format ctr supports; ctr expects an uncompressed tar archive.
harbor.v2.11.1.tar.gz: gzip compressed data, was "harbor.v2.11.1.tar", last modified: Thu Aug 15 10:07:54 2024, from Unix, original size modulo 2^32 1811445248
So decompress it, then re-pack it without compression:
tar xvf harbor.v2.11.1.tar.gz
rm -f harbor.v2.11.1.tar.gz
tar cvf harbor.v2.11.1.tar.gz ./
You can check it with the file command again; normally it returns something like the line below, and then re-running the import will succeed.
harbor.v2.11.1.tar.gz: POSIX tar archive (GNU)
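The gzip-vs-plain-tar distinction above can be reproduced locally without touching the Harbor bundle; this is just a throwaway sketch (all paths and file names here are illustrative):

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
echo demo > "$work/layer.txt"

# A gzip-compressed archive, like the offline bundle ships with:
tar czf "$work/images.tar.gz" -C "$work" layer.txt
file "$work/images.tar.gz"      # reports "gzip compressed data" -- ctr rejects this

# Unpack and re-pack WITHOUT compression, which is what ctr image import expects:
mkdir "$work/unpacked"
tar xf "$work/images.tar.gz" -C "$work/unpacked"
tar cf "$work/images.tar" -C "$work/unpacked" .
file "$work/images.tar"         # reports "POSIX tar archive" -- importable
```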
Deploy Redis
- I asked GPT about this: MinIO is object storage and does not provide a fully POSIX-compliant filesystem (e.g. regular filesystem permission management), while Redis relies on a traditional filesystem (such as ext4 or xfs) to store its data files (RDB, AOF)
- Since this is just my own practice setup, Redis is simply pinned to a node here, with a local hostPath PV handling persistence
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-harbor-redis-0
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: data-harbor-redis-0
namespace: registry
hostPath:
path: /approot/k8s_data/harbor-redis
type: DirectoryOrCreate
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- 192.168.22.125
---
# Source: harbor/templates/redis/service.yaml
apiVersion: v1
kind: Service
metadata:
name: harbor-redis
namespace: registry
labels:
app: harbor
spec:
ports:
- port: 6379
selector:
app: harbor
component: redis
---
# Source: harbor/templates/redis/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: harbor-redis
namespace: registry
labels:
app: harbor
component: redis
spec:
replicas: 1
serviceName: harbor-redis
selector:
matchLabels:
app: harbor
component: redis
template:
metadata:
labels:
app: harbor
component: redis
spec:
securityContext:
runAsUser: 999
fsGroup: 999
automountServiceAccountToken: false
terminationGracePeriodSeconds: 120
initContainers:
- name: init-dir
image: goharbor/redis-photon:v2.11.1
imagePullPolicy: IfNotPresent
command: ["sh", "-c", "chown -R 999:999 /var/lib/redis"]
securityContext:
runAsUser: 0
volumeMounts:
- name: data
mountPath: /var/lib/redis
containers:
- name: redis
image: goharbor/redis-photon:v2.11.1
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
livenessProbe:
tcpSocket:
port: 6379
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
tcpSocket:
port: 6379
initialDelaySeconds: 1
periodSeconds: 10
volumeMounts:
- name: data
mountPath: /var/lib/redis
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "1Gi"
Deploy PostgreSQL
Like Redis, PostgreSQL's data directory runs into the same permission issues, so it is also pinned to a node for now.
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: database-data-harbor-database-0
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: database-data-harbor-database-0
namespace: registry
hostPath:
path: /approot/k8s_data/harbor-database
type: DirectoryOrCreate
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- 192.168.22.124
---
# Source: harbor/templates/database/database-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: harbor-database
namespace: registry
labels:
app: harbor
type: Opaque
data:
POSTGRES_PASSWORD: "Y2hhbmdlaXQ="
---
# Source: harbor/templates/database/database-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: harbor-database
namespace: registry
labels:
app: harbor
spec:
ports:
- port: 5432
selector:
app: harbor
component: database
---
# Source: harbor/templates/database/database-ss.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: harbor-database
namespace: registry
labels:
app: harbor
component: database
spec:
replicas: 1
serviceName: harbor-database
selector:
matchLabels:
app: harbor
component: database
template:
metadata:
labels:
app: harbor
component: database
spec:
securityContext:
runAsUser: 999
fsGroup: 999
automountServiceAccountToken: false
terminationGracePeriodSeconds: 120
initContainers:
# with "fsGroup" set, each time a volume is mounted, Kubernetes must recursively chown() and chmod() all the files and directories inside the volume
# this causes the postgresql reports the "data directory /var/lib/postgresql/data/pgdata has group or world access" issue when using some CSIs e.g. Ceph
# use this init container to correct the permission
# as "fsGroup" applied before the init container running, the container has enough permission to execute the command
- name: "data-permissions-ensurer"
image: goharbor/harbor-db:v2.11.1
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
command: ["sh", "-c", "mkdir -p /var/lib/postgresql/data/pgdata && chmod -R 700 /var/lib/postgresql/data/pgdata && chown -R 999:999 /var/lib/postgresql/data"]
volumeMounts:
- name: database-data
mountPath: /var/lib/postgresql/data
subPath:
containers:
- name: database
image: goharbor/harbor-db:v2.11.1
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
livenessProbe:
exec:
command:
- /docker-healthcheck.sh
initialDelaySeconds: 300
periodSeconds: 10
timeoutSeconds: 1
readinessProbe:
exec:
command:
- /docker-healthcheck.sh
initialDelaySeconds: 1
periodSeconds: 10
timeoutSeconds: 1
envFrom:
- secretRef:
name: harbor-database
env:
# put the data into a sub directory to avoid the permission issue in k8s with restricted psp enabled
# more detail refer to https://github.com/goharbor/harbor-helm/issues/756
- name: PGDATA
value: "/var/lib/postgresql/data/pgdata"
volumeMounts:
- name: database-data
mountPath: /var/lib/postgresql/data
subPath:
- name: shm-volume
mountPath: /dev/shm
volumes:
- name: shm-volume
emptyDir:
medium: Memory
sizeLimit: 512Mi
volumeClaimTemplates:
- metadata:
name: "database-data"
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "1Gi"
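A quick note on the Secret above (and the harbor-core Secret below): the values under data are base64-encoded, e.g. "Y2hhbmdlaXQ=" is just changeit. To substitute your own password, encode it the same way; note the -n, which keeps a trailing newline out of the encoded value:

```shell
# Encode a plaintext password for a Secret's data field
# ("changeit" is the example value used in this post; replace it with your own)
echo -n 'changeit' | base64
# -> Y2hhbmdlaXQ=

# Decode an existing value to check what a Secret holds
echo 'Y2hhbmdlaXQ=' | base64 -d
# -> changeit
```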
Deploy Harbor core
---
# Source: harbor/templates/core/core-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: harbor-core
labels:
app: "harbor"
type: Opaque
data:
secretKey: "bm90LWEtc2VjdXJlLWtleQ=="
secret: "OVQzOHVXZmtybTRTZFVUcQ=="
  tls.key: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBMENVM0Y1bXFHSktra0ZVd09aNzZoRGpJSjhEUlo4cm5hY1p4YXNpa2ZnK2RkZndlCk5SWUI3RTFRbzZDQlA4bk9CdWN0L3JiRVNVank4S1JlaWRUTFZwRG41OFk4dW9iN3BqSTVLbENMdVE2NEN0QW0KSVVEUzJWMTdqeTRFTmFRWjlNK0V4NjNpVnlNVGFpMjhYeHhZaytCMXp0ZEtxR2dhRXhHY2x5bG5Xb1FXU3RCUgpaeUlpblNUblJmdGFIeDlSQ0x6Z0lFUDlKamVSUEV4NmRLa0FFMHdaVmJSOWRUWWd6M21XSFdLc3BFbS8vWGVYClpzTTFYR3ZLQzNPNHo4Q3lWZ2FIMzdRMWNPM3NxSFNTMGwwaVVtZCs2azBoZGd0VGxUVU5mNVFMQWx6d25SV1oKbDR6d3I3N2x1bTh1c0h2emNra3NnalUzSU5XVHBnQmFGV25oOHdJREFRQUJBb0lCQUN1OVpsSmpURWRWcVpkYgpENE5NVVVDdjNmL2NtU1RDa3RhN2lPSHp2LzF0c3AwMG1mUjE1M21NMWNGTTNWeFdRQ0ZiTzJNbmJTQXBZRVFKCmhvUllYMUtWcU9ZZjFtc3NLbjNHV0JUNFVDUlhYMzJHT0QwTXJrSlhUcnZMNDc2UitaSmtlWGFzcDcrLzh6aUEKMi9Ed3QveDdVc1pnbjZPOEhKNmRPTmJiTUlqb2o1enVLSVdCampleFMybHVCdEFYNzduZXhmUzNpV3RrQS9USgpwcUpsNEJETFV1WEtralJKQzVEWnBBdHdtTVpQeGQrSTQzYnc3bVRpemppaEEzaXo4SkJKclBnTTE2b1V3SnQ2CmdMVVp5ZkZGTFNPbjEyZHhPZUxPNXZFNTJKV0JtVzRuRW5IOUxxb3hDWExic00xT04zN0ZwcmhzUXUvdko0M0wKaFJoMWFtRUNnWUVBMUh3Z241V1pjd1l1Vkw0TnRvMTlzVytncFZrUzloem1ObGtNcXR5bnFoUWE1ODh4Sk95dwpLUDdncEdZOGhIbnNmQ0NGbUpDV21CdUJRbUxrRWlNUU83eG8xQ1dMdzYydTRZeGdtenlJZDQxWmVDdVlpZHNFCnVPMlpjVEUrazc4Qy9CcmFUZHlqVk9SMnIvbk1IZGh6SzlhSkM0WlFOU2tudURwNUpVY091NDhDZ1lFQStzV1YKRElsTVBjNGtpaWNYeXdMbE50L1pQWGJjbUtRWWZJNFZsclpWQXpFRlFaQ3NGMzY0K2p1NEFlZUttdGJhMkZ4RApEMFdmaWxWOVpTczNCUDloc3ZpWk42eExaVjJZMEJHSlN6Mlp5L0x5aTRaNXk3MnB0aW83bGxyMWx4azZBTFVVCmkrQ3c4RmlQVElHMlozS1BSVko5b1B1S3JnUzUvSDFqNktzUTBWMENnWUI4NXlwV0pKNDdHeHNJL1Y4YVBEbnkKbjJlVFNyVDJyeTQwTEV4aDg2c3JNdjVOM1dGS0QwZk9FV1VEdm9VOGFsODA1L2tnSVg0a2s2WjcyNTJ0ZTZjRApObEY0dzBsUkVUdUhvZmozeDdHQWRUcHVoVkg1VnlHRGcwZDdYak1tcmxXVzFFSVhHdWQzODRSQkZWbURBY1ZSCnM1NkRnOFNLTzFMNTNJVnlBRDhNeVFLQmdERzBoQXlPRWp5VjVZdzBuM1N2eURzT040TUZVa2czRGx0eDFqbWYKUGs1NW91OFIrK3BVUmRuamlGOW9RNExaWDF0UFBrT0NxMUxDQ3k3SVdBbDNqU2ZxT29SY2REMU5SZ0xIMXd6QQowd0VuMElkelNpVG1IUU5zYjQ4bnpGSDh3QkJ2MC9pOXVwU0pHUzR5NzdLbGRGeHJNMWQ3UkV1bHlDK1Jzd0hsCkZscEpBb0dBZFRmSzZKTVJabWNBbGZCdTlUSnh2NjVJdFcrMDI5Wi94eDJ1NUprVzFPUnVwNVJlekJBM2NiQzQKRmc0Y1h6SHJ1S0sxWVhGcERyS0tGYTFMSzFGaFpjMkZCSnN5dGVLNHFQeVNLOTZVb1BlbHA1VzVBMDVZTjBBaQpLTDB5MzhNYWlYb1AyTWFvb2pSR29xWU9sTVVXRlU5RnJQSm9aSXNGKzRuUjVWZHNUUzQ9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg=="
tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lRTGRpZ2xmZXNGaFhvVldOaTRKYkNwVEFOQmdrcWhraUc5dzBCQVFzRkFEQWEKTVJnd0ZnWURWUVFERXc5b1lYSmliM0l0ZEc5clpXNHRZMkV3SGhjTk1qUXhNREUwTURFMU16QTRXaGNOTWpVeApNREUwTURFMU16QTRXakFhTVJnd0ZnWURWUVFERXc5b1lYSmliM0l0ZEc5clpXNHRZMkV3Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURRSlRjWG1hb1lrcVNRVlRBNW52cUVPTWdud05Gbnl1ZHAKeG5GcXlLUitENTExL0I0MUZnSHNUVkNqb0lFL3ljNEc1eTMrdHNSSlNQTHdwRjZKMU10V2tPZm54ank2aHZ1bQpNamtxVUl1NURyZ0swQ1loUU5MWlhYdVBMZ1ExcEJuMHo0VEhyZUpYSXhOcUxieGZIRmlUNEhYTzEwcW9hQm9UCkVaeVhLV2RhaEJaSzBGRm5JaUtkSk9kRisxb2ZIMUVJdk9BZ1EvMG1ONUU4VEhwMHFRQVRUQmxWdEgxMU5pRFAKZVpZZFlxeWtTYi85ZDVkbXd6VmNhOG9MYzdqUHdMSldCb2ZmdERWdzdleW9kSkxTWFNKU1ozN3FUU0YyQzFPVgpOUTEvbEFzQ1hQQ2RGWm1YalBDdnZ1VzZieTZ3ZS9OeVNTeUNOVGNnMVpPbUFGb1ZhZUh6QWdNQkFBR2pZVEJmCk1BNEdBMVVkRHdFQi93UUVBd0lDcERBZEJnTlZIU1VFRmpBVUJnZ3JCZ0VGQlFjREFRWUlLd1lCQlFVSEF3SXcKRHdZRFZSMFRBUUgvQkFVd0F3RUIvekFkQmdOVkhRNEVGZ1FVR2dIa3dDQ1JZaGhTTEFGNDAvdkJTczVPbHd3dwpEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRzVJajhkZjZOcDY0NjZlYTJjcFlDZk9Vc3BRc21kMFJNd0dRTHZ2CndIak5kSDR5NWw2TjkwQUNQNHBEWmF4MUx4TEJqcHlNeGVCbzJ6TkF6NjFYQ0tJZ3RQU1RsK0NmTStqenRkVVQKVlRUNmw4emZRbVZCQk56WlVwMlhUTXdyVkowUHZML2FIbk94NGRDb0pxd2tobGNrY3JRM0ErN1haNmtGYnl1WQpBQ200cnppSHRJVWpyZ25veUVtUGFxWTJTYzJ3a3JRZklLVXRDVkl4WFdZbW51WHF6d0MwSVdqOXV5VGlTNzdECkg0V1NFdjh4ajVId3ZkK1JvaGtYaGQrbkM5WUhVQVRGSWpsclpxYkRUZU5vdjBQNG81d3N5RmJMOFN4YTFJNVoKRENqc2ZUeGx3NTJCYUI1V0YxZEJLYnBtUmRPWWprN2xEVHpqd0tRSmVkVHhnYW89Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
  # Password for the Harbor admin user
HARBOR_ADMIN_PASSWORD: "MXFAVzNlJFI="
POSTGRESQL_PASSWORD: "Y2hhbmdlaXQ="
REGISTRY_CREDENTIAL_PASSWORD: "aGFyYm9yX3JlZ2lzdHJ5X3Bhc3N3b3Jk"
CSRF_KEY: "b0wxSjdQZ2F1OFBxWWNLYXpkU2plUDNNemtzdG9nZ1U="
---
# Source: harbor/templates/core/core-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: harbor-core
labels:
app: "harbor"
data:
app.conf: |+
appname = Harbor
runmode = prod
enablegzip = true
[prod]
httpport = 8080
PORT: "8080"
DATABASE_TYPE: "postgresql"
POSTGRESQL_HOST: "harbor-database"
POSTGRESQL_PORT: "5432"
POSTGRESQL_USERNAME: "postgres"
POSTGRESQL_DATABASE: "registry"
POSTGRESQL_SSLMODE: "disable"
POSTGRESQL_MAX_IDLE_CONNS: "100"
POSTGRESQL_MAX_OPEN_CONNS: "900"
EXT_ENDPOINT: "http://harbor.devops.icu"
CORE_URL: "http://harbor-core:80"
JOBSERVICE_URL: "http://harbor-jobservice"
REGISTRY_URL: "http://harbor-registry:5000"
TOKEN_SERVICE_URL: "http://harbor-core:80/service/token"
CORE_LOCAL_URL: "http://127.0.0.1:8080"
WITH_TRIVY: "true"
TRIVY_ADAPTER_URL: "http://harbor-trivy:8080"
REGISTRY_STORAGE_PROVIDER_NAME: "s3"
LOG_LEVEL: "info"
CONFIG_PATH: "/etc/core/app.conf"
CHART_CACHE_DRIVER: "redis"
_REDIS_URL_CORE: "redis://harbor-redis:6379/0?idle_timeout_seconds=30"
_REDIS_URL_REG: "redis://harbor-redis:6379/2?idle_timeout_seconds=30"
PORTAL_URL: "http://harbor-portal"
REGISTRY_CONTROLLER_URL: "http://harbor-registry:8080"
REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user"
HTTP_PROXY: ""
HTTPS_PROXY: ""
NO_PROXY: "harbor-core,harbor-jobservice,harbor-database,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
PERMITTED_REGISTRY_TYPES_FOR_PROXY_CACHE: "docker-hub,harbor,azure-acr,aws-ecr,google-gcr,quay,docker-registry,github-ghcr,jfrog-artifactory"
METRIC_ENABLE: "true"
METRIC_PATH: "/metrics"
METRIC_PORT: "8001"
METRIC_NAMESPACE: harbor
METRIC_SUBSYSTEM: core
QUOTA_UPDATE_PROVIDER: "db"
---
# Source: harbor/templates/core/core-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: harbor-core
labels:
app: "harbor"
spec:
ports:
- name: http-web
port: 80
targetPort: 8080
- name: http-metrics
port: 8001
selector:
app: "harbor"
component: core
---
# Source: harbor/templates/core/core-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: harbor-core
labels:
app: "harbor"
component: core
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: "harbor"
component: core
template:
metadata:
labels:
app: "harbor"
component: core
spec:
securityContext:
runAsUser: 10000
fsGroup: 10000
automountServiceAccountToken: false
terminationGracePeriodSeconds: 120
containers:
- name: core
image: goharbor/harbor-core:v2.11.1
imagePullPolicy: IfNotPresent
startupProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 360
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 2
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 2
periodSeconds: 10
envFrom:
- configMapRef:
name: "harbor-core"
- secretRef:
name: "harbor-core"
env:
- name: CORE_SECRET
valueFrom:
secretKeyRef:
name: harbor