[Docker & Containerd] Pulling Images with docker pull / crictl pull and Importing Them into a Local Repository

Date: 2024-07-14 10:34:42

Table of Contents

    • Scenario and requirements
    • Log in to the AWS console and create an EC2 instance
    • Connect to the EC2 instance over SSH
    • Install Docker
    • Pull the required images
    • Use SecureFX to transfer the image to the local machine
    • Upload the image to the local server
    • Import the image with crictl/ctr

Scenario and requirements

Pulling images from Docker Hub is blocked or unreliable in the local environment. The workaround in this walkthrough is to pull the required images on an AWS EC2 instance, export them, transfer the archives back to the local servers, and import them into containerd there.

Log in to the AWS console and create an EC2 instance

(screenshot: creating the EC2 instance in the AWS console)

Connect to the EC2 instance over SSH

C:\Users\xyb>ssh -i <key.pem> ec2-user@<public-IP-or-Elastic-IP>

(screenshot: SSH session to the EC2 instance)
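
If ssh rejects the key with an "UNPROTECTED PRIVATE KEY" warning, the .pem file needs restrictive permissions first. A minimal sketch, run from a local Linux/macOS shell; the key filename and IP are placeholders:

chmod 400 my-key.pem
# ssh refuses a private key that other users can read
ssh -i my-key.pem ec2-user@<public-IP>
# ec2-user is the default login user on Amazon Linux AMIs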

Install Docker

[root@ip-10-0-10-183 ~]# yum install -y docker


[root@ip-10-0-10-183 ~]# systemctl daemon-reload
[root@ip-10-0-10-183 ~]# systemctl start docker && systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
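
On systemd-based distributions such as Amazon Linux 2023, the start and enable steps can also be combined into one command:

systemctl enable --now docker
# enables the unit and starts it immediately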


[root@ip-10-0-10-183 ~]# docker info
Client:
 Version:    25.0.3
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.0.0+unknown
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 25.0.3
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 64b8a811b07ba6288238eefc14d898ee0b5b99ba
 runc version: 4bccb38cc9cf198d52bebf2b3a90cd14e7af8c06
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.92-99.174.amzn2023.x86_64
 Operating System: Amazon Linux 2023.4.20240611
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.813GiB
 Name: ip-10-0-10-183.ec2.internal
 ID: 437e11f3-e1e3-4507-bc6d-2de96b64a77d
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
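
By default only root can talk to the Docker daemon. If images will be pulled as ec2-user instead of root, adding the user to the docker group avoids prefixing every command with sudo; this is an optional convenience step, not something the steps above require:

sudo usermod -aG docker ec2-user
# log out and back in for the new group membership to take effect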

Pull the required images

[root@ip-10-0-10-183 ~]# docker pull docker.io/calico/cni:v3.25.0
v3.25.0: Pulling from calico/cni
bc84ed7b6a65: Pull complete
ae5822c70dac: Pull complete
5e4c3414e9ca: Pull complete
8833c0c1f858: Pull complete
8729f736e48f: Pull complete
79eb57bec78a: Pull complete
84d025afc533: Pull complete
df79b6dbf625: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977
Status: Downloaded newer image for calico/cni:v3.25.0
docker.io/calico/cni:v3.25.0
[root@ip-10-0-10-183 ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
calico/cni   v3.25.0   d70a5947d57e   17 months ago   198MB
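
Before an image can be copied off the instance it has to be exported to a tar archive with docker save (the full command list appears in the last section). When several images are needed, a small loop keeps the pull-and-save step repeatable; a sketch, with an illustrative image list:

for img in docker.io/calico/cni:v3.25.0 docker.io/calico/kube-controllers:v3.26.1; do
    docker pull "$img"
    docker save -o "$(echo "$img" | awk -F/ '{print $NF}' | tr ':' '_').tar" "$img"
done
# each image ends up in its own tar file, e.g. cni_v3.25.0.tar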

Use SecureFX to transfer the image to the local machine

(screenshot: downloading the exported image archive from the EC2 instance with SecureFX)
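
SecureFX is only one option; from a local Linux or macOS shell, scp over the same SSH key works just as well. A sketch, where the key file, IP and remote path are placeholders:

scp -i my-key.pem ec2-user@<EC2-public-IP>:/home/ec2-user/cni.tar ./
# download the exported tar from the EC2 instance to the current directory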

Upload the image to the local server

(screenshot: uploading the image archive to the local server)
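
The same tool (or scp) moves the archive from the workstation onto the local server that runs containerd; a sketch with a placeholder host and path:

scp cni.tar root@<local-server-IP>:/root/
# upload the image tar to the server where it will be imported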

Import the image with crictl/ctr

docker pull docker.io/calico/kube-controllers:v3.26.1
# Pull the image with docker

docker save -o kube-controllers.tar docker.io/calico/kube-controllers:v3.26.1
# Export the image to a tar archive with docker save

ctr -n k8s.io image import cni.tar
# Import the image into containerd's k8s.io namespace with ctr
# (ctr image import takes only the tar file as an argument)

ctr -n k8s.io image tag docker.io/calico/cni:v3.25.0 registry.aliyuncs.com/google_containers/cni:v3.25.0
# If the cluster references the image under a different repository
# (e.g. registry.aliyuncs.com/google_containers), retag it after the import

crictl images
# Check that the image was imported successfully

kubectl get pod -A -o wide
# Check whether the Pods are running

sudo scp kube-controllers.tar k8s@node1:/home/k8s
# Copy the exported tar archive to the other nodes


# Import the image into the specified containerd namespace with ctr
ctr -n=k8s.io image import cni.tar

# List the images again; the imported image is now visible
crictl images
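
The -n k8s.io flag matters: kubelet/CRI images live in containerd's k8s.io namespace, and anything imported into the default namespace will not show up in crictl images. A quick way to double-check where an image landed (the grep pattern is illustrative):

ctr namespaces ls
# list containerd namespaces (k8s.io is the one CRI uses)
ctr -n k8s.io images ls | grep calico
# confirm the imported image is visible in the k8s.io namespace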

[root@node01 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-658d97c59c-fzmmj   1/1     Running   0          56m
calico-node-5fdwc                          1/1     Running   0          23m
calico-node-k5smq                          1/1     Running   0          56m
coredns-66f779496c-5qxnw                   1/1     Running   0          88m
coredns-66f779496c-tg9vb                   1/1     Running   0          88m
etcd-master01                              1/1     Running   0          89m
kube-apiserver-master01                    1/1     Running   0          89m
kube-controller-manager-master01           1/1     Running   0          89m
kube-proxy-thm89                           1/1     Running   0          89m
kube-proxy-tpcdh                           1/1     Running   0          23m
kube-scheduler-master01                    1/1     Running   0          89m
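
If a Pod was stuck in ImagePullBackOff before the import, the kubelet retries on its own backoff schedule; deleting the Pod forces an immediate recreation that uses the now-local image (the Pod name below is taken from the output above):

kubectl -n kube-system delete pod calico-kube-controllers-658d97c59c-fzmmj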