Deploying Kubernetes from Binaries

I. Installation requirements

Before starting, the machines used for the Kubernetes cluster must meet the following conditions:

One or more machines running CentOS 7.x x86_64.

1. Hardware: 2 GB of RAM or more, 2 or more CPUs, 30 GB of disk or more.

2. Internet access is needed to pull images; if the servers cannot reach the Internet, download the images in advance and import them onto the nodes.

3. Swap must be disabled.

二、單Master服務(wù)器規(guī)劃

k8s-master????192.168.31.71????kube-apiserver,kube-controller-manager,kube-scheduler,etcd

k8s-node1????192.168.31.72????kubelet,kube-proxy,docker etcd

k8s-node2????192.168.31.73????kubelet,kube-proxy,docker,etcd

高可用集群規(guī)劃(在單點(diǎn)上擴(kuò)展的)

192.168.10.136? master1

192.168.10.137? node

192.168.10.138? node

192.168.10.139? master2

192.168.10.140? load

192.168.10.141? load

192.168.10.142? vip

III. Operating system initialization

# Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

# Set the hostname according to the plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.10.136 master1
192.168.10.137 node1
192.168.10.138 node2
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system  # apply
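If the two bridge settings do not take effect, the br_netfilter kernel module may not be loaded yet. A minimal check (standard CentOS 7 module handling, not part of the original walkthrough):

# Load the bridge netfilter module if it is missing, then re-apply the sysctls
lsmod | grep br_netfilter || modprobe br_netfilter
sysctl --system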

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
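ntpdate performs a one-off synchronization; to keep the clocks aligned you can run it periodically from cron. A small sketch (the hourly schedule is an assumption, and time.windows.com is simply the server used above):

# Re-sync against time.windows.com once an hour
echo "0 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1" >> /var/spool/cron/root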

IV. Deploy the etcd cluster (etcd runs on all three machines; perform the following on any one server — the master node is used here)

1. Prepare the cfssl certificate tooling. cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2. Generate the etcd certificates

Self-signed certificate authority (CA)

① Create the working directories:

mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd

② Self-sign the CA:

vi ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

vi ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

③ Generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem  # check that ca-key.pem and ca.pem were generated

3. Use the self-signed CA to issue the etcd HTTPS certificate. The IPs in the hosts field of the request file below are the internal communication IPs of all etcd nodes; to make future scaling easier you can list a few spare IPs as well.

vi server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.10.136",
    "192.168.10.137",
    "192.168.10.138"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem  # check that server-key.pem and server.pem were generated

4. Download the etcd binary package and deploy it (the following is done on the master; to simplify things, all files generated on the master will be copied to the other etcd nodes later)

Download: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

① Create the working directory and unpack the binary package

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar xf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

② Create the etcd configuration file

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.10.136:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.10.136:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.136:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.136:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.10.136:2380,etcd-2=https://192.168.10.137:2380,etcd-3=https://192.168.10.138:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Notes:

ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: addresses of the cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one

③ Manage etcd with systemd

vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

④ Copy the certificates generated earlier to the paths referenced in the configuration

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

⑤ Start etcd and enable it at boot

systemctl daemon-reload && systemctl start etcd && systemctl enable etcd
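If the start command on the first node appears to hang, that is usually just etcd waiting for the other members, which have not been started yet; watching the unit's log makes this obvious (a simple check, not part of the original steps):

# Follow the etcd logs while the remaining members are brought up
journalctl -u etcd -f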

⑥ Copy all files generated on the master to node1 and node2

scp -r /opt/etcd root@192.168.10.137:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.10.137:/usr/lib/systemd/system/
scp -r /opt/etcd root@192.168.10.138:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.10.138:/usr/lib/systemd/system/

⑦ On node1 and node2, edit etcd.conf and change the node name and the IPs to the local server's values. For example, on node1 (192.168.10.137):

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-2"  # change this: etcd-2 on node1, etcd-3 on node2
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.10.137:2380"  # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.10.137:2379"  # change to the current server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.137:2380"  # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.137:2379"  # change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.10.136:2380,etcd-2=https://192.168.10.137:2380,etcd-3=https://192.168.10.138:2380"  # unchanged, must match the master
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Finally, start etcd and enable it at boot on each node, as above.

⑧ Check the cluster status (a healthy cluster reports "is healthy: successfully committed proposal" for each endpoint)

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.136:2379,https://192.168.10.137:2379,https://192.168.10.138:2379" endpoint health
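If an endpoint reports unhealthy, listing the members the cluster knows about helps spot a node whose URLs do not match its configuration. A minimal sketch reusing the same certificates (etcdctl's member list subcommand, not part of the original steps):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.136:2379" member list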

V. Install Docker (on all nodes; a binary install is used here, but installing with yum works just as well)

Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

1. Unpack the binary package

tar xf docker-19.03.9.tgz
mv docker/* /usr/bin

2. Manage Docker with systemd

vi /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

3. Create the Docker configuration file

mkdir /etc/docker
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}

4. Start Docker and enable it at boot

systemctl daemon-reload && systemctl start docker && systemctl enable docker
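A quick sanity check that the daemon started and picked up the registry mirror (just a verification sketch, not required by the walkthrough):

docker version
docker info | grep -A1 "Registry Mirrors"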

VI. Deploy the master components

1. Generate the kube-apiserver certificates

Self-signed certificate authority (CA)

cd ~/TLS/k8s

vi ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

vi ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem  # check that ca-key.pem and ca.pem were generated

2. Use the self-signed CA to issue the kube-apiserver HTTPS certificate

Create the certificate signing request file:

vi server-csr.json

{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.10.136",
      "192.168.10.137",
      "192.168.10.138",
      "192.168.10.139",
      "192.168.10.140",
      "192.168.10.141",
      "192.168.10.142",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Note: the IPs in the hosts field above must include every master, load balancer, and VIP address — none can be missing! To make future scaling easier you can also list a few spare IPs.

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

ls server*pem  # check that server-key.pem and server.pem were generated

3. Download the binaries for the master and worker nodes. The release page below lists many packages; downloading a single server package is enough, since it contains both the master and worker binaries.

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183

① Unpack the binary package

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

4. Deploy kube-apiserver

① Create the configuration file. Note the double backslashes below: the first backslash escapes the second, so the heredoc writes a literal "\" followed by a newline into the file and the line breaks are preserved.

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.10.136:2379,https://192.168.10.137:2379,https://192.168.10.138:2379 \\
--bind-address=192.168.10.136 \\
--secure-port=6443 \\
--advertise-address=192.168.10.136 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
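If the double-backslash convention is unclear, a throwaway heredoc shows the effect; /tmp/escape-demo.conf and DEMO_OPTS below are only illustrations:

# "\\" is written to the file as a literal "\" and the newline is kept;
# a single "\" before the newline would instead make the shell join the lines.
cat > /tmp/escape-demo.conf << EOF
DEMO_OPTS="--a=1 \\
--b=2"
EOF
cat /tmp/escape-demo.conf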

Notes:

--logtostderr: log to standard error (set to false here so logs go to files)
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate used by the apiserver to access kubelets
--tls-xxx-file: apiserver HTTPS certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

② Copy the certificates generated earlier to the paths referenced in the configuration:

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

③ Enable the TLS bootstrapping mechanism

TLS bootstrapping: once the apiserver has TLS authentication enabled, the kubelet and kube-proxy on every node must present valid CA-signed certificates to talk to kube-apiserver. With many nodes, issuing those client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes provides TLS bootstrapping to issue client certificates automatically: the kubelet connects to the apiserver as a low-privilege user and requests a certificate, which the apiserver signs dynamically. This approach is strongly recommended for nodes; it is currently used mainly for the kubelet, while kube-proxy still gets a certificate that we issue centrally.
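The token created in the next step must be the same one later embedded in bootstrap.kubeconfig; if bootstrapping fails, a mismatch between the two files is a common cause. A minimal consistency check (a sketch, assuming the file paths used in this guide):

# The first field of token.csv must match the token stored in bootstrap.kubeconfig
awk -F, '{print $1}' /opt/kubernetes/cfg/token.csv
grep "token:" /opt/kubernetes/cfg/bootstrap.kubeconfig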

④ Create the token file referenced in the configuration above (format: token,username,UID,group)

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

You can also generate your own token and substitute it (random token):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
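If you do generate a fresh token, it has to end up both in token.csv and, later, in the TOKEN variable used when building bootstrap.kubeconfig. A small sketch of writing token.csv from a generated value (the TOKEN variable name is only for illustration):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
cat /opt/kubernetes/cfg/token.csv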

⑤ Manage kube-apiserver with systemd

vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

⑥ Start kube-apiserver and enable it at boot

systemctl daemon-reload && systemctl start kube-apiserver && systemctl enable kube-apiserver

⑦ Authorize the kubelet-bootstrap user to request certificates (kubectl needs a configured kubeconfig first, otherwise this command will not work — see the next step)

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

⑧ Configure kubectl by creating a kubeconfig file (note where these commands are run: in the directory that holds the certificates)

# Generate the admin certificate (on the master node)
cd /root/TLS/k8s
vi admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --server=https://192.168.10.136:6443 \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --kubeconfig=config

# Set client credentials
kubectl config set-credentials cluster-admin \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --client-key=admin-key.pem \
  --client-certificate=admin.pem \
  --kubeconfig=config

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=config

# Use the default context
kubectl config use-context default --kubeconfig=config

# Put the kubeconfig where kubectl expects it
mkdir -p /root/.kube && mv config /root/.kube/
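To confirm that kubectl can now reach the apiserver, any simple read-only request will do; a quick check (only the apiserver is running at this point, so the controller-manager and scheduler will still show as unhealthy in component status):

kubectl cluster-info
kubectl get ns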

5. Deploy kube-controller-manager

① Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Notes:

--master: connect to the apiserver over the local insecure port 8080
--leader-elect: leader election when multiple replicas of this component run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically sign kubelet certificates; must be the same CA the apiserver uses

② Manage kube-controller-manager with systemd

vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

③ Start kube-controller-manager and enable it at boot

systemctl daemon-reload && systemctl start kube-controller-manager && systemctl enable kube-controller-manager
systemctl status kube-controller-manager

6. Deploy kube-scheduler

① Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF

Notes:

--master: connect to the apiserver over the local insecure port 8080
--leader-elect: leader election when multiple replicas of this component run (HA)

② Manage kube-scheduler with systemd

vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

③ Start kube-scheduler and enable it at boot

systemctl daemon-reload && systemctl start kube-scheduler && systemctl enable kube-scheduler
systemctl status kube-scheduler

④ Check the cluster status

With all control-plane components started, check their status with kubectl:

kubectl get cs

VII. Deploy the worker nodes

1. Create directories and copy the binaries from the master to the nodes

① Create the working directories on every node

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

② Copy from the master node:

scp /opt/kubernetes/ssl/ca.pem 192.168.10.137:/opt/kubernetes/ssl/
scp /opt/kubernetes/ssl/ca.pem 192.168.10.138:/opt/kubernetes/ssl/
cd /root/kubernetes/server/bin
scp kubelet kube-proxy 192.168.10.137:/opt/kubernetes/bin
scp kubelet kube-proxy 192.168.10.138:/opt/kubernetes/bin
scp kubectl 192.168.10.137:/usr/bin
scp kubectl 192.168.10.138:/usr/bin

2. Deploy the kubelet

① Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=node1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

Note: change --hostname-override to the hostname of the node you are configuring.

Notes:

--hostname-override: the node's display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the pause container that manages the Pod network

② Create the parameters file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

③ Generate the bootstrap.kubeconfig file (run the commands below on the master node, in /opt/kubernetes/cfg)

Set variables:

KUBE_APISERVER="https://192.168.10.136:6443"  # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940"  # must match token.csv

Generate the kubelet bootstrap kubeconfig:

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

scp /opt/kubernetes/cfg/bootstrap.kubeconfig 192.168.10.137:/opt/kubernetes/cfg/
scp /opt/kubernetes/cfg/bootstrap.kubeconfig 192.168.10.138:/opt/kubernetes/cfg/

④ Manage the kubelet with systemd

vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

⑤ Start the kubelet and enable it at boot

systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet
ps -ef | grep kubelet
netstat -antp | grep 10250

⑥ Approve the kubelet certificate request and join the node to the cluster (run the commands below on the master)

# List pending certificate requests (shows which nodes have requested certificates)
kubectl get csr

# Approve a request (append the request name returned by the previous command)
kubectl certificate approve <csr-name>

# List nodes
kubectl get node  # the nodes show NotReady at this point because the CNI network plugin is not yet deployed

3. Deploy kube-proxy

① Create the configuration file

vi /opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

② Create the parameters file

vi /opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node1  # change this to the local hostname
clusterCIDR: 10.0.0.0/24

③ Generate the kube-proxy certificate files (run the commands below on the master)

# Switch to the working directory
cd /root/TLS/k8s

# Create the certificate signing request file
vi kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem  # check that kube-proxy-key.pem and kube-proxy.pem were generated

# Copy the certificates to the nodes
scp kube-proxy-key.pem kube-proxy.pem 192.168.10.137:/opt/kubernetes/ssl
scp kube-proxy-key.pem kube-proxy.pem 192.168.10.138:/opt/kubernetes/ssl

④ Generate the kubeconfig file:

KUBE_APISERVER="https://192.168.10.136:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Copy the generated kubeconfig to the path referenced by kube-proxy-config.yml on each node
scp kube-proxy.kubeconfig 192.168.10.137:/opt/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.10.138:/opt/kubernetes/cfg/

⑤ Manage kube-proxy with systemd

vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

⑥ Start kube-proxy and enable it at boot

systemctl daemon-reload && systemctl start kube-proxy && systemctl enable kube-proxy
ps -ef | grep kube-proxy
netstat -antp | grep 10249

4. Deploy the CNI network plugin (on the node)

① Prepare the CNI plugin binaries (these can be downloaded)

Download: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

② Unpack the package into the default working directory

mkdir /opt/cni/bin -p
tar xf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

③ Deploy the CNI network (the link below may be unreachable, in which case download the YAML file yourself; the default image address is also unreachable from some networks, so it is replaced with a Docker Hub mirror)

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
kubectl apply -f kube-flannel.yml

kubectl get node  # the nodes now show Ready
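If a node stays NotReady, check that the flannel DaemonSet pods themselves are running; a quick check, assuming the labels used in the stock kube-flannel.yml:

kubectl get pods -n kube-system -l app=flannel -o wide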

④ Authorize the apiserver to access the kubelet

vi apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes

kubectl apply -f apiserver-to-kubelet-rbac.yaml

VIII. Deploy the Dashboard and CoreDNS

① Deploy the Dashboard (if the link below cannot be reached, download the file by other means). To expose the page on port 30001 as used below, change the kubernetes-dashboard Service in recommended.yaml to type NodePort with nodePort 30001, then apply it.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml

② Grant access to the dashboard

vi dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dashboard-adminuser.yaml

③ Get a token that can be used to log in to the dashboard page (copy the token from the output)

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret |grep admin-user|awk '{print $1}')

④ Access the dashboard at: https://NodeIP:30001

⑤ Deploy CoreDNS, which provides Service name resolution inside the cluster (download the YAML file yourself)

kubectl apply -f coredns.yaml
kubectl get pods -n kube-system
NAME                        READY   STATUS    RESTARTS   AGE
coredns-5ffbfd976d-j6shb    1/1     Running   0          32s

⑥ Test DNS resolution

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Resolution works as expected.

IX. Add a new worker node

1. Copy the files of an already deployed node to the new one. On node1, copy the relevant files to the new node (node2):

scp -r /opt/kubernetes root@192.168.10.138:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.10.138:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.10.138:/opt/

2. Delete the kubelet certificate and kubeconfig files (note: these files are generated automatically after the certificate request is approved and are unique to each node, so they must be removed and regenerated)

rm /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

3. Change the hostname in the configuration files

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=node2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: node2

4. Start the services and enable them at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

5. On the master, approve the new node's kubelet certificate request

kubectl get csr
kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

6. Check node status

kubectl get node

The single-master cluster deployment is now complete!

X. High-availability architecture (scaling out to multiple masters: one more master plus two nginx load balancers)

Cluster IP plan:

192.168.10.136  master1
192.168.10.137  node1
192.168.10.138  node2
192.168.10.139  master2
192.168.10.140  nginx load balancer
192.168.10.141  nginx load balancer
192.168.10.142  VIP

1. Install Docker (on master2)

Same as above, not repeated here.

2. Deploy master2 (192.168.10.139)

All operations on master2 are identical to those on the already deployed master1, so we only need to copy all the Kubernetes files from master1, change the server IP and hostname, and start the services.

3. Create the etcd certificate directory

On master2:

mkdir -p /opt/etcd/ssl

4. Copy files (on master1)

Copy all Kubernetes files and the etcd certificates from master1 to master2:

scp -r /opt/kubernetes root@192.168.10.139:/opt
scp -r /opt/cni/ root@192.168.10.139:/opt
scp -r /opt/etcd/ssl root@192.168.10.139:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.10.139:/usr/lib/systemd/system
scp /usr/bin/kubectl root@192.168.10.139:/usr/bin

5. Delete certificate files (on master2)

Delete the kubelet certificate and kubeconfig files:

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

6. Change the IPs and hostname in the configuration files

Change the apiserver, kubelet, and kube-proxy configuration files to use the local IP and hostname:

vi /opt/kubernetes/cfg/kube-apiserver.conf
--bind-address=192.168.10.139 \  # change to the local IP
--advertise-address=192.168.10.139 \  # change to the local IP

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=master2  # change to the local hostname

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: master2  # change to the local hostname

7. Start the services and enable them at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy

XI. Deploy the Nginx load balancers

1. How it works

Nginx is a mainstream web server and reverse proxy; here its layer-4 (stream) proxying is used to load-balance the apiservers.

Keepalived is a mainstream high-availability tool that provides active/standby failover by binding a VIP. Keepalived decides whether to fail over (move the VIP) based on the state of Nginx: if the Nginx master node goes down, the VIP is automatically bound on the Nginx backup node, so the VIP stays reachable and Nginx remains highly available.

2. Install the packages (on both the master and the backup load balancer)

yum install epel-release -y
yum install nginx keepalived -y

3. Nginx configuration file (identical on master and backup; the heredoc delimiter is quoted so that the $ variables in log_format are written literally rather than expanded by the shell)

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.10.136:6443;
       server 192.168.10.139:6443;
    }

    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;
        location / {
        }
    }
}
EOF

4. Keepalived configuration file (on the nginx master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER  # differs from the backup
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER  # differs from the backup
    interface ens32  # change to the actual interface name
    virtual_router_id 10  # set to the third octet of the IP address
    priority 100
    advert_int 1  # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.142/24
    }
    track_script {
        check_nginx
    }
}
EOF

5. Nginx health-check script (on the master load balancer; the heredoc delimiter is quoted so that $( ), $count, and $$ end up in the script instead of being expanded while the file is written)

cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh

6. Keepalived configuration file (on the nginx backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP  # differs from the master
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP  # differs from the master
    interface ens32
    virtual_router_id 10
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.142/24
    }
    track_script {
        check_nginx
    }
}
EOF

7. Nginx health-check script (on the backup load balancer; quoted delimiter again, for the same reason as above)

cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived decides whether to fail over based on the script's exit code (0 means nginx is healthy, non-zero means it is not).
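Before relying on keepalived, the script can be exercised by hand to confirm it returns the expected codes (a quick check, not part of the original steps):

# Prints 0 while nginx is running; after stopping nginx it prints 1
bash /etc/keepalived/check_nginx.sh; echo $?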

8. Start the services and enable them at boot

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived

9. Check the keepalived state

ip a  # the VIP address should appear on the ens32 interface

10. Test Nginx + Keepalived failover

Stop nginx on the master load balancer and check whether the VIP moves to the backup server:

Run pkill nginx on the nginx master.
On the nginx backup, ip a shows that the VIP is now bound there.

11. Test access through the load balancer

From any node in the cluster, query the Kubernetes version through the VIP with curl:

curl -k https://192.168.10.142:6443/version

{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.2",
  "gitCommit": "52c56ce7a8272c798dbc29846288d7cd9fbae032",
  "gitTreeState": "clean",
  "buildDate": "2020-04-16T11:48:36Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The Kubernetes version information comes back correctly, so the load balancer is working.

12. The nginx log also shows which apiserver IPs the requests are forwarded to:

tail /var/log/nginx/k8s-access.log -f

13. Point all worker nodes at the load balancer VIP (run on every worker node)

sed -i 's#192.168.10.136:6443#192.168.10.142:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
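Before checking node status, it can help to confirm that every kubeconfig on the node now points at the VIP; a minimal check over the config directory used in this guide:

# bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig should all show the VIP
grep -r "192.168.10.142:6443" /opt/kubernetes/cfg/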

14. Check node status

kubectl get node  # all nodes report Ready

A complete highly available Kubernetes cluster is now deployed!
