<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Navi</title>
  
  
  <link href="https://imszz.com/atom.xml" rel="self"/>
  
  <link href="https://imszz.com/"/>
  <updated>2025-05-29T02:30:25.000Z</updated>
  <id>https://imszz.com/</id>
  
  <author>
    <name>Navi</name>
    
  </author>
  
  <generator uri="https://hexo.io/">Hexo</generator>
  
  <entry>
    <title>K8s: force-deleting a namespace</title>
    <link href="https://imszz.com/p/3ee01430/"/>
    <id>https://imszz.com/p/3ee01430/</id>
    <published>2024-05-29T02:00:25.000Z</published>
    <updated>2025-05-29T02:30:25.000Z</updated>
    
    <content type="html"><![CDATA[<h4 id="查看命名空间列表："><a href="#查看命名空间列表：" class="headerlink" title="查看命名空间列表："></a>List the namespaces:</h4><p>The namespace <code>keda</code> is stuck in the <code>Terminating</code> state:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">kubectl get ns</span><br><span class="line"></span><br><span class="line">NAME               STATUS        AGE</span><br><span class="line">default            Active        87d</span><br><span class="line">dev                Active        21d</span><br><span class="line">ingress-nginx      Active        126m</span><br><span class="line">keda               Terminating   126m</span><br><span class="line">kube-flannel       Active        87d</span><br><span class="line">kube-node-lease    Active        87d</span><br><span class="line">kube-public        Active        87d</span><br><span class="line">kube-system        Active        87d</span><br><span class="line">openfaas           Active        87d</span><br><span class="line">openfaas-fn        Active        87d</span><br><span class="line">openfunction       Active        28h</span><br><span class="line">tekton-pipelines   Active        126m</span><br></pre></td></tr></table></figure><h4 id="解决办法"><a href="#解决办法" class="headerlink" title="解决办法"></a>Solution</h4><p>Export the definition of the stuck namespace as JSON:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get ns keda -o json &gt; keda.json</span><br></pre></td></tr></table></figure><p>Edit the <code>&quot;spec&quot;</code> section of the JSON file and empty the <code>&quot;finalizers&quot;</code> list:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">&quot;spec&quot;: &#123;</span><br><span class="line">    &quot;finalizers&quot;: [</span><br><span class="line">    ]</span><br><span class="line">&#125;,</span><br></pre></td></tr></table></figure><p>Apply the modified JSON against the namespace&#39;s <code>finalize</code> subresource with <code>kubectl replace</code>:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl replace --raw &quot;/api/v1/namespaces/keda/finalize&quot; -f ./keda.json</span><br></pre></td></tr></table></figure><h4 id="再度查看："><a href="#再度查看：" class="headerlink" title="再度查看："></a>Check again:</h4><p>The namespace has been deleted:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">kubectl get ns</span><br><span class="line"></span><br><span class="line">NAME               STATUS   AGE</span><br><span class="line">default            Active   87d</span><br><span class="line">dev                Active   21d</span><br><span class="line">ingress-nginx      Active   144m</span><br><span class="line">kube-flannel       Active   87d</span><br><span class="line">kube-node-lease    Active   87d</span><br><span class="line">kube-public        Active   87d</span><br><span class="line">kube-system        Active   87d</span><br><span class="line">openfaas           Active   87d</span><br><span class="line">openfaas-fn        Active   87d</span><br><span class="line">openfunction       Active   28h</span><br><span class="line">tekton-pipelines   Active   144m</span><br><span class="line"></span><br></pre></td></tr></table></figure>]]></content>
    
    
      
      
    <summary type="html">&lt;h4 id=&quot;查看命名空间列表：&quot;&gt;&lt;a href=&quot;#查看命名空间列表：&quot; class=&quot;headerlink&quot; title=&quot;查看命名空间列表：&quot;&gt;&lt;/a&gt;List the namespaces:&lt;/h4&gt;&lt;p&gt;The namespace &lt;code&gt;keda&lt;/code&gt; is stuck in the &lt;code&gt;Terminati</summary>
      
    
    
    
    <category term="Kubernetes" scheme="https://imszz.com/categories/Kubernetes/"/>
    
    
    <category term="Kubernetes" scheme="https://imszz.com/tags/Kubernetes/"/>
    
    <category term="namespace" scheme="https://imszz.com/tags/namespace/"/>
    
  </entry>
  
  <entry>
    <title>Deploying a SkyWalking cluster on Kubernetes, with Java service integration</title>
    <link href="https://imszz.com/p/f1160bc8/"/>
    <id>https://imszz.com/p/f1160bc8/</id>
    <published>2024-05-29T02:00:00.000Z</published>
    <updated>2024-05-29T08:00:00.000Z</updated>
    
    <content type="html"><![CDATA[<h4 id="1-概述："><a href="#1-概述：" class="headerlink" title="1 概述："></a>1 Overview</h4><h5 id="1-1-环境"><a href="#1-1-环境" class="headerlink" title="1.1 环境"></a>1.1 Environment</h5><p>Version information:</p><p>a. OS: CentOS 7.9</p><p>b. SkyWalking: v9.0.1</p><p>c. Kubernetes: v1.22.0</p><p>d. Elasticsearch: 6.8.6</p><p>e. Helm: 3.8</p><h5 id="1-2-skywalking概述"><a href="#1-2-skywalking概述" class="headerlink" title="1.2 skywalking概述"></a>1.2 SkyWalking overview</h5><p>1.2.1 What SkyWalking is<br><code>SkyWalking</code> is an open-source APM system that provides monitoring, distributed tracing, and diagnostics for cloud-native distributed systems. It supports applications written in many languages <code>(Java, PHP, Go, Lua, etc.)</code> and can also integrate with service meshes. Besides code-invasive integration, a major highlight is zero-code-intrusion integration (which is language-specific): for Java it uses the java agent mechanism to modify the running program at the JVM level, so developers get instrumentation without changing any business code. The backend storage supports <code>es, mysql, tidb</code> and other databases.</p><p>The architecture diagram:<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/skywalking.png" alt="github--lena"></p><p><code>1.2.2</code> Using the SkyWalking Java agent</p><p><code>1）</code> Option 1: the command line</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">java \</span><br><span class="line">-javaagent:/root/skywalking/agent/skywalking-agent.jar \</span><br><span class="line">-Dskywalking.agent.service_name=app1 \</span><br><span class="line">-Dskywalking.collector.backend_service=localhost:11800 \</span><br><span class="line">-jar myapp.jar</span><br></pre></td></tr></table></figure><p><code>2）</code> Option 2: environment variables</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">export SW_AGENT_COLLECTOR_BACKEND_SERVICES=10.0.0.1:11800,10.0.0.2:11800</span><br><span class="line">export SW_AGENT_NAME=demo1</span><br><span class="line">export JAVA_OPTS=-javaagent:/root/skywalking/agent/skywalking-agent.jar</span><br><span class="line"></span><br><span class="line">java \</span><br><span class="line"><span class="meta">$</span><span class="bash">JAVA_OPTS \</span></span><br><span class="line"><span class="bash">-jar myapp.jar</span></span><br></pre></td></tr></table></figure><h4 id="2-部署前置条件："><a href="#2-部署前置条件：" class="headerlink" title="2 部署前置条件："></a>2 Prerequisites</h4><p>You need a working k8s cluster:<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/skywalking-k8s.png" alt="github--lena"></p><h4 id="3-部署："><a href="#3-部署：" class="headerlink" title="3 部署："></a>3 Deployment</h4><h5 id="3-1-部署es集群"><a href="#3-1-部署es集群" class="headerlink" title="3.1 部署es集群"></a>3.1 Deploy the ES cluster</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span
class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span 
class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br></pre></td><td class="code"><pre><span class="line">cat &gt; elasticsearch-deployment.yaml &lt;&lt; EOF</span><br><span class="line">apiVersion: apps/v1</span><br><span class="line">kind: StatefulSet</span><br><span
class="line">metadata:</span><br><span class="line">  name: elasticsearch</span><br><span class="line">  namespace: elastic</span><br><span class="line">spec:</span><br><span class="line">  replicas: 3</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: elasticsearch</span><br><span class="line">  serviceName: elasticsearch</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      creationTimestamp: null</span><br><span class="line">      labels:</span><br><span class="line">        app: elasticsearch</span><br><span class="line">    spec:</span><br><span class="line">      containers:</span><br><span class="line">      - env:</span><br><span class="line">        - name: cluster.name</span><br><span class="line">          value: k8s-logs</span><br><span class="line">        - name: node.name</span><br><span class="line">          valueFrom:</span><br><span class="line">            fieldRef:</span><br><span class="line">              apiVersion: v1</span><br><span class="line">              fieldPath: metadata.name</span><br><span class="line">        - name: discovery.zen.ping.unicast.hosts</span><br><span class="line">          value: elasticsearch-0.elasticsearch,elasticsearch-1.elasticsearch,elasticsearch-2.elasticsearch</span><br><span class="line">        - name: discovery.zen.minimum_master_nodes</span><br><span class="line">          value: &quot;2&quot;</span><br><span class="line">        - name: ES_JAVA_OPTS</span><br><span class="line">          value: -Xms512m -Xmx512m</span><br><span class="line">        image: docker.elastic.co/elasticsearch/elasticsearch:6.8.6</span><br><span class="line">        imagePullPolicy: Always</span><br><span class="line">        name: elasticsearch</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 9200</span><br><span class="line">          
name: rest</span><br><span class="line">          protocol: TCP</span><br><span class="line">        - containerPort: 9300</span><br><span class="line">          name: inter-node</span><br><span class="line">          protocol: TCP</span><br><span class="line">        resources:</span><br><span class="line">          limits:</span><br><span class="line">            cpu: &quot;1&quot;</span><br><span class="line">          requests:</span><br><span class="line">            cpu: 100m</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - mountPath: /usr/share/elasticsearch/data</span><br><span class="line">          name: elasticsearch-data-pvc</span><br><span class="line">      initContainers:</span><br><span class="line">      - command:</span><br><span class="line">        - sh</span><br><span class="line">        - -c</span><br><span class="line">        - chown -R 1000:1000 /usr/share/elasticsearch/data</span><br><span class="line">        image: busybox</span><br><span class="line">        imagePullPolicy: Always</span><br><span class="line">        name: fix-permissions</span><br><span class="line">        securityContext:</span><br><span class="line">          privileged: true</span><br><span class="line">        terminationMessagePath: /dev/termination-log</span><br><span class="line">        terminationMessagePolicy: File</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - mountPath: /usr/share/elasticsearch/data</span><br><span class="line">          name:  elasticsearch-data-pvc</span><br><span class="line">      - command:</span><br><span class="line">        - sysctl</span><br><span class="line">        - -w</span><br><span class="line">        - vm.max_map_count=262144</span><br><span class="line">        image: busybox</span><br><span class="line">        imagePullPolicy: Always</span><br><span class="line">        name: increase-vm-max-map</span><br><span class="line">        
resources: &#123;&#125;</span><br><span class="line">        securityContext:</span><br><span class="line">          privileged: true</span><br><span class="line">        terminationMessagePath: /dev/termination-log</span><br><span class="line">        terminationMessagePolicy: File</span><br><span class="line">      - command:</span><br><span class="line">        - sh</span><br><span class="line">        - -c</span><br><span class="line">        - ulimit -n 65536</span><br><span class="line">        image: busybox</span><br><span class="line">        imagePullPolicy: Always</span><br><span class="line">        name: increase-fd-ulimit</span><br><span class="line">        resources: &#123;&#125;</span><br><span class="line">        securityContext:</span><br><span class="line">          privileged: true</span><br><span class="line">     #  volumes:</span><br><span class="line">  volumeClaimTemplates:</span><br><span class="line">    - metadata:</span><br><span class="line">        name: elasticsearch-data-pvc # do not change this name; see the ECK docs for advanced usage</span><br><span class="line">      spec:</span><br><span class="line">        accessModes:</span><br><span class="line">          - ReadWriteOnce</span><br><span class="line">        resources:</span><br><span class="line">          requests:</span><br><span class="line">            storage: 100Gi # default size; with allowVolumeExpansion: true it can be expanded later</span><br><span class="line">        storageClassName: elasticsearch-nfs-sc</span><br><span class="line">      #- emptyDir: &#123;&#125;</span><br><span class="line">      #  name: data</span><br><span class="line">      #- name: data</span><br><span class="line">      #  persistentVolumeClaim:</span><br><span class="line">      #    claimName: elasticsearch-data-pvc</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">kind: Service</span><br><span class="line">apiVersion: v1</span><br><span class="line">metadata:</span><br><span class="line">  name: elasticsearch</span><br><span class="line">  namespace: elastic</span><br><span class="line">  labels:</span><br><span class="line">    app: elasticsearch</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    app: elasticsearch</span><br><span class="line">  clusterIP: None</span><br><span class="line">  ports:</span><br><span class="line">    - port: 9200</span><br><span class="line">      name: rest</span><br><span class="line">    - port: 9300</span><br><span class="line">      name: inter-node</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">kind: Service</span><br><span class="line">apiVersion: v1</span><br><span class="line">metadata:</span><br><span class="line">  name: elasticsearch-logging</span><br><span class="line">  namespace: elastic</span><br><span class="line">  labels:</span><br><span class="line">    app: elasticsearch</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    app: elasticsearch</span><br><span class="line">  ports:</span><br><span class="line">    - port: 9200</span><br><span class="line">      name: external</span><br><span class="line">      </span><br><span class="line">EOF</span><br><span class="line"></span><br><span class="line">cat &gt; elasticsearch-data-sc.yaml &lt;&lt; EOF</span><br><span class="line">apiVersion: storage.k8s.io/v1</span><br><span class="line">kind: StorageClass</span><br><span class="line">metadata:</span><br><span class="line">  name: elasticsearch-nfs-sc</span><br><span class="line">provisioner: fuseim.pri/ifs</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span
class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">cat &gt; elasticsearch-pvc.yaml &lt;&lt; EOF</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: PersistentVolumeClaim</span><br><span class="line">metadata:</span><br><span class="line">  name: elasticsearch-data-pvc</span><br><span class="line">  namespace: elastic</span><br><span class="line">spec:</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteMany</span><br><span class="line">  resources:</span><br><span class="line">    requests:</span><br><span class="line">      storage: 100Gi</span><br><span class="line">  storageClassName: elasticsearch-nfs-sc</span><br><span class="line"># status:</span><br><span class="line">#   accessModes:</span><br><span class="line">#     - ReadWriteMany</span><br><span class="line">#   capacity:</span><br><span class="line">#     storage: 100Gi</span><br><span class="line"></span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><p><code>ES cluster address: host IP + port</code><br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/skywalking-k8s-es.png" alt="github--lena"></p><h5 id="3-2-部署skywalking集群"><a href="#3-2-部署skywalking集群" class="headerlink" title="3.2 部署skywalking集群"></a>3.2 Deploy the SkyWalking cluster</h5><p>Clone the <code>skywalking</code> <code>chart</code> repository from <code>github</code>:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">cd /tmp</span><br><span class="line">git clone https://github.com/apache/skywalking-kubernetes</span><br><span class="line">cd /tmp/skywalking-kubernetes/chart</span><br></pre></td></tr></table></figure><p>Since an ES cluster already exists, there is no need to deploy ES through helm again; the ES chart that the skywalking chart depends on can be commented out.</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">vim skywalking&#x2F;Chart.yaml</span><br></pre></td></tr></table></figure><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/skywalking-k8s-es-chart.png" alt="github--lena"><br>Run the helm command to deploy the skywalking cluster. In step 3.1 the ES cluster was deployed in the <code>elastic</code> namespace, so the ES address that skywalking connects to is <code>elasticsearch-logging.elastic:9200</code>.</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">export SKYWALKING_RELEASE_NAME=skywalking</span><br><span class="line">export SKYWALKING_RELEASE_NAMESPACE=skywalking</span><br><span class="line">kubectl create ns $SKYWALKING_RELEASE_NAMESPACE</span><br><span class="line"></span><br><span
class="line">helm install &quot;$SKYWALKING_RELEASE_NAME&quot; ./skywalking \</span><br><span class="line">  -n &quot;$SKYWALKING_RELEASE_NAMESPACE&quot; \</span><br><span class="line">  --set oap.image.tag=9.1.0 \</span><br><span class="line">  --set oap.storageType=elasticsearch \</span><br><span class="line">  --set oap.service.type=NodePort \</span><br><span class="line">  --set oap.javaOpts=&quot;-Xmx4g -Xms4g&quot; \</span><br><span class="line">  --set ui.image.tag=9.1.0 \</span><br><span class="line">  --set ui.service.type=NodePort \</span><br><span class="line">  --set elasticsearch.enabled=false \</span><br><span class="line">  --set elasticsearch.config.host=elasticsearch-logging.elastic \</span><br><span class="line">  --set elasticsearch.config.port.http=9200 \</span><br><span class="line">  --set elasticsearch.config.user=&quot;&quot; \</span><br><span class="line">  --set elasticsearch.config.password=&quot;&quot;  </span><br></pre></td></tr></table></figure><p>Check the svc and pods; skywalking has been deployed successfully:<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/skywalking-k8s-es-pod.png" alt="github--lena"></p><h5 id="3-3-制作skywalking-agent的init容器"><a href="#3-3-制作skywalking-agent的init容器" class="headerlink" title="3.3 制作skywalking agent的init容器"></a>3.3 Build the skywalking agent init container</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span
class="line">mkdir skywalking-java-agent &amp;&amp; cd skywalking-java-agent</span><br><span class="line">wget  https://dlcdn.apache.org/skywalking/java-agent/8.12.0/apache-skywalking-java-agent-8.12.0.tgz</span><br><span class="line">tar -xvf apache-skywalking-java-agent-8.12.0.tgz</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">cat &gt; Dockerfile &lt;&lt; EOF</span><br><span class="line">FROM busybox:latest</span><br><span class="line">ENV LANG=C.UTF-8</span><br><span class="line">RUN set -eux &amp;&amp; mkdir -p /opt/skywalking/agent/</span><br><span class="line">ADD skywalking-agent /opt/skywalking/agent/</span><br><span class="line">WORKDIR /</span><br><span class="line">EOF</span><br><span class="line"></span><br><span class="line"># Run docker build to create the image, then push it to the registry.</span><br><span class="line">docker build -t registry.cn-hangzhou.aliyuncs.com/k8s_beijing/skywalking-agent:9.0.1 .</span><br><span class="line">docker push registry.cn-hangzhou.aliyuncs.com/k8s_beijing/skywalking-agent:9.0.1</span><br><span class="line"></span><br></pre></td></tr></table></figure><h4 id="4-部署springboot微服务"><a href="#4-部署springboot微服务" class="headerlink" title="4 部署springboot微服务"></a>4 Deploy the Spring Boot microservices</h4><p><code>1）</code> The microservices come from the internet, with some modifications; they contain almost no business logic, only HTTP calls and sleep instructions.<br><code>2）</code> My business services are deployed in another k8s cluster, so the skywalking agent connects to the NodePort of the skywalking oap service located in that other cluster.<br><code>3）</code> Every yaml file can be used as-is; adjust the environment variable <code>SW_AGENT_COLLECTOR_BACKEND_SERVICES</code> to your environment. In my case <code>SW_AGENT_COLLECTOR_BACKEND_SERVICES=192.9.30.230:32297</code>.</p><h5 id="4-1-UI服务"><a href="#4-1-UI服务" class="headerlink" title="4.1 UI服务"></a>4.1 UI service</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span
class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br></pre></td><td class="code"><pre><span class="line">cat &gt; acme-financial-ui.yaml 
&lt;&lt; EOF</span><br><span class="line">apiVersion: apps/v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  labels:</span><br><span class="line">    app: acme-financial-ui</span><br><span class="line">  name: acme-financial-ui</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: acme-financial-ui</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: acme-financial-ui</span><br><span class="line">    spec:</span><br><span class="line">      initContainers:</span><br><span class="line">      - image: registry.cn-hangzhou.aliyuncs.com/k8s_beijing/skywalking-agent:9.0.1</span><br><span class="line">        name: skywalking-sidecar</span><br><span class="line">        command: [&quot;sh&quot;]</span><br><span class="line">        args: [</span><br><span class="line">                &quot;-c&quot;,</span><br><span class="line">                &quot;mkdir -p /opt/sw/agent &amp;&amp; cp -rf /opt/skywalking/agent/* /opt/sw/agent/&quot;</span><br><span class="line">        ]</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: sw-agent</span><br><span class="line">          mountPath: /opt/sw/agent</span><br><span class="line">      containers:</span><br><span class="line">      - env:</span><br><span class="line">        - name: JAVA_OPTS</span><br><span class="line">          value: &quot;-javaagent:/opt/sw/agent/skywalking-agent.jar&quot;</span><br><span class="line">        - name: SW_AGENT_NAME</span><br><span class="line">          value: &quot;acme-financial-ui&quot;</span><br><span class="line">        - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES</span><br><span class="line">          value: &quot;192.9.30.230:32297&quot;</span><br><span class="line">        image: registry.cn-shenzhen.aliyuncs.com/gzlj/acme-financial-ui:v0.1</span><br><span class="line">        imagePullPolicy: Always</span><br><span class="line">        name: ui</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 8081</span><br><span class="line">          protocol: TCP</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: sw-agent</span><br><span class="line">          mountPath: /opt/sw/agent</span><br><span class="line">      volumes:</span><br><span class="line">      - name: sw-agent</span><br><span class="line">        emptyDir: &#123;&#125;</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  labels:</span><br><span class="line">    app: acme-financial-ui</span><br><span class="line">  name: acme-financial-ui</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - name: http</span><br><span class="line">    port: 8081</span><br><span class="line">    protocol: TCP</span><br><span class="line">    targetPort: 8081</span><br><span class="line">  selector:</span><br><span class="line">    app: acme-financial-ui</span><br><span class="line">  sessionAffinity: None</span><br><span class="line">  type: NodePort</span><br><span class="line"></span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><h5 id="4-2-office服务"><a href="#4-2-office服务" class="headerlink" title="4.2 office服务"></a>4.2 Office service</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span
class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br></pre></td><td class="code"><pre><span class="line">cat 
&gt; acme-financial-office.yaml &lt;&lt; EOF</span><br><span class="line">apiVersion: apps/v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  labels:</span><br><span class="line">    app: acme-financial-office</span><br><span class="line">  name: acme-financial-office</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: acme-financial-office</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: acme-financial-office</span><br><span class="line">    spec:</span><br><span class="line">      initContainers:</span><br><span class="line">      - image: registry.cn-hangzhou.aliyuncs.com/k8s_beijing/skywalking-agent:9.1.0</span><br><span class="line">        name: skywalking-sidecar</span><br><span class="line">        command: [&quot;sh&quot;]</span><br><span class="line">        args: [</span><br><span class="line">                &quot;-c&quot;,</span><br><span class="line">                &quot;mkdir -p /opt/sw/agent &amp;&amp; cp -rf /opt/skywalking/agent/* /opt/sw/agent/&quot;</span><br><span class="line">        ]</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: sw-agent</span><br><span class="line">          mountPath: /opt/sw/agent</span><br><span class="line">      containers:</span><br><span class="line">      - env:</span><br><span class="line">        - name: JAVA_OPTS</span><br><span class="line">          value: &quot;-javaagent:/opt/sw/agent/skywalking-agent.jar&quot;</span><br><span class="line">        - name: SW_AGENT_NAME</span><br><span class="line">          value: &quot;acme-financial-office&quot;</span><br><span class="line">        - name: 
SW_AGENT_COLLECTOR_BACKEND_SERVICES</span><br><span class="line">          value: &quot;192.9.30.230:32297&quot;</span><br><span class="line">        image: registry.cn-shenzhen.aliyuncs.com/gzlj/acme-financial-office:v0.1</span><br><span class="line">        imagePullPolicy: Always</span><br><span class="line">        name: office</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 8082</span><br><span class="line">          protocol: TCP</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: sw-agent</span><br><span class="line">          mountPath: /opt/sw/agent</span><br><span class="line">      volumes:</span><br><span class="line">      - name: sw-agent</span><br><span class="line">        emptyDir: &#123;&#125;</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  labels:</span><br><span class="line">    app: acme-financial-office</span><br><span class="line">  name: acme-financial-back-office</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - name: http</span><br><span class="line">    port: 8082</span><br><span class="line">    protocol: TCP</span><br><span class="line">    targetPort: 8082</span><br><span class="line">  selector:</span><br><span class="line">    app: acme-financial-office</span><br><span class="line">  sessionAffinity: None</span><br><span class="line">  type: ClusterIP </span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><h5 id="4-3-account服务"><a href="#4-3-account服务" class="headerlink" title="4.3 account服务"></a>4.3 account服务</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span 
class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span 
class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br></pre></td><td class="code"><pre><span class="line">cat &gt; acme-financial-account.yaml &lt;&lt; EOF</span><br><span class="line">apiVersion: apps/v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  labels:</span><br><span class="line">    app: acme-financial-account</span><br><span class="line">  name: acme-financial-account</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  revisionHistoryLimit: 10</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: acme-financial-account</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: acme-financial-account</span><br><span class="line">    spec:</span><br><span class="line">      initContainers:</span><br><span class="line">      - image: registry.cn-hangzhou.aliyuncs.com/k8s_beijing/skywalking-agent:9.1.0</span><br><span class="line">        name: skywalking-sidecar</span><br><span class="line">        command: [&quot;sh&quot;]</span><br><span class="line">        args: [</span><br><span class="line">                &quot;-c&quot;,</span><br><span class="line">                &quot;mkdir -p /opt/sw/agent &amp;&amp; cp -rf /opt/skywalking/agent/* /opt/sw/agent/&quot;</span><br><span class="line">        ]</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: sw-agent</span><br><span class="line">          mountPath: /opt/sw/agent</span><br><span class="line">      containers:</span><br><span class="line">      - env:</span><br><span class="line">        - name: JAVA_OPTS</span><br><span class="line">          value: 
&quot;-javaagent:/opt/sw/agent/skywalking-agent.jar&quot;</span><br><span class="line">        - name: SW_AGENT_NAME</span><br><span class="line">          value: &quot;acme-financial-account&quot;</span><br><span class="line">        - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES</span><br><span class="line">          value: &quot;192.9.30.230:32297&quot;</span><br><span class="line">        image: registry.cn-shenzhen.aliyuncs.com/gzlj/acme-financial-account:v0.1</span><br><span class="line">        imagePullPolicy: Always</span><br><span class="line">        name: account</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 8083</span><br><span class="line"></span><br><span class="line">          protocol: TCP</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: sw-agent</span><br><span class="line">          mountPath: /opt/sw/agent</span><br><span class="line">      volumes:</span><br><span class="line">      - name: sw-agent</span><br><span class="line">        emptyDir: &#123;&#125;</span><br><span class="line">---</span><br><span class="line"></span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  labels:</span><br><span class="line">    app: acme-financial-account</span><br><span class="line">  name: acme-financial-account</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - name: http</span><br><span class="line">    port: 8083</span><br><span class="line">    protocol: TCP</span><br><span class="line">    targetPort: 8083</span><br><span class="line">  selector:</span><br><span class="line">    app: acme-financial-account</span><br><span class="line">  sessionAffinity: None</span><br><span class="line">  type: ClusterIP</span><br><span class="line"></span><br><span 
class="line">EOF</span><br></pre></td></tr></table></figure><h5 id="4-4-customer服务"><a href="#4-4-customer服务" class="headerlink" title="4.4 customer服务"></a>4.4 customer服务</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span 
class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br></pre></td><td class="code"><pre><span class="line">cat &gt; acme-financial-customer.yaml &lt;&lt; EOF</span><br><span class="line">apiVersion: apps/v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  labels:</span><br><span class="line">    app: acme-financial-customer</span><br><span class="line">  name: acme-financial-customer</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: acme-financial-customer</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: acme-financial-customer</span><br><span class="line">    spec:</span><br><span class="line">      initContainers:</span><br><span class="line">      - image: registry.cn-hangzhou.aliyuncs.com/k8s_beijing/skywalking-agent:9.1.0</span><br><span class="line">        name: skywalking-sidecar</span><br><span class="line">        command: [&quot;sh&quot;]</span><br><span class="line">        args: [</span><br><span class="line">                &quot;-c&quot;,</span><br><span class="line">                &quot;mkdir -p /opt/sw/agent &amp;&amp; cp -rf /opt/skywalking/agent/* /opt/sw/agent/&quot;</span><br><span class="line">        ]</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: sw-agent</span><br><span class="line">          mountPath: 
/opt/sw/agent</span><br><span class="line">      containers:</span><br><span class="line">      - env:</span><br><span class="line">        - name: JAVA_OPTS</span><br><span class="line">          value: &quot;-javaagent:/opt/sw/agent/skywalking-agent.jar&quot;</span><br><span class="line">        - name: SW_AGENT_NAME</span><br><span class="line">          value: &quot;acme-financial-customer&quot;</span><br><span class="line">        - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES</span><br><span class="line">          value: &quot;192.9.30.230:32297&quot;</span><br><span class="line">        image: registry.cn-shenzhen.aliyuncs.com/gzlj/acme-financial-customer:v0.1</span><br><span class="line">        imagePullPolicy: Always</span><br><span class="line">        name: customer</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 8084</span><br><span class="line">          protocol: TCP</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: sw-agent</span><br><span class="line">          mountPath: /opt/sw/agent</span><br><span class="line">      volumes:</span><br><span class="line">      - name: sw-agent</span><br><span class="line">        emptyDir: &#123;&#125;</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  labels:</span><br><span class="line">    app: acme-financial-customer</span><br><span class="line">  name: acme-financial-customer</span><br><span class="line">  namespace: default</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - name: http</span><br><span class="line">    port: 8084</span><br><span class="line">    protocol: TCP</span><br><span class="line">    targetPort: 8084</span><br><span class="line">  selector:</span><br><span class="line">    
app: acme-financial-customer</span><br><span class="line">  sessionAffinity: None</span><br><span class="line">  type: ClusterIP</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><h5 id="4-5-ingress"><a href="#4-5-ingress" class="headerlink" title="4.5 ingress"></a>4.5 ingress</h5><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">cat &gt; acme-ingress.yaml &lt;&lt; EOF</span><br><span class="line">apiVersion: networking.k8s.io/v1</span><br><span class="line">kind: Ingress</span><br><span class="line">metadata:</span><br><span class="line">  name: skywalking-ingress</span><br><span class="line">  namespace: default</span><br><span class="line">  annotations:</span><br><span class="line">    prometheus.io/http_probe: &quot;true&quot;</span><br><span class="line">spec:</span><br><span class="line">  ingressClassName: nginx</span><br><span class="line">  rules:</span><br><span class="line">  - host: acme.k8s.com</span><br><span class="line">    http:</span><br><span class="line">      paths:</span><br><span class="line">      - path: /</span><br><span class="line">        pathType: Prefix</span><br><span class="line">        backend:</span><br><span class="line">          service:</span><br><span 
class="line">            name: acme-financial-ui</span><br><span class="line">            port:</span><br><span class="line">              number: 8081</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><h5 id="4-6-业务微服务部署结果"><a href="#4-6-业务微服务部署结果" class="headerlink" title="4.6 业务微服务部署结果"></a>4.6 业务微服务部署结果</h5><p>部署业务服务成功，如图所示，UI服务的NodePort为32468。<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/skywalking-k8s-ui.png" alt="github--lena"></p><h4 id="5-访问springboot业务微服务并查看skywalking"><a href="#5-访问springboot业务微服务并查看skywalking" class="headerlink" title="5 访问springboot业务微服务并查看skywalking"></a>5 访问springboot业务微服务并查看skywalking</h4><h5 id="5-1-访问UI服务的三个接口"><a href="#5-1-访问UI服务的三个接口" class="headerlink" title="5.1 访问UI服务的三个接口"></a>5.1 访问UI服务的三个接口</h5><p>通过<code>NodePort</code>或者<code>ingress</code>域名 访问UI服务的三个接口：<code>/hello、/start、/readtimeout</code>。</p>]]></content>
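The init container and the application container in the Deployments above share one emptyDir volume: the sidecar copies the SkyWalking agent into it before the app starts, and `JAVA_OPTS` then points at the copied jar. The copy step can be simulated locally; the temp directories below are hypothetical stand-ins for the image's `/opt/skywalking/agent` and the emptyDir mounted at `/opt/sw/agent`.

```shell
set -e
# Stand-in for the agent directory baked into the sidecar image.
src="$(mktemp -d)/skywalking/agent"
# Stand-in for the shared emptyDir volume mount.
dst="$(mktemp -d)/sw/agent"
mkdir -p "$src"
touch "$src/skywalking-agent.jar"   # placeholder for the real agent jar

# Mirrors the init container's command:
# mkdir -p /opt/sw/agent && cp -rf /opt/skywalking/agent/* /opt/sw/agent/
mkdir -p "$dst"
cp -rf "$src"/* "$dst"/
ls "$dst"
```

In the pod, the same two lines run once per container start, which is why the app container can reference `-javaagent:/opt/sw/agent/skywalking-agent.jar` even though its own image ships no agent.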
    
    
      
      
    <summary type="html">&lt;h4 id=&quot;1-概述：&quot;&gt;&lt;a href=&quot;#1-概述：&quot; class=&quot;headerlink&quot; title=&quot;1 概述：&quot;&gt;&lt;/a&gt;1 概述：&lt;/h4&gt;&lt;h5 id=&quot;1-1-环境&quot;&gt;&lt;a href=&quot;#1-1-环境&quot; class=&quot;headerlink&quot; title=&quot;1</summary>
      
    
    
    
    <category term="kubernetes" scheme="https://imszz.com/categories/kubernetes/"/>
    
    
    <category term="kubernetes" scheme="https://imszz.com/tags/kubernetes/"/>
    
    <category term="skywalking" scheme="https://imszz.com/tags/skywalking/"/>
    
  </entry>
  
  <entry>
    <title>XFS文件系统挂载报错</title>
    <link href="https://imszz.com/p/b1f8a123/"/>
    <id>https://imszz.com/p/b1f8a123/</id>
    <published>2024-02-20T08:00:00.000Z</published>
    <updated>2024-02-20T08:00:00.000Z</updated>
    
    <content type="html"><![CDATA[<p><strong>Linux 系统中 xfs 分区挂载错误：</strong></p><p><strong>错误提示：</strong></p><p><code>mount: /mnt: wrong fs type, bad option, bad superblock on /dev/vdc1, missing codepage or helper program, or other error.</code></p><p><strong>主要场景：</strong></p><p>该错误通常在挂载 xfs 类型分区时发生，尤其是在要挂载的磁盘与已挂载磁盘（例如系统盘或数据盘）的磁盘 ID（UUID）冲突时。</p><p><strong>解决办法：</strong></p><p><strong>1. 检查 UUID 冲突</strong></p><p>使用以下命令查询系统日志以检查 UUID 冲突：</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">dmesg | tail</span><br></pre></td></tr></table></figure><p>如果出现以下提示，则表明存在 UUID 冲突：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">XFS (vdc1): Filesystem has duplicate UUID 60d67439-baf0-4c8b-94a3-3f10a362e8fe - can&#39;t mount</span><br></pre></td></tr></table></figure><p><strong>2. 使用 nouuid 选项进行临时挂载</strong></p><p>如果存在 UUID 冲突，可以使用 <code>nouuid</code> 选项进行临时挂载：</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">mount -o nouuid /dev/vdc1 /abc</span><br></pre></td></tr></table></figure><p>其中，<code>/dev/vdc1</code> 是要挂载的磁盘分区，<code>/abc</code> 是挂载点。</p><p>此操作将成功挂载磁盘分区，但重启后挂载会失效。</p><p><strong>3. 永久挂载</strong></p><p>要永久挂载，需要使用 <code>xfs_admin</code> 命令为新分区分配一个新的 UUID：</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">sudo xfs_admin -U generate /dev/vdc1</span><br></pre></td></tr></table></figure><p>其中，<code>/dev/vdc1</code> 是要更改其 UUID 的磁盘分区。</p>]]></content>
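The diagnosis step above boils down to checking the kernel log for the "duplicate UUID" message. A minimal sketch of that check, using a hypothetical sample line in place of real `dmesg | tail` output on an affected host:

```shell
# Sample kernel log line (hypothetical; on a real host read it from `dmesg | tail`).
log="XFS (vdc1): Filesystem has duplicate UUID 60d67439-baf0-4c8b-94a3-3f10a362e8fe - can't mount"

# The same grep pattern applied to `dmesg` output identifies the conflict.
if printf '%s\n' "$log" | grep -q 'duplicate UUID'; then
    echo "UUID conflict: mount temporarily with -o nouuid, or assign a new UUID with xfs_admin -U generate"
fi
```

After regenerating the UUID with `xfs_admin -U generate`, `blkid` on the partition should report the new value, and the device can then be mounted (and added to `/etc/fstab`) without the `nouuid` workaround.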
    
    
      
      
    <summary type="html">&lt;p&gt;&lt;strong&gt;Linux 系统中 xfs 分区挂载错误：&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;错误提示：&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;mount: /mnt: wrong fs type, bad option, bad superblock </summary>
      
    
    
    
    <category term="linux" scheme="https://imszz.com/categories/linux/"/>
    
    
    <category term="linux" scheme="https://imszz.com/tags/linux/"/>
    
  </entry>
  
  <entry>
    <title>ceph运维操作</title>
    <link href="https://imszz.com/p/3e15b5bb/"/>
    <id>https://imszz.com/p/3e15b5bb/</id>
    <published>2022-01-03T12:46:25.000Z</published>
    <updated>2022-01-03T16:46:25.000Z</updated>
    
    <content type="html"><![CDATA[<h3 id="一-统一节点上ceph-conf文件"><a href="#一-统一节点上ceph-conf文件" class="headerlink" title="一 统一节点上ceph.conf文件"></a>一 统一节点上ceph.conf文件</h3><p>如果是在admin节点修改的ceph.conf，想推送到所有其他节点，则需要执行下述命令</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph-deploy --overwrite-conf config push mon01 mon02 mon03 osd01 osd02 osd03</span><br></pre></td></tr></table></figure><p>修改完毕配置文件后需要重启服务生效，请看下一小节</p><h3 id="二-ceph集群服务管理"><a href="#二-ceph集群服务管理" class="headerlink" title="二 ceph集群服务管理"></a>二 ceph集群服务管理</h3><div class="note warning flat"><p>!!!下述操作均需要在具体运行服务的那个节点上运行，而不是admin节点!!!</p></div><h4 id="2-1-方式一"><a href="#2-1-方式一" class="headerlink" title="2.1 方式一"></a>2.1 方式一</h4><p>在各具体节点执行下行命令，重启节点上的所有ceph守护进程</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">systemctl restart ceph.target</span><br></pre></td></tr></table></figure><h4 id="2-2-方式二"><a href="#2-2-方式二" class="headerlink" title="2.2 方式二"></a>2.2 方式二</h4><p>在各具体节点执行下行命令，按类型重启相应的守护进程</p><h5 id="1、重启-mgr-守护进程"><a href="#1、重启-mgr-守护进程" class="headerlink" title="1、重启 mgr 守护进程"></a>1、重启 mgr 守护进程</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">systemctl restart ceph-mgr.target</span><br><span class="line"></span><br></pre></td></tr></table></figure><h5 id="2、重启-mds-守护进程"><a href="#2、重启-mds-守护进程" class="headerlink" title="2、重启 mds 守护进程"></a>2、重启 mds 守护进程</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">systemctl restart ceph-mds.target</span><br></pre></td></tr></table></figure><h5 id="3、重启-rgw-守护进程"><a href="#3、重启-rgw-守护进程" class="headerlink" title="3、重启 rgw 
守护进程"></a>3、重启 rgw 守护进程</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">systemctl restart ceph-radosgw.target</span><br></pre></td></tr></table></figure><h5 id="4、重启-mon-守护进程"><a href="#4、重启-mon-守护进程" class="headerlink" title="4、重启 mon 守护进程"></a>4、重启 mon 守护进程</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">systemctl restart ceph-mon.target</span><br></pre></td></tr></table></figure><h5 id="5、重启-osd-守护进程"><a href="#5、重启-osd-守护进程" class="headerlink" title="5、重启 osd 守护进程"></a>5、重启 osd 守护进程</h5><p>登录到osd01节点上，该节点上运行有三个osd daemon进程osd.0、osd.1、osd.2</p><h6 id="5-1-重启所有的osd-daemoon"><a href="#5-1-重启所有的osd-daemoon" class="headerlink" title="5.1 重启所有的osd daemon"></a>5.1 重启所有的osd daemon</h6><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">systemctl restart ceph-osd.target </span><br></pre></td></tr></table></figure><h6 id="5-2-挨个重启"><a href="#5-2-挨个重启" class="headerlink" title="5.2 挨个重启"></a>5.2 挨个重启</h6><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">systemctl restart ceph-osd@0</span><br><span class="line">systemctl restart ceph-osd@1</span><br><span class="line">systemctl restart ceph-osd@2</span><br></pre></td></tr></table></figure><p>了解：也可以根据进程类型+主机名.service</p><h5 id="1-mon-守护进程"><a href="#1-mon-守护进程" class="headerlink" title="1 mon 守护进程"></a>1 mon 守护进程</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">systemctl &#123; start | stop | 
restart&#125; ceph-mon@&#123;mon_instance&#125;.service</span><br><span class="line">例</span><br><span class="line">systemctl restart ceph-mon@mon01.service</span><br></pre></td></tr></table></figure><h5 id="2-mgr-守护进程"><a href="#2-mgr-守护进程" class="headerlink" title="2 mgr 守护进程"></a>2 mgr 守护进程</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">systemctl &#123; start | stop | restart&#125; ceph-mgr@&#123;mgr_instance&#125;.service</span><br></pre></td></tr></table></figure><h5 id="3-osd-守护进程"><a href="#3-osd-守护进程" class="headerlink" title="3 osd 守护进程"></a>3 osd 守护进程</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">systemctl &#123; start | stop | restart&#125; ceph-osd@&#123;osd_instance&#125;.service</span><br></pre></td></tr></table></figure><h5 id="4-rgw-守护进程"><a href="#4-rgw-守护进程" class="headerlink" title="4 rgw 守护进程"></a>4 rgw 守护进程</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">systemctl &#123; start | stop | restart&#125; ceph-radosgw@&#123;rgw_instance&#125;.service</span><br></pre></td></tr></table></figure><h5 id="5-mds-守护进程"><a href="#5-mds-守护进程" class="headerlink" title="5 mds 守护进程"></a>5 mds 守护进程</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">systemctl &#123; start | stop | restart&#125; ceph-mds@&#123;mds_instance&#125;.service</span><br></pre></td></tr></table></figure><h3 id="三-服务平滑重启"><a href="#三-服务平滑重启" class="headerlink" title="三 服务平滑重启"></a>三 服务平滑重启</h3><p>有时候需要更改服务的配置，但不想重启服务，或者是临时修改，此时我们就可以通过admin sockets直接与守护进程交互。如查看和修改守护进程的配置参数。<br>守护进程的socket文件一般是/var/run/ceph/$cluster-$type.$id.asok<br>基于admin 
sockets的操作：</p><ul><li>方式一：tell子命令</li><li>方式二：daemon子命令<br>ceph daemon $type.$id command</li><li>方式三：通过socket文件</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph --admin-daemon &#x2F;var&#x2F;run&#x2F;ceph&#x2F;$cluster-$type.$id.asok command</span><br></pre></td></tr></table></figure><p>常用command如下</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">help</span><br><span class="line"></span><br><span class="line">config get parameter</span><br><span class="line"></span><br><span class="line">config set parameter</span><br><span class="line"></span><br><span class="line">config show</span><br><span class="line"></span><br><span class="line">perf dump</span><br></pre></td></tr></table></figure><h4 id="3-1-tell子命令"><a href="#3-1-tell子命令" class="headerlink" title="3.1 tell子命令"></a>3.1 tell子命令</h4><p>命令使用格式如下，在管理节点执行即可</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph tell &#123;daemon-type&#125;.&#123;daemon id or *&#125; injectargs --&#123;name&#125;&#x3D;&#123;value&#125; [--&#123;name&#125;&#x3D;&#123;value&#125;]</span><br></pre></td></tr></table></figure><ul><li>daemon-type：为要操作的对象类型如osd、mon等。</li><li>daemon id：该对象的名称，osd通常为0、1等，mon为ceph -s显示的名称，这里可以输入*表示全部。</li><li>injectargs：表示参数注入，后面必须跟一个参数，也可以跟多个。</li></ul><p>例如</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td 
class="code"><pre><span class="line"># 在管理节点运行</span><br><span class="line">ceph tell mon.mon01 injectargs --mon_allow_pool_delete&#x3D;true</span><br><span class="line">ceph tell mon.* injectargs --mon_allow_pool_delete&#x3D;true</span><br></pre></td></tr></table></figure><p>mon_allow_pool_delete此选项的值默认为false，表示不允许删除pool，只有此选项打开后方可删除，记得改回去！！！ 这里使用mon.mon01表示只对mon01设置，也可以使用*表示对全部mon生效</p><h4 id="3-2-daemon子命令"><a href="#3-2-daemon子命令" class="headerlink" title="3.2 daemon子命令"></a>3.2 daemon子命令</h4><p>命令格式如下，需要登录到守护进程所在的那台主机上执行</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph daemon &#123;daemon-type&#125;.&#123;id&#125; config set &#123;name&#125;&#x3D;&#123;value&#125; </span><br></pre></td></tr></table></figure><p>例：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ssh root@mon01</span><br><span class="line">ceph daemon mon.mon01 config set mon_allow_pool_delete false </span><br></pre></td></tr></table></figure><h4 id="3-3-socket文件"><a href="#3-3-socket文件" class="headerlink" title="3.3 socket文件"></a>3.3 socket文件</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"># 1、查看帮助</span><br><span class="line"></span><br><span class="line">ceph --admin-daemon &#x2F;var&#x2F;run&#x2F;ceph&#x2F;ceph-mds.mon01.asok help</span><br><span class="line"></span><br><span class="line"># 2、查看配置项</span><br><span class="line">ceph --admin-daemon 
&#x2F;var&#x2F;run&#x2F;ceph&#x2F;ceph-mds.mon01.asok config get mon_allow_pool_delete</span><br><span class="line"></span><br><span class="line"># 3、设置</span><br><span class="line">ceph --admin-daemon &#x2F;var&#x2F;run&#x2F;ceph&#x2F;ceph-mds.mon01.asok config set mon_allow_pool_delete true</span><br></pre></td></tr></table></figure><p>如果超过半数的monitor节点挂掉，此时通过网络访问ceph的所有操作都会被阻塞，但monitor的本地socket还是可以通信的。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph --admin-daemon &#x2F;var&#x2F;run&#x2F;ceph&#x2F;ceph-mon.mon03.asok quorum_status</span><br></pre></td></tr></table></figure><h3 id="四-维护集群常用命令"><a href="#四-维护集群常用命令" class="headerlink" title="四 维护集群常用命令"></a>四 维护集群常用命令</h3><h4 id="4-1-查看集群健康状况"><a href="#4-1-查看集群健康状况" class="headerlink" title="4.1 查看集群健康状况"></a>4.1 查看集群健康状况</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"># 检查ceph的状态</span><br><span class="line">ceph -s</span><br><span class="line">ceph status</span><br><span class="line">ceph health</span><br><span class="line">ceph health detail</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"># 实时观察集群健康状态</span><br><span class="line">ceph -w</span><br></pre></td></tr></table></figure><h4 id="4-2-检查集群的使用情况"><a href="#4-2-检查集群的使用情况" class="headerlink" title="4.2 检查集群的使用情况"></a>4.2 检查集群的使用情况</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span 
class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line">&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;命令1&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;</span><br><span class="line">ceph df  # 它和 Linux 上的 df 相似</span><br><span class="line"></span><br><span class="line"># GLOBAL段</span><br><span class="line">展示了数据所占用集群存储空间的概要，详解如下</span><br><span class="line">SIZE: 集群的总容量;</span><br><span class="line">AVAIL: 集群的空闲空间总量;</span><br><span class="line">RAW USED: 已用存储空间总量;</span><br><span class="line">% RAW USED: 已用存储空间比率。用此值参照 full ratio 和 near full ratio 来确保不会用尽集群空间。详情见存储容量。</span><br><span class="line"></span><br><span class="line"># POOLS 段:</span><br><span class="line">展示了存储池列表及各存储池的大致使用率，其中未计入副本、克隆和快照的占用情况。例如，如果你把 1MB 的数据存储为对象，理论使用率将是 1MB，但考虑到副本数、克隆数和快照数，实际使用率可能是 2MB 或更多。</span><br><span class="line">NAME: 存储池名字;</span><br><span class="line">ID: 存储池唯一标识符;</span><br><span class="line">USED: 大概数据量，单位为 B、KB、MB 或 GB ;</span><br><span class="line">%USED: 各存储池的大概使用率;</span><br><span class="line">Objects: 各存储池内的大概对象数。</span><br><span class="line"></span><br><span 
class="line">&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;命令2&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;</span><br><span class="line">ceph osd df  # 可以详细列出集群每块磁盘的使用情况，包括大小、权重、使用多少空间、使用率等等</span><br></pre></td></tr></table></figure><h4 id="4-3-mds相关"><a href="#4-3-mds相关" class="headerlink" title="4.3 mds相关"></a>4.3 mds相关</h4><p>1、查看mds状态</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ceph mds stat</span><br><span class="line">ceph mds dump</span><br></pre></td></tr></table></figure><p>2、删除mds节点</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">ssh root@mon01 systemctl stop ceph-mds.target</span><br><span class="line">ceph mds rm 0  # 删除一个不活跃的mds</span><br><span class="line"></span><br><span class="line"># 启动mds后，则恢复正常</span><br></pre></td></tr></table></figure><p>3、关闭mds集群</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph mds cluster_down</span><br></pre></td></tr></table></figure><p>4、开启mds集群</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ceph mds cluster_up</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>5、设置cephfs 文件系统存储方式最大单个文件尺寸</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td 
class="code"><pre><span class="line">ceph mds set max_file_size 1024000000000</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>6、了解：清除mds文件系统</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"># 1、强制 mds 状态为 failed</span><br><span class="line">ceph mds fail 0</span><br><span class="line"></span><br><span class="line"># 2、删除 mds 文件系统</span><br><span class="line">ceph fs rm cephfs --yes-i-really-mean-it</span><br><span class="line"></span><br><span class="line"># 3、删除数据池</span><br><span class="line">ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it</span><br><span class="line"></span><br><span class="line"># 4、删除元数据池</span><br><span class="line">ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it </span><br><span class="line"></span><br><span class="line"># 5、然后再删除 mds key、残留文件等</span><br><span class="line"></span><br><span class="line"># 6、最后删除不活跃的mds</span><br><span class="line">ceph mds rm 0 </span><br></pre></td></tr></table></figure><h4 id="4-4-mon相关"><a href="#4-4-mon相关" class="headerlink" title="4.4 mon相关"></a>4.4 mon相关</h4><p>1、查看mon状态</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ceph mon stat</span><br><span 
class="line"></span><br></pre></td></tr></table></figure><p>2、查看mon映射信息</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph mon dump</span><br></pre></td></tr></table></figure><p>3、检查Ceph monitor仲裁/选举状态</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph quorum_status --format json-pretty</span><br></pre></td></tr></table></figure><p>4、查看mon信息包括ip地址</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">获得一个正在运行的 mon map，并保存在 1.txt 文件中 </span><br><span class="line">ceph mon getmap -o 1.txt</span><br><span class="line">monmaptool --print 1.txt</span><br></pre></td></tr></table></figure><h4 id="4-5-auth相关"><a href="#4-5-auth相关" class="headerlink" title="4.5 auth相关"></a>4.5 auth相关</h4><h5 id="一：认证与授权"><a href="#一：认证与授权" class="headerlink" title="一：认证与授权"></a>一：认证与授权</h5><p>Ceph使用cephx协议对客户端进行身份验证，集群中每一个Monitor节点都可以对客户端进行身份验证，所以不存在单点故障。cephx仅用于Ceph集群中的各组件，而不能用于非Ceph组件。它并不解决数据传输加密问题，但也可以提高访问控制安全性问题。</p><h5 id="二：认证授权流程如下"><a href="#二：认证授权流程如下" class="headerlink" title="二：认证授权流程如下"></a>二：认证授权流程如下</h5><ul><li><p>1、客户端向Monitor请求创建用户。</p></li><li><p>2、Monitor返回用户共享密钥给客户端，并将此用户信息共享给MDS和OSD。</p></li><li><p>3、客户端使用此共享密钥向Monitor进行认证。</p></li><li><p>4、Monitor返回一个session key给客户端，并且此session key与对应客户端密钥进行加密。此session key过一段时间后就会失效，需要重新请求。</p></li><li><p>5、客户端对此session key进行解密，如果密钥不匹配无法解密，这时候认证失败。</p></li><li><p>6、如果认证成功，客户端向服务器申请访问的令牌。</p></li><li><p>7、服务端返回令牌给客户端。</p></li><li><p>8、这时候客户端就可以拿着令牌访问到MDS和OSD，并进行数据的交互。因为MDS和Monitor之间有共享此用户的信息，所以当客户端拿到令牌后就可以直接访问。</p><h5 id="三：相关概念"><a href="#三：相关概念" class="headerlink" title="三：相关概念"></a>三：相关概念</h5><h6 id="1、用户"><a href="#1、用户" class="headerlink" 
title="1、用户"></a>1、用户</h6></li><li><p>用户通常指定个人或某个应用</p></li><li><p>个人就是指定实际的人，比如管理员</p></li><li><p>而应用就是指客户端或Ceph集群中的某个组件，通过用户可以控制谁可以如何访问Ceph集群中的哪块数据。</p></li><li><p>Ceph支持多种类型的用户，个人与某应用都属于client类型。还有mds、osd、mgr一些专用类型。</p><h6 id="2、用户标识"><a href="#2、用户标识" class="headerlink" title="2、用户标识"></a>2、用户标识</h6></li><li><p>用户标识由“TYPE.ID”组成，通常ID也代表用户名，如client.admin、osd.1等。</p></li></ul><h6 id="3、使能caps"><a href="#3、使能caps" class="headerlink" title="3、使能caps"></a>3、使能caps</h6><ul><li>使能表示用户可以行使的能力，通俗点也可以理解为用户所拥有的权限。 对于不同的对象所能使用的权限也不一样，大致如下所示。</li><li>Monitor权限有：r、w、x和allow、profile、cap。</li><li>OSD权限有：r、w、x、class-read、class-write和profile osd。</li><li>另外OSD还可以指定单个存储池或者名称空间，如果不指定存储池，默认为整个存储池。</li><li>MDS权限有：allow或者留空。</li></ul><blockquote><p> 关于各权限的意义：</p></blockquote><ul><li>allow：对mds表示rw的意思，其它的表示“允许”。</li><li>r：读取。</li><li>w：写入。</li><li>x：同时拥有读取和写入，相当于可以调用类方法，并且可以在monitor上面执行auth操作。</li><li>class-read：可以读取类方法，x的子集。</li><li>class-write：可以调用类方法，x的子集。</li><li>*：这个比较特殊，代表指定对象的所有权限。</li><li>profile：类似于Linux下sudo，比如profile osd表示授予用户以某个osd身份连接到其它OSD或者Monitor的权限。</li><li>profile bootstrap-osd表示授予用户引导OSD的权限，关于此处可查阅更多资料。</li></ul><h5 id="四-命令"><a href="#四-命令" class="headerlink" title="四 命令"></a>四 命令</h5><p>1、查看 ceph 集群中的认证用户及相关的 key</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph auth list  # 简写：ceph auth ls</span><br></pre></td></tr></table></figure><p>2、查看某一用户详细信息</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph auth get client.admin</span><br></pre></td></tr></table></figure><p>3、只查看用户的key信息</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph auth print-key 
client.admin</span><br></pre></td></tr></table></figure><p>4、创建用户，用户标识为client.test。指定该用户对mon有r的权限，对osd有rw的权限，osd没有指定存储池，所以是对所有存储池都有rw的权限。在创建用户的时候还会自动创建用户的密钥。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ceph auth add client.test mon &quot;allow r&quot; osd &quot;allow rw&quot; </span><br><span class="line"></span><br></pre></td></tr></table></figure><p>5、修改用户权限</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph auth caps client.test mon &quot;allow r&quot; osd &quot;allow rw pool&#x3D;kvm&quot;</span><br></pre></td></tr></table></figure><p>6、删除用户，用户名为osd.0</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph auth del osd.0</span><br></pre></td></tr></table></figure><p>7、keyring密钥环文件<br>keyring文件是一个包含密码、key、证书等内容的集合。一个keyring文件可以包含多个用户的信息，也就是可以将多个用户信息存储到一个keyring文件。</p><h6 id="keyring自动加载顺序"><a href="#keyring自动加载顺序" class="headerlink" title="keyring自动加载顺序"></a>keyring自动加载顺序</h6><p>当访问Ceph集群时默认会从以下四个地方加载keyring文件。</p><ul><li>/etc/ceph/cluster-name.user-name.keyring：这种类型的文件用来保存单个用户信息，文件名格式固定：集群名.用户标识.keyring。如ceph.client.admin.keyring，其中ceph是集群名，client.admin为admin用户的标识。</li><li>/etc/ceph/cluster.keyring：通常用来保存多个用户的keyring信息。</li><li>/etc/ceph/keyring：也用来保存多个用户的keyring信息。</li><li>/etc/ceph/keyring.bin：二进制keyring文件，也用来保存多个用户的keyring信息。</li></ul><p>8、创建一个名为client.admin 的用户，设置好用户对mds、osd、mon的权限，然后把密钥导出到文件中</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">ceph auth get-or-create client.admin mds &#39;allow *&#39; osd 
&#39;allow *&#39; mon &#39;allow *&#39; &gt; &#x2F;etc&#x2F;ceph&#x2F;ceph.client.admin.keyring1</span><br><span class="line"></span><br><span class="line"># 或者</span><br><span class="line">ceph auth get-or-create client.admin mds &#39;allow *&#39; osd &#39;allow *&#39; mon &#39;allow *&#39; -o &#x2F;etc&#x2F;ceph&#x2F;ceph.client.admin.keyring1</span><br></pre></td></tr></table></figure><p>9、创建一个名为osd.0 的用户，设置好用户对mon、osd的权限，然后把密钥导出到文件中</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph auth get-or-create osd.0 mon &#39;allow profile osd&#39; osd &#39;allow *&#39; -o &#x2F;var&#x2F;lib&#x2F;ceph&#x2F;osd&#x2F;ceph-0&#x2F;keyring</span><br></pre></td></tr></table></figure><p>10、创建一个名为mds.nc3 的用户，设置好用户对mon、osd、mds的权限，然后把密钥导出到文件中</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph auth get-or-create mds.nc3 mon &#39;allow rwx&#39; osd &#39;allow *&#39; mds &#39;allow *&#39; -o &#x2F;var&#x2F;lib&#x2F;ceph&#x2F;mds&#x2F;ceph-cs1&#x2F;keyring</span><br></pre></td></tr></table></figure><h4 id="4-6-osd相关"><a href="#4-6-osd相关" class="headerlink" title="4.6 osd相关"></a>4.6 osd相关</h4><p>1、查看osd状态</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd stat</span><br></pre></td></tr></table></figure><p>2、查看osd树</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ceph osd tree  # 查看osd树</span><br><span class="line">ceph osd ls-tree rack1  # 查看osd tree中rack1下的osd编号</span><br></pre></td></tr></table></figure><p>3、查看osd映射信息</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td 
class="code"><pre><span class="line">ceph osd dump</span><br></pre></td></tr></table></figure><p>4、查看数据延迟</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd perf</span><br></pre></td></tr></table></figure><p>5、查看CRUSH map</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd crush dump</span><br></pre></td></tr></table></figure><p>6、查看与设置最大 osd daemon 的个数</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"># 查看</span><br><span class="line">[root@admin ~]# ceph  osd getmaxosd</span><br><span class="line">max_osd &#x3D; 12 in epoch 379</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"># 设置最大的 osd daemon的个数(当扩大 osd daemon的时候必须扩大这个值)</span><br><span class="line">ceph osd setmaxosd 2048</span><br></pre></td></tr></table></figure><p>7、设置 osd 的权重</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd reweight 3 0.5  # 把osd.3的权重改为0.5</span><br></pre></td></tr></table></figure><p>8、暂停 osd (暂停后整个ceph集群不再接收数据)</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pause  # 暂停的是所有的osd</span><br></pre></td></tr></table></figure><p>9、再次开启 osd (开启后再次接收数据)</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd 
unpause</span><br></pre></td></tr></table></figure><p>10、设置标志 flags，不允许将 osd 标记为 down，可解决网络不稳定导致 osd 状态不断切换的问题</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">ceph osd set nodown</span><br><span class="line"># 取消设置</span><br><span class="line">ceph osd unset nodown</span><br></pre></td></tr></table></figure><h4 id="4-7-pool相关"><a href="#4-7-pool相关" class="headerlink" title="4.7 pool相关"></a>4.7 pool相关</h4><p>1、创建存储池</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"># 语法：ceph osd pool create &lt;pool name&gt; &lt;pg num&gt; &lt;pgp num&gt; [type]</span><br><span class="line">pool name：存储池名称，必须唯一。</span><br><span class="line">pg num：存储池中的pg数量。</span><br><span class="line">pgp num：用于归置的pg数量，默认与pg数量相等。</span><br><span class="line">type：指定存储池的类型，有replicated和erasure， 默认为replicated。 </span><br><span class="line"></span><br><span class="line"># 例: 创建一个副本池</span><br><span class="line">ceph osd pool create egon_test 32 32  # 省略type，默认为replicated</span><br></pre></td></tr></table></figure><p>2、修改存储池的pg数</p><p>注意：在更改pool的PG数量时，需同时更改PGP的数量。PGP是为了管理placement而存在的专门的PG，它和PG的数量应该保持一致。如果你增加pool的pg_num，就需要同时增加pgp_num，保持它们大小一致，这样集群才能正常rebalancing。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool set egon_test pg_num 60  </span><br><span class="line">ceph osd pool set egon_test pgp_num 60  </span><br></pre></td></tr></table></figure><p>3、查看存储池</p><figure class="highlight 
plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"># 查看ceph集群中的pool数量</span><br><span class="line">ceph osd lspools</span><br><span class="line"></span><br><span class="line"># 查看名字与详情</span><br><span class="line">ceph osd pool ls</span><br><span class="line">ceph osd pool ls detail</span><br><span class="line"></span><br><span class="line"># 查看状态</span><br><span class="line">ceph osd pool stats</span><br></pre></td></tr></table></figure><p>4、重命名</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool rename &lt;old name&gt; &lt;new name&gt;</span><br></pre></td></tr></table></figure><p>5、在集群中删除一个 pool。注意：删除 pool 后，pool 中映射的 image 会直接被删除，线上操作要谨慎。<br>存储池的名字需要重复两次</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span 
class="line">27</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool delete tom_test tom_test --yes-i-really-really-mean-it</span><br><span class="line"></span><br><span class="line"># 删除时会报错：</span><br><span class="line">Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool</span><br><span class="line"></span><br><span class="line">这是由于没有配置mon节点的 mon_allow_pool_delete 字段所致，解决办法就是到mon节点进行相应的设置。</span><br><span class="line">解决方案：</span><br><span class="line"></span><br><span class="line"># &#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;方案1&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;</span><br><span class="line">ceph tell mon.* injectargs --mon_allow_pool_delete&#x3D;true</span><br><span class="line">ceph osd pool delete tom_test tom_test --yes-i-really-really-mean-it</span><br><span class="line">删除完成后最好把mon_allow_pool_delete改回去，降低误删的风险</span><br><span class="line"></span><br><span class="line"># &#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;方案2&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;</span><br><span class="line">如果是测试环境，想随意删除存储池，可以在配置文件中全局开启删除存储池的功能</span><br><span class="line"># 1、编辑配置文件： vi &#x2F;etc&#x2F;ceph&#x2F;ceph.conf</span><br><span class="line">在配置文件中添加如下内容：</span><br><span class="line">[mon]</span><br><span class="line">mon allow pool delete &#x3D; true</span><br><span class="line"></span><br><span class="line"># 2、推送配置文件</span><br><span class="line">ceph-deploy --overwrite-conf config push mon01 mon02 mon03 osd01 osd02 osd03</span><br><span class="line"></span><br><span class="line"># 3、重启ceph-mon服务：</span><br><span class="line">systemctl restart ceph-mon.target</span><br><span class="line"></span><br><span class="line"># 
4、重新执行删除pool命令即可</span><br></pre></td></tr></table></figure><p>6、为一个 ceph pool 配置配额</p><p>达到配额前集群会告警，达到上限后无法再写入数据。<br>当我们有很多存储池的时候，有些作为公共存储池，这时候就有必要为这些存储池做一些配额，限制可存放的文件数，或者空间大小，以免无限的增大影响到集群的正常运行。设置配额的命令如下。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"># 查看池配额设置</span><br><span class="line">ceph osd pool get-quota &#123;pool_name&#125;</span><br><span class="line"></span><br><span class="line"># 对对象个数进行配额</span><br><span class="line">ceph osd pool set-quota &#123;pool_name&#125; max_objects &#123;number&#125;</span><br><span class="line"></span><br><span class="line"># 对磁盘大小进行配额</span><br><span class="line">ceph osd pool set-quota &#123;pool_name&#125; max_bytes &#123;number&#125;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"># 例：</span><br><span class="line">ceph osd pool set-quota egon_test max_bytes 1000000000</span><br></pre></td></tr></table></figure><p>7、配置参数<br>对于存储池的配置参数可以通过下面命令获取。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool get &lt;pool name&gt; [key name]</span><br></pre></td></tr></table></figure><p>如</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool get &lt;pool name&gt; size </span><br></pre></td></tr></table></figure><p>如果不指定key名称，会输出所有参数，但会有个别报错。设置参数的命令如下。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool set &lt;pool name&gt; &lt;key&gt; &lt;value&gt; </span><br></pre></td></tr></table></figure><p>如</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># 修改pool的最大副本数与最小副本数</span><br><span class="line">ceph osd pool set egon_test min_size 1</span><br><span class="line">ceph osd pool set egon_test size 2</span><br></pre></td></tr></table></figure><blockquote><p>常用的可用配置参数有。</p></blockquote><ul><li>size：存储池中的对象副本数</li><li>min_size：提供服务所需要的最小副本数，如果定义size为3，min_size也为3，坏掉一个OSD，如果pool池中有副本在此块OSD上面，那么此pool将不提供服务，如果将min_size定义为2，那么还可以提供服务，如果提供为1，表示只要有一块副本都提供服务。</li><li>pg_num：定义PG的数量</li><li>pgp_num：定义归置时使用的PG数量</li><li>crush_ruleset：设置crush算法规则</li><li>nodelete：控制是否可删除，默认可以</li><li>nopgchange：控制是否可更改存储池的pg num和pgp num</li><li>nosizechange：控制是否可以更改存储池的大小</li><li>noscrub和nodeep-scrub：控制是否整理或深层整理存储池，可临时解决高I/O问题</li><li>scrub_min_interval：集群负载较低时整理存储池的最小时间间隔</li><li>scrub_max_interval：整理存储池的最大时间间隔</li></ul><p>8、快照<br>创建存储池快照需要大量的存储空间，取决于存储池的大小。 创建快照，以下两条命令都可以 。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool mksnap &lt;pool name&gt; &lt;snap name&gt;</span><br><span class="line">rados -p &lt;pool name&gt; mksnap &lt;snap name&gt;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>列出快照。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados -p &lt;pool name&gt; lssnap </span><br></pre></td></tr></table></figure><p>回滚至存储池快照。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados -p &lt;pool name&gt; rollback &lt;obj-name&gt; &lt;snap name&gt;  # 只能恢复某个对象</span><br></pre></td></tr></table></figure><p>删除存储池快照，以下两条命令都可以删除。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool rmsnap &lt;pool name&gt; &lt;snap name&gt; </span><br><span class="line">rados -p &lt;pool name&gt; rmsnap &lt;snap name&gt; </span><br></pre></td></tr></table></figure><blockquote><p>提示<br><code>Pool池的快照，相对来说是有局限性的，没办法直接恢复快照里边全部object对象文件，只能一个个来恢复，保存点密码文件应该还是可以的。这样的设计效果，猜测有可能是因为如果pool池直接整体恢复，会导致整个ceph集群数据混乱，毕竟集群中数据是分布式存放的！</code></p></blockquote><p>pool存储池快照功能了解即可，感兴趣详见《附录5：》</p><p>9、压缩<br>如果使用bluestore存储引擎，默认提供数据压缩，以节约磁盘空间。启用压缩。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool set &lt;pool name&gt; compression_algorithm snappy</span><br></pre></td></tr></table></figure><p>snappy：压缩使用的算法，可选的还有none、zlib、lz4和zstd等。默认为snappy。zstd压缩比好，但消耗CPU，lz4和snappy对CPU占用较低，不建议使用zlib。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool set &lt;pool name&gt; compression_mode aggressive</span><br><span class="line"></span><br><span class="line"># 例如</span><br><span class="line">ceph osd pool set egon_test compression_mode aggressive</span><br></pre></td></tr></table></figure><p>压缩的模式有none、aggressive、passive和force</p><ul><li>默认none。表示不压缩</li><li>passive表示提示COMPRESSIBLE才压缩</li><li>aggressive表示提示INCOMPRESSIBLE不压缩，其它都压缩</li><li>force表示始终压缩。</li></ul><p>压缩参数:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">compression_max_blob_size：压缩对象的最大体积，超过此体积不压缩。默认为0。</span><br><span class="line">compression_min_blob_size：压缩对象的最小体积，小于此体积不压缩。默认为0。 以下为全局压缩选项，可配置到ceph.conf配置文件，作用于所有存储池：</span><br><span class="line">bluestore_compression_algorithm</span><br><span class="line">bluestore_compression_mode</span><br><span class="line">bluestore_compression_required_ratio</span><br><span class="line">bluestore_compression_min_blob_size</span><br><span class="line">bluestore_compression_max_blob_size</span><br><span class="line">bluestore_compression_min_blob_size_ssd</span><br><span class="line">bluestore_compression_max_blob_size_ssd</span><br><span class="line">bluestore_compression_min_blob_size_hdd</span><br><span class="line">bluestore_compression_max_blob_size_hdd</span><br></pre></td></tr></table></figure><h4 id="4-8-PG相关"><a href="#4-8-PG相关" class="headerlink" title="4.8 PG相关"></a>4.8 PG相关</h4><p>1、查看pg组映射信息</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ceph pg dump  # 或 ceph pg ls</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>2、查看pg信息的脚本，第一行为pool的id号</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span
class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">ceph pg dump | awk &#39;</span><br><span class="line">BEGIN &#123; IGNORECASE &#x3D; 1 &#125;</span><br><span class="line"> &#x2F;^PG_STAT&#x2F; &#123; col&#x3D;1; while($col!&#x3D;&quot;UP&quot;) &#123;col++&#125;; col++ &#125;</span><br><span class="line"> &#x2F;^[0-9a-f]+\.[0-9a-f]+&#x2F; &#123; match($0,&#x2F;^[0-9a-f]+&#x2F;); pool&#x3D;substr($0, RSTART, RLENGTH); poollist[pool]&#x3D;0;</span><br><span class="line"> up&#x3D;$col; i&#x3D;0; RSTART&#x3D;0; RLENGTH&#x3D;0; delete osds; while(match(up,&#x2F;[0-9]+&#x2F;)&gt;0) &#123; osds[++i]&#x3D;substr(up,RSTART,RLENGTH); up &#x3D; substr(up, RSTART+RLENGTH) &#125;</span><br><span class="line"> for(i in osds) &#123;array[osds[i],pool]++; osdlist[osds[i]];&#125;</span><br><span class="line">&#125;</span><br><span class="line">END &#123;</span><br><span class="line"> printf(&quot;\n&quot;);</span><br><span class="line"> printf(&quot;pool :\t&quot;); for (i in poollist) printf(&quot;%s\t&quot;,i); printf(&quot;| SUM \n&quot;);</span><br><span class="line"> for (i in poollist) printf(&quot;--------&quot;); printf(&quot;----------------\n&quot;);</span><br><span class="line"> for (i in osdlist) &#123; printf(&quot;osd.%i\t&quot;, i); sum&#x3D;0;</span><br><span class="line">   for (j in poollist) &#123; printf(&quot;%i\t&quot;, array[i,j]); sum+&#x3D;array[i,j]; sumpool[j]+&#x3D;array[i,j] &#125;; printf(&quot;| %i\n&quot;,sum) &#125;</span><br><span class="line"> for (i in poollist) printf(&quot;--------&quot;); printf(&quot;----------------\n&quot;);</span><br><span class="line"> printf(&quot;SUM :\t&quot;); for (i in poollist) printf(&quot;%s\t&quot;,sumpool[i]); printf(&quot;|\n&quot;);</span><br><span 
class="line">&#125;&#39;</span><br></pre></td></tr></table></figure><p>3、查看pg状态</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg stat</span><br></pre></td></tr></table></figure><p>4、查看一个pg的map</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg map 1.7b</span><br></pre></td></tr></table></figure><p>5、查询一个pg的详细信息</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg 1.7b query</span><br></pre></td></tr></table></figure><p>6、清理一个pg组</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg scrub 1.7b</span><br></pre></td></tr></table></figure><p>7、查看pg中stuck(卡住)的状态</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">ceph pg dump_stuck unclean</span><br><span class="line">ceph pg dump_stuck inactive</span><br><span class="line">ceph pg dump_stuck stale</span><br></pre></td></tr></table></figure><ul><li>Unclean (不干净)<br>归置组含有复制数未达到期望数量的对象，它们应该在恢复中。</li><li>Inactive (不活跃) 归置组不能处理读写，因为它们在等待一个有最新数据的 OSD 复活且进入集群。</li><li>Stale (不新鲜)<br>归置组处于未知状态，即存储它们的 OSD 有段时间没向监视器报告了(由 mon_osd_report_timeout 配置)。 阀值定义的是，归置组被认为卡住前等待的最小时间(默认 300 秒)</li></ul><p>8、显示一个集群中的所有的 pg 统计</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg dump --format plain  # 可用格式有 plain (默认)和 json 。</span><br></pre></td></tr></table></figure><p>9、查看某个 PG 内分布的数据状态，具体状态可以使用选项过滤输出</p><figure class="highlight 
plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg ls 17 clean  # 17为pool的id号</span><br></pre></td></tr></table></figure><p>10、查询某个 osd 所包含的 pg 的信息，过滤输出 pg 的状态信息</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg ls-by-osd osd.5</span><br></pre></td></tr></table></figure><p>11、查询某个 pool 所包含的 pg 的信息，过滤输出 pg 的状态信息</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg ls-by-pool egon_test</span><br></pre></td></tr></table></figure><p>12、查询某个 osd 上状态为 primary 的 pg，可以根据需要过滤状态</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg ls-by-primary osd.3 clean</span><br></pre></td></tr></table></figure><p>13、恢复一个丢失的pg<br>如果集群丢了一个或多个对象，而且必须放弃搜索这些数据，你就要把未找到的对象标记为丢失( lost )。 如果所有可能的位置都查询过了，而仍找不到这些对象，你也许得放弃它们了。这可能是罕见的失败组合导致的， 集群在写入完成前，未能得知写入是否已执行。<br>当前只支持 revert 选项，它使得回滚到对象的前一个版本(如果它是新对象)或完全忽略它。要把 unfound 对象 标记为 lost ，执行命令:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph pg &#123;pg-id&#125; mark_unfound_lost revert|delete</span><br></pre></td></tr></table></figure><h4 id="4-9-rados命令相关"><a href="#4-9-rados命令相关" class="headerlink" title="4.9 rados命令相关"></a>4.9 rados命令相关</h4><p>rados 是一个用于和 Ceph 对象存储集群（RADOS，Ceph 分布式存储系统的组成部分）进行交互的实用工具。<br>1、查看 ceph 集群中有多少个 pool (只是查看 pool)</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados lspools  # 同 ceph osd pool ls 输出结果一致</span><br></pre></td></tr></table></figure><p>2、显示整个系统和各个池的使用率统计，包括磁盘使用(字节)和对象计数</p><figure class="highlight
plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados df</span><br></pre></td></tr></table></figure><p>3、创建一个 pool</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">rados mkpool test</span><br><span class="line"></span><br><span class="line">ceph osd pool set test crush_rule egon_rule  # 修改crush_rule为egon_rule</span><br></pre></td></tr></table></figure><p>4、创建一个对象</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados create test-object -p test  # 创建时卡住了，看看新建的存储池的crush_rule是否正确</span><br></pre></td></tr></table></figure><p>5、上传一个对象</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados -p test put xxx &#x2F;tmp&#x2F;egon.txt </span><br></pre></td></tr></table></figure><p>6、查看 ceph pool 中的 ceph object (这里的 object 是以块形式存储的)</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados ls -p test</span><br></pre></td></tr></table></figure><p>7、删除一个对象</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados rm test-object -p test</span><br></pre></td></tr></table></figure><p>8 、删除存储池以及它包含的所有数据</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados rmpool test test --yes-i-really-really-mean-it</span><br></pre></td></tr></table></figure><p>9、为存储池创建快照</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados -p test mksnap testsnap</span><br></pre></td></tr></table></figure><p>10、列出给定池的快照</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados -p test lssnap</span><br></pre></td></tr></table></figure><p>11、删除快照</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados -p test rmsnap testsnap</span><br></pre></td></tr></table></figure><p>12、使用 rados 进行性能测试</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rados bench 600 write rand -t 100 -b 4K -p egon_test</span><br></pre></td></tr></table></figure><p>选项解释:</p><ul><li>测试时间：600秒</li><li>测试类型：write/read，加 rand 为随机，不加为顺序</li><li>并发数（-t 选项）：100</li><li>pool 的名字：egon_test</li></ul><h3 id="五-osd相关之osd故障模拟与恢复"><a href="#五-osd相关之osd故障模拟与恢复" class="headerlink" title="五 osd相关之osd故障模拟与恢复"></a>五 osd相关之osd故障模拟与恢复</h3><h5 id="5-1-模拟盘坏掉"><a href="#5-1-模拟盘坏掉" class="headerlink" title="5.1 模拟盘坏掉"></a>5.1 模拟盘坏掉</h5><p>如果ceph集群有上千个osd daemon，每天坏个2-3块盘太正常了，我们可以模拟down 掉一个 osd 硬盘</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># 如果osd daemon正常运行，被标记为down的osd会很快自动恢复正常，所以需要先关闭守护进程</span><br><span class="line">ssh root@osd01 systemctl stop ceph-osd@0  </span><br><span class="line">ceph osd down 0  </span><br></pre></td></tr></table></figure><h5 id="5-2-将坏盘踢出集群"><a href="#5-2-将坏盘踢出集群" class="headerlink" title="5.2 将坏盘踢出集群"></a>5.2 将坏盘踢出集群</h5><p>集群中坏掉一块盘后，我们需要将其踢出集群，让集群恢复到active+clean状态</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span
class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br></pre></td><td class="code"><pre><span class="line">&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;方法一&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;</span><br><span class="line"># 1、关闭守护进程</span><br><span class="line">ssh root@osd01 systemctl stop ceph-osd@0  # 一定要到具体的节点上关闭</span><br><span class="line"></span><br><span class="line"># 2、down掉osd</span><br><span class="line">ceph osd down 0</span><br><span class="line"></span><br><span class="line"># 3、将osd.0移出集群，集群会自动同步数据</span><br><span class="line">ceph osd out osd.0</span><br><span class="line"></span><br><span class="line"># 4、将osd.0移除crushmap</span><br><span class="line">ceph osd crush remove osd.0  </span><br><span class="line"></span><br><span class="line"># 5、删除守护进程对应的账户信息</span><br><span class="line">ceph auth rm osd.0  </span><br><span class="line"></span><br><span class="line"># 6、删掉osd.0</span><br><span class="line">ceph osd rm osd.0</span><br><span class="line"></span><br><span 
class="line">&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;方法二&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;&#x3D;</span><br><span class="line">ssh root@osd02 systemctl stop ceph-osd@3  # 一定要到具体的节点上关闭</span><br><span class="line">ceph osd out osd.3</span><br><span class="line">ceph osd purge osd.3 --yes-i-really-mean-it  # 综合这一步，就可以完成操作</span><br><span class="line"># 删除配置文件中针对该osd的配置</span><br></pre></td></tr></table></figure><h5 id="5-3-把原来坏掉的osd修复后重新加入集群"><a href="#5-3-把原来坏掉的osd修复后重新加入集群" class="headerlink" title="5.3 把原来坏掉的osd修复后重新加入集群"></a>5.3 把原来坏掉的osd修复后重新加入集群</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line"># 远程连接到osd01节点</span><br><span class="line">ssh root@osd01</span><br><span class="line"></span><br><span class="line"># 切换到工作目录下</span><br><span class="line">cd &#x2F;etc&#x2F;ceph</span><br><span class="line"></span><br><span class="line"># 创建osd，无需指定名，会按序号自动生成</span><br><span class="line">ceph osd create  </span><br><span class="line"></span><br><span class="line"># 创建账户,切记账号与文件夹对应！！！</span><br><span class="line">ceph-authtool --create-keyring 
&#x2F;etc&#x2F;ceph&#x2F;ceph.osd.0.keyring --gen-key -n osd.0 --cap mon &#39;allow profile osd&#39; --cap mgr &#39;allow profile osd&#39; --cap osd &#39;allow *&#39;</span><br><span class="line"></span><br><span class="line"># 导入新的账户秘钥，切记账号与文件夹对应！！！</span><br><span class="line">ceph auth import -i &#x2F;etc&#x2F;ceph&#x2F;ceph.osd.0.keyring </span><br><span class="line">ceph auth get-or-create osd.0 -o &#x2F;var&#x2F;lib&#x2F;ceph&#x2F;osd&#x2F;ceph-0&#x2F;keyring</span><br><span class="line"></span><br><span class="line"># 加入集群</span><br><span class="line">ceph osd crush add osd.0 0.01900 host&#x3D;osd01</span><br><span class="line">ceph osd in osd.0</span><br><span class="line"></span><br><span class="line"># 重启osd守护进程</span><br><span class="line">systemctl restart ceph-osd@0</span><br></pre></td></tr></table></figure><blockquote><p>ps：如果重启失败</p></blockquote><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">报错：</span><br><span class="line">Job for ceph-osd@3.service failed because start of the service was attempted too often. 
See &quot;systemctl status ceph-osd@3.service&quot; and &quot;journalctl -xe&quot; for details.</span><br><span class="line">To force a start use &quot;systemctl reset-failed ceph-osd@3.service&quot; followed by &quot;systemctl start ceph-osd@3.service&quot; again.</span><br><span class="line"></span><br><span class="line"># 先运行</span><br><span class="line">systemctl reset-failed ceph-osd@3.service</span><br><span class="line"></span><br><span class="line"># 再重新开启</span><br><span class="line">systemctl start ceph-osd@3</span><br></pre></td></tr></table></figure><h3 id="六-在物理节点上新增osd-daemon"><a href="#六-在物理节点上新增osd-daemon" class="headerlink" title="六 在物理节点上新增osd daemon"></a>六 在物理节点上新增osd daemon</h3><p>在osd01节点上添加新的osd daemon</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"># 在osd01节点运行下述命令，把固态盘&#x2F;dev&#x2F;sdi分成两个分区，分别用作数据盘&#x2F;dev&#x2F;sdh的--block-db和--block-wal</span><br><span class="line">parted -s &#x2F;dev&#x2F;sdi mklabel gpt</span><br><span class="line">parted -s &#x2F;dev&#x2F;sdi mkpart primary 0% 50%</span><br><span class="line">parted -s &#x2F;dev&#x2F;sdi mkpart primary 51% 100%</span><br><span class="line"></span><br><span class="line"># 在管理节点运行</span><br><span class="line">cd &#x2F;etc&#x2F;ceph</span><br><span class="line">ceph-deploy --overwrite-conf osd create osd01 --data &#x2F;dev&#x2F;sdh --block-db &#x2F;dev&#x2F;sdi1 --block-wal &#x2F;dev&#x2F;sdi2</span><br><span class="line"></span><br><span class="line"># 在管理节点运行,注意，如果crush map的设置不对，那么集群会出现unknown状态</span><br><span class="line">ceph osd crush add
osd.9 0.01900 host&#x3D;osd01</span><br></pre></td></tr></table></figure><blockquote><p> 如果是在其他节点，例如mon03节点上添加osd daemon<br>！！！切记要为mon03节点添加一个cluster network！！！</p></blockquote><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"># 在mon03节点运行下述命令，把固态盘&#x2F;dev&#x2F;sdc分成两个分区，分别用作数据盘&#x2F;dev&#x2F;sdb的--block-db和--block-wal</span><br><span class="line">parted -s &#x2F;dev&#x2F;sdc mklabel gpt</span><br><span class="line">parted -s &#x2F;dev&#x2F;sdc mkpart primary 0% 50%</span><br><span class="line">parted -s &#x2F;dev&#x2F;sdc mkpart primary 51% 100%</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"># 在管理节点运行</span><br><span class="line">cd &#x2F;etc&#x2F;ceph</span><br><span class="line">ceph-deploy --overwrite-conf osd create mon03 --data &#x2F;dev&#x2F;sdb --block-db &#x2F;dev&#x2F;sdc1 --block-wal &#x2F;dev&#x2F;sdc2</span><br><span class="line"></span><br><span class="line"># 在管理节点运行</span><br><span class="line">ceph osd crush add-bucket mon03 host</span><br><span class="line">ceph osd crush add osd.10 0.01900 host&#x3D;mon03</span><br><span class="line">ceph osd crush move mon03 rack&#x3D;rack1</span><br><span class="line">ceph osd in osd.10</span><br></pre></td></tr></table></figure><p>ps: 如果报错提示磁盘上发现gpt分区信息</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span
class="line">4</span><br></pre></td><td class="code"><pre><span class="line">那么先清理磁盘</span><br><span class="line">ceph-disk zap &#x2F;dev&#x2F;sdb  # dd if&#x3D;&#x2F;dev&#x2F;zero of&#x3D;&#x2F;dev&#x2F;sdb bs&#x3D;512 count&#x3D;1</span><br><span class="line">ceph-disk zap &#x2F;dev&#x2F;sdc</span><br><span class="line">然后重新执行上述步骤</span><br></pre></td></tr></table></figure><blockquote><p>注意</p></blockquote><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">在OSD添加或移除时，Ceph会重平衡PG。数据回填和恢复操作可能会产生大量的后端流量，影响集群性能。为避免性能降低，可对回填&#x2F;恢复操作进行配置：</span><br><span class="line"></span><br><span class="line">osd_recovery_op_priority # 值为1-63，默认为10，相对于客户端操作，恢复操作的优先级，默认客户端操作的优先级为63，参数为osd_client_op_priority</span><br><span class="line"></span><br><span class="line">osd_recovery_max_active # 每个osd一次处理的活跃恢复请求数量，默认为15，增大此值可加速恢复，但会增加集群负载</span><br><span class="line"></span><br><span class="line">osd_recovery_threads # 用于数据恢复时的线程数，默认为1</span><br><span class="line"></span><br><span class="line">osd_max_backfills # 单个osd的最大回填操作数，默认为10</span><br><span class="line"></span><br><span class="line">osd_backfill_scan_min # 回填操作时最小扫描对象数量，默认为64</span><br><span class="line"></span><br><span class="line">osd_backfill_scan_max # 回填操作的最大扫描对象数量，默认为512</span><br><span class="line"></span><br><span class="line">osd_backfill_full_ratio # osd的占满率达到多少时，拒绝接受回填请求，默认为0.85</span><br><span class="line"></span><br><span 
class="line">osd_backfill_retry_interval # 回填重试的时间间隔</span><br></pre></td></tr></table></figure><h3 id="七-osd节点关机维护"><a href="#七-osd节点关机维护" class="headerlink" title="七 osd节点关机维护"></a>七 osd节点关机维护</h3><p>你可能需要定期对集群中的一部分节点进行例行维护，或者要解决某个失败域内的问题。当你停止OSD时，默认情况下CRUSH机制会对集群自动重平衡，可将集群设为noout状态来关闭自动重平衡：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"># 1、关闭自动重平衡</span><br><span class="line">ceph osd set noout</span><br><span class="line"></span><br><span class="line"># 2、关闭节点上的osd进程</span><br><span class="line">ceph osd down 编号 # 分别把该节点上的osd设置为down状态</span><br><span class="line">systemctl stop ceph-osd.target   # stop该节点上的所有osd进程</span><br><span class="line"></span><br><span class="line"># 3、关闭节点</span><br><span class="line">shutdown -h now</span><br><span class="line"></span><br><span class="line"># 4、开始维护</span><br><span class="line">当你对失败域中OSD维护时，其中的PG将会变为degraded状态。</span><br><span class="line"></span><br><span class="line"># 5、维护完成启动守护进程</span><br><span class="line">systemctl start ceph-osd.target</span><br><span class="line"></span><br><span class="line"># 6、最后务必记得取消集群的noout状态</span><br><span class="line">ceph osd unset noout</span><br></pre></td></tr></table></figure><h3 id="八-升级ceph软件版本"><a href="#八-升级ceph软件版本" class="headerlink" title="八 升级ceph软件版本"></a>八 升级ceph软件版本</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span
class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">在MON和OSD机器上升级安装指定的ceph版本的软件包</span><br><span class="line">逐个重启MON进程</span><br><span class="line">设置noout 避免在异常情况下触发集群数据重新平衡</span><br><span class="line">ceph osd set noout</span><br><span class="line">逐个重启OSD进程</span><br><span class="line">ceph osd down &#123;osd-number&#125; #提前mark down， 减少slow request</span><br><span class="line">systemctl restart ceph-osd@&#123;osd-number&#125; #用systemctl重启OSD进程</span><br><span class="line">恢复noout 设置</span><br><span class="line">ceph osd unset noout</span><br></pre></td></tr></table></figure><h3 id="九-扩容"><a href="#九-扩容" class="headerlink" title="九 扩容"></a>九 扩容</h3><p>如果副本数为2，PB级的集群的容量超过50%，就要考虑扩容了。 假如OSD主机的磁盘容量为48TB（12*4TB），则需要backfill的数据为24TB（48TB*50%） ，假设网卡为10Gb，则新加一个OSD时，集群大约需要19200s（24TB/(10Gb/8)） 约5.3小时完成backfill，而backfill后台数据填充将会涉及大量的IO读和网络传输，必将影响生产业务运行。 如果集群容量到80%再扩容会导致更长的backfill时间，近8个小时。</p><p>OSD对应的磁盘利用率如果超过50%，也需要尽快扩容。</p><p>在业务闲时扩容</p><h3 id="十-Ceph-monitor故障恢复"><a href="#十-Ceph-monitor故障恢复" class="headerlink" title="十 Ceph monitor故障恢复"></a>十 Ceph monitor故障恢复</h3><p>1 问题</p><p>一般来说，在实际运行中，ceph monitor的个数是2n+1(n&gt;=0)个，在线上至少3个，只要正常的节点数&gt;=n+1，ceph的paxos算法能保证系统的正常运行。所以，对于3个节点，同时只能挂掉一个。一般来说，同时挂掉2个节点的概率比较小，但是万一挂掉2个呢？<br>如果ceph的monitor节点超过半数挂掉，paxos算法就无法正常进行仲裁(quorum)，此时，ceph集群会阻塞对集群的操作，直到超过半数的monitor节点恢复。</p><p>If there are not enough monitors to form a quorum, the ceph command will block trying to reach the cluster.
In this situation, you need to get enough ceph-mon daemons running to form a quorum before doing anything else with the cluster.</p><p>所以，</p><p>（1）如果挂掉的2个节点至少有一个可以恢复，也就是monitor的元数据还是OK的，那么只需要重启ceph-mon进程即可。所以，对于monitor，最好运行在RAID的机器上。这样，即使机器出现故障，恢复也比较容易。</p><p>（2）如果挂掉的2个节点的元数据都损坏了呢？出现这种情况，说明人品不行，2台机器的RAID磁盘同时损坏，这得多背？肯定是管理员嫌工资太低，把机器砸了。如何恢复呢？</p><p>详见：<a href="https://www.cnblogs.com/linhaifeng/articles/14761126.html">https://www.cnblogs.com/linhaifeng/articles/14761126.html</a></p><h3 id="十一-Cephfs快照"><a href="#十一-Cephfs快照" class="headerlink" title="十一 Cephfs快照"></a>十一 Cephfs快照</h3><p>Cephfs的快照功能在官网很少提及，因为虽然开发了很多年，但由于cephfs的复杂性，该功能一直没能达到稳定。这里只是介绍一下这个功能怎么使用，并且建议不要在生产中使用，因为搞不好是会丢数据的</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">1、启用cephfs快照功能：</span><br><span class="line">ceph fs set ceph allow_new_snaps 1</span><br><span class="line"></span><br><span class="line">2、在要做快照的目录下执行：</span><br><span class="line">mkdir .snap&#x2F;&#123;snapname&#125;</span><br><span class="line"> </span><br><span class="line">3、查看快照中的内容：</span><br><span class="line">ls .snap&#x2F;&#123;snapname&#125;</span><br><span class="line"> </span><br><span class="line">4、恢复：</span><br><span class="line"> </span><br><span class="line">cp -R .snap&#x2F;&#123;snapname&#125;&#x2F;* .&#x2F;</span><br></pre></td></tr></table></figure>]]></content>
    
    
      
      
    <summary type="html">&lt;h3 id=&quot;一-统一节点上ceph-conf文件&quot;&gt;&lt;a href=&quot;#一-统一节点上ceph-conf文件&quot; class=&quot;headerlink&quot; title=&quot;一 统一节点上ceph.conf文件&quot;&gt;&lt;/a&gt;一 统一节点上ceph.conf文件&lt;/h3&gt;&lt;p&gt;如果是在ad</summary>
      
    
    
    
    <category term="ceph" scheme="https://imszz.com/categories/ceph/"/>
    
    
    <category term="ceph" scheme="https://imszz.com/tags/ceph/"/>
    
  </entry>
  
  <entry>
    <title>centos7搭建ceph集群</title>
    <link href="https://imszz.com/p/877f6188/"/>
    <id>https://imszz.com/p/877f6188/</id>
    <published>2021-12-28T12:46:25.000Z</published>
    <updated>2021-12-28T12:46:25.000Z</updated>
    
<content type="html"><![CDATA[<h2 id="一、服务器规划"><a href="#一、服务器规划" class="headerlink" title="一、服务器规划"></a>一、服务器规划</h2><table><thead><tr><th>主机名</th><th>主机IP</th><th>磁盘</th><th>角色</th></tr></thead><tbody><tr><td>node3</td><td>public-ip：172.18.112.20 <br> cluster-ip:  172.18.112.20</td><td>vdb</td><td>ceph-deploy,monitor,mgr,osd</td></tr><tr><td>node4</td><td>public-ip：172.18.112.19 <br> cluster-ip:  172.18.112.19</td><td>vdb</td><td>monitor,mgr,osd</td></tr><tr><td>node5</td><td>public-ip：172.18.112.18 <br> cluster-ip:  172.18.112.18</td><td>vdb</td><td>monitor,mgr,osd</td></tr></tbody></table><h2 id="二、设置主机名"><a href="#二、设置主机名" class="headerlink" title="二、设置主机名"></a>二、设置主机名</h2><p>主机名设置，三台主机分别执行属于自己的命令<br>node3</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]# hostnamectl set-hostname node3</span><br><span class="line">[root@localhost ~]# hostname node3</span><br></pre></td></tr></table></figure><p>node4</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]# hostnamectl set-hostname node4</span><br><span class="line">[root@localhost ~]# hostname node4</span><br><span class="line"> </span><br></pre></td></tr></table></figure><p>node5</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]# hostnamectl set-hostname node5</span><br><span class="line">[root@localhost ~]# hostname node5</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>执行完毕后，需要关闭当前命令行窗口并重新打开，才能看到设置效果</p><h2 id="三、设置hosts文件"><a href="#三、设置hosts文件" class="headerlink"
title="三、设置hosts文件"></a>三、设置hosts文件</h2><p>在3台机器上都执行下面命令，添加映射</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">echo &quot;172.18.112.20 node3 &quot; &gt;&gt; &#x2F;etc&#x2F;hosts</span><br><span class="line">echo &quot;172.18.112.19 node4 &quot; &gt;&gt; &#x2F;etc&#x2F;hosts</span><br><span class="line">echo &quot;172.18.112.18 node5 &quot; &gt;&gt; &#x2F;etc&#x2F;hosts</span><br></pre></td></tr></table></figure><h2 id="四、创建用户并设置免密登录"><a href="#四、创建用户并设置免密登录" class="headerlink" title="四、创建用户并设置免密登录"></a>四、创建用户并设置免密登录</h2><p>创建用户（三台机器上都运行）</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">useradd -d &#x2F;home&#x2F;admin -m admin</span><br><span class="line">echo &quot;123456&quot; | passwd admin --stdin </span><br><span class="line">#sudo权限</span><br><span class="line">echo &quot;admin ALL &#x3D; (root) NOPASSWD:ALL&quot; | sudo tee &#x2F;etc&#x2F;sudoers.d&#x2F;admin</span><br><span class="line">sudo chmod 0440 &#x2F;etc&#x2F;sudoers.d&#x2F;admin</span><br></pre></td></tr></table></figure><p>设置免密登录  （只在node3上执行）</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span 
class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br></pre></td><td class="code"><pre><span class="line">[root@node3 ~]# su - admin</span><br><span class="line">[admin@node3 ~]$ ssh-keygen</span><br><span class="line">Generating public&#x2F;private rsa key pair.</span><br><span class="line">Enter file in which to save the key (&#x2F;home&#x2F;admin&#x2F;.ssh&#x2F;id_rsa):</span><br><span class="line">Created directory &#39;&#x2F;home&#x2F;admin&#x2F;.ssh&#39;.</span><br><span class="line">Enter passphrase (empty for no passphrase):</span><br><span class="line">Enter same passphrase again:</span><br><span class="line">Your identification has been saved in &#x2F;home&#x2F;admin&#x2F;.ssh&#x2F;id_rsa.</span><br><span class="line">Your public key has been saved in &#x2F;home&#x2F;admin&#x2F;.ssh&#x2F;id_rsa.pub.</span><br><span class="line">The key fingerprint is:</span><br><span class="line">SHA256:qfWhuboKeoHQOOMLOIB5tjK1RPjgw&#x2F;Csl4r6A1FiJYA admin@admin.ops5.bbdops.com</span><br><span class="line">The key&#39;s randomart image is:</span><br><span class="line">+---[RSA 2048]----+</span><br><span class="line">|+o..             |</span><br><span class="line">|E.+              |</span><br><span class="line">|*%               |</span><br><span class="line">|X+X      .       |</span><br><span class="line">|&#x3D;@.+    S .      |</span><br><span class="line">|X.*    o + .     |</span><br><span class="line">|oBo.  . o .      |</span><br><span class="line">|ooo.     .       |</span><br><span class="line">|+o....oo.        
|</span><br><span class="line">+----[SHA256]-----+</span><br><span class="line">[admin@node3 ~]$ ssh-copy-id admin@node3</span><br><span class="line">[admin@node3 ~]$ ssh-copy-id admin@node4</span><br><span class="line">[admin@node3 ~]$ ssh-copy-id admin@node5</span><br></pre></td></tr></table></figure><hr><p>注意: 没有<code>ssh-copy-id</code> 这个命令可以手动把公钥传到对应的机器上去</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">cat ~&#x2F;.ssh&#x2F;id_*.pub | ssh  admin@host3 &#39;cat &gt;&gt; .ssh&#x2F;authorized_keys&#39;</span><br></pre></td></tr></table></figure><h2 id="五、配置时间同步"><a href="#五、配置时间同步" class="headerlink" title="五、配置时间同步"></a>五、配置时间同步</h2><p>三台都执行</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">[root@node3 ~]$ timedatectl #查看本地时间</span><br><span class="line"></span><br><span class="line">[root@node3 ~]$ timedatectl set-timezone Asia&#x2F;Shanghai #改为亚洲上海时间</span><br><span class="line"></span><br><span class="line">[root@node3 ~]$ yum install -y chrony #同步工具</span><br><span class="line"></span><br><span class="line">[root@node3 ~]$ chronyc -n  sources -v #同步列表</span><br><span class="line"></span><br><span class="line">[root@node3 ~]$ chronyc tracking  #同步服务状态</span><br><span class="line"></span><br><span class="line">[root@node3 ~]$ timedatectl status #查看本地时间</span><br></pre></td></tr></table></figure><h2 id="六、安装ceph-deploy并安装ceph软件包"><a href="#六、安装ceph-deploy并安装ceph软件包" class="headerlink" 
title="六、安装ceph-deploy并安装ceph软件包"></a>六、安装ceph-deploy并安装ceph软件包</h2><p>配置ceph清华源</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br></pre></td><td class="code"><pre><span class="line">cat &gt; &#x2F;etc&#x2F;yum.repos.d&#x2F;ceph.repo&lt;&lt;&#39;EOF&#39;</span><br><span class="line">[Ceph]</span><br><span class="line">name&#x3D;Ceph packages for $basearch</span><br><span class="line">baseurl&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;rpm-mimic&#x2F;el7&#x2F;$basearch</span><br><span class="line">enabled&#x3D;1</span><br><span class="line">gpgcheck&#x3D;1</span><br><span class="line">type&#x3D;rpm-md</span><br><span class="line">gpgkey&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;keys&#x2F;release.asc</span><br><span class="line">priority&#x3D;1</span><br><span class="line">[Ceph-noarch]</span><br><span class="line">name&#x3D;Ceph noarch packages</span><br><span class="line">baseurl&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;rpm-mimic&#x2F;el7&#x2F;noarch</span><br><span class="line">enabled&#x3D;1</span><br><span class="line">gpgcheck&#x3D;1</span><br><span 
class="line">type&#x3D;rpm-md</span><br><span class="line">gpgkey&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;keys&#x2F;release.asc</span><br><span class="line">priority&#x3D;1</span><br><span class="line">[ceph-source]</span><br><span class="line">name&#x3D;Ceph source packages</span><br><span class="line">baseurl&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;rpm-mimic&#x2F;el7&#x2F;SRPMS</span><br><span class="line">enabled&#x3D;1</span><br><span class="line">gpgcheck&#x3D;1</span><br><span class="line">type&#x3D;rpm-md</span><br><span class="line">gpgkey&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;keys&#x2F;release.asc</span><br><span class="line">priority&#x3D;1</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><p>安装ceph-deploy</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[admin@node3 ~]# sudo yum install ceph-deploy</span><br></pre></td></tr></table></figure><p>初始化mon点</p><p>ceph需要epel源的包，所以安装的节点都需要<code>yum install epel-release</code></p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span 
class="line">25</span><br></pre></td><td class="code"><pre><span class="line">[admin@node3 ~]$ mkdir my-cluster</span><br><span class="line">[admin@node3 ~]$ cd my-cluster</span><br><span class="line"># new</span><br><span class="line">[admin@node3 my-cluster]$ ceph-deploy new node3 node4 node5</span><br><span class="line">Traceback (most recent call last):</span><br><span class="line">  File &quot;&#x2F;bin&#x2F;ceph-deploy&quot;, line 18, in &lt;module&gt;</span><br><span class="line">    from ceph_deploy.cli import main</span><br><span class="line">  File &quot;&#x2F;usr&#x2F;lib&#x2F;python2.7&#x2F;site-packages&#x2F;ceph_deploy&#x2F;cli.py&quot;, line 1, in &lt;module&gt;</span><br><span class="line">    import pkg_resources</span><br><span class="line">ImportError: No module named pkg_resources</span><br><span class="line">#以上出现报错，是因为没有pip，安装pip</span><br><span class="line">[admin@node3 my-cluster]$ sudo yum install epel-release</span><br><span class="line">[admin@node3 my-cluster]$ sudo yum install python-pip</span><br><span class="line">#重新初始化</span><br><span class="line">[admin@node3 my-cluster]$ ceph-deploy new node3 node4 node5</span><br><span class="line">[admin@node3 my-cluster]$ ls</span><br><span class="line">ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring</span><br><span class="line">[admin@node3 my-cluster]$ cat ceph.conf </span><br><span class="line">[global]</span><br><span class="line">fsid &#x3D; 3a2a06c7-124f-4703-b798-88eb2950361e</span><br><span class="line">mon_initial_members &#x3D; node3, node4, node5</span><br><span class="line">mon_host &#x3D; 172.18.112.20,172.18.112.19,172.18.112.18</span><br><span class="line">auth_cluster_required &#x3D; cephx</span><br><span class="line">auth_service_required &#x3D; cephx</span><br><span class="line">auth_client_required &#x3D; cephx</span><br></pre></td></tr></table></figure><p>修改ceph.conf，添加如下配置</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br></pre></td><td class="code"><pre><span class="line">public network &#x3D; 172.18.112.0&#x2F;24</span><br><span class="line">cluster network &#x3D; 172.18.112.0&#x2F;24</span><br><span class="line">osd pool default size       &#x3D; 3</span><br><span class="line">osd pool default min size   &#x3D; 2</span><br><span class="line">osd pool default pg num     &#x3D; 128</span><br><span class="line">osd pool default pgp num    &#x3D; 128</span><br><span class="line">osd pool default crush rule &#x3D; 0</span><br><span class="line">osd crush chooseleaf type   &#x3D; 1</span><br><span class="line">max open files              &#x3D; 131072</span><br><span class="line">ms bind ipv6                &#x3D; false</span><br><span class="line">[mon]</span><br><span class="line">mon clock drift allowed      &#x3D; 10</span><br><span class="line">mon clock drift warn backoff &#x3D; 30</span><br><span class="line">mon osd full ratio           &#x3D; .95</span><br><span class="line">mon osd nearfull ratio       &#x3D; .85</span><br><span class="line">mon osd down out interval    &#x3D; 600</span><br><span class="line">mon osd report timeout       
&#x3D; 300</span><br><span class="line">mon allow pool delete      &#x3D; true</span><br><span class="line">[osd]</span><br><span class="line">osd recovery max active      &#x3D; 3    </span><br><span class="line">osd max backfills            &#x3D; 5</span><br><span class="line">osd max scrubs               &#x3D; 2</span><br><span class="line">osd mkfs type &#x3D; xfs</span><br><span class="line">osd mkfs options xfs &#x3D; -f -i size&#x3D;1024</span><br><span class="line">osd mount options xfs &#x3D; rw,noatime,inode64,logbsize&#x3D;256k,delaylog</span><br><span class="line">filestore max sync interval  &#x3D; 5</span><br><span class="line">osd op threads               &#x3D; 2</span><br></pre></td></tr></table></figure><p>安装Ceph软件到指定节点</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[admin@node3 my-cluster]$ ceph-deploy install --no-adjust-repos node3 node4 node5</span><br></pre></td></tr></table></figure><blockquote><p><code>--no-adjust-repos</code> 表示直接使用节点上已配置的本地源，不改写为官方源。</p></blockquote><p>部署初始的monitors，并获得keys</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[admin@node3 my-cluster]$ ceph-deploy mon create-initial</span><br></pre></td></tr></table></figure><p>做完这一步，在当前目录下就会看到有如下的keyrings：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[admin@node3 my-cluster]$ ls</span><br><span class="line">ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log</span><br><span class="line">ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  
ceph.mon.keyring</span><br></pre></td></tr></table></figure><p>将配置文件和密钥复制到集群各节点</p><p>配置文件就是生成的ceph.conf，而密钥是ceph.client.admin.keyring，即使用ceph客户端连接至ceph集群时默认使用的密钥，这里我们所有节点都要复制，命令如下。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[admin@node3 my-cluster]$ ceph-deploy admin node3 node4 node5</span><br></pre></td></tr></table></figure><h2 id="七、部署ceph-mgr"><a href="#七、部署ceph-mgr" class="headerlink" title="七、部署ceph-mgr"></a>七、部署ceph-mgr</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">#在L版本的&#96;Ceph&#96;中新增了&#96;manager daemon&#96;，如下命令部署一个&#96;Manager&#96;守护进程</span><br><span class="line">[admin@node3 my-cluster]$ ceph-deploy mgr create node3 </span><br></pre></td></tr></table></figure><h2 id="八、创建osd"><a href="#八、创建osd" class="headerlink" title="八、创建osd"></a>八、创建osd</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">#用法：ceph-deploy osd create --data &#123;device&#125; &#123;ceph-node&#125;</span><br><span class="line">ceph-deploy osd create --data &#x2F;dev&#x2F;vdb node3</span><br><span class="line">ceph-deploy osd create --data &#x2F;dev&#x2F;vdb node4</span><br><span class="line">ceph-deploy osd create --data &#x2F;dev&#x2F;vdb node5</span><br></pre></td></tr></table></figure><p>检查osd状态 </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span 
class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">[admin@node3 my-cluster]$ sudo ceph health</span><br><span class="line">HEALTH_OK</span><br><span class="line"></span><br><span class="line">[admin@node3 my-cluster]$ sudo ceph -s </span><br><span class="line">  cluster:</span><br><span class="line">    id:     3a2a06c7-124f-4703-b798-88eb2950361e</span><br><span class="line">    health: HEALTH_OK</span><br><span class="line"> </span><br><span class="line">  services:</span><br><span class="line">    mon: 3 daemons, quorum node5,node4,node3</span><br><span class="line">    mgr: node3(active)</span><br><span class="line">    osd: 3 osds: 3 up, 3 in</span><br><span class="line"> </span><br><span class="line">  data:</span><br><span class="line">    pools:   0 pools, 0 pgs</span><br><span class="line">    objects: 0  objects, 0 MiB</span><br><span class="line">    usage:   3.2 GiB used, 597 GiB &#x2F; 600 GiB avail</span><br><span class="line">    pgs:     </span><br><span class="line"></span><br></pre></td></tr></table></figure><p>默认情况下ceph.client.admin.keyring文件的权限为600，属主和属组为root，如果在集群内节点使用admin用户直接执行ceph命令，将会提示无法找到/etc/ceph/ceph.client.admin.keyring文件，因为权限不足。</p><p>如果使用sudo ceph不存在此问题，为方便直接使用ceph命令，可将权限设置为644。在集群节点node3上admin用户下执行下面命令。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span 
class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">[admin@node3 my-cluster]$ ceph -s</span><br><span class="line">2021-12-28 07:59:36.062 7f52d08e0700 -1 auth: unable to find a keyring on &#x2F;etc&#x2F;ceph&#x2F;ceph.client.admin.keyring,&#x2F;etc&#x2F;ceph&#x2F;ceph.keyring,&#x2F;etc&#x2F;ceph&#x2F;keyring,&#x2F;etc&#x2F;ceph&#x2F;keyring.bin,: (2) No such file or directory</span><br><span class="line">2021-12-28 07:59:36.062 7f52d08e0700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication</span><br><span class="line">[errno 2] error connecting to the cluster</span><br><span class="line">[admin@node3 my-cluster]$ sudo chmod 644 &#x2F;etc&#x2F;ceph&#x2F;ceph.client.admin.keyring </span><br><span class="line"></span><br><span class="line"></span><br><span class="line">[admin@node3 my-cluster]$ ceph -s </span><br><span class="line">  cluster:</span><br><span class="line">    id:     3a2a06c7-124f-4703-b798-88eb2950361e</span><br><span class="line">    health: HEALTH_OK</span><br><span class="line"> </span><br><span class="line">  services:</span><br><span class="line">    mon: 3 daemons, quorum node5,node4,node3</span><br><span class="line">    mgr: node3(active)</span><br><span class="line">    osd: 3 osds: 3 up, 3 in</span><br><span class="line"> </span><br><span class="line">  data:</span><br><span class="line">    pools:   0 pools, 0 pgs</span><br><span class="line">    objects: 0  objects, 0 MiB</span><br><span class="line">    usage:   3.2 GiB used, 597 GiB &#x2F; 600 GiB avail</span><br><span class="line">    pgs: 
</span><br></pre></td></tr></table></figure><p>查看osds</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">[admin@node3 my-cluster]$ sudo ceph osd tree </span><br><span class="line">ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF </span><br><span class="line">-1       0.58589 root default                           </span><br><span class="line">-3       0.19530     host node3                         </span><br><span class="line"> 3   hdd 0.19530         osd.3      up  1.00000 1.00000 </span><br><span class="line">-5       0.19530     host node4                         </span><br><span class="line"> 4   hdd 0.19530         osd.4      up  1.00000 1.00000 </span><br><span class="line">-7       0.19530     host node5                         </span><br><span class="line"> 5   hdd 0.19530         osd.5      up  1.00000 1.00000</span><br></pre></td></tr></table></figure><h2 id="九、开启MGR监控模块"><a href="#九、开启MGR监控模块" class="headerlink" title="九、开启MGR监控模块"></a>九、开启MGR监控模块</h2><p>方式一：命令操作</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ceph mgr module enable dashboard</span><br></pre></td></tr></table></figure><p>如果以上操作报错如下：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">Error ENOENT: all mgr daemons do not support module &#39;dashboard&#39;, pass --force to force enablement</span><br></pre></td></tr></table></figure><p>则因为没有安装<code>ceph-mgr-dashboard</code>，在mgr的节点上安装。</p><figure class="highlight plain"><table><tr><td 
class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">yum install ceph-mgr-dashboard</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>方式二：配置文件</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"># 编辑ceph.conf文件</span><br><span class="line">vi ceph.conf</span><br><span class="line">[mon]</span><br><span class="line">mgr initial modules &#x3D; dashboard</span><br><span class="line">#推送配置</span><br><span class="line">[admin@node3 my-cluster]$ ceph-deploy --overwrite-conf config push node3 node4 node5 </span><br><span class="line">#重启mgr</span><br><span class="line"> sudo systemctl restart ceph-mgr@node3</span><br><span class="line"> </span><br></pre></td></tr></table></figure><p>web登录配置<br>默认情况下，仪表板的所有HTTP连接均使用SSL/TLS进行保护。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">#要快速启动并运行仪表板，可以使用以下内置命令生成并安装自签名证书:</span><br><span class="line">[root@node3 my-cluster]# ceph dashboard create-self-signed-cert</span><br><span class="line">Self-signed certificate created</span><br><span class="line"></span><br><span class="line">#创建具有管理员角色的用户:</span><br><span class="line">[root@node3 my-cluster]# ceph dashboard set-login-credentials admin admin</span><br><span class="line">Username and password 
updated</span><br></pre></td></tr></table></figure><p>查看ceph-mgr服务:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@node3 my-cluster]# ceph mgr services</span><br><span class="line">&#123;</span><br><span class="line">    &quot;dashboard&quot;: &quot;https:&#x2F;&#x2F;node3:8443&#x2F;&quot;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p>以上配置完成后，浏览器输入 <a href="https://node3:8443/">https://node3:8443</a>，输入用户名<code>admin</code>，密码<code>admin</code>登录即可查看</p><blockquote><p>需要本地hosts解析</p></blockquote><p><img src="/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/ceph.png" alt="ceph"></p>]]></content>
    
    
      
      
    <summary type="html">&lt;h2 id=&quot;一、服务器规划&quot;&gt;&lt;a href=&quot;#一、服务器规划&quot; class=&quot;headerlink&quot; title=&quot;一、服务器规划&quot;&gt;&lt;/a&gt;一、服务器规划&lt;/h2&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;主机名&lt;/th&gt;
&lt;th&gt;主机IP&lt;/th&gt;
&lt;th&gt;磁</summary>
      
    
    
    
    <category term="ceph" scheme="https://imszz.com/categories/ceph/"/>
    
    
    <category term="ceph" scheme="https://imszz.com/tags/ceph/"/>
    
  </entry>
  
  <entry>
    <title>K8S使用ceph-csi持久化存储之RBD</title>
    <link href="https://imszz.com/p/4aa1a279/"/>
    <id>https://imszz.com/p/4aa1a279/</id>
    <published>2021-12-28T12:46:25.000Z</published>
    <updated>2022-01-09T12:46:25.000Z</updated>
    
    <content type="html"><![CDATA[<h3 id="创建一个ceph-pool-创建存储池"><a href="#创建一个ceph-pool-创建存储池" class="headerlink" title="创建一个ceph pool 创建存储池"></a>创建一个ceph pool 创建存储池</h3><div class="note success flat"><p>ceph集群请看这里：<a href="https://imszz.com/p/877f6188/">https://imszz.com/p/877f6188/</a></p></div><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">ceph osd pool create rbd 128</span><br><span class="line">ceph osd pool set-quota rbd max_bytes $((50 * 1024 * 1024 * 1024)) #50G的存储池</span><br><span class="line">rbd pool init rbd</span><br></pre></td></tr></table></figure><h4 id="查看集群状态"><a href="#查看集群状态" class="headerlink" title="查看集群状态"></a>查看集群状态</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">[root@node3 ~]# ceph -s</span><br><span class="line">  cluster:</span><br><span class="line">    id:     3a2a06c7-124f-4703-b798-88eb2950361e</span><br><span class="line">    health: HEALTH_OK</span><br><span class="line"> </span><br><span class="line">  services:</span><br><span class="line">    mon: 3 daemons, quorum node5,node4,node3</span><br><span class="line">    mgr: node3(active)</span><br><span class="line">    osd: 3 osds: 3 up, 3 in</span><br><span class="line"> </span><br><span class="line">  data:</span><br><span class="line">    pools:   1 pools, 128 pgs</span><br><span 
class="line">    objects: 23  objects, 22 MiB</span><br><span class="line">    usage:   7.4 GiB used, 593 GiB &#x2F; 600 GiB avail</span><br><span class="line">    pgs:     128 active+clean</span><br></pre></td></tr></table></figure><h4 id="查看用户key"><a href="#查看用户key" class="headerlink" title="查看用户key"></a>查看用户key</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@node3 ~]# ceph auth get client.admin</span><br><span class="line">exported keyring for client.admin</span><br><span class="line">[client.admin]</span><br><span class="line">key &#x3D; AQCJMslhQW0JEhAAXEgcsW3IZozDi7FF51+sbw&#x3D;&#x3D;</span><br><span class="line">caps mds &#x3D; &quot;allow *&quot;</span><br><span class="line">caps mgr &#x3D; &quot;allow *&quot;</span><br><span class="line">caps mon &#x3D; &quot;allow *&quot;</span><br><span class="line">caps osd &#x3D; &quot;allow *&quot;</span><br></pre></td></tr></table></figure><blockquote><p>或者自己创建存储池、用户以及用户key</p></blockquote><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@node3 ~]# ceph osd pool create kubernetes</span><br><span class="line">[root@node3 ~]# rbd pool init kubernetes</span><br><span class="line">[root@node3 ~]# ceph auth get-or-create client.kubernetes mon &#39;profile rbd&#39; osd &#39;profile rbd pool&#x3D;kubernetes&#39; mgr &#39;profile rbd pool&#x3D;kubernetes&#39;</span><br><span class="line">[client.kubernetes]</span><br><span class="line">    key &#x3D; 
AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg&#x3D;&#x3D;</span><br></pre></td></tr></table></figure><p>注意：这里key后面对应的只是一个例子，实际配置中要以运行命令后产生的结果为准<br>这里的key使用user的key，后面配置中是需要用到的<br>如果是ceph luminous版本的集群，那么命令应该是<code>ceph auth get-or-create client.kubernetes mon &#39;allow r&#39; osd &#39;allow rwx pool=kubernetes&#39; -o ceph.client.kubernetes.keyring</code></p><h3 id="k8s所有节点安装ceph客户端"><a href="#k8s所有节点安装ceph客户端" class="headerlink" title="k8s所有节点安装ceph客户端"></a>k8s所有节点安装ceph客户端</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br></pre></td><td class="code"><pre><span class="line">cat &gt; &#x2F;etc&#x2F;yum.repos.d&#x2F;ceph.repo&lt;&lt;&#39;EOF&#39;</span><br><span class="line">[Ceph]</span><br><span class="line">name&#x3D;Ceph packages for $basearch</span><br><span class="line">baseurl&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;rpm-mimic&#x2F;el7&#x2F;$basearch</span><br><span class="line">enabled&#x3D;1</span><br><span class="line">gpgcheck&#x3D;1</span><br><span class="line">type&#x3D;rpm-md</span><br><span 
class="line">gpgkey&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;keys&#x2F;release.asc</span><br><span class="line">priority&#x3D;1</span><br><span class="line">[Ceph-noarch]</span><br><span class="line">name&#x3D;Ceph noarch packages</span><br><span class="line">baseurl&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;rpm-mimic&#x2F;el7&#x2F;noarch</span><br><span class="line">enabled&#x3D;1</span><br><span class="line">gpgcheck&#x3D;1</span><br><span class="line">type&#x3D;rpm-md</span><br><span class="line">gpgkey&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;keys&#x2F;release.asc</span><br><span class="line">priority&#x3D;1</span><br><span class="line">[ceph-source]</span><br><span class="line">name&#x3D;Ceph source packages</span><br><span class="line">baseurl&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;rpm-mimic&#x2F;el7&#x2F;SRPMS</span><br><span class="line">enabled&#x3D;1</span><br><span class="line">gpgcheck&#x3D;1</span><br><span class="line">type&#x3D;rpm-md</span><br><span class="line">gpgkey&#x3D;https:&#x2F;&#x2F;mirror.tuna.tsinghua.edu.cn&#x2F;ceph&#x2F;keys&#x2F;release.asc</span><br><span class="line">priority&#x3D;1</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">yum -y install ceph</span><br></pre></td></tr></table></figure><h4 id="生成ceph-csi的kubernetes-configmap"><a href="#生成ceph-csi的kubernetes-configmap" class="headerlink" title="生成ceph-csi的kubernetes configmap"></a>生成ceph-csi的kubernetes configmap</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span 
class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">[root@node3 ~]# ceph mon dump</span><br><span class="line">dumped monmap epoch 1</span><br><span class="line">epoch 1</span><br><span class="line">fsid 3a2a06c7-124f-4703-b798-88eb2950361e</span><br><span class="line">last_changed 2021-12-27 11:27:02.815248</span><br><span class="line">created 2021-12-27 11:27:02.815248</span><br><span class="line">0: 172.18.112.18:6789&#x2F;0 mon.node5</span><br><span class="line">1: 172.18.112.19:6789&#x2F;0 mon.node4</span><br><span class="line">2: 172.18.112.20:6789&#x2F;0 mon.node3</span><br></pre></td></tr></table></figure><h4 id="用以上的的信息生成configmap："><a href="#用以上的的信息生成configmap：" class="headerlink" title="用以上的的信息生成configmap："></a>Generate a configmap from the information above:</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">cat csi-config-map.yaml</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: ConfigMap</span><br><span class="line">data:</span><br><span class="line">  config.json: |-</span><br><span class="line">    [</span><br><span class="line">      &#123;</span><br><span class="line">        &quot;clusterID&quot;: &quot;3a2a06c7-124f-4703-b798-88eb2950361e&quot;,</span><br><span class="line">        &quot;monitors&quot;: [</span><br><span class="line">          &quot;172.18.112.20:6789&quot;,</span><br><span 
class="line">          &quot;172.18.112.19:6789&quot;,</span><br><span class="line">          &quot;172.18.112.18:6789&quot;</span><br><span class="line">        ]</span><br><span class="line">      &#125;</span><br><span class="line">    ]</span><br><span class="line">metadata:</span><br><span class="line">  name: ceph-csi-config</span><br></pre></td></tr></table></figure><p>在kubernetes集群上，将此configmap存储到集群</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f csi-config-map.yaml</span><br></pre></td></tr></table></figure><h3 id="生成ceph-csi-cephx的secret"><a href="#生成ceph-csi-cephx的secret" class="headerlink" title="生成ceph-csi cephx的secret"></a>生成ceph-csi cephx的secret</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">cat &lt;&lt;EOF &gt; csi-rbd-secret.yaml</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Secret</span><br><span class="line">metadata:</span><br><span class="line">name: csi-rbd-secret</span><br><span class="line">namespace: default</span><br><span class="line">stringData:</span><br><span class="line">    userID: admin</span><br><span class="line">    userKey: AQAs89depA23NRAA8yEg0GfHNC&#x2F;uhKU9jsgp6Q&#x3D;&#x3D;</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><p>将此配置存储到kubernetes中</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f 
csi-rbd-secret.yaml</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="配置ceph-csi插件-kubernetes上的rbac和提供存储功能的容器"><a href="#配置ceph-csi插件-kubernetes上的rbac和提供存储功能的容器" class="headerlink" title="配置ceph-csi插件(kubernetes上的rbac和提供存储功能的容器)"></a>Configure the ceph-csi plugin (the RBAC rules and the storage-provisioning containers on kubernetes)</h3><h4 id="rbac部分"><a href="#rbac部分" class="headerlink" title="rbac部分"></a>The RBAC part</h4><p>If github is reachable, you can deploy it directly:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f https:&#x2F;&#x2F;raw.githubusercontent.com&#x2F;ceph&#x2F;ceph-csi&#x2F;master&#x2F;deploy&#x2F;rbd&#x2F;kubernetes&#x2F;csi-provisioner-rbac.yaml</span><br></pre></td></tr></table></figure><h5 id="离线请按照以下配置"><a href="#离线请按照以下配置" class="headerlink" title="离线请按照以下配置"></a>For offline environments, use the following configuration</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span 
class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span 
class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# cat csi-provisioner-rbac.yaml</span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: ServiceAccount</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-csi-provisioner</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">  namespace: default</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">kind: ClusterRole</span><br><span class="line">apiVersion: rbac.authorization.k8s.io&#x2F;v1</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-external-provisioner-runner</span><br><span class="line">rules:</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;nodes&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;secrets&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;events&quot;]</span><br><span class="line">    verbs: [&quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;]</span><br><span class="line">  - apiGroups: 
[&quot;&quot;]</span><br><span class="line">    resources: [&quot;persistentvolumes&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;delete&quot;, &quot;patch&quot;]</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;persistentvolumeclaims&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;update&quot;]</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;persistentvolumeclaims&#x2F;status&quot;]</span><br><span class="line">    verbs: [&quot;update&quot;, &quot;patch&quot;]</span><br><span class="line">  - apiGroups: [&quot;storage.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;storageclasses&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]</span><br><span class="line">  - apiGroups: [&quot;snapshot.storage.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;volumesnapshots&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;list&quot;]</span><br><span class="line">  - apiGroups: [&quot;snapshot.storage.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;volumesnapshotcontents&quot;]</span><br><span class="line">    verbs: [&quot;create&quot;, &quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;update&quot;, &quot;delete&quot;]</span><br><span class="line">  - apiGroups: [&quot;snapshot.storage.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;volumesnapshotclasses&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]</span><br><span class="line">  - apiGroups: [&quot;storage.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;volumeattachments&quot;]</span><br><span class="line">    verbs: 
[&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;update&quot;, &quot;patch&quot;]</span><br><span class="line">  - apiGroups: [&quot;storage.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;volumeattachments&#x2F;status&quot;]</span><br><span class="line">    verbs: [&quot;patch&quot;]</span><br><span class="line">  - apiGroups: [&quot;storage.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;csinodes&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]</span><br><span class="line">  - apiGroups: [&quot;snapshot.storage.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;volumesnapshotcontents&#x2F;status&quot;]</span><br><span class="line">    verbs: [&quot;update&quot;]</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;configmaps&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;]</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;serviceaccounts&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;]</span><br><span class="line">---</span><br><span class="line">kind: ClusterRoleBinding</span><br><span class="line">apiVersion: rbac.authorization.k8s.io&#x2F;v1</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-csi-provisioner-role</span><br><span class="line">subjects:</span><br><span class="line">  - kind: ServiceAccount</span><br><span class="line">    name: rbd-csi-provisioner</span><br><span class="line">    # replace with non-default namespace name</span><br><span class="line">    namespace: default</span><br><span class="line">roleRef:</span><br><span class="line">  kind: ClusterRole</span><br><span class="line">  name: rbd-external-provisioner-runner</span><br><span class="line">  apiGroup: rbac.authorization.k8s.io</span><br><span class="line"></span><br><span 
class="line">---</span><br><span class="line">kind: Role</span><br><span class="line">apiVersion: rbac.authorization.k8s.io&#x2F;v1</span><br><span class="line">metadata:</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">  namespace: default</span><br><span class="line">  name: rbd-external-provisioner-cfg</span><br><span class="line">rules:</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;configmaps&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;delete&quot;]</span><br><span class="line">  - apiGroups: [&quot;coordination.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;leases&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;list&quot;, &quot;delete&quot;, &quot;update&quot;, &quot;create&quot;]</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">kind: RoleBinding</span><br><span class="line">apiVersion: rbac.authorization.k8s.io&#x2F;v1</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-csi-provisioner-role-cfg</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">  namespace: default</span><br><span class="line">subjects:</span><br><span class="line">  - kind: ServiceAccount</span><br><span class="line">    name: rbd-csi-provisioner</span><br><span class="line">    # replace with non-default namespace name</span><br><span class="line">    namespace: default</span><br><span class="line">roleRef:</span><br><span class="line">  kind: Role</span><br><span class="line">  name: rbd-external-provisioner-cfg</span><br><span class="line">  apiGroup: rbac.authorization.k8s.io</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f csi-provisioner-rbac.yaml</span><br></pre></td></tr></table></figure><p>If github is reachable, you can deploy it directly:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f https:&#x2F;&#x2F;raw.githubusercontent.com&#x2F;ceph&#x2F;ceph-csi&#x2F;master&#x2F;deploy&#x2F;rbd&#x2F;kubernetes&#x2F;csi-nodeplugin-rbac.yaml</span><br></pre></td></tr></table></figure><h5 id="离线请按照以下配置-1"><a href="#离线请按照以下配置-1" class="headerlink" title="离线请按照以下配置"></a>For offline environments, use the following configuration</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span 
class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# cat csi-nodeplugin-rbac.yaml </span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: ServiceAccount</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-csi-nodeplugin</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">  namespace: default</span><br><span class="line">---</span><br><span class="line">kind: ClusterRole</span><br><span class="line">apiVersion: rbac.authorization.k8s.io&#x2F;v1</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-csi-nodeplugin</span><br><span class="line">rules:</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;nodes&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;]</span><br><span class="line">  # allow to read Vault Token and connection options from the Tenants namespace</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;secrets&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;]</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;configmaps&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;]</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;serviceaccounts&quot;]</span><br><span class="line">    verbs: [&quot;get&quot;]</span><br><span class="line">  - apiGroups: [&quot;&quot;]</span><br><span class="line">    resources: [&quot;persistentvolumes&quot;]</span><br><span class="line">    verbs: 
[&quot;get&quot;]</span><br><span class="line">  - apiGroups: [&quot;storage.k8s.io&quot;]</span><br><span class="line">    resources: [&quot;volumeattachments&quot;]</span><br><span class="line">    verbs: [&quot;list&quot;, &quot;get&quot;]</span><br><span class="line">---</span><br><span class="line">kind: ClusterRoleBinding</span><br><span class="line">apiVersion: rbac.authorization.k8s.io&#x2F;v1</span><br><span class="line">metadata:</span><br><span class="line">  name: rbd-csi-nodeplugin</span><br><span class="line">subjects:</span><br><span class="line">  - kind: ServiceAccount</span><br><span class="line">    name: rbd-csi-nodeplugin</span><br><span class="line">    # replace with non-default namespace name</span><br><span class="line">    namespace: default</span><br><span class="line">roleRef:</span><br><span class="line">  kind: ClusterRole</span><br><span class="line">  name: rbd-csi-nodeplugin</span><br><span class="line">  apiGroup: rbac.authorization.k8s.io</span><br></pre></td></tr></table></figure><p>Deploy it:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f csi-nodeplugin-rbac.yaml</span><br></pre></td></tr></table></figure><h4 id="provisioner部分"><a href="#provisioner部分" class="headerlink" title="provisioner部分"></a>The provisioner part</h4><p>The following image versions are used; to use other versions, modify the yaml files yourself:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">k8s.gcr.io&#x2F;sig-storage&#x2F;csi-resizer:v1.3.0</span><br><span class="line">k8s.gcr.io&#x2F;sig-storage&#x2F;csi-snapshotter:v4.2.0</span><br><span class="line">k8s.gcr.io&#x2F;sig-storage&#x2F;csi-provisioner:v3.0.0</span><br><span 
class="line">k8s.gcr.io&#x2F;sig-storage&#x2F;csi-node-driver-registrar:v2.3.0</span><br><span class="line">k8s.gcr.io&#x2F;sig-storage&#x2F;csi-attacher:v3.3.0</span><br><span class="line">quay.io&#x2F;cephcsi&#x2F;cephcsi:canary</span><br></pre></td></tr></table></figure><p>The official files:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">wget https:&#x2F;&#x2F;raw.githubusercontent.com&#x2F;ceph&#x2F;ceph-csi&#x2F;master&#x2F;deploy&#x2F;rbd&#x2F;kubernetes&#x2F;csi-rbdplugin-provisioner.yaml</span><br><span class="line">wget https:&#x2F;&#x2F;raw.githubusercontent.com&#x2F;ceph&#x2F;ceph-csi&#x2F;master&#x2F;deploy&#x2F;rbd&#x2F;kubernetes&#x2F;csi-rbdplugin.yaml</span><br></pre></td></tr></table></figure><blockquote><p>The images referenced in the yml files below have already been pushed to a local image registry; adjust the image addresses for your own network environment</p></blockquote><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span 
class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span 
class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span 
class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span class="line">167</span><br><span class="line">168</span><br><span class="line">169</span><br><span class="line">170</span><br><span class="line">171</span><br><span class="line">172</span><br><span class="line">173</span><br><span class="line">174</span><br><span class="line">175</span><br><span class="line">176</span><br><span class="line">177</span><br><span class="line">178</span><br><span class="line">179</span><br><span class="line">180</span><br><span class="line">181</span><br><span class="line">182</span><br><span class="line">183</span><br><span class="line">184</span><br><span class="line">185</span><br><span class="line">186</span><br><span class="line">187</span><br><span class="line">188</span><br><span class="line">189</span><br><span class="line">190</span><br><span class="line">191</span><br><span class="line">192</span><br><span class="line">193</span><br><span class="line">194</span><br><span class="line">195</span><br><span class="line">196</span><br><span class="line">197</span><br><span class="line">198</span><br><span class="line">199</span><br><span class="line">200</span><br><span class="line">201</span><br><span class="line">202</span><br><span class="line">203</span><br><span class="line">204</span><br><span class="line">205</span><br><span class="line">206</span><br><span class="line">207</span><br><span class="line">208</span><br><span class="line">209</span><br><span class="line">210</span><br><span class="line">211</span><br><span class="line">212</span><br><span class="line">213</span><br><span class="line">214</span><br><span 
class="line">215</span><br><span class="line">216</span><br><span class="line">217</span><br><span class="line">218</span><br><span class="line">219</span><br><span class="line">220</span><br><span class="line">221</span><br><span class="line">222</span><br><span class="line">223</span><br><span class="line">224</span><br><span class="line">225</span><br><span class="line">226</span><br><span class="line">227</span><br><span class="line">228</span><br><span class="line">229</span><br><span class="line">230</span><br><span class="line">231</span><br><span class="line">232</span><br><span class="line">233</span><br><span class="line">234</span><br><span class="line">235</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# cat csi-rbdplugin-provisioner.yaml</span><br><span class="line">---</span><br><span class="line">kind: Service</span><br><span class="line">apiVersion: v1</span><br><span class="line">metadata:</span><br><span class="line">  name: csi-rbdplugin-provisioner</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">  namespace: default</span><br><span class="line">  labels:</span><br><span class="line">    app: csi-metrics</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    app: csi-rbdplugin-provisioner</span><br><span class="line">  ports:</span><br><span class="line">    - name: http-metrics</span><br><span class="line">      port: 8080</span><br><span class="line">      protocol: TCP</span><br><span class="line">      targetPort: 8680</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">kind: Deployment</span><br><span class="line">apiVersion: apps&#x2F;v1</span><br><span class="line">metadata:</span><br><span class="line">  name: csi-rbdplugin-provisioner</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">  namespace: 
default</span><br><span class="line">spec:</span><br><span class="line">  replicas: 3</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: csi-rbdplugin-provisioner</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: csi-rbdplugin-provisioner</span><br><span class="line">    spec:</span><br><span class="line">      affinity:</span><br><span class="line">        podAntiAffinity:</span><br><span class="line">          requiredDuringSchedulingIgnoredDuringExecution:</span><br><span class="line">            - labelSelector:</span><br><span class="line">                matchExpressions:</span><br><span class="line">                  - key: app</span><br><span class="line">                    operator: In</span><br><span class="line">                    values:</span><br><span class="line">                      - csi-rbdplugin-provisioner</span><br><span class="line">              topologyKey: &quot;kubernetes.io&#x2F;hostname&quot;</span><br><span class="line">      serviceAccountName: rbd-csi-provisioner</span><br><span class="line">      priorityClassName: system-cluster-critical</span><br><span class="line">      containers:</span><br><span class="line">        - name: csi-provisioner</span><br><span class="line">          image: dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-provisioner:v3.0.0</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--csi-address&#x3D;$(ADDRESS)&quot;</span><br><span class="line">            - &quot;--v&#x3D;5&quot;</span><br><span class="line">            - &quot;--timeout&#x3D;150s&quot;</span><br><span class="line">            - &quot;--retry-interval-start&#x3D;500ms&quot;</span><br><span class="line">            - &quot;--leader-election&#x3D;true&quot;</span><br><span class="line">    
        #  set it to true to use topology based provisioning</span><br><span class="line">            - &quot;--feature-gates&#x3D;Topology&#x3D;false&quot;</span><br><span class="line">            # if fstype is not specified in storageclass, ext4 is default</span><br><span class="line">            - &quot;--default-fstype&#x3D;ext4&quot;</span><br><span class="line">            - &quot;--extra-create-metadata&#x3D;true&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: ADDRESS</span><br><span class="line">              value: unix:&#x2F;&#x2F;&#x2F;csi&#x2F;csi-provisioner.sock</span><br><span class="line">          imagePullPolicy: &quot;IfNotPresent&quot;</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: socket-dir</span><br><span class="line">              mountPath: &#x2F;csi</span><br><span class="line">        - name: csi-snapshotter</span><br><span class="line">          image: dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-snapshotter:v4.2.0</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--csi-address&#x3D;$(ADDRESS)&quot;</span><br><span class="line">            - &quot;--v&#x3D;5&quot;</span><br><span class="line">            - &quot;--timeout&#x3D;150s&quot;</span><br><span class="line">            - &quot;--leader-election&#x3D;true&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: ADDRESS</span><br><span class="line">              value: unix:&#x2F;&#x2F;&#x2F;csi&#x2F;csi-provisioner.sock</span><br><span class="line">          imagePullPolicy: &quot;IfNotPresent&quot;</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: socket-dir</span><br><span class="line">              mountPath: &#x2F;csi</span><br><span class="line">        - name: csi-attacher</span><br><span class="line">          
image: dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-attacher:v3.3.0</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--v&#x3D;5&quot;</span><br><span class="line">            - &quot;--csi-address&#x3D;$(ADDRESS)&quot;</span><br><span class="line">            - &quot;--leader-election&#x3D;true&quot;</span><br><span class="line">            - &quot;--retry-interval-start&#x3D;500ms&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: ADDRESS</span><br><span class="line">              value: &#x2F;csi&#x2F;csi-provisioner.sock</span><br><span class="line">          imagePullPolicy: &quot;IfNotPresent&quot;</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: socket-dir</span><br><span class="line">              mountPath: &#x2F;csi</span><br><span class="line">        - name: csi-resizer</span><br><span class="line">          image: dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-resizer:v1.3.0</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--csi-address&#x3D;$(ADDRESS)&quot;</span><br><span class="line">            - &quot;--v&#x3D;5&quot;</span><br><span class="line">            - &quot;--timeout&#x3D;150s&quot;</span><br><span class="line">            - &quot;--leader-election&quot;</span><br><span class="line">            - &quot;--retry-interval-start&#x3D;500ms&quot;</span><br><span class="line">            - &quot;--handle-volume-inuse-error&#x3D;false&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: ADDRESS</span><br><span class="line">              value: unix:&#x2F;&#x2F;&#x2F;csi&#x2F;csi-provisioner.sock</span><br><span class="line">          imagePullPolicy: &quot;IfNotPresent&quot;</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: 
socket-dir</span><br><span class="line">              mountPath: &#x2F;csi</span><br><span class="line">        - name: csi-rbdplugin</span><br><span class="line">          # for stable functionality replace canary with latest release version</span><br><span class="line">          image: dockerhub.kubekey.local&#x2F;quay.io&#x2F;cephcsi&#x2F;cephcsi:canary</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--nodeid&#x3D;$(NODE_ID)&quot;</span><br><span class="line">            - &quot;--type&#x3D;rbd&quot;</span><br><span class="line">            - &quot;--controllerserver&#x3D;true&quot;</span><br><span class="line">            - &quot;--endpoint&#x3D;$(CSI_ENDPOINT)&quot;</span><br><span class="line">            - &quot;--csi-addons-endpoint&#x3D;$(CSI_ADDONS_ENDPOINT)&quot;</span><br><span class="line">            - &quot;--v&#x3D;5&quot;</span><br><span class="line">            - &quot;--drivername&#x3D;rbd.csi.ceph.com&quot;</span><br><span class="line">            - &quot;--pidlimit&#x3D;-1&quot;</span><br><span class="line">            - &quot;--rbdhardmaxclonedepth&#x3D;8&quot;</span><br><span class="line">            - &quot;--rbdsoftmaxclonedepth&#x3D;4&quot;</span><br><span class="line">            - &quot;--enableprofiling&#x3D;false&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: POD_IP</span><br><span class="line">              valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  fieldPath: status.podIP</span><br><span class="line">            - name: NODE_ID</span><br><span class="line">              valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  fieldPath: spec.nodeName</span><br><span class="line">            - name: POD_NAMESPACE</span><br><span class="line">              valueFrom:</span><br><span class="line">                
fieldRef:</span><br><span class="line">                  fieldPath: metadata.namespace</span><br><span class="line">            # - name: KMS_CONFIGMAP_NAME</span><br><span class="line">            #   value: encryptionConfig</span><br><span class="line">            - name: CSI_ENDPOINT</span><br><span class="line">              value: unix:&#x2F;&#x2F;&#x2F;csi&#x2F;csi-provisioner.sock</span><br><span class="line">            - name: CSI_ADDONS_ENDPOINT</span><br><span class="line">              value: unix:&#x2F;&#x2F;&#x2F;csi&#x2F;csi-addons.sock</span><br><span class="line">          imagePullPolicy: &quot;IfNotPresent&quot;</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: socket-dir</span><br><span class="line">              mountPath: &#x2F;csi</span><br><span class="line">            - mountPath: &#x2F;dev</span><br><span class="line">              name: host-dev</span><br><span class="line">            - mountPath: &#x2F;sys</span><br><span class="line">              name: host-sys</span><br><span class="line">            - mountPath: &#x2F;lib&#x2F;modules</span><br><span class="line">              name: lib-modules</span><br><span class="line">              readOnly: true</span><br><span class="line">            - name: ceph-csi-config</span><br><span class="line">              mountPath: &#x2F;etc&#x2F;ceph-csi-config&#x2F;</span><br><span class="line">           # - name: ceph-csi-encryption-kms-config</span><br><span class="line">           #   mountPath: &#x2F;etc&#x2F;ceph-csi-encryption-kms-config&#x2F;</span><br><span class="line">            - name: keys-tmp-dir</span><br><span class="line">              mountPath: &#x2F;tmp&#x2F;csi&#x2F;keys</span><br><span class="line">           # - name: ceph-config</span><br><span class="line">           #   mountPath: &#x2F;etc&#x2F;ceph&#x2F;</span><br><span class="line">        - name: csi-rbdplugin-controller</span><br><span class="line">          # 
for stable functionality replace canary with latest release version</span><br><span class="line">          image: dockerhub.kubekey.local&#x2F;quay.io&#x2F;cephcsi&#x2F;cephcsi:canary</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--type&#x3D;controller&quot;</span><br><span class="line">            - &quot;--v&#x3D;5&quot;</span><br><span class="line">            - &quot;--drivername&#x3D;rbd.csi.ceph.com&quot;</span><br><span class="line">            - &quot;--drivernamespace&#x3D;$(DRIVER_NAMESPACE)&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: DRIVER_NAMESPACE</span><br><span class="line">              valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  fieldPath: metadata.namespace</span><br><span class="line">          imagePullPolicy: &quot;IfNotPresent&quot;</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: ceph-csi-config</span><br><span class="line">              mountPath: &#x2F;etc&#x2F;ceph-csi-config&#x2F;</span><br><span class="line">            - name: keys-tmp-dir</span><br><span class="line">              mountPath: &#x2F;tmp&#x2F;csi&#x2F;keys</span><br><span class="line">           # - name: ceph-config</span><br><span class="line">           #   mountPath: &#x2F;etc&#x2F;ceph&#x2F;</span><br><span class="line">        - name: liveness-prometheus</span><br><span class="line">          image: dockerhub.kubekey.local&#x2F;quay.io&#x2F;cephcsi&#x2F;cephcsi:canary</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--type&#x3D;liveness&quot;</span><br><span class="line">            - &quot;--endpoint&#x3D;$(CSI_ENDPOINT)&quot;</span><br><span class="line">            - &quot;--metricsport&#x3D;8680&quot;</span><br><span class="line">            - 
&quot;--metricspath&#x3D;&#x2F;metrics&quot;</span><br><span class="line">            - &quot;--polltime&#x3D;60s&quot;</span><br><span class="line">            - &quot;--timeout&#x3D;3s&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: CSI_ENDPOINT</span><br><span class="line">              value: unix:&#x2F;&#x2F;&#x2F;csi&#x2F;csi-provisioner.sock</span><br><span class="line">            - name: POD_IP</span><br><span class="line">              valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  fieldPath: status.podIP</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: socket-dir</span><br><span class="line">              mountPath: &#x2F;csi</span><br><span class="line">          imagePullPolicy: &quot;IfNotPresent&quot;</span><br><span class="line">      volumes:</span><br><span class="line">        - name: host-dev</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;dev</span><br><span class="line">        - name: host-sys</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;sys</span><br><span class="line">        - name: lib-modules</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;lib&#x2F;modules</span><br><span class="line">        - name: socket-dir</span><br><span class="line">          emptyDir: &#123;</span><br><span class="line">            medium: &quot;Memory&quot;</span><br><span class="line">          &#125;</span><br><span class="line">        #- name: ceph-config</span><br><span class="line">        #  configMap:</span><br><span class="line">        #    name: ceph-config</span><br><span class="line">        - name: ceph-csi-config</span><br><span class="line">          configMap:</span><br><span class="line">            name: 
ceph-csi-config</span><br><span class="line">        #- name: ceph-csi-encryption-kms-config</span><br><span class="line">        #  configMap:</span><br><span class="line">        #    name: ceph-csi-encryption-kms-config</span><br><span class="line">        - name: keys-tmp-dir</span><br><span class="line">          emptyDir: &#123;</span><br><span class="line">            medium: &quot;Memory&quot;</span><br><span class="line">          &#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span 
class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span 
class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span 
class="line">167</span><br><span class="line">168</span><br><span class="line">169</span><br><span class="line">170</span><br><span class="line">171</span><br><span class="line">172</span><br><span class="line">173</span><br><span class="line">174</span><br><span class="line">175</span><br><span class="line">176</span><br><span class="line">177</span><br><span class="line">178</span><br><span class="line">179</span><br><span class="line">180</span><br><span class="line">181</span><br><span class="line">182</span><br><span class="line">183</span><br><span class="line">184</span><br><span class="line">185</span><br><span class="line">186</span><br><span class="line">187</span><br><span class="line">188</span><br><span class="line">189</span><br><span class="line">190</span><br><span class="line">191</span><br><span class="line">192</span><br><span class="line">193</span><br><span class="line">194</span><br><span class="line">195</span><br><span class="line">196</span><br><span class="line">197</span><br><span class="line">198</span><br><span class="line">199</span><br><span class="line">200</span><br><span class="line">201</span><br><span class="line">202</span><br><span class="line">203</span><br><span class="line">204</span><br><span class="line">205</span><br><span class="line">206</span><br><span class="line">207</span><br><span class="line">208</span><br><span class="line">209</span><br><span class="line">210</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# cat csi-rbdplugin.yaml</span><br><span class="line">---</span><br><span class="line">kind: DaemonSet</span><br><span class="line">apiVersion: apps&#x2F;v1</span><br><span class="line">metadata:</span><br><span class="line">  name: csi-rbdplugin</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">  namespace: default</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    
matchLabels:</span><br><span class="line">      app: csi-rbdplugin</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: csi-rbdplugin</span><br><span class="line">    spec:</span><br><span class="line">      serviceAccountName: rbd-csi-nodeplugin</span><br><span class="line">      hostNetwork: true</span><br><span class="line">      hostPID: true</span><br><span class="line">      priorityClassName: system-node-critical</span><br><span class="line">      # to use e.g. Rook orchestrated cluster, and mons&#39; FQDN is</span><br><span class="line">      # resolved through k8s service, set dns policy to cluster first</span><br><span class="line">      dnsPolicy: ClusterFirstWithHostNet</span><br><span class="line">      containers:</span><br><span class="line">        - name: driver-registrar</span><br><span class="line">          # This is necessary only for systems with SELinux, where</span><br><span class="line">          # non-privileged sidecar containers cannot access unix domain socket</span><br><span class="line">          # created by privileged CSI driver container.</span><br><span class="line">          securityContext:</span><br><span class="line">            privileged: true</span><br><span class="line">          image: dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-node-driver-registrar:v2.3.0</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--v&#x3D;5&quot;</span><br><span class="line">            - &quot;--csi-address&#x3D;&#x2F;csi&#x2F;csi.sock&quot;</span><br><span class="line">            - &quot;--kubelet-registration-path&#x3D;&#x2F;var&#x2F;lib&#x2F;kubelet&#x2F;plugins&#x2F;rbd.csi.ceph.com&#x2F;csi.sock&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: KUBE_NODE_NAME</span><br><span class="line">              
valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  fieldPath: spec.nodeName</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: socket-dir</span><br><span class="line">              mountPath: &#x2F;csi</span><br><span class="line">            - name: registration-dir</span><br><span class="line">              mountPath: &#x2F;registration</span><br><span class="line">        - name: csi-rbdplugin</span><br><span class="line">          securityContext:</span><br><span class="line">            privileged: true</span><br><span class="line">            capabilities:</span><br><span class="line">              add: [&quot;SYS_ADMIN&quot;]</span><br><span class="line">            allowPrivilegeEscalation: true</span><br><span class="line">          # for stable functionality replace canary with latest release version</span><br><span class="line">          image: dockerhub.kubekey.local&#x2F;quay.io&#x2F;cephcsi&#x2F;cephcsi:canary</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--nodeid&#x3D;$(NODE_ID)&quot;</span><br><span class="line">            - &quot;--pluginpath&#x3D;&#x2F;var&#x2F;lib&#x2F;kubelet&#x2F;plugins&quot;</span><br><span class="line">            - &quot;--stagingpath&#x3D;&#x2F;var&#x2F;lib&#x2F;kubelet&#x2F;plugins&#x2F;kubernetes.io&#x2F;csi&#x2F;pv&#x2F;&quot;</span><br><span class="line">            - &quot;--type&#x3D;rbd&quot;</span><br><span class="line">            - &quot;--nodeserver&#x3D;true&quot;</span><br><span class="line">            - &quot;--endpoint&#x3D;$(CSI_ENDPOINT)&quot;</span><br><span class="line">            - &quot;--csi-addons-endpoint&#x3D;$(CSI_ADDONS_ENDPOINT)&quot;</span><br><span class="line">            - &quot;--v&#x3D;5&quot;</span><br><span class="line">            - &quot;--drivername&#x3D;rbd.csi.ceph.com&quot;</span><br><span class="line">            - 
&quot;--enableprofiling&#x3D;false&quot;</span><br><span class="line">            # If topology based provisioning is desired, configure required</span><br><span class="line">            # node labels representing the nodes topology domain</span><br><span class="line">            # and pass the label names below, for CSI to consume and advertise</span><br><span class="line">            # its equivalent topology domain</span><br><span class="line">            # - &quot;--domainlabels&#x3D;failure-domain&#x2F;region,failure-domain&#x2F;zone&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: POD_IP</span><br><span class="line">              valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  fieldPath: status.podIP</span><br><span class="line">            - name: NODE_ID</span><br><span class="line">              valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  fieldPath: spec.nodeName</span><br><span class="line">            - name: POD_NAMESPACE</span><br><span class="line">              valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  fieldPath: metadata.namespace</span><br><span class="line">            # - name: KMS_CONFIGMAP_NAME</span><br><span class="line">            #   value: encryptionConfig</span><br><span class="line">            - name: CSI_ENDPOINT</span><br><span class="line">              value: unix:&#x2F;&#x2F;&#x2F;csi&#x2F;csi.sock</span><br><span class="line">            - name: CSI_ADDONS_ENDPOINT</span><br><span class="line">              value: unix:&#x2F;&#x2F;&#x2F;csi&#x2F;csi-addons.sock</span><br><span class="line">          imagePullPolicy: &quot;IfNotPresent&quot;</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: socket-dir</span><br><span 
class="line">              mountPath: &#x2F;csi</span><br><span class="line">            - mountPath: &#x2F;dev</span><br><span class="line">              name: host-dev</span><br><span class="line">            - mountPath: &#x2F;sys</span><br><span class="line">              name: host-sys</span><br><span class="line">            - mountPath: &#x2F;run&#x2F;mount</span><br><span class="line">              name: host-mount</span><br><span class="line">            - mountPath: &#x2F;etc&#x2F;selinux</span><br><span class="line">              name: etc-selinux</span><br><span class="line">              readOnly: true</span><br><span class="line">            - mountPath: &#x2F;lib&#x2F;modules</span><br><span class="line">              name: lib-modules</span><br><span class="line">              readOnly: true</span><br><span class="line">            - name: ceph-csi-config</span><br><span class="line">              mountPath: &#x2F;etc&#x2F;ceph-csi-config&#x2F;</span><br><span class="line">            #- name: ceph-csi-encryption-kms-config</span><br><span class="line">            #  mountPath: &#x2F;etc&#x2F;ceph-csi-encryption-kms-config&#x2F;</span><br><span class="line">            - name: plugin-dir</span><br><span class="line">              mountPath: &#x2F;var&#x2F;lib&#x2F;kubelet&#x2F;plugins</span><br><span class="line">              mountPropagation: &quot;Bidirectional&quot;</span><br><span class="line">            - name: mountpoint-dir</span><br><span class="line">              mountPath: &#x2F;var&#x2F;lib&#x2F;kubelet&#x2F;pods</span><br><span class="line">              mountPropagation: &quot;Bidirectional&quot;</span><br><span class="line">            - name: keys-tmp-dir</span><br><span class="line">              mountPath: &#x2F;tmp&#x2F;csi&#x2F;keys</span><br><span class="line">            - name: ceph-logdir</span><br><span class="line">              mountPath: &#x2F;var&#x2F;log&#x2F;ceph</span><br><span class="line">            #- name: 
ceph-config</span><br><span class="line">            #  mountPath: &#x2F;etc&#x2F;ceph&#x2F;</span><br><span class="line">        - name: liveness-prometheus</span><br><span class="line">          securityContext:</span><br><span class="line">            privileged: true</span><br><span class="line">          image: dockerhub.kubekey.local&#x2F;quay.io&#x2F;cephcsi&#x2F;cephcsi:canary</span><br><span class="line">          args:</span><br><span class="line">            - &quot;--type&#x3D;liveness&quot;</span><br><span class="line">            - &quot;--endpoint&#x3D;$(CSI_ENDPOINT)&quot;</span><br><span class="line">            - &quot;--metricsport&#x3D;8680&quot;</span><br><span class="line">            - &quot;--metricspath&#x3D;&#x2F;metrics&quot;</span><br><span class="line">            - &quot;--polltime&#x3D;60s&quot;</span><br><span class="line">            - &quot;--timeout&#x3D;3s&quot;</span><br><span class="line">          env:</span><br><span class="line">            - name: CSI_ENDPOINT</span><br><span class="line">              value: unix:&#x2F;&#x2F;&#x2F;csi&#x2F;csi.sock</span><br><span class="line">            - name: POD_IP</span><br><span class="line">              valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  fieldPath: status.podIP</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: socket-dir</span><br><span class="line">              mountPath: &#x2F;csi</span><br><span class="line">          imagePullPolicy: &quot;IfNotPresent&quot;</span><br><span class="line">      volumes:</span><br><span class="line">        - name: socket-dir</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;var&#x2F;lib&#x2F;kubelet&#x2F;plugins&#x2F;rbd.csi.ceph.com</span><br><span class="line">            type: DirectoryOrCreate</span><br><span class="line">        - name: 
plugin-dir</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;var&#x2F;lib&#x2F;kubelet&#x2F;plugins</span><br><span class="line">            type: Directory</span><br><span class="line">        - name: mountpoint-dir</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;var&#x2F;lib&#x2F;kubelet&#x2F;pods</span><br><span class="line">            type: DirectoryOrCreate</span><br><span class="line">        - name: ceph-logdir</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;var&#x2F;log&#x2F;ceph</span><br><span class="line">            type: DirectoryOrCreate</span><br><span class="line">        - name: registration-dir</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;var&#x2F;lib&#x2F;kubelet&#x2F;plugins_registry&#x2F;</span><br><span class="line">            type: Directory</span><br><span class="line">        - name: host-dev</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;dev</span><br><span class="line">        - name: host-sys</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;sys</span><br><span class="line">        - name: etc-selinux</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;etc&#x2F;selinux</span><br><span class="line">        - name: host-mount</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;run&#x2F;mount</span><br><span class="line">        - name: lib-modules</span><br><span class="line">          hostPath:</span><br><span class="line">            path: &#x2F;lib&#x2F;modules</span><br><span class="line">        #- name: ceph-config</span><br><span class="line">        #  configMap:</span><br><span class="line">        #    name: 
ceph-config</span><br><span class="line">        - name: ceph-csi-config</span><br><span class="line">          configMap:</span><br><span class="line">            name: ceph-csi-config</span><br><span class="line">        #- name: ceph-csi-encryption-kms-config</span><br><span class="line">        #  configMap:</span><br><span class="line">        #    name: ceph-csi-encryption-kms-config</span><br><span class="line">        - name: keys-tmp-dir</span><br><span class="line">          emptyDir: &#123;</span><br><span class="line">            medium: &quot;Memory&quot;</span><br><span class="line">          &#125;</span><br><span class="line">---</span><br><span class="line"># This is a service to expose the liveness metrics</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: csi-metrics-rbdplugin</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">  namespace: default</span><br><span class="line">  labels:</span><br><span class="line">    app: csi-metrics</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">    - name: http-metrics</span><br><span class="line">      port: 8080</span><br><span class="line">      protocol: TCP</span><br><span class="line">      targetPort: 8680</span><br><span class="line">  selector:</span><br><span class="line">    app: csi-rbdplugin</span><br></pre></td></tr></table></figure><blockquote><p>Edit the <code>csi-rbdplugin-provisioner.yaml</code> and <code>csi-rbdplugin.yaml</code> files, commenting out the sections that reference <code>ceph-csi-encryption-kms-config</code> and <code>ceph-config</code>:</p></blockquote><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span 
class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# grep  &quot;#&quot; csi-rbdplugin-provisioner.yaml</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">  # replace with non-default namespace name</span><br><span class="line">            #  set it to true to use topology based provisioning</span><br><span class="line">            # if fstype is not specified in storageclass, ext4 is default</span><br><span class="line">          # for stable functionality replace canary with latest release version</span><br><span class="line">            # - name: KMS_CONFIGMAP_NAME</span><br><span class="line">            #   value: encryptionConfig</span><br><span class="line">           # - name: ceph-csi-encryption-kms-config</span><br><span class="line">           #   mountPath: &#x2F;etc&#x2F;ceph-csi-encryption-kms-config&#x2F;</span><br><span class="line">           # - name: ceph-config</span><br><span class="line">           #   mountPath: &#x2F;etc&#x2F;ceph&#x2F;</span><br><span class="line">          # for stable functionality replace canary with latest release version</span><br><span class="line">           # - name: ceph-config</span><br><span class="line">           #   mountPath: &#x2F;etc&#x2F;ceph&#x2F;</span><br><span class="line">        #- name: ceph-config</span><br><span class="line">        #  configMap:</span><br><span class="line">        #    name: ceph-config</span><br><span class="line">        #- name: ceph-csi-encryption-kms-config</span><br><span 
class="line">        #  configMap:</span><br><span class="line">        #    name: ceph-csi-encryption-kms-config</span><br><span class="line"></span><br></pre></td></tr></table></figure><blockquote><p> Note: the images listed here have already been retagged to a local registry; adjust them to suit your own network environment.</p></blockquote><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-resizer:v1.3.0</span><br><span class="line">dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-snapshotter:v4.2.0</span><br><span class="line">dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-provisioner:v3.0.0</span><br><span class="line">dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-node-driver-registrar:v2.3.0</span><br><span class="line">dockerhub.kubekey.local&#x2F;k8s.gcr.io&#x2F;sig-storage&#x2F;csi-attacher:v3.3.0</span><br><span class="line">dockerhub.kubekey.local&#x2F;quay.io&#x2F;cephcsi&#x2F;cephcsi:canary</span><br></pre></td></tr></table></figure><p>Deploy:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f csi-rbdplugin-provisioner.yaml</span><br><span class="line">kubectl apply -f csi-rbdplugin.yaml</span><br></pre></td></tr></table></figure><p>Check the running status:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span
class="line">11</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# kubectl get pods </span><br><span class="line">NAME                                         READY   STATUS    RESTARTS   AGE</span><br><span class="line"></span><br><span class="line">csi-rbdplugin-5jb79                          3&#x2F;3     Running   0          22h</span><br><span class="line">csi-rbdplugin-7dqd7                          3&#x2F;3     Running   0          22h</span><br><span class="line">csi-rbdplugin-8dpnb                          3&#x2F;3     Running   0          22h</span><br><span class="line">csi-rbdplugin-provisioner-66557fcc8f-4clkc   7&#x2F;7     Running   0          22h</span><br><span class="line">csi-rbdplugin-provisioner-66557fcc8f-lbjld   7&#x2F;7     Running   0          22h</span><br><span class="line">csi-rbdplugin-provisioner-66557fcc8f-vpvb2   7&#x2F;7     Running   0          22h</span><br><span class="line">csi-rbdplugin-txjcg                          3&#x2F;3     Running   0          22h</span><br><span class="line">csi-rbdplugin-x57d6                          3&#x2F;3     Running   0          22h</span><br></pre></td></tr></table></figure><h3 id="使用ceph块儿设备"><a href="#使用ceph块儿设备" class="headerlink" title="使用ceph块儿设备"></a>Using Ceph block devices</h3><h4 id="创建storageclass"><a href="#创建storageclass" class="headerlink" title="创建storageclass"></a>Create a StorageClass</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span
class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# cat csi-rbd-sc.yaml</span><br><span class="line">---</span><br><span class="line">apiVersion: storage.k8s.io&#x2F;v1</span><br><span class="line">kind: StorageClass</span><br><span class="line">metadata:</span><br><span class="line">   name: csi-rbd-sc</span><br><span class="line">provisioner: rbd.csi.ceph.com</span><br><span class="line">parameters:</span><br><span class="line">   clusterID: 3a2a06c7-124f-4703-b798-88eb2950361e</span><br><span class="line">   pool: rbd</span><br><span class="line">   imageFeatures: layering</span><br><span class="line">   csi.storage.k8s.io&#x2F;provisioner-secret-name: csi-rbd-secret</span><br><span class="line">   csi.storage.k8s.io&#x2F;provisioner-secret-namespace: default</span><br><span class="line">   csi.storage.k8s.io&#x2F;controller-expand-secret-name: csi-rbd-secret</span><br><span class="line">   csi.storage.k8s.io&#x2F;controller-expand-secret-namespace: default</span><br><span class="line">   csi.storage.k8s.io&#x2F;node-stage-secret-name: csi-rbd-secret</span><br><span class="line">   csi.storage.k8s.io&#x2F;node-stage-secret-namespace: default</span><br><span class="line">   csi.storage.k8s.io&#x2F;fstype: ext4</span><br><span class="line">reclaimPolicy: Delete</span><br><span class="line">allowVolumeExpansion: true</span><br><span class="line">mountOptions:</span><br><span class="line">   - discard</span><br></pre></td></tr></table></figure><ul><li>clusterID is the fsid of the Ceph cluster from the earlier setup steps</li><li>imageFeatures determines which RBD image features are enabled on newly created images</li><li>allowVolumeExpansion: true controls whether online volume expansion is enabled</li></ul><p>Deploy:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f 
csi-rbd-sc.yaml</span><br></pre></td></tr></table></figure><h4 id="查看storageclass："><a href="#查看storageclass：" class="headerlink" title="查看storageclass："></a>Check the StorageClass:</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]#  kubectl get storageclass</span><br><span class="line">NAME              PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE</span><br><span class="line"></span><br><span class="line">csi-rbd-sc        rbd.csi.ceph.com   Delete          Immediate              true                   22h</span><br><span class="line">local (default)   openebs.io&#x2F;local   Delete          WaitForFirstConsumer   false                  5d23h</span><br></pre></td></tr></table></figure><h4 id="创建PVC"><a href="#创建PVC" class="headerlink" title="创建PVC"></a>Create a PVC</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# cat raw-block-pvc.yaml</span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: PersistentVolumeClaim</span><br><span class="line">metadata:</span><br><span class="line">  name: raw-block-pvc</span><br><span class="line">spec:</span><br><span class="line">  accessModes:</span><br><span class="line">    - 
ReadWriteOnce</span><br><span class="line">  volumeMode: Block</span><br><span class="line">  resources:</span><br><span class="line">    requests:</span><br><span class="line">      storage: 1Gi</span><br><span class="line">  storageClassName: csi-rbd-sc</span><br></pre></td></tr></table></figure><blockquote><p>In theory volumeMode should be set to Block, and the PVC and the consuming workload must both declare the same mode for the volume to mount. In testing, however, the volume still failed to mount even with Block specified on the application side as well, so the setting was dropped from both and fell back to the default Filesystem.</p></blockquote><p>Deploy:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f raw-block-pvc.yaml </span><br></pre></td></tr></table></figure><h4 id="查看pvc"><a href="#查看pvc" class="headerlink" title="查看pvc"></a>Check the PVC</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]#  kubectl get pvc</span><br><span class="line">NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE</span><br><span class="line">raw-block-pvc   Bound    pvc-23bb1905-2e26-4ce1-8616-2754dd36317f   1Gi        RWO            csi-rbd-sc     22h</span><br><span class="line"></span><br></pre></td></tr></table></figure><h4 id="创建使用PVC的应用测试无状态Pod"><a href="#创建使用PVC的应用测试无状态Pod" class="headerlink" title="创建使用PVC的应用测试无状态Pod"></a>Create a stateless test Pod that uses the PVC</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span
class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# cat raw-block-pod.yaml</span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Pod</span><br><span class="line">metadata:</span><br><span class="line">  name: pod-with-raw-block-volume</span><br><span class="line">spec:</span><br><span class="line">  containers:</span><br><span class="line">    - name: fc-container</span><br><span class="line">      image: fedora:26</span><br><span class="line">      command: [&quot;&#x2F;bin&#x2F;sh&quot;, &quot;-c&quot;]</span><br><span class="line">      args: [&quot;tail -f &#x2F;dev&#x2F;null&quot;]</span><br><span class="line">      volumeDevices:</span><br><span class="line">        - name: data</span><br><span class="line">          devicePath: &#x2F;dev&#x2F;xvda</span><br><span class="line">  volumes:</span><br><span class="line">    - name: data</span><br><span class="line">      persistentVolumeClaim:</span><br><span class="line">        claimName: raw-block-pvc</span><br></pre></td></tr></table></figure><p>Deploy:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f raw-block-pod.yaml</span><br></pre></td></tr></table></figure><p>Check:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span
class="line">13</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# kubectl get pods </span><br><span class="line">NAME                                         READY   STATUS    RESTARTS   AGE</span><br><span class="line"></span><br><span class="line">csi-rbdplugin-5jb79                          3&#x2F;3     Running   0          22h</span><br><span class="line">csi-rbdplugin-7dqd7                          3&#x2F;3     Running   0          22h</span><br><span class="line">csi-rbdplugin-8dpnb                          3&#x2F;3     Running   0          22h</span><br><span class="line">csi-rbdplugin-provisioner-66557fcc8f-4clkc   7&#x2F;7     Running   0          22h</span><br><span class="line">csi-rbdplugin-provisioner-66557fcc8f-lbjld   7&#x2F;7     Running   0          22h</span><br><span class="line">csi-rbdplugin-provisioner-66557fcc8f-vpvb2   7&#x2F;7     Running   0          22h</span><br><span class="line">csi-rbdplugin-txjcg                          3&#x2F;3     Running   0          22h</span><br><span class="line">csi-rbdplugin-x57d6                          3&#x2F;3     Running   0          22h</span><br><span class="line"></span><br><span class="line">pod-with-raw-block-volume                    1&#x2F;1     Running   0          22h</span><br></pre></td></tr></table></figure><h5 id="应用测试扩容"><a href="#应用测试扩容" class="headerlink" title="应用测试扩容"></a>Test expansion on the application</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl edit pvc raw-block-pvc #&#96;raw-block-pvc&#96; is the PVC to expand; edit it and change the requested capacity</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span
class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br></pre></td><td class="code"><pre><span class="line"># Please edit the object below. Lines beginning with a &#39;#&#39; will be ignored,</span><br><span class="line"># and an empty file will abort the edit. 
If an error occurs while saving this file will be</span><br><span class="line"># reopened with the relevant failures.</span><br><span class="line">#</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: PersistentVolumeClaim</span><br><span class="line">metadata:</span><br><span class="line">  annotations:</span><br><span class="line">    kubectl.kubernetes.io&#x2F;last-applied-configuration: |</span><br><span class="line">      &#123;&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;PersistentVolumeClaim&quot;,&quot;metadata&quot;:&#123;&quot;annotations&quot;:&#123;&#125;,&quot;name&quot;:&quot;raw-block-pvc&quot;,&quot;namespace&quot;:&quot;default&quot;&#125;,&quot;spec&quot;:&#123;&quot;accessModes&quot;:[&quot;ReadWriteOnce&quot;],&quot;resources&quot;:&#123;&quot;requests&quot;:&#123;&quot;storage&quot;:&quot;1Gi&quot;&#125;&#125;,&quot;storageClassName&quot;:&quot;csi-rbd-sc&quot;,&quot;volumeMode&quot;:&quot;Block&quot;&#125;&#125;</span><br><span class="line">    pv.kubernetes.io&#x2F;bind-completed: &quot;yes&quot;</span><br><span class="line">    pv.kubernetes.io&#x2F;bound-by-controller: &quot;yes&quot;</span><br><span class="line">    volume.beta.kubernetes.io&#x2F;storage-provisioner: rbd.csi.ceph.com</span><br><span class="line">  creationTimestamp: &quot;2022-01-10T04:01:31Z&quot;</span><br><span class="line">  finalizers:</span><br><span class="line">  - kubernetes.io&#x2F;pvc-protection</span><br><span class="line">  name: raw-block-pvc</span><br><span class="line">  namespace: default</span><br><span class="line">  resourceVersion: &quot;1142767&quot;</span><br><span class="line">  uid: 18eb2ee1-3eac-4567-9d07-a449ce0ac675</span><br><span class="line">spec:</span><br><span class="line">  accessModes:</span><br><span class="line">  - ReadWriteOnce</span><br><span class="line">  resources:</span><br><span class="line">    requests:</span><br><span class="line">      storage: 15Gi             # 
change the capacity here, then save and exit</span><br><span class="line">  storageClassName: csi-rbd-sc</span><br><span class="line">  volumeMode: Block</span><br><span class="line">  volumeName: pvc-18eb2ee1-3eac-4567-9d07-a449ce0ac675</span><br><span class="line">status:</span><br><span class="line">  accessModes:</span><br><span class="line">  - ReadWriteOnce</span><br><span class="line">  capacity:</span><br><span class="line">    storage: 15Gi</span><br><span class="line">  phase: Bound</span><br></pre></td></tr></table></figure><h5 id="查看pvc-1"><a href="#查看pvc-1" class="headerlink" title="查看pvc"></a>Check the PVC</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# kubectl get pvc</span><br><span class="line">NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE</span><br><span class="line">data-csi-mysql-0       Bound    pvc-e55185b9-fa17-48ad-b125-929d7b01e5a0   5Gi        RWO            csi-rbd-sc        24m</span><br><span class="line">raw-block-pvc          Bound    pvc-18eb2ee1-3eac-4567-9d07-a449ce0ac675   15Gi       RWO            csi-rbd-sc        102m</span><br><span class="line">rbd-pvc-bak            Bound    pvc-6ff9dc5c-b39e-410d-909c-bdd01db765a1   1Gi        RWO            csi-rbd-sc-pv     164m</span><br></pre></td></tr></table></figure><blockquote><p>Expansion complete</p></blockquote><h4 id="创建使用PVC的应用测试有状态Pod"><a href="#创建使用PVC的应用测试有状态Pod" class="headerlink" title="创建使用PVC的应用测试有状态Pod"></a>Create a stateful test Pod that uses the PVC</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span
class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br></pre></td><td class="code"><pre><span class="line">vim mysql-statefulset-static.yaml </span><br><span class="line">---</span><br><span class="line">apiVersion: apps&#x2F;v1</span><br><span class="line">kind: StatefulSet</span><br><span class="line">metadata:</span><br><span class="line">  name: csi-mysql</span><br><span class="line">  namespace: default</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: mysql</span><br><span class="line">  serviceName: mysql</span><br><span 
class="line">  replicas: 1</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: mysql</span><br><span class="line">    spec:</span><br><span class="line">      containers:</span><br><span class="line">      - name: mysql</span><br><span class="line">        image: mysql:5.7</span><br><span class="line">        env:</span><br><span class="line">        - name: MYSQL_ALLOW_EMPTY_PASSWORD</span><br><span class="line">          value: &quot;1&quot;</span><br><span class="line">        - name: MYSQL_ROOT_PASSWORD</span><br><span class="line">          value: &quot;dlw123&quot;</span><br><span class="line">        ports:</span><br><span class="line">        - name: mysql</span><br><span class="line">          containerPort: 3306</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: data</span><br><span class="line">          mountPath: &#x2F;var&#x2F;lib&#x2F;mysql</span><br><span class="line">          subPath: mysql</span><br><span class="line">        resources:</span><br><span class="line">          requests:</span><br><span class="line">            cpu: 500m</span><br><span class="line">            memory: 1Gi</span><br><span class="line">     # volumes:</span><br><span class="line">     # - name: data</span><br><span class="line">     #   persistentVolumeClaim:</span><br><span class="line">     #    claimName: csi-rbd-sc</span><br><span class="line">  volumeClaimTemplates:</span><br><span class="line">  - metadata:</span><br><span class="line">      name: data</span><br><span class="line">    spec:</span><br><span class="line">      accessModes: [ &quot;ReadWriteOnce&quot; ]</span><br><span class="line">      storageClassName: &quot;csi-rbd-sc&quot;</span><br><span class="line">      resources:</span><br><span class="line">        requests:</span><br><span class="line">          storage: 
5Gi</span><br></pre></td></tr></table></figure><blockquote><p>For a stateful service, using volumes directly causes errors during dynamic scaling: every Pod would share one and the same PVC, which conflicts. Use VolumeClaimTemplate instead so that a separate PV is created for each replica.</p></blockquote><h5 id="应用测试扩容-1"><a href="#应用测试扩容-1" class="headerlink" title="应用测试扩容"></a>Test expansion on the application</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl edit pvc data-csi-mysql-0 #&#96;data-csi-mysql-0&#96; is the PVC to expand; edit it and change the requested capacity</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br></pre></td><td class="code"><pre><span class="line"># Please edit the object below. Lines beginning with a &#39;#&#39; will be ignored,</span><br><span class="line"># and an empty file will abort the edit. 
If an error occurs while saving this file will be</span><br><span class="line"># reopened with the relevant failures.</span><br><span class="line">#</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: PersistentVolumeClaim</span><br><span class="line">metadata:</span><br><span class="line">  annotations:</span><br><span class="line">    pv.kubernetes.io&#x2F;bind-completed: &quot;yes&quot;</span><br><span class="line">    pv.kubernetes.io&#x2F;bound-by-controller: &quot;yes&quot;</span><br><span class="line">    volume.beta.kubernetes.io&#x2F;storage-provisioner: rbd.csi.ceph.com</span><br><span class="line">  creationTimestamp: &quot;2022-01-10T05:19:37Z&quot;</span><br><span class="line">  finalizers:</span><br><span class="line">  - kubernetes.io&#x2F;pvc-protection</span><br><span class="line">  labels:</span><br><span class="line">    app: mysql-bak</span><br><span class="line">  name: data-csi-mysql-0</span><br><span class="line">  namespace: default</span><br><span class="line">  resourceVersion: &quot;1147968&quot;</span><br><span class="line">  uid: e55185b9-fa17-48ad-b125-929d7b01e5a0</span><br><span class="line">spec:</span><br><span class="line">  accessModes:</span><br><span class="line">  - ReadWriteOnce</span><br><span class="line">  resources:</span><br><span class="line">    requests:</span><br><span class="line">      storage: 10Gi             # change the capacity here, then save and exit</span><br><span class="line">  storageClassName: csi-rbd-sc</span><br><span class="line">  volumeMode: Filesystem</span><br><span class="line">  volumeName: pvc-e55185b9-fa17-48ad-b125-929d7b01e5a0</span><br><span class="line">status:</span><br><span class="line">  accessModes:</span><br><span class="line">  - ReadWriteOnce</span><br><span class="line">  capacity:</span><br><span class="line">    storage: 5Gi</span><br><span class="line">  phase: Bound</span><br></pre></td></tr></table></figure><h5 id="查看扩容状态"><a href="#查看扩容状态" class="headerlink" 
title="查看扩容状态"></a>Check expansion status</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# kubectl describe pvc data-csi-mysql-0</span><br><span class="line">Name:          data-csi-mysql-0</span><br><span class="line">Namespace:     default</span><br><span class="line">StorageClass:  csi-rbd-sc</span><br><span class="line">Status:        Bound</span><br><span class="line">Volume:        pvc-e55185b9-fa17-48ad-b125-929d7b01e5a0</span><br><span class="line">Labels:        app&#x3D;mysql-bak</span><br><span class="line">Annotations:   pv.kubernetes.io&#x2F;bind-completed: yes</span><br><span class="line">               pv.kubernetes.io&#x2F;bound-by-controller: yes</span><br><span class="line">               volume.beta.kubernetes.io&#x2F;storage-provisioner: rbd.csi.ceph.com</span><br><span class="line">Finalizers:    [kubernetes.io&#x2F;pvc-protection]</span><br><span class="line">Capacity:      5Gi</span><br><span class="line">Access Modes:  RWO</span><br><span class="line">VolumeMode:    Filesystem</span><br><span class="line">Used By:       csi-mysql-0</span><br><span class="line">Conditions:</span><br><span class="line">  Type                      Status  LastProbeTime                     LastTransitionTime                Reason  Message</span><br><span class="line"> 
 ----                      ------  -----------------                 ------------------                ------  -------</span><br><span class="line">  FileSystemResizePending   True    Mon, 01 Jan 0001 00:00:00 +0000   Mon, 10 Jan 2022 13:52:21 +0800           Waiting for user to (re-)start a pod to finish file system resize of volume on node.</span><br><span class="line">···</span><br></pre></td></tr></table></figure><blockquote><p>The Pod must be redeployed for the resize to take effect</p></blockquote><h5 id="更新pod"><a href="#更新pod" class="headerlink" title="更新pod"></a>Update the Pod</h5><p>Check the workloads:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl get StatefulSet #stateful workloads</span><br><span class="line">kubectl get Deployment  #stateless workloads</span><br></pre></td></tr></table></figure><p>Scale replicas:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl scale StatefulSet csi-mysql --replicas 0  #scale the replicas down</span><br><span class="line">kubectl scale StatefulSet csi-mysql --replicas 1  #scale the replicas back up</span><br></pre></td></tr></table></figure><h5 id="查看扩容状态-1"><a href="#查看扩容状态-1" class="headerlink" title="查看扩容状态"></a>Check expansion status</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span
class="line">[root@master-1 ~]# kubectl describe pvc data-csi-mysql-0</span><br><span class="line">Name:          data-csi-mysql-0</span><br><span class="line">Namespace:     default</span><br><span class="line">StorageClass:  csi-rbd-sc</span><br><span class="line">Status:        Bound</span><br><span class="line">Volume:        pvc-e55185b9-fa17-48ad-b125-929d7b01e5a0</span><br><span class="line">Labels:        app&#x3D;mysql</span><br><span class="line">Annotations:   pv.kubernetes.io&#x2F;bind-completed: yes</span><br><span class="line">               pv.kubernetes.io&#x2F;bound-by-controller: yes</span><br><span class="line">               volume.beta.kubernetes.io&#x2F;storage-provisioner: rbd.csi.ceph.com</span><br><span class="line">Finalizers:    [kubernetes.io&#x2F;pvc-protection]</span><br><span class="line">Capacity:      10Gi</span><br><span class="line">Access Modes:  RWO</span><br><span class="line">VolumeMode:    Filesystem</span><br><span class="line">Used By:       csi-mysql-0</span><br><span class="line">Events:</span><br><span class="line">···</span><br></pre></td></tr></table></figure><h5 id="查看pvc-2"><a href="#查看pvc-2" class="headerlink" title="查看pvc"></a>查看pvc</h5><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@master-1 ~]# kubectl get pvc</span><br><span class="line">NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE</span><br><span class="line">data-csi-mysql-0       Bound    pvc-e55185b9-fa17-48ad-b125-929d7b01e5a0   5Gi        RWO            csi-rbd-sc        24m</span><br><span class="line">raw-block-pvc          Bound    pvc-18eb2ee1-3eac-4567-9d07-a449ce0ac675   15Gi       RWO            csi-rbd-sc        102m</span><br><span 
class="line">rbd-pvc-bak            Bound    pvc-6ff9dc5c-b39e-410d-909c-bdd01db765a1   1Gi        RWO            csi-rbd-sc-pv     164m</span><br></pre></td></tr></table></figure><blockquote><p>扩容完成</p></blockquote>]]></content>
    
    
      
      
    <summary type="html">&lt;h3 id=&quot;创建一个ceph-pool-创建存储池&quot;&gt;&lt;a href=&quot;#创建一个ceph-pool-创建存储池&quot; class=&quot;headerlink&quot; title=&quot;创建一个ceph pool 创建存储池&quot;&gt;&lt;/a&gt;创建一个ceph pool 创建存储池&lt;/h3&gt;&lt;div </summary>
      
    
    
    
    <category term="ceph" scheme="https://imszz.com/categories/ceph/"/>
    
    <category term="Kubernetes" scheme="https://imszz.com/categories/ceph/Kubernetes/"/>
    
    
    <category term="Kubernetes" scheme="https://imszz.com/tags/Kubernetes/"/>
    
    <category term="ceph" scheme="https://imszz.com/tags/ceph/"/>
    
    <category term="ceph-csi" scheme="https://imszz.com/tags/ceph-csi/"/>
    
  </entry>
  
  <entry>
    <title>Linux类型虚拟机磁盘扩容</title>
    <link href="https://imszz.com/p/6ca3a57f/"/>
    <id>https://imszz.com/p/6ca3a57f/</id>
    <published>2021-12-09T16:00:00.000Z</published>
    <updated>2021-12-10T12:46:25.000Z</updated>
    
    <content type="html"><![CDATA[<h2 id="1-1-Linux类型虚拟机磁盘扩容"><a href="#1-1-Linux类型虚拟机磁盘扩容" class="headerlink" title="1.1 Linux类型虚拟机磁盘扩容"></a>1.1 Linux类型虚拟机磁盘扩容</h2><h3 id="步骤1-查看磁盘状态"><a href="#步骤1-查看磁盘状态" class="headerlink" title="步骤1 查看磁盘状态"></a>步骤1 查看磁盘状态</h3><p>在虚拟机操作系统内的命令行终端上再次执行“fdisk -l”，发现虚拟磁盘总共有416101个柱面，但只使用了其中的208051个柱面，未被使用的柱面就是扩容之后的磁盘，下面需要为未被使用的柱面创建分区。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br></pre></td><td class="code"><pre><span class="line">[root@yjgltpc-cgzs-2  ~]# fdisk -l</span><br><span class="line"> </span><br><span class="line">Disk &#x2F;dev&#x2F;vda: 214.7 GB, 214748364800 bytes</span><br><span class="line">16 heads, 63 sectors&#x2F;track, 416101 cylinders</span><br><span class="line">Units 
&#x3D; cylinders of 1008 * 512 &#x3D; 516096 bytes</span><br><span class="line">Sector size (logical&#x2F;physical): 512 bytes &#x2F; 512 bytes</span><br><span class="line">I&#x2F;O size (minimum&#x2F;optimal): 512 bytes &#x2F; 512 bytes</span><br><span class="line">Disk identifier: 0x00091944</span><br><span class="line"> </span><br><span class="line">   Device Boot      Start         End      Blocks   Id  System</span><br><span class="line">&#x2F;dev&#x2F;vda1   *           3        1018      512000   83  Linux</span><br><span class="line">Partition 1 does not end on cylinder boundary.</span><br><span class="line">&#x2F;dev&#x2F;vda2            1018      208051   104344576   8e  Linux LVM</span><br><span class="line">Partition 2 does not end on cylinder boundary.</span><br><span class="line"> </span><br><span class="line">Disk &#x2F;dev&#x2F;mapper&#x2F;centos-root: 53.7 GB, 53687091200 bytes</span><br><span class="line">255 heads, 63 sectors&#x2F;track, 6527 cylinders</span><br><span class="line">Units &#x3D; cylinders of 16065 * 512 &#x3D; 8225280 bytes</span><br><span class="line">Sector size (logical&#x2F;physical): 512 bytes &#x2F; 512 bytes</span><br><span class="line">I&#x2F;O size (minimum&#x2F;optimal): 512 bytes &#x2F; 512 bytes</span><br><span class="line">Disk identifier: 0x00000000</span><br><span class="line"> </span><br><span class="line"> </span><br><span class="line">Disk &#x2F;dev&#x2F;mapper&#x2F;centos-swap: 4093 MB, 4093640704 bytes</span><br><span class="line">255 heads, 63 sectors&#x2F;track, 497 cylinders</span><br><span class="line">Units &#x3D; cylinders of 16065 * 512 &#x3D; 8225280 bytes</span><br><span class="line">Sector size (logical&#x2F;physical): 512 bytes &#x2F; 512 bytes</span><br><span class="line">I&#x2F;O size (minimum&#x2F;optimal): 512 bytes &#x2F; 512 bytes</span><br><span class="line">Disk identifier: 0x00000000</span><br><span class="line"> </span><br><span class="line"> </span><br><span class="line">Disk 
&#x2F;dev&#x2F;mapper&#x2F;centos-home: 49.1 GB, 49064968192 bytes</span><br><span class="line">255 heads, 63 sectors&#x2F;track, 5965 cylinders</span><br><span class="line">Units &#x3D; cylinders of 16065 * 512 &#x3D; 8225280 bytes</span><br><span class="line">Sector size (logical&#x2F;physical): 512 bytes &#x2F; 512 bytes</span><br><span class="line">I&#x2F;O size (minimum&#x2F;optimal): 512 bytes &#x2F; 512 bytes</span><br><span class="line">Disk identifier: 0x00000000</span><br><span class="line">[root@yjgltpc-cgzs-2  ~]#</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="步骤2-创建新的分区。"><a href="#步骤2-创建新的分区。" class="headerlink" title="步骤2 创建新的分区。"></a>步骤2 创建新的分区。</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span 
class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br></pre></td><td class="code"><pre><span class="line">[root@yjgltpc-cgzs-2  ~]# fdisk &#x2F;dev&#x2F;vda</span><br><span class="line"> </span><br><span class="line">WARNING: DOS-compatible mode is deprecated. 
It&#39;s strongly recommended to</span><br><span class="line">         switch off the mode (command &#39;c&#39;) and change display units to</span><br><span class="line">         sectors (command &#39;u&#39;).</span><br><span class="line"> </span><br><span class="line">Command (m for help): n # 键入“n”创建新的分区</span><br><span class="line">Command action</span><br><span class="line">   e   extended</span><br><span class="line">   p   primary partition (1-4)</span><br><span class="line">e # 键入“e”创建扩展分区</span><br><span class="line">Partition number (1-4): 3</span><br><span class="line">First cylinder (1-416101, default 1): 208051 # 键入起始柱面从“208051”开始</span><br><span class="line">Last cylinder, +cylinders or +size&#123;K,M,G&#125; (208051-416101, default 416101): # 直接回车</span><br><span class="line">Using default value 416101</span><br><span class="line"> </span><br><span class="line">Command (m for help): p # 键入“p”查看分区创建情况</span><br><span class="line"> </span><br><span class="line">Disk &#x2F;dev&#x2F;vda: 214.7 GB, 214748364800 bytes</span><br><span class="line">16 heads, 63 sectors&#x2F;track, 416101 cylinders</span><br><span class="line">Units &#x3D; cylinders of 1008 * 512 &#x3D; 516096 bytes</span><br><span class="line">Sector size (logical&#x2F;physical): 512 bytes &#x2F; 512 bytes</span><br><span class="line">I&#x2F;O size (minimum&#x2F;optimal): 512 bytes &#x2F; 512 bytes</span><br><span class="line">Disk identifier: 0x00091944</span><br><span class="line"> </span><br><span class="line">   Device Boot      Start         End      Blocks   Id  System</span><br><span class="line">&#x2F;dev&#x2F;vda1   *           3        1018      512000   83  Linux</span><br><span class="line">Partition 1 does not end on cylinder boundary.</span><br><span class="line">&#x2F;dev&#x2F;vda2            1018      208051   104344576   8e  Linux LVM</span><br><span class="line">Partition 2 does not end on cylinder boundary.</span><br><span class="line">&#x2F;dev&#x2F;vda3          208051    
  416101   104857304    5  Extended</span><br><span class="line"> </span><br><span class="line">Command (m for help): n # 键入“n”创建逻辑分区</span><br><span class="line">Command action</span><br><span class="line">   l   logical (5 or over)</span><br><span class="line">   p   primary partition (1-4)</span><br><span class="line">l # 键入“l”选择逻辑分区</span><br><span class="line">First cylinder (208051-416101, default 208051): # 直接回车</span><br><span class="line">Using default value 208051</span><br><span class="line">Last cylinder, +cylinders or +size&#123;K,M,G&#125; (208051-416101, default 416101):</span><br><span class="line">Using default value 416101</span><br><span class="line"> </span><br><span class="line">Command (m for help): p # 键入“p”显示所有分区情况</span><br><span class="line"> </span><br><span class="line">Disk &#x2F;dev&#x2F;vda: 214.7 GB, 214748364800 bytes</span><br><span class="line">16 heads, 63 sectors&#x2F;track, 416101 cylinders</span><br><span class="line">Units &#x3D; cylinders of 1008 * 512 &#x3D; 516096 bytes</span><br><span class="line">Sector size (logical&#x2F;physical): 512 bytes &#x2F; 512 bytes</span><br><span class="line">I&#x2F;O size (minimum&#x2F;optimal): 512 bytes &#x2F; 512 bytes</span><br><span class="line">Disk identifier: 0x00091944</span><br><span class="line"> </span><br><span class="line">   Device Boot      Start         End      Blocks   Id  System</span><br><span class="line">&#x2F;dev&#x2F;vda1   *           3        1018      512000   83  Linux</span><br><span class="line">Partition 1 does not end on cylinder boundary.</span><br><span class="line">&#x2F;dev&#x2F;vda2            1018      208051   104344576   8e  Linux LVM</span><br><span class="line">Partition 2 does not end on cylinder boundary.</span><br><span class="line">&#x2F;dev&#x2F;vda3          208051      416101   104857304    5  Extended</span><br><span class="line">&#x2F;dev&#x2F;vda5          208051      416101   104857272+  83  Linux</span><br><span class="line"> 
</span><br><span class="line">Command (m for help): w # 键入“w”保存分区</span><br><span class="line">The partition table has been altered!</span><br><span class="line"> </span><br><span class="line">Calling ioctl() to re-read partition table.</span><br><span class="line"> </span><br><span class="line">WARNING: Re-reading the partition table failed with error 16: Device or resource busy.</span><br><span class="line">The kernel still uses the old table. The new table will be used at</span><br><span class="line">the next reboot or after you run partprobe(8) or kpartx(8)</span><br><span class="line">Syncing disks.</span><br><span class="line">[root@yjgltpc-cgzs-2 ~]#</span><br></pre></td></tr></table></figure><h3 id="步骤3-重新启动虚拟机操作系统之后，对逻辑分区进行格式化。"><a href="#步骤3-重新启动虚拟机操作系统之后，对逻辑分区进行格式化。" class="headerlink" title="步骤3 重新启动虚拟机操作系统之后，对逻辑分区进行格式化。"></a>步骤3 重新启动虚拟机操作系统之后，对逻辑分区进行格式化。</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br></pre></td><td class="code"><pre><span class="line">[root@yjgltpc-cgzs-2 ~]# mkfs.ext4 &#x2F;dev&#x2F;vda5 # 格式化为ext4文件系统</span><br><span class="line">mke2fs 1.41.12 (17-May-2010)</span><br><span class="line">Filesystem 
label&#x3D;</span><br><span class="line">OS type: Linux</span><br><span class="line">Block size&#x3D;4096 (log&#x3D;2)</span><br><span class="line">Fragment size&#x3D;4096 (log&#x3D;2)</span><br><span class="line">Stride&#x3D;0 blocks, Stripe width&#x3D;0 blocks</span><br><span class="line">6553600 inodes, 26214318 blocks</span><br><span class="line">1310715 blocks (5.00%) reserved for the super user</span><br><span class="line">First data block&#x3D;0</span><br><span class="line">Maximum filesystem blocks&#x3D;4294967296</span><br><span class="line">800 block groups</span><br><span class="line">32768 blocks per group, 32768 fragments per group</span><br><span class="line">8192 inodes per group</span><br><span class="line">Superblock backups stored on blocks:</span><br><span class="line">32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,</span><br><span class="line">4096000, 7962624, 11239424, 20480000, 23887872</span><br><span class="line"> </span><br><span class="line">Writing inode tables: done                            </span><br><span class="line">Creating journal (32768 blocks): done</span><br><span class="line">Writing superblocks and filesystem accounting information: done</span><br><span class="line"> </span><br><span class="line">This filesystem will be automatically checked every 27 mounts or</span><br><span class="line">180 days, whichever comes first.  
Use tune2fs -c or -i to override.</span><br><span class="line">[root@yjgltpc-cgzs-2 ~]#</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="步骤4-创建物理卷（PV）"><a href="#步骤4-创建物理卷（PV）" class="headerlink" title="步骤4 创建物理卷（PV）"></a>步骤4 创建物理卷（PV）</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@yjgltpc-cgzs-2 ~]# pvcreate &#x2F;dev&#x2F;vda5</span><br><span class="line">  Physical volume &quot;&#x2F;dev&#x2F;vda5&quot; successfully created</span><br><span class="line">[root@yjgltpc-cgzs-2 ~]#</span><br></pre></td></tr></table></figure><h3 id="步骤5-查看当前卷组情况。"><a href="#步骤5-查看当前卷组情况。" class="headerlink" title="步骤5 查看当前卷组情况。"></a>步骤5 查看当前卷组情况。</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br></pre></td><td class="code"><pre><span class="line">[root@yjgltpc-cgzs-2 ~]# vgdisplay</span><br><span class="line">  --- Volume group ---</span><br><span class="line">  VG Name               centos</span><br><span class="line">  System ID             </span><br><span class="line">  Format                lvm2</span><br><span 
class="line">  Metadata Areas        1</span><br><span class="line">  Metadata Sequence No  4</span><br><span class="line">  VG Access             read&#x2F;write</span><br><span class="line">  VG Status             resizable</span><br><span class="line">  MAX LV                0</span><br><span class="line">  Cur LV                3</span><br><span class="line">  Open LV               3</span><br><span class="line">  Max PV                0</span><br><span class="line">  Cur PV                1</span><br><span class="line">  Act PV                1</span><br><span class="line">  VG Size               99.51 GiB</span><br><span class="line">  PE Size               4.00 MiB</span><br><span class="line">  Total PE              25474</span><br><span class="line">  Alloc PE &#x2F; Size       25474 &#x2F; 99.51 GiB</span><br><span class="line">  Free  PE &#x2F; Size       0 &#x2F; 0    # 表示没有可用的扩展空间</span><br><span class="line">  VG UUID               YYbZEp-ddOk-gdIC-h0dU-seBF-Enlx-SeYIpP</span><br><span class="line">   </span><br><span class="line">[root@yjgltpc-cgzs-2 ~]#</span><br><span class="line"> </span><br></pre></td></tr></table></figure><h3 id="步骤6-扩展卷组"><a href="#步骤6-扩展卷组" class="headerlink" title="步骤6 扩展卷组"></a>步骤6 扩展卷组</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span 
class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">[root@yjgltpc-cgzs-2 ~]# vgextend &#x2F;dev&#x2F;centos &#x2F;dev&#x2F;vda5</span><br><span class="line">  Volume group &quot;centos&quot; successfully extended</span><br><span class="line">## 再次查看卷组，发现可扩展空间为100GB。</span><br><span class="line">[root@yjgltpc-cgzs-2 ~]# vgdisplay</span><br><span class="line">  --- Volume group ---</span><br><span class="line">  VG Name               centos</span><br><span class="line">  System ID             </span><br><span class="line">  Format                lvm2</span><br><span class="line">  Metadata Areas        2</span><br><span class="line">  Metadata Sequence No  5</span><br><span class="line">  VG Access             read&#x2F;write</span><br><span class="line">  VG Status             resizable</span><br><span class="line">  MAX LV                0</span><br><span class="line">  Cur LV                3</span><br><span class="line">  Open LV               3</span><br><span class="line">  Max PV                0</span><br><span class="line">  Cur PV                2</span><br><span class="line">  Act PV                2</span><br><span class="line">  VG Size               199.50 GiB</span><br><span class="line">  PE Size               4.00 MiB</span><br><span class="line">  Total PE              51073</span><br><span class="line">  Alloc PE &#x2F; Size       25474 &#x2F; 99.51 GiB</span><br><span class="line">  Free  PE &#x2F; Size       25599 &#x2F; 100.00 GiB</span><br><span class="line">  VG UUID               YYbZEp-ddOk-gdIC-h0dU-seBF-Enlx-SeYIpP</span><br><span class="line">   </span><br><span class="line">[root@yjgltpc-cgzs-2 ~]#</span><br></pre></td></tr></table></figure><h3 id="步骤7-扩展根分区逻辑卷的容量。"><a href="#步骤7-扩展根分区逻辑卷的容量。" class="headerlink" title="步骤7 
扩展根分区逻辑卷的容量。"></a>步骤7 扩展根分区逻辑卷的容量。</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@yjgltpc-cgzs-2 ~]# lvextend -l +100%FREE &#x2F;dev&#x2F;centos&#x2F;root  # 扩展所有可用空间到根分区</span><br><span class="line">  Extending logical volume lv_root to 150.00 GiB</span><br><span class="line">  Logical volume lv_root successfully resized</span><br><span class="line">[root@yjgltpc-cgzs-2 ~]#</span><br></pre></td></tr></table></figure><h3 id="步骤8-文件系统的真正扩容"><a href="#步骤8-文件系统的真正扩容" class="headerlink" title="步骤8 文件系统的真正扩容"></a>步骤8 文件系统的真正扩容</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br></pre></td><td class="code"><pre><span class="line">#上面只是卷扩容了，下面是文件系统的真正扩容，输入以下命令：</span><br><span class="line">#CentOS7下面由于使用的是XFS命令:</span><br><span class="line">#xfs_growfs针对文件系统xfs</span><br><span class="line">#检查数据块大小和数量</span><br><span class="line"></span><br><span class="line">xfs_info &#x2F;dev&#x2F;centos&#x2F;root</span><br><span class="line"></span><br><span class="line">#将XFS文件扩展到1986208</span><br><span 
class="line"></span><br><span class="line">xfs_growfs -D 1986208 &#x2F;dev&#x2F;centos&#x2F;root</span><br><span class="line"></span><br><span class="line">#自动扩展XFS文件系统到最大的可用大小</span><br><span class="line"></span><br><span class="line">xfs_growfs &#x2F;dev&#x2F;centos&#x2F;root</span><br><span class="line"></span><br><span class="line">#&#x2F;dev&#x2F;mapper&#x2F;centos-root是df -h查看到根目录的挂载点,需要扩容的挂载点</span><br><span class="line"></span><br><span class="line">xfs_growfs &#x2F;dev&#x2F;mapper&#x2F;centos-root</span><br><span class="line"></span><br><span class="line"> </span><br><span class="line">#CentOS6使用命令:</span><br><span class="line">#使用resize2fs对挂载目录在线扩容，resize2fs针对文件系统ext2 ext3 ext4</span><br><span class="line">resize2fs &#x2F;dev&#x2F;centos&#x2F;root</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="步骤9-查看分区情况"><a href="#步骤9-查看分区情况" class="headerlink" title="步骤9 查看分区情况"></a>步骤9 查看分区情况</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">#发现根分区磁盘容量从原来的“50GB”扩容到“~150GB”。</span><br><span class="line">[root@yjgltpc-cgzs-2 ~]# df -h</span><br><span class="line">Filesystem                    Size  Used Avail Use% Mounted on</span><br><span class="line">&#x2F;dev&#x2F;mapper&#x2F;VolGroup-lv_root  148G  2.9G  138G   3% &#x2F;</span><br><span class="line">tmpfs                         2.0G  224K  2.0G   1% &#x2F;dev&#x2F;shm</span><br><span class="line">&#x2F;dev&#x2F;vda1                     485M   39M  421M   9% &#x2F;boot</span><br><span class="line">&#x2F;dev&#x2F;mapper&#x2F;VolGroup-lv_home   45G  180M   43G   1% &#x2F;home</span><br><span class="line">[root@yjgltpc-cgzs-2 ~]#</span><br></pre></td></tr></table></figure><h3 
id="步骤10-磁盘可用性验证"><a href="#步骤10-磁盘可用性验证" class="headerlink" title="步骤10 磁盘可用性验证"></a>步骤10 磁盘可用性验证</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"># 从远端共享服务器拷贝一个2GB左右的文件到新建磁盘，验证磁盘的可写性。</span><br><span class="line">[root@yjgltpc-cgzs-2 ~]# scp root@192.168.0.6:&#x2F;vms&#x2F;isos&#x2F;file.iso &#x2F;</span><br><span class="line">root@192.168.0.6&#39;s password:</span><br><span class="line">file.iso                                      100% 1997MB  48.7MB&#x2F;s   00:41    </span><br><span class="line">[root@yjgltpc-cgzs-2 ~]#</span><br><span class="line"> </span><br></pre></td></tr></table></figure><h2 id="问题"><a href="#问题" class="headerlink" title="问题"></a>问题</h2><h3 id="问题1："><a href="#问题1：" class="headerlink" title="问题1："></a>问题1：</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@yjgltpc-cgzs-2 log]# mkfs.ext4 &#x2F;dev&#x2F;vda5 </span><br><span class="line">mke2fs 1.42.9 (28-Dec-2013)</span><br><span class="line">Could not stat &#x2F;dev&#x2F;vda5  --- No such file or directory</span><br><span class="line"></span><br><span class="line">The device apparently does not exist; did you specify it correctly?</span><br><span class="line"></span><br><span class="line">解决方法：执行下partprobe 命令</span><br><span class="line"></span><br><span 
class="line">partprobe</span><br><span class="line"> </span><br><span class="line"></span><br><span class="line">       partprobe包含在parted的rpm软件包中。partprobe可以修改kernel中分区表，使kernel重新读取分区表。因此，使用该命令就可以创建分区，并且在不重新启动机器的情况下使系统识别这些分区。</span><br></pre></td></tr></table></figure><h3 id="问题2-Couldn’t-create-temporary-archive-name"><a href="#问题2-Couldn’t-create-temporary-archive-name" class="headerlink" title="问题2: Couldn’t create temporary archive name."></a>问题2: Couldn’t create temporary archive name.</h3><p>原来是根分区满了，无法创建归档名称，至少需要1M的剩余空间才能操作。所以必须先删除一些临时文件。首先使用如下命令，查找根分区中大于1G的文件。</p><div class="note success flat"><p>占位</p></div>]]></content>
    
    
      
      
    <summary type="html">&lt;h2 id=&quot;1-1-Linux类型虚拟机磁盘扩容&quot;&gt;&lt;a href=&quot;#1-1-Linux类型虚拟机磁盘扩容&quot; class=&quot;headerlink&quot; title=&quot;1.1 Linux类型虚拟机磁盘扩容&quot;&gt;&lt;/a&gt;1.1 Linux类型虚拟机磁盘扩容&lt;/h2&gt;&lt;h3 id=&quot;步</summary>
      
    
    
    
    <category term="linux" scheme="https://imszz.com/categories/linux/"/>
    
    
    <category term="linux" scheme="https://imszz.com/tags/linux/"/>
    
    <category term="disk" scheme="https://imszz.com/tags/disk/"/>
    
  </entry>
  
  <entry>
    <title>k8s部署nacos-nfs版本</title>
    <link href="https://imszz.com/p/98ed58c9/"/>
    <id>https://imszz.com/p/98ed58c9/</id>
    <published>2021-04-06T06:00:25.000Z</published>
    <updated>2021-04-07T03:01:25.000Z</updated>
    
    <content type="html"><![CDATA[<p>官方给出了两种搭建集群的方式：其中一种是快速搭建方式，另一种是集群搭建方式。<br>但快速搭建的劣势是数据没有持久化，可能会出现数据丢失的问题；搭建一个高可用集群，并将数据放入mysql数据库，才是生产环境必须使用的方式。<br>可以使用自建的已有mysql。</p><p>即在这个k8s集群上搭建nacos集群。</p><h2 id="下载代码（代码中自带执行脚本的）"><a href="#下载代码（代码中自带执行脚本的）" class="headerlink" title="下载代码（代码中自带执行脚本的）"></a>下载代码（代码中自带执行脚本的）</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git clone https:&#x2F;&#x2F;github.com&#x2F;nacos-group&#x2F;nacos-k8s.git</span><br></pre></td></tr></table></figure><p>下载之后，上传代码到可执行服务器上。<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/naocs-1.png" alt="github--lena"></p><h2 id="部署-NFS"><a href="#部署-NFS" class="headerlink" title="部署 NFS"></a>部署 NFS</h2><h3 id="为什么要部署nfs呢？什么是nfs呢？"><a href="#为什么要部署nfs呢？什么是nfs呢？" class="headerlink" title="为什么要部署nfs呢？什么是nfs呢？"></a>为什么要部署nfs呢？什么是nfs呢？</h3><p>在高级使用中，Nacos在K8S拥有自动扩容缩容和数据持久化特性。请注意，如果需要使用这部分功能，请使用PVC持久卷：Nacos的自动扩容缩容依赖持久卷，数据持久化也是一样，本例中使用NFS来提供PVC。也就是说nacos是有状态服务，需要持久化磁盘存储数据。<br>NFS：Network File System(NFS)，即网络文件系统，通过网络共享存储数据。</p><h3 id="这个nfs服务部署在哪里？"><a href="#这个nfs服务部署在哪里？" class="headerlink" title="这个nfs服务部署在哪里？"></a>这个nfs服务部署在哪里？</h3><p>可以部署在任意一台能与上面的k8s集群通讯的机器上。这里选择ip:61部署nfs服务，你也可以选择ip:100等，只要网络能通就可以。</p><h3 id="安装nfs"><a href="#安装nfs" class="headerlink" title="安装nfs"></a>安装nfs</h3><h4 id="确认是否安装nfs"><a href="#确认是否安装nfs" class="headerlink" title="确认是否安装nfs"></a>确认是否安装nfs</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@master-01 nacos-k8s]# rpm -qa nfs-utils rpcbind</span><br><span class="line">nfs-utils-1.3.0-0.68.el7.x86_64</span><br><span class="line">rpcbind-0.2.0-49.el7.x86_64</span><br></pre></td></tr></table></figure><p>我这里是已经安装过的；如果没有安装，请按下面的过程安装。</p><h4 id="安装过程："><a href="#安装过程：" 
class="headerlink" title="安装过程："></a>安装过程：</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"># 服务端 ip:61机器上</span><br><span class="line">$ yum install -y nfs-utils rpcbind</span><br><span class="line"></span><br><span class="line"># 客户端 其他台机器上均需要安装这个服务</span><br><span class="line">$ yum install -y nfs-utils</span><br></pre></td></tr></table></figure><h4 id="创建共享文件夹-data-nfs和-data-mysql，当然你可以自己选择位置"><a href="#创建共享文件夹-data-nfs和-data-mysql，当然你可以自己选择位置" class="headerlink" title="创建共享文件夹/data/nfs和/data/mysql，当然你可以自己选择位置"></a>创建共享文件夹/data/nfs和/data/mysql，当然你可以自己选择位置</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">cd &#x2F;data</span><br><span class="line">mkdir nfs</span><br><span class="line">mkdir mysql</span><br></pre></td></tr></table></figure><ul><li>nfs:为nfs共享文件的目录</li><li>mysql：为mysql数据文件目录</li></ul><h4 id="配置-etc-exports文件，在此文件中写入如下内容"><a href="#配置-etc-exports文件，在此文件中写入如下内容" class="headerlink" title="配置 /etc/exports文件，在此文件中写入如下内容"></a>配置 /etc/exports文件，在此文件中写入如下内容</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">&#x2F;data&#x2F;nfs *(insecure,rw,async,no_root_squash)</span><br><span class="line">&#x2F;data&#x2F;mysql *(insecure,rw,async,no_root_squash)</span><br></pre></td></tr></table></figure><p>配置完成后需要时期生效：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">exportfs -r</span><br></pre></td></tr></table></figure><p>具体含义如下：</p><p><img src= 
"/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/naocs-2.png" alt="github--lena"></p><h4 id="启动-RPC-服务"><a href="#启动-RPC-服务" class="headerlink" title="启动 RPC 服务"></a>启动 RPC 服务</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">service rpcbind start</span><br></pre></td></tr></table></figure><h4 id="查看-NFS-服务项-rpc-服务器注册的端口列表"><a href="#查看-NFS-服务项-rpc-服务器注册的端口列表" class="headerlink" title="查看 NFS 服务项 rpc 服务器注册的端口列表"></a>查看 NFS 服务项 rpc 服务器注册的端口列表</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rpcinfo -p localhost</span><br></pre></td></tr></table></figure><p>由于已经有其他服务，所有看到的多：</p><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/naocs-3.png" alt="github--lena"></p><h4 id="启动-NFS-服务"><a href="#启动-NFS-服务" class="headerlink" title="启动 NFS 服务"></a>启动 NFS 服务</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">service nfs start</span><br></pre></td></tr></table></figure><h4 id="查看是否加载了-etc-exports中的配置："><a href="#查看是否加载了-etc-exports中的配置：" class="headerlink" title="查看是否加载了/etc/exports中的配置："></a>查看是否加载了/etc/exports中的配置：</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">showmount -e localhost</span><br></pre></td></tr></table></figure><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/naocs-4.png" alt="github--lena"></p><p>至此nfs部署完成</p><h2 id="部署-NFS剩下部分"><a href="#部署-NFS剩下部分" class="headerlink" title="部署 NFS剩下部分"></a>部署 NFS剩下部分</h2><h4 id="创建角色"><a href="#创建角色" class="headerlink" title="创建角色"></a>创建角色</h4><figure class="highlight 
plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl create -f deploy&#x2F;nfs&#x2F;rbac.yaml</span><br></pre></td></tr></table></figure><p>如果的K8S命名空间不是default,请在部署RBAC之前执行以下脚本（就不要执行上面的脚本了或者手动修改yaml文件内所属 <code>namespace</code>:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"># Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed</span><br><span class="line">NS&#x3D;$(kubectl config get-contexts|grep -e &quot;^\*&quot; |awk &#39;&#123;print $5&#125;&#39;)</span><br><span class="line">NAMESPACE&#x3D;$&#123;NS:-default&#125;</span><br><span class="line">sed -i&#39;&#39; &quot;s&#x2F;namespace:.*&#x2F;namespace: $NAMESPACE&#x2F;g&quot; .&#x2F;deploy&#x2F;nfs&#x2F;rbac.yaml</span><br></pre></td></tr></table></figure><h4 id="创建-ServiceAccount-和部署-NFS-Client-Provisioner"><a href="#创建-ServiceAccount-和部署-NFS-Client-Provisioner" class="headerlink" title="创建 ServiceAccount 和部署 NFS-Client Provisioner"></a>创建 ServiceAccount 和部署 NFS-Client Provisioner</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl create -f deploy&#x2F;nfs&#x2F;deployment.yaml</span><br></pre></td></tr></table></figure><p>内容如下：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span 
class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: v1</span><br><span class="line">kind: ServiceAccount</span><br><span class="line">metadata:</span><br><span class="line">  name: nfs-client-provisioner</span><br><span class="line">---</span><br><span class="line">kind: Deployment</span><br><span class="line">apiVersion: apps&#x2F;v1</span><br><span class="line">metadata:</span><br><span class="line">  name: nfs-client-provisioner</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  strategy:</span><br><span class="line">    type: Recreate</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: nfs-client-provisioner</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: nfs-client-provisioner</span><br><span class="line">    spec:</span><br><span class="line">      serviceAccount: nfs-client-provisioner</span><br><span class="line">      
containers:</span><br><span class="line">        - name: nfs-client-provisioner</span><br><span class="line">          image: quay.io&#x2F;external_storage&#x2F;nfs-client-provisioner:latest</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: nfs-client-root</span><br><span class="line">              mountPath: &#x2F;persistentvolumes</span><br><span class="line">          env:</span><br><span class="line">            - name: PROVISIONER_NAME</span><br><span class="line">              value: fuseim.pri&#x2F;ifs</span><br><span class="line">            - name: NFS_SERVER</span><br><span class="line">              value: 10.1.33.61 </span><br><span class="line">            - name: NFS_PATH</span><br><span class="line">              value: &#x2F;data&#x2F;nfs</span><br><span class="line">      volumes:</span><br><span class="line">        - name: nfs-client-root</span><br><span class="line">          nfs:</span><br><span class="line">            server: 10.1.33.61</span><br><span class="line">            path: &#x2F;data&#x2F;nfs</span><br></pre></td></tr></table></figure><h4 id="创建-NFS-StorageClass"><a href="#创建-NFS-StorageClass" class="headerlink" title="创建 NFS StorageClass"></a>Create the NFS StorageClass</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl create -f deploy&#x2F;nfs&#x2F;class.yaml</span><br></pre></td></tr></table></figure><h4 id="验证NFS部署成功"><a href="#验证NFS部署成功" class="headerlink" title="验证NFS部署成功"></a>Verify the NFS deployment succeeded</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pod -l app&#x3D;nfs-client-provisioner</span><br></pre></td></tr></table></figure><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/naocs-5.png" alt="github--lena"></p><h2 
id="部署数据库"><a href="#部署数据库" class="headerlink" title="部署数据库"></a>Deploy the database</h2><p>This database stores the Nacos configuration; persisting it is what keeps the data safe.</p><h4 id="安装数据库"><a href="#安装数据库" class="headerlink" title="安装数据库"></a>Install the database</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl create -f deploy&#x2F;mysql&#x2F;mysql-nfs.yaml</span><br></pre></td></tr></table></figure><p>The manifest, with my modifications, is as follows:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span 
class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: v1</span><br><span class="line">kind: ReplicationController</span><br><span class="line">metadata:</span><br><span class="line">  name: mysql</span><br><span class="line">  labels:</span><br><span class="line">    name: mysql</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  selector:</span><br><span class="line">    name: mysql</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        name: mysql</span><br><span class="line">    spec:</span><br><span class="line">      containers:</span><br><span class="line">      - name: mysql</span><br><span class="line">        image: nacos&#x2F;nacos-mysql:5.7 </span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 3306</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: mysql-data</span><br><span class="line">          mountPath: &#x2F;var&#x2F;lib&#x2F;mysql </span><br><span class="line">        env:</span><br><span class="line">        - name: MYSQL_ROOT_PASSWORD</span><br><span class="line">          value: &quot;root&quot;</span><br><span class="line">        - name: MYSQL_DATABASE</span><br><span class="line">          value: &quot;nacos_config&quot;</span><br><span class="line">        - name: MYSQL_USER</span><br><span class="line">          value: &quot;nacos&quot;</span><br><span class="line">        - name: MYSQL_PASSWORD</span><br><span class="line">          value: &quot;nacos&quot;</span><br><span class="line">      volumes:</span><br><span class="line">      - name: mysql-data</span><br><span class="line">        nfs:</span><br><span class="line">          server: 10.1.33.61 
</span><br><span class="line">          path: &#x2F;data&#x2F;mysql</span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: mysql</span><br><span class="line">  labels:</span><br><span class="line">    name: mysql</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - port: 3306</span><br><span class="line">    targetPort: 3306</span><br><span class="line">  selector:</span><br><span class="line">    name: mysql</span><br></pre></td></tr></table></figure><h4 id="验证数据库是否安装成功"><a href="#验证数据库是否安装成功" class="headerlink" title="验证数据库是否安装成功"></a>Verify the database was installed</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pod</span><br></pre></td></tr></table></figure><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/naocs-6.png" alt="github--lena"></p><h4 id="建表"><a href="#建表" class="headerlink" title="建表"></a>Create the tables</h4><p>The database initialization SQL lives at:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">https:&#x2F;&#x2F;github.com&#x2F;alibaba&#x2F;nacos&#x2F;blob&#x2F;develop&#x2F;distribution&#x2F;conf&#x2F;nacos-mysql.sql</span><br></pre></td></tr></table></figure><p>If these tables are missing from your database, create them yourself. The nacos-mysql image creates them by default; a self-hosted database can import this script.</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span 
class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span 
class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span 
class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span class="line">167</span><br><span class="line">168</span><br><span class="line">169</span><br><span class="line">170</span><br><span class="line">171</span><br><span class="line">172</span><br><span class="line">173</span><br><span class="line">174</span><br><span class="line">175</span><br><span class="line">176</span><br><span class="line">177</span><br><span class="line">178</span><br><span class="line">179</span><br><span class="line">180</span><br><span class="line">181</span><br><span class="line">182</span><br><span class="line">183</span><br><span class="line">184</span><br><span class="line">185</span><br><span class="line">186</span><br><span class="line">187</span><br><span class="line">188</span><br><span class="line">189</span><br><span class="line">190</span><br><span class="line">191</span><br><span class="line">192</span><br><span class="line">193</span><br><span class="line">194</span><br><span class="line">195</span><br><span 
class="line">196</span><br><span class="line">197</span><br><span class="line">198</span><br><span class="line">199</span><br><span class="line">200</span><br><span class="line">201</span><br><span class="line">202</span><br><span class="line">203</span><br><span class="line">204</span><br><span class="line">205</span><br><span class="line">206</span><br><span class="line">207</span><br><span class="line">208</span><br><span class="line">209</span><br><span class="line">210</span><br><span class="line">211</span><br><span class="line">212</span><br><span class="line">213</span><br><span class="line">214</span><br><span class="line">215</span><br><span class="line">216</span><br><span class="line">217</span><br><span class="line">218</span><br><span class="line">219</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line">&#x2F;*</span><br><span class="line"> * Copyright 1999-2018 Alibaba Group Holding Ltd.</span><br><span class="line"> *</span><br><span class="line"> * Licensed under the Apache License, Version 2.0 (the &quot;License&quot;);</span><br><span class="line"> * you may not use this file except in compliance with the License.</span><br><span class="line"> * You may obtain a copy of the License at</span><br><span class="line"> *</span><br><span class="line"> *      http:&#x2F;&#x2F;www.apache.org&#x2F;licenses&#x2F;LICENSE-2.0</span><br><span class="line"> *</span><br><span class="line"> * Unless required by applicable law or agreed to in writing, software</span><br><span class="line"> * distributed under the License is distributed on an &quot;AS IS&quot; BASIS,</span><br><span class="line"> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.</span><br><span class="line"> * See the License for the specific language governing permissions and</span><br><span class="line"> * limitations under the License.</span><br><span class="line"> *&#x2F;</span><br><span class="line"></span><br><span 
class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">&#x2F;*   数据库全名 &#x3D; nacos_config   *&#x2F;</span><br><span class="line">&#x2F;*   表名称 &#x3D; config_info   *&#x2F;</span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">CREATE TABLE &#96;config_info&#96; (</span><br><span class="line">  &#96;id&#96; bigint(20) NOT NULL AUTO_INCREMENT COMMENT &#39;id&#39;,</span><br><span class="line">  &#96;data_id&#96; varchar(255) NOT NULL COMMENT &#39;data_id&#39;,</span><br><span class="line">  &#96;group_id&#96; varchar(255) DEFAULT NULL,</span><br><span class="line">  &#96;content&#96; longtext NOT NULL COMMENT &#39;content&#39;,</span><br><span class="line">  &#96;md5&#96; varchar(32) DEFAULT NULL COMMENT &#39;md5&#39;,</span><br><span class="line">  &#96;gmt_create&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;创建时间&#39;,</span><br><span class="line">  &#96;gmt_modified&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;修改时间&#39;,</span><br><span class="line">  &#96;src_user&#96; text COMMENT &#39;source user&#39;,</span><br><span class="line">  &#96;src_ip&#96; varchar(50) DEFAULT NULL COMMENT &#39;source ip&#39;,</span><br><span class="line">  &#96;app_name&#96; varchar(128) DEFAULT NULL,</span><br><span class="line">  &#96;tenant_id&#96; varchar(128) DEFAULT &#39;&#39; COMMENT &#39;租户字段&#39;,</span><br><span class="line">  &#96;c_desc&#96; varchar(256) DEFAULT NULL,</span><br><span class="line">  &#96;c_use&#96; varchar(64) DEFAULT NULL,</span><br><span class="line">  &#96;effect&#96; varchar(64) DEFAULT NULL,</span><br><span class="line">  &#96;type&#96; varchar(64) DEFAULT NULL,</span><br><span class="line">  &#96;c_schema&#96; text,</span><br><span class="line">  PRIMARY KEY (&#96;id&#96;),</span><br><span class="line">  UNIQUE KEY &#96;uk_configinfo_datagrouptenant&#96; 
(&#96;data_id&#96;,&#96;group_id&#96;,&#96;tenant_id&#96;)</span><br><span class="line">) ENGINE&#x3D;InnoDB DEFAULT CHARSET&#x3D;utf8 COLLATE&#x3D;utf8_bin COMMENT&#x3D;&#39;config_info&#39;;</span><br><span class="line"></span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">&#x2F;*   数据库全名 &#x3D; nacos_config   *&#x2F;</span><br><span class="line">&#x2F;*   表名称 &#x3D; config_info_aggr   *&#x2F;</span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">CREATE TABLE &#96;config_info_aggr&#96; (</span><br><span class="line">  &#96;id&#96; bigint(20) NOT NULL AUTO_INCREMENT COMMENT &#39;id&#39;,</span><br><span class="line">  &#96;data_id&#96; varchar(255) NOT NULL COMMENT &#39;data_id&#39;,</span><br><span class="line">  &#96;group_id&#96; varchar(255) NOT NULL COMMENT &#39;group_id&#39;,</span><br><span class="line">  &#96;datum_id&#96; varchar(255) NOT NULL COMMENT &#39;datum_id&#39;,</span><br><span class="line">  &#96;content&#96; longtext NOT NULL COMMENT &#39;内容&#39;,</span><br><span class="line">  &#96;gmt_modified&#96; datetime NOT NULL COMMENT &#39;修改时间&#39;,</span><br><span class="line">  &#96;app_name&#96; varchar(128) DEFAULT NULL,</span><br><span class="line">  &#96;tenant_id&#96; varchar(128) DEFAULT &#39;&#39; COMMENT &#39;租户字段&#39;,</span><br><span class="line">  PRIMARY KEY (&#96;id&#96;),</span><br><span class="line">  UNIQUE KEY &#96;uk_configinfoaggr_datagrouptenantdatum&#96; (&#96;data_id&#96;,&#96;group_id&#96;,&#96;tenant_id&#96;,&#96;datum_id&#96;)</span><br><span class="line">) ENGINE&#x3D;InnoDB DEFAULT CHARSET&#x3D;utf8 COLLATE&#x3D;utf8_bin COMMENT&#x3D;&#39;增加租户字段&#39;;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">&#x2F;*   数据库全名 &#x3D; nacos_config   *&#x2F;</span><br><span class="line">&#x2F;*   
表名称 &#x3D; config_info_beta   *&#x2F;</span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">CREATE TABLE &#96;config_info_beta&#96; (</span><br><span class="line">  &#96;id&#96; bigint(20) NOT NULL AUTO_INCREMENT COMMENT &#39;id&#39;,</span><br><span class="line">  &#96;data_id&#96; varchar(255) NOT NULL COMMENT &#39;data_id&#39;,</span><br><span class="line">  &#96;group_id&#96; varchar(128) NOT NULL COMMENT &#39;group_id&#39;,</span><br><span class="line">  &#96;app_name&#96; varchar(128) DEFAULT NULL COMMENT &#39;app_name&#39;,</span><br><span class="line">  &#96;content&#96; longtext NOT NULL COMMENT &#39;content&#39;,</span><br><span class="line">  &#96;beta_ips&#96; varchar(1024) DEFAULT NULL COMMENT &#39;betaIps&#39;,</span><br><span class="line">  &#96;md5&#96; varchar(32) DEFAULT NULL COMMENT &#39;md5&#39;,</span><br><span class="line">  &#96;gmt_create&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;创建时间&#39;,</span><br><span class="line">  &#96;gmt_modified&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;修改时间&#39;,</span><br><span class="line">  &#96;src_user&#96; text COMMENT &#39;source user&#39;,</span><br><span class="line">  &#96;src_ip&#96; varchar(50) DEFAULT NULL COMMENT &#39;source ip&#39;,</span><br><span class="line">  &#96;tenant_id&#96; varchar(128) DEFAULT &#39;&#39; COMMENT &#39;租户字段&#39;,</span><br><span class="line">  PRIMARY KEY (&#96;id&#96;),</span><br><span class="line">  UNIQUE KEY &#96;uk_configinfobeta_datagrouptenant&#96; (&#96;data_id&#96;,&#96;group_id&#96;,&#96;tenant_id&#96;)</span><br><span class="line">) ENGINE&#x3D;InnoDB DEFAULT CHARSET&#x3D;utf8 COLLATE&#x3D;utf8_bin COMMENT&#x3D;&#39;config_info_beta&#39;;</span><br><span class="line"></span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">&#x2F;*   数据库全名 &#x3D; nacos_config   *&#x2F;</span><br><span class="line">&#x2F;*   表名称 
&#x3D; config_info_tag   *&#x2F;</span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">CREATE TABLE &#96;config_info_tag&#96; (</span><br><span class="line">  &#96;id&#96; bigint(20) NOT NULL AUTO_INCREMENT COMMENT &#39;id&#39;,</span><br><span class="line">  &#96;data_id&#96; varchar(255) NOT NULL COMMENT &#39;data_id&#39;,</span><br><span class="line">  &#96;group_id&#96; varchar(128) NOT NULL COMMENT &#39;group_id&#39;,</span><br><span class="line">  &#96;tenant_id&#96; varchar(128) DEFAULT &#39;&#39; COMMENT &#39;tenant_id&#39;,</span><br><span class="line">  &#96;tag_id&#96; varchar(128) NOT NULL COMMENT &#39;tag_id&#39;,</span><br><span class="line">  &#96;app_name&#96; varchar(128) DEFAULT NULL COMMENT &#39;app_name&#39;,</span><br><span class="line">  &#96;content&#96; longtext NOT NULL COMMENT &#39;content&#39;,</span><br><span class="line">  &#96;md5&#96; varchar(32) DEFAULT NULL COMMENT &#39;md5&#39;,</span><br><span class="line">  &#96;gmt_create&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;创建时间&#39;,</span><br><span class="line">  &#96;gmt_modified&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;修改时间&#39;,</span><br><span class="line">  &#96;src_user&#96; text COMMENT &#39;source user&#39;,</span><br><span class="line">  &#96;src_ip&#96; varchar(50) DEFAULT NULL COMMENT &#39;source ip&#39;,</span><br><span class="line">  PRIMARY KEY (&#96;id&#96;),</span><br><span class="line">  UNIQUE KEY &#96;uk_configinfotag_datagrouptenanttag&#96; (&#96;data_id&#96;,&#96;group_id&#96;,&#96;tenant_id&#96;,&#96;tag_id&#96;)</span><br><span class="line">) ENGINE&#x3D;InnoDB DEFAULT CHARSET&#x3D;utf8 COLLATE&#x3D;utf8_bin COMMENT&#x3D;&#39;config_info_tag&#39;;</span><br><span class="line"></span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">&#x2F;*   数据库全名 &#x3D; nacos_config   *&#x2F;</span><br><span 
class="line">&#x2F;*   表名称 &#x3D; config_tags_relation   *&#x2F;</span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">CREATE TABLE &#96;config_tags_relation&#96; (</span><br><span class="line">  &#96;id&#96; bigint(20) NOT NULL COMMENT &#39;id&#39;,</span><br><span class="line">  &#96;tag_name&#96; varchar(128) NOT NULL COMMENT &#39;tag_name&#39;,</span><br><span class="line">  &#96;tag_type&#96; varchar(64) DEFAULT NULL COMMENT &#39;tag_type&#39;,</span><br><span class="line">  &#96;data_id&#96; varchar(255) NOT NULL COMMENT &#39;data_id&#39;,</span><br><span class="line">  &#96;group_id&#96; varchar(128) NOT NULL COMMENT &#39;group_id&#39;,</span><br><span class="line">  &#96;tenant_id&#96; varchar(128) DEFAULT &#39;&#39; COMMENT &#39;tenant_id&#39;,</span><br><span class="line">  &#96;nid&#96; bigint(20) NOT NULL AUTO_INCREMENT,</span><br><span class="line">  PRIMARY KEY (&#96;nid&#96;),</span><br><span class="line">  UNIQUE KEY &#96;uk_configtagrelation_configidtag&#96; (&#96;id&#96;,&#96;tag_name&#96;,&#96;tag_type&#96;),</span><br><span class="line">  KEY &#96;idx_tenant_id&#96; (&#96;tenant_id&#96;)</span><br><span class="line">) ENGINE&#x3D;InnoDB DEFAULT CHARSET&#x3D;utf8 COLLATE&#x3D;utf8_bin COMMENT&#x3D;&#39;config_tag_relation&#39;;</span><br><span class="line"></span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">&#x2F;*   数据库全名 &#x3D; nacos_config   *&#x2F;</span><br><span class="line">&#x2F;*   表名称 &#x3D; group_capacity   *&#x2F;</span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">CREATE TABLE &#96;group_capacity&#96; (</span><br><span class="line">  &#96;id&#96; bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT &#39;主键ID&#39;,</span><br><span class="line">  &#96;group_id&#96; varchar(128) NOT NULL DEFAULT &#39;&#39; COMMENT &#39;Group ID，空字符表示整个集群&#39;,</span><br><span 
class="line">  &#96;quota&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;配额，0表示使用默认值&#39;,</span><br><span class="line">  &#96;usage&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;使用量&#39;,</span><br><span class="line">  &#96;max_size&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;单个配置大小上限，单位为字节，0表示使用默认值&#39;,</span><br><span class="line">  &#96;max_aggr_count&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;聚合子配置最大个数，0表示使用默认值&#39;,</span><br><span class="line">  &#96;max_aggr_size&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;单个聚合数据的子配置大小上限，单位为字节，0表示使用默认值&#39;,</span><br><span class="line">  &#96;max_history_count&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;最大变更历史数量&#39;,</span><br><span class="line">  &#96;gmt_create&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;创建时间&#39;,</span><br><span class="line">  &#96;gmt_modified&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;修改时间&#39;,</span><br><span class="line">  PRIMARY KEY (&#96;id&#96;),</span><br><span class="line">  UNIQUE KEY &#96;uk_group_id&#96; (&#96;group_id&#96;)</span><br><span class="line">) ENGINE&#x3D;InnoDB DEFAULT CHARSET&#x3D;utf8 COLLATE&#x3D;utf8_bin COMMENT&#x3D;&#39;集群、各Group容量信息表&#39;;</span><br><span class="line"></span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">&#x2F;*   数据库全名 &#x3D; nacos_config   *&#x2F;</span><br><span class="line">&#x2F;*   表名称 &#x3D; his_config_info   *&#x2F;</span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">CREATE TABLE &#96;his_config_info&#96; (</span><br><span class="line">  &#96;id&#96; bigint(64) unsigned NOT NULL,</span><br><span class="line">  &#96;nid&#96; bigint(20) unsigned NOT NULL AUTO_INCREMENT,</span><br><span class="line">  &#96;data_id&#96; varchar(255) NOT NULL,</span><br><span class="line">  
&#96;group_id&#96; varchar(128) NOT NULL,</span><br><span class="line">  &#96;app_name&#96; varchar(128) DEFAULT NULL COMMENT &#39;app_name&#39;,</span><br><span class="line">  &#96;content&#96; longtext NOT NULL,</span><br><span class="line">  &#96;md5&#96; varchar(32) DEFAULT NULL,</span><br><span class="line">  &#96;gmt_create&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,</span><br><span class="line">  &#96;gmt_modified&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,</span><br><span class="line">  &#96;src_user&#96; text,</span><br><span class="line">  &#96;src_ip&#96; varchar(50) DEFAULT NULL,</span><br><span class="line">  &#96;op_type&#96; char(10) DEFAULT NULL,</span><br><span class="line">  &#96;tenant_id&#96; varchar(128) DEFAULT &#39;&#39; COMMENT &#39;租户字段&#39;,</span><br><span class="line">  PRIMARY KEY (&#96;nid&#96;),</span><br><span class="line">  KEY &#96;idx_gmt_create&#96; (&#96;gmt_create&#96;),</span><br><span class="line">  KEY &#96;idx_gmt_modified&#96; (&#96;gmt_modified&#96;),</span><br><span class="line">  KEY &#96;idx_did&#96; (&#96;data_id&#96;)</span><br><span class="line">) ENGINE&#x3D;InnoDB DEFAULT CHARSET&#x3D;utf8 COLLATE&#x3D;utf8_bin COMMENT&#x3D;&#39;多租户改造&#39;;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">&#x2F;*   数据库全名 &#x3D; nacos_config   *&#x2F;</span><br><span class="line">&#x2F;*   表名称 &#x3D; tenant_capacity   *&#x2F;</span><br><span class="line">&#x2F;******************************************&#x2F;</span><br><span class="line">CREATE TABLE &#96;tenant_capacity&#96; (</span><br><span class="line">  &#96;id&#96; bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT &#39;主键ID&#39;,</span><br><span class="line">  &#96;tenant_id&#96; varchar(128) NOT NULL DEFAULT &#39;&#39; COMMENT &#39;Tenant ID&#39;,</span><br><span class="line">  &#96;quota&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; 
COMMENT &#39;配额，0表示使用默认值&#39;,</span><br><span class="line">  &#96;usage&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;使用量&#39;,</span><br><span class="line">  &#96;max_size&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;单个配置大小上限，单位为字节，0表示使用默认值&#39;,</span><br><span class="line">  &#96;max_aggr_count&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;聚合子配置最大个数&#39;,</span><br><span class="line">  &#96;max_aggr_size&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;单个聚合数据的子配置大小上限，单位为字节，0表示使用默认值&#39;,</span><br><span class="line">  &#96;max_history_count&#96; int(10) unsigned NOT NULL DEFAULT &#39;0&#39; COMMENT &#39;最大变更历史数量&#39;,</span><br><span class="line">  &#96;gmt_create&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;创建时间&#39;,</span><br><span class="line">  &#96;gmt_modified&#96; datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT &#39;修改时间&#39;,</span><br><span class="line">  PRIMARY KEY (&#96;id&#96;),</span><br><span class="line">  UNIQUE KEY &#96;uk_tenant_id&#96; (&#96;tenant_id&#96;)</span><br><span class="line">) ENGINE&#x3D;InnoDB DEFAULT CHARSET&#x3D;utf8 COLLATE&#x3D;utf8_bin COMMENT&#x3D;&#39;租户容量信息表&#39;;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">CREATE TABLE &#96;tenant_info&#96; (</span><br><span class="line">  &#96;id&#96; bigint(20) NOT NULL AUTO_INCREMENT COMMENT &#39;id&#39;,</span><br><span class="line">  &#96;kp&#96; varchar(128) NOT NULL COMMENT &#39;kp&#39;,</span><br><span class="line">  &#96;tenant_id&#96; varchar(128) default &#39;&#39; COMMENT &#39;tenant_id&#39;,</span><br><span class="line">  &#96;tenant_name&#96; varchar(128) default &#39;&#39; COMMENT &#39;tenant_name&#39;,</span><br><span class="line">  &#96;tenant_desc&#96; varchar(256) DEFAULT NULL COMMENT &#39;tenant_desc&#39;,</span><br><span class="line">  &#96;create_source&#96; varchar(32) DEFAULT NULL COMMENT &#39;create_source&#39;,</span><br><span 
class="line">  &#96;gmt_create&#96; bigint(20) NOT NULL COMMENT &#39;创建时间&#39;,</span><br><span class="line">  &#96;gmt_modified&#96; bigint(20) NOT NULL COMMENT &#39;修改时间&#39;,</span><br><span class="line">  PRIMARY KEY (&#96;id&#96;),</span><br><span class="line">  UNIQUE KEY &#96;uk_tenant_info_kptenantid&#96; (&#96;kp&#96;,&#96;tenant_id&#96;),</span><br><span class="line">  KEY &#96;idx_tenant_id&#96; (&#96;tenant_id&#96;)</span><br><span class="line">) ENGINE&#x3D;InnoDB DEFAULT CHARSET&#x3D;utf8 COLLATE&#x3D;utf8_bin COMMENT&#x3D;&#39;tenant_info&#39;;</span><br><span class="line"></span><br><span class="line">CREATE TABLE &#96;users&#96; (</span><br><span class="line">&#96;username&#96; varchar(50) NOT NULL PRIMARY KEY,</span><br><span class="line">&#96;password&#96; varchar(500) NOT NULL,</span><br><span class="line">&#96;enabled&#96; boolean NOT NULL</span><br><span class="line">);</span><br><span class="line"></span><br><span class="line">CREATE TABLE &#96;roles&#96; (</span><br><span class="line">&#96;username&#96; varchar(50) NOT NULL,</span><br><span class="line">&#96;role&#96; varchar(50) NOT NULL,</span><br><span class="line">UNIQUE INDEX &#96;idx_user_role&#96; (&#96;username&#96; ASC, &#96;role&#96; ASC) USING BTREE</span><br><span class="line">);</span><br><span class="line"></span><br><span class="line">CREATE TABLE &#96;permissions&#96; (</span><br><span class="line">    &#96;role&#96; varchar(50) NOT NULL,</span><br><span class="line">    &#96;resource&#96; varchar(255) NOT NULL,</span><br><span class="line">    &#96;action&#96; varchar(8) NOT NULL,</span><br><span class="line">    UNIQUE INDEX &#96;uk_role_permission&#96; (&#96;role&#96;,&#96;resource&#96;,&#96;action&#96;) USING BTREE</span><br><span class="line">);</span><br><span class="line"></span><br><span class="line">INSERT INTO users (username, password, enabled) VALUES (&#39;nacos&#39;, &#39;$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu&#39;, TRUE);</span><br><span 
class="line"></span><br><span class="line">INSERT INTO roles (username, role) VALUES (&#39;nacos&#39;, &#39;ROLE_ADMIN&#39;);</span><br></pre></td></tr></table></figure><h2 id="部署Nacos"><a href="#部署Nacos" class="headerlink" title="部署Nacos"></a>部署Nacos</h2><h4 id="修改depoly-nacos-nacos-pvc-nfs-yaml"><a href="#修改depoly-nacos-nacos-pvc-nfs-yaml" class="headerlink" title="修改depoly/nacos/nacos-pvc-nfs.yaml"></a>修改depoly/nacos/nacos-pvc-nfs.yaml</h4><p>先给出修改后的代码：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span 
class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span 
class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br></pre></td><td class="code"><pre><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: nacos-headless</span><br><span class="line">  labels:</span><br><span class="line">    app: nacos</span><br><span class="line">  annotations:</span><br><span class="line">    service.alpha.kubernetes.io&#x2F;tolerate-unready-endpoints: &quot;true&quot;</span><br><span class="line">spec:</span><br><span class="line">  
ports:</span><br><span class="line">    - port: 8848</span><br><span class="line">      name: server</span><br><span class="line">      targetPort: 8848</span><br><span class="line">    - port: 7848</span><br><span class="line">      name: rpc</span><br><span class="line">      targetPort: 7848</span><br><span class="line">  clusterIP: None</span><br><span class="line">  selector:</span><br><span class="line">    app: nacos</span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: ConfigMap</span><br><span class="line">metadata:</span><br><span class="line">  name: nacos-cm</span><br><span class="line">data:</span><br><span class="line">  mysql.db.name: &quot;nacos_config&quot;</span><br><span class="line">  mysql.port: &quot;3306&quot;</span><br><span class="line">  mysql.user: &quot;nacos&quot;</span><br><span class="line">  mysql.password: &quot;nacos&quot;</span><br><span class="line">---</span><br><span class="line">apiVersion: apps&#x2F;v1</span><br><span class="line">kind: StatefulSet</span><br><span class="line">metadata:</span><br><span class="line">  name: nacos</span><br><span class="line">spec:</span><br><span class="line">  serviceName: nacos-headless</span><br><span class="line">  replicas: 3</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: nacos</span><br><span class="line">      annotations:</span><br><span class="line">        pod.alpha.kubernetes.io&#x2F;initialized: &quot;true&quot;</span><br><span class="line">    spec:</span><br><span class="line">      affinity:</span><br><span class="line">        podAntiAffinity:</span><br><span class="line">          requiredDuringSchedulingIgnoredDuringExecution:</span><br><span class="line">            - labelSelector:</span><br><span class="line">                matchExpressions:</span><br><span class="line">                  - key: 
&quot;app&quot;</span><br><span class="line">                    operator: In</span><br><span class="line">                    values:</span><br><span class="line">                      - nacos</span><br><span class="line">              topologyKey: &quot;kubernetes.io&#x2F;hostname&quot;</span><br><span class="line">      serviceAccountName: nfs-client-provisioner</span><br><span class="line">      initContainers:</span><br><span class="line">        - name: peer-finder-plugin-install</span><br><span class="line">          image: nacos&#x2F;nacos-peer-finder-plugin:1.0</span><br><span class="line">          imagePullPolicy: Always</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - mountPath: &quot;&#x2F;home&#x2F;nacos&#x2F;plugins&#x2F;peer-finder&quot;</span><br><span class="line">              name: plugindir</span><br><span class="line">      containers:</span><br><span class="line">        - name: nacos</span><br><span class="line">          imagePullPolicy: Always</span><br><span class="line">          image: nacos&#x2F;nacos-server:latest </span><br><span class="line">          resources:</span><br><span class="line">            requests:</span><br><span class="line">              memory: &quot;2Gi&quot;</span><br><span class="line">              cpu: &quot;500m&quot;</span><br><span class="line">          ports:</span><br><span class="line">            - containerPort: 8848</span><br><span class="line">              name: client-port</span><br><span class="line">            - containerPort: 7848</span><br><span class="line">              name: rpc</span><br><span class="line">          env:</span><br><span class="line">            - name: NACOS_REPLICAS</span><br><span class="line">              value: &quot;3&quot;</span><br><span class="line">            - name: SERVICE_NAME</span><br><span class="line">              value: &quot;nacos-headless&quot;</span><br><span class="line">            - name: 
DOMAIN_NAME</span><br><span class="line">              value: &quot;cluster.local&quot;</span><br><span class="line">            - name: POD_NAMESPACE</span><br><span class="line">              valueFrom:</span><br><span class="line">                fieldRef:</span><br><span class="line">                  apiVersion: v1</span><br><span class="line">                  fieldPath: metadata.namespace</span><br><span class="line">            - name: MYSQL_SERVICE_DB_NAME</span><br><span class="line">              valueFrom:</span><br><span class="line">                configMapKeyRef:</span><br><span class="line">                  name: nacos-cm</span><br><span class="line">                  key: mysql.db.name</span><br><span class="line">            - name: MYSQL_SERVICE_PORT</span><br><span class="line">              valueFrom:</span><br><span class="line">                configMapKeyRef:</span><br><span class="line">                  name: nacos-cm</span><br><span class="line">                  key: mysql.port</span><br><span class="line">            - name: MYSQL_SERVICE_USER</span><br><span class="line">              valueFrom:</span><br><span class="line">                configMapKeyRef:</span><br><span class="line">                  name: nacos-cm</span><br><span class="line">                  key: mysql.user</span><br><span class="line">            - name: MYSQL_SERVICE_PASSWORD</span><br><span class="line">              valueFrom:</span><br><span class="line">                configMapKeyRef:</span><br><span class="line">                  name: nacos-cm</span><br><span class="line">                  key: mysql.password</span><br><span class="line">            - name: NACOS_SERVER_PORT</span><br><span class="line">              value: &quot;8848&quot;</span><br><span class="line">            - name: NACOS_APPLICATION_PORT</span><br><span class="line">              value: &quot;8848&quot;</span><br><span class="line">            - name: 
PREFER_HOST_MODE</span><br><span class="line">              value: &quot;hostname&quot;</span><br><span class="line">          volumeMounts:</span><br><span class="line">            - name: plugindir</span><br><span class="line">              mountPath: &#x2F;home&#x2F;nacos&#x2F;plugins&#x2F;peer-finder</span><br><span class="line">            - name: datadir</span><br><span class="line">              mountPath: &#x2F;home&#x2F;nacos&#x2F;data</span><br><span class="line">            - name: logdir</span><br><span class="line">              mountPath: &#x2F;home&#x2F;nacos&#x2F;logs</span><br><span class="line">  volumeClaimTemplates:</span><br><span class="line">    - metadata:</span><br><span class="line">        name: plugindir</span><br><span class="line">        annotations:</span><br><span class="line">          volume.beta.kubernetes.io&#x2F;storage-class: &quot;managed-nfs-storage&quot;</span><br><span class="line">      spec:</span><br><span class="line">        accessModes: [ &quot;ReadWriteMany&quot; ]</span><br><span class="line">        resources:</span><br><span class="line">          requests:</span><br><span class="line">            storage: 5Gi</span><br><span class="line">    - metadata:</span><br><span class="line">        name: datadir</span><br><span class="line">        annotations:</span><br><span class="line">          volume.beta.kubernetes.io&#x2F;storage-class: &quot;managed-nfs-storage&quot;</span><br><span class="line">      spec:</span><br><span class="line">        accessModes: [ &quot;ReadWriteMany&quot; ]</span><br><span class="line">        resources:</span><br><span class="line">          requests:</span><br><span class="line">            storage: 5Gi</span><br><span class="line">    - metadata:</span><br><span class="line">        name: logdir</span><br><span class="line">        annotations:</span><br><span class="line">          volume.beta.kubernetes.io&#x2F;storage-class: &quot;managed-nfs-storage&quot;</span><br><span 
class="line">      spec:</span><br><span class="line">        accessModes: [ &quot;ReadWriteMany&quot; ]</span><br><span class="line">        resources:</span><br><span class="line">          requests:</span><br><span class="line">            storage: 5Gi</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: nacos</span><br></pre></td></tr></table></figure><h4 id="创建nacos"><a href="#创建nacos" class="headerlink" title="创建nacos"></a>创建nacos</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl create -f nacos-k8s&#x2F;deploy&#x2F;nacos&#x2F;nacos-pvc-nfs.yaml</span><br></pre></td></tr></table></figure><h4 id="验证Nacos节点启动成功"><a href="#验证Nacos节点启动成功" class="headerlink" title="验证Nacos节点启动成功"></a>验证Nacos节点启动成功</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pod -l app&#x3D;nacos</span><br></pre></td></tr></table></figure><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/naocs-7.png" alt="github--lena"></p><h2 id="页面访问"><a href="#页面访问" class="headerlink" title="页面访问"></a>页面访问</h2><p>查看nacos服务对外暴露的端口</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get svc -o wide</span><br></pre></td></tr></table></figure><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/k8s/naocs-8.png" alt="github--lena"></p><h4 id="为nacos创建ingres代理"><a href="#为nacos创建ingres代理" class="headerlink" title="为nacos创建ingres代理"></a>为nacos创建ingres代理</h4><p>nacos-ingress.yaml</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span 
class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: extensions&#x2F;v1beta1</span><br><span class="line">kind: Ingress</span><br><span class="line">metadata:</span><br><span class="line">  name: nacos-ingress</span><br><span class="line">spec:</span><br><span class="line">  rules:</span><br><span class="line">  - host: www.nacos.com </span><br><span class="line">    http:</span><br><span class="line">      paths:</span><br><span class="line">      - path: &#x2F;nacos</span><br><span class="line">        backend:</span><br><span class="line">          serviceName: nacos-headless </span><br><span class="line">          servicePort: 8848 </span><br></pre></td></tr></table></figure><p>执行下面命令就可以执行成功了</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f nacos-ingress.yaml </span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="如果需要刪除后重建则特别注意"><a href="#如果需要刪除后重建则特别注意" class="headerlink" title="如果需要刪除后重建则特别注意"></a>如果需要刪除后重建则特别注意</h2><h4 id="pv删除"><a href="#pv删除" class="headerlink" title="pv删除"></a>pv删除</h4><p>通过k8s 图形界面存储中删除，删除后红色中横线，并没有删除！</p><h4 id="使用命令查看是否删除完成"><a href="#使用命令查看是否删除完成" class="headerlink" title="使用命令查看是否删除完成"></a>使用命令查看是否删除完成</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pv </span><br></pre></td></tr></table></figure><h4 id="使用命令删除重新创建即可"><a href="#使用命令删除重新创建即可" class="headerlink" 
title="使用命令删除重新创建即可"></a>使用命令删除重新创建即可</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl patch pv pvc-122b45c0-78fb-4185-9a29-4b2f023ba25e  -p &#39;&#123;&quot;metadata&quot;:&#123;&quot;finalizers&quot;:null&#125;&#125;&#39;</span><br></pre></td></tr></table></figure><h4 id="创建的过程"><a href="#创建的过程" class="headerlink" title="创建的过程"></a>创建的过程</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line">1.先删除</span><br><span class="line">kubectl delete  -f deploy&#x2F;nfs&#x2F;class.yaml</span><br><span class="line"></span><br><span class="line">2.创建</span><br><span class="line">kubectl create  -f deploy&#x2F;nfs&#x2F;class.yaml</span><br><span class="line"></span><br><span class="line">3. 删除ServiceAccount 和部署 NFS-Client Provisioner</span><br><span class="line">kubectl delete -f deploy&#x2F;nfs&#x2F;deployment.yaml</span><br><span class="line"></span><br><span class="line">4. 
创建ServiceAccount 和部署 NFS-Client Provisioner</span><br><span class="line">kubectl create -f deploy&#x2F;nfs&#x2F;deployment.yaml</span><br><span class="line"></span><br><span class="line">5. 验证NFS部署成功 </span><br><span class="line">kubectl get pod -l app&#x3D;nfs-client-provisioner</span><br><span class="line"></span><br><span class="line">6. 删除数据库</span><br><span class="line">kubectl delete -f deploy&#x2F;mysql&#x2F;mysql-nfs.yaml</span><br><span class="line"></span><br><span class="line">7.安装数据库</span><br><span class="line">kubectl create -f deploy&#x2F;mysql&#x2F;mysql-nfs.yaml</span><br><span class="line"></span><br><span class="line">8. 验证是否成功</span><br><span class="line">kubectl get pod</span><br><span class="line"></span><br><span class="line">9. 建表</span><br><span class="line">进入shell创建</span><br><span class="line"></span><br><span class="line">10.删除nacos集群</span><br><span class="line">kubectl delete -f deploy&#x2F;nacos&#x2F;nacos-pvc-nfs.yaml</span><br><span class="line"></span><br><span class="line">11.创建nacos集群</span><br><span class="line">kubectl create -f deploy&#x2F;nacos&#x2F;nacos-pvc-nfs.yaml</span><br></pre></td></tr></table></figure><blockquote><p>参考：<a href="https://blog.csdn.net/fsjwin/article/details/110503029">https://blog.csdn.net/fsjwin/article/details/110503029</a><br><a href="https://nacos.io/zh-cn/docs/use-nacos-with-kubernetes.html">https://nacos.io/zh-cn/docs/use-nacos-with-kubernetes.html</a></p></blockquote>]]></content>
    
    
      
      
    <summary type="html">&lt;p&gt;The official docs give two ways to set up a Nacos cluster: a quick-start deployment and a full cluster deployment.&lt;br&gt;The drawback of the quick start is that data is not persisted and may be lost; a highly available cluster that stores its data in a MySQL database is what a production environment requires.&lt;br&gt;An existing self-hosted MySQL instance can be used.&lt;/p&gt;</summary>
      
    
    
    
    <category term="Kubernetes" scheme="https://imszz.com/categories/Kubernetes/"/>
    
    
    <category term="Kubernetes" scheme="https://imszz.com/tags/Kubernetes/"/>
    
    <category term="nacos" scheme="https://imszz.com/tags/nacos/"/>
    
  </entry>
  
  <entry>
    <title>Deploying StatefulSet MySQL on K8s</title>
    <link href="https://imszz.com/p/25e4d739/"/>
    <id>https://imszz.com/p/25e4d739/</id>
    <published>2021-04-01T06:00:25.000Z</published>
    <updated>2021-04-06T06:01:25.000Z</updated>
    
    <content type="html"><![CDATA[<h2 id="Statefulset-MySQL"><a href="#Statefulset-MySQL" class="headerlink" title="Statefulset MySQL"></a>Statefulset MySQL</h2><p>本例部署一个多副本的 MySQL 数据库。<br>示例应用的拓扑结构包含一个主服务器和多个副本，使用异步的、基于行（Row-Based）的数据复制。</p><div class="note warning flat"><p>说明：这不是生产环境的配置。尤其注意，MySQL 设置都使用的是不安全的默认值，这是因为我们想把重点放在 Kubernetes 中运行有状态应用程序的一般模式上。</p></div><h2 id="创建存储卷"><a href="#创建存储卷" class="headerlink" title="创建存储卷"></a>创建存储卷</h2><p>集群需要用到存储，先准备持久卷（PersistentVolume，简称 PV），这里以 YAML 文件创建 3 个 PV。如后续伸缩，需要同步更新 PersistentVolume 配置。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span
class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br></pre></td><td class="code"><pre><span class="line">kind: PersistentVolume</span><br><span class="line">apiVersion: v1</span><br><span class="line">metadata:</span><br><span class="line">  name: k8s-pv-my1</span><br><span class="line">  labels:</span><br><span class="line">    type: mysql</span><br><span class="line">spec:</span><br><span class="line">  capacity:</span><br><span class="line">    storage: 20Gi</span><br><span class="line">  storageClassName: mysql</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  hostPath:</span><br><span class="line">    path: &quot;&#x2F;var&#x2F;lib&#x2F;mysql&quot;</span><br><span class="line">  persistentVolumeReclaimPolicy: Retain</span><br><span class="line">---</span><br><span class="line">kind: PersistentVolume</span><br><span class="line">apiVersion: v1</span><br><span class="line">metadata:</span><br><span class="line">  name: k8s-pv-my2</span><br><span class="line">  labels:</span><br><span class="line">    type: mysql</span><br><span class="line">spec:</span><br><span class="line">  capacity:</span><br><span class="line">    storage: 20Gi</span><br><span class="line">  storageClassName: mysql</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  hostPath:</span><br><span class="line">    path: &quot;&#x2F;var&#x2F;lib&#x2F;mysql&quot;</span><br><span class="line">  persistentVolumeReclaimPolicy: Retain</span><br><span class="line">---</span><br><span class="line">kind: PersistentVolume</span><br><span class="line">apiVersion: v1</span><br><span class="line">metadata:</span><br><span class="line">  name: k8s-pv-my3</span><br><span class="line">  labels:</span><br><span class="line">    type: mysql</span><br><span class="line">spec:</span><br><span 
class="line">  capacity:</span><br><span class="line">    storage: 20Gi</span><br><span class="line">  storageClassName: mysql</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  hostPath:</span><br><span class="line">    path: &quot;&#x2F;var&#x2F;lib&#x2F;mysql&quot;</span><br><span class="line">  persistentVolumeReclaimPolicy: Retain</span><br></pre></td></tr></table></figure><h3 id="部署及存储卷状态查询"><a href="#部署及存储卷状态查询" class="headerlink" title="部署及存储卷状态查询"></a>部署及存储卷状态查询</h3><blockquote><p>注意：如果是使用云服务提供的云盘，注意购买云盘要与node节点使用区一致， 还要注意 node 类型支持那些云盘类型</p></blockquote><p>这里发现pv和pvc还没有绑定状态是Available</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f persistent-volume.yaml</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pv  </span><br></pre></td></tr></table></figure><h2 id="部署-MySQL"><a href="#部署-MySQL" class="headerlink" title="部署 MySQL"></a>部署 MySQL</h2><p>MySQL 示例部署包含一个 ConfigMap、两个 Service 与一个 StatefulSet。</p><h3 id="ConfigMap"><a href="#ConfigMap" class="headerlink" title="ConfigMap"></a>ConfigMap</h3><p>使用以下的 YAML 配置文件创建 ConfigMap ：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span 
class="line">apiVersion: v1</span><br><span class="line">kind: ConfigMap</span><br><span class="line">metadata:</span><br><span class="line">  name: mysql</span><br><span class="line">  labels:</span><br><span class="line">    app: mysql</span><br><span class="line">data:</span><br><span class="line">  master.cnf: |</span><br><span class="line">    # Apply this config only on the master.</span><br><span class="line">    [mysqld]</span><br><span class="line">    log-bin    </span><br><span class="line">  slave.cnf: |</span><br><span class="line">    # Apply this config only on slaves.</span><br><span class="line">    [mysqld]</span><br><span class="line">    super-read-only    </span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f mysql-configmap.yaml </span><br></pre></td></tr></table></figure><p>这个 ConfigMap 提供 <code>my.cnf</code> 覆盖设置，使你可以独立控制 MySQL 主服务器和从服务器的配置。在这里，你希望主服务器能够将复制日志提供给副本服务器，并且希望副本服务器拒绝任何不是通过复制进行的写操作。</p><p>ConfigMap 本身没有什么特别之处，因而也不会出现不同部分应用于不同的 Pod 的情况。每个 Pod 都会在初始化时基于 StatefulSet 控制器提供的信息决定要查看的部分。</p><h3 id="服务"><a href="#服务" class="headerlink" title="服务"></a>服务</h3><p>使用以下 YAML 配置文件创建服务：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span 
class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br></pre></td><td class="code"><pre><span class="line"># Headless service for stable DNS entries of StatefulSet members.</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: mysql</span><br><span class="line">  labels:</span><br><span class="line">    app: mysql</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - name: mysql</span><br><span class="line">    port: 3306</span><br><span class="line">  clusterIP: None</span><br><span class="line">  selector:</span><br><span class="line">    app: mysql</span><br><span class="line">---</span><br><span class="line"># Client service for connecting to any MySQL instance for reads.</span><br><span class="line"># For writes, you must instead connect to the master: mysql-0.mysql.</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: mysql-read</span><br><span class="line">  labels:</span><br><span class="line">    app: mysql</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - name: mysql</span><br><span class="line">    port: 3306</span><br><span class="line">  selector:</span><br><span class="line">    app: mysql</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f mysql-services.yaml</span><br></pre></td></tr></table></figure><p>这个无头服务给 StatefulSet 控制器为集合中每个 Pod 创建的 DNS 
条目提供了一个宿主。因为服务名为 <code>mysql</code>，所以可以通过在同一 Kubernetes 集群和名字空间中的任何其他 Pod 内解析 <code>&lt;Pod 名称&gt;.mysql</code> 来访问 Pod。</p><p>客户端服务称为 <code>mysql-read</code>，是一种常规服务，具有其自己的集群 IP。该集群 IP 在报告就绪的所有 MySQL Pod 之间分配连接，可能的端点集合包括 MySQL 主节点和所有副本节点。</p><p>请注意，只有读查询才能使用负载平衡的客户端服务。因为只有一个 MySQL 主服务器，所以客户端应直接连接到 MySQL 主服务器 Pod（通过其在无头服务中的 DNS 条目）以执行写入操作。</p><h3 id="StatefulSet"><a href="#StatefulSet" class="headerlink" title="StatefulSet"></a>StatefulSet</h3><p>最后，使用以下 YAML 配置文件创建 StatefulSet：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span
class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span 
class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span class="line">167</span><br></pre></td><td 
class="code"><pre><span class="line">apiVersion: apps&#x2F;v1</span><br><span class="line">kind: StatefulSet</span><br><span class="line">metadata:</span><br><span class="line">  name: mysql</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: mysql</span><br><span class="line">  serviceName: mysql</span><br><span class="line">  replicas: 3</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: mysql</span><br><span class="line">    spec:</span><br><span class="line">      initContainers:</span><br><span class="line">      - name: init-mysql</span><br><span class="line">        image: mysql:5.7</span><br><span class="line">        command:</span><br><span class="line">        - bash</span><br><span class="line">        - &quot;-c&quot;</span><br><span class="line">        - |</span><br><span class="line">          set -ex</span><br><span class="line">          # Generate mysql server-id from pod ordinal index.</span><br><span class="line">          [[ &#96;hostname&#96; &#x3D;~ -([0-9]+)$ ]] || exit 1</span><br><span class="line">          ordinal&#x3D;$&#123;BASH_REMATCH[1]&#125;</span><br><span class="line">          echo [mysqld] &gt; &#x2F;mnt&#x2F;conf.d&#x2F;server-id.cnf</span><br><span class="line">          # Add an offset to avoid reserved server-id&#x3D;0 value.</span><br><span class="line">          echo server-id&#x3D;$((100 + $ordinal)) &gt;&gt; &#x2F;mnt&#x2F;conf.d&#x2F;server-id.cnf</span><br><span class="line">          # Copy appropriate conf.d files from config-map to emptyDir.</span><br><span class="line">          if [[ $ordinal -eq 0 ]]; then</span><br><span class="line">            cp &#x2F;mnt&#x2F;config-map&#x2F;master.cnf &#x2F;mnt&#x2F;conf.d&#x2F;</span><br><span class="line">          else</span><br><span 
class="line">            cp &#x2F;mnt&#x2F;config-map&#x2F;slave.cnf &#x2F;mnt&#x2F;conf.d&#x2F;</span><br><span class="line">          fi          </span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: conf</span><br><span class="line">          mountPath: &#x2F;mnt&#x2F;conf.d</span><br><span class="line">        - name: config-map</span><br><span class="line">          mountPath: &#x2F;mnt&#x2F;config-map</span><br><span class="line">      - name: clone-mysql</span><br><span class="line">        image: ist0ne&#x2F;xtrabackup:1.0</span><br><span class="line">        command:</span><br><span class="line">        - bash</span><br><span class="line">        - &quot;-c&quot;</span><br><span class="line">        - |</span><br><span class="line">          set -ex</span><br><span class="line">          # Skip the clone if data already exists.</span><br><span class="line">          [[ -d &#x2F;var&#x2F;lib&#x2F;mysql&#x2F;mysql ]] &amp;&amp; exit 0</span><br><span class="line">          # Skip the clone on master (ordinal index 0).</span><br><span class="line">          [[ &#96;hostname&#96; &#x3D;~ -([0-9]+)$ ]] || exit 1</span><br><span class="line">          ordinal&#x3D;$&#123;BASH_REMATCH[1]&#125;</span><br><span class="line">          [[ $ordinal -eq 0 ]] &amp;&amp; exit 0</span><br><span class="line">          # Clone data from previous peer.</span><br><span class="line">          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C &#x2F;var&#x2F;lib&#x2F;mysql</span><br><span class="line">          # Prepare the backup.</span><br><span class="line">          xtrabackup --prepare --target-dir&#x3D;&#x2F;var&#x2F;lib&#x2F;mysql          </span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: data</span><br><span class="line">          mountPath: &#x2F;var&#x2F;lib&#x2F;mysql</span><br><span class="line">          subPath: mysql</span><br><span class="line">        - 
name: conf</span><br><span class="line">          mountPath: &#x2F;etc&#x2F;mysql&#x2F;conf.d</span><br><span class="line">      containers:</span><br><span class="line">      - name: mysql</span><br><span class="line">        image: mysql:5.7</span><br><span class="line">        env:</span><br><span class="line">        - name: MYSQL_ALLOW_EMPTY_PASSWORD</span><br><span class="line">          value: &quot;1&quot;</span><br><span class="line">        ports:</span><br><span class="line">        - name: mysql</span><br><span class="line">          containerPort: 3306</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: data</span><br><span class="line">          mountPath: &#x2F;var&#x2F;lib&#x2F;mysql</span><br><span class="line">          subPath: mysql</span><br><span class="line">        - name: conf</span><br><span class="line">          mountPath: &#x2F;etc&#x2F;mysql&#x2F;conf.d</span><br><span class="line">        resources:</span><br><span class="line">          requests:</span><br><span class="line">            cpu: 500m</span><br><span class="line">            memory: 1Gi</span><br><span class="line">        livenessProbe:</span><br><span class="line">          exec:</span><br><span class="line">            command: [&quot;mysqladmin&quot;, &quot;ping&quot;]</span><br><span class="line">          initialDelaySeconds: 30</span><br><span class="line">          periodSeconds: 10</span><br><span class="line">          timeoutSeconds: 5</span><br><span class="line">        readinessProbe:</span><br><span class="line">          exec:</span><br><span class="line">            # Check we can execute queries over TCP (skip-networking is off).</span><br><span class="line">            command: [&quot;mysql&quot;, &quot;-h&quot;, &quot;127.0.0.1&quot;, &quot;-e&quot;, &quot;SELECT 1&quot;]</span><br><span class="line">          initialDelaySeconds: 5</span><br><span class="line">          periodSeconds: 2</span><br><span 
class="line">          timeoutSeconds: 1</span><br><span class="line">      - name: xtrabackup</span><br><span class="line">        image: ist0ne&#x2F;xtrabackup:1.0</span><br><span class="line">        ports:</span><br><span class="line">        - name: xtrabackup</span><br><span class="line">          containerPort: 3307</span><br><span class="line">        command:</span><br><span class="line">        - bash</span><br><span class="line">        - &quot;-c&quot;</span><br><span class="line">        - |</span><br><span class="line">          set -ex</span><br><span class="line">          cd &#x2F;var&#x2F;lib&#x2F;mysql</span><br><span class="line"></span><br><span class="line">          # Determine binlog position of cloned data, if any.</span><br><span class="line">          if [[ -f xtrabackup_slave_info &amp;&amp; &quot;x$(&lt;xtrabackup_slave_info)&quot; !&#x3D; &quot;x&quot; ]]; then</span><br><span class="line">            # XtraBackup already generated a partial &quot;CHANGE MASTER TO&quot; query</span><br><span class="line">            # because we&#39;re cloning from an existing slave. (Need to remove the tailing semicolon!)</span><br><span class="line">            cat xtrabackup_slave_info | sed -E &#39;s&#x2F;;$&#x2F;&#x2F;g&#39; &gt; change_master_to.sql.in</span><br><span class="line">            # Ignore xtrabackup_binlog_info in this case (it&#39;s useless).</span><br><span class="line">            rm -f xtrabackup_slave_info xtrabackup_binlog_info</span><br><span class="line">          elif [[ -f xtrabackup_binlog_info ]]; then</span><br><span class="line">            # We&#39;re cloning directly from master. 
Parse binlog position.</span><br><span class="line">            [[ &#96;cat xtrabackup_binlog_info&#96; &#x3D;~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1</span><br><span class="line">            rm -f xtrabackup_binlog_info xtrabackup_slave_info</span><br><span class="line">            echo &quot;CHANGE MASTER TO MASTER_LOG_FILE&#x3D;&#39;$&#123;BASH_REMATCH[1]&#125;&#39;,\</span><br><span class="line">                  MASTER_LOG_POS&#x3D;$&#123;BASH_REMATCH[2]&#125;&quot; &gt; change_master_to.sql.in</span><br><span class="line">          fi</span><br><span class="line"></span><br><span class="line">          # Check if we need to complete a clone by starting replication.</span><br><span class="line">          if [[ -f change_master_to.sql.in ]]; then</span><br><span class="line">            echo &quot;Waiting for mysqld to be ready (accepting connections)&quot;</span><br><span class="line">            until mysql -h 127.0.0.1 -e &quot;SELECT 1&quot;; do sleep 1; done</span><br><span class="line"></span><br><span class="line">            echo &quot;Initializing replication from clone position&quot;</span><br><span class="line">            mysql -h 127.0.0.1 \</span><br><span class="line">                  -e &quot;$(&lt;change_master_to.sql.in), \</span><br><span class="line">                          MASTER_HOST&#x3D;&#39;mysql-0.mysql&#39;, \</span><br><span class="line">                          MASTER_USER&#x3D;&#39;root&#39;, \</span><br><span class="line">                          MASTER_PASSWORD&#x3D;&#39;&#39;, \</span><br><span class="line">                          MASTER_CONNECT_RETRY&#x3D;10; \</span><br><span class="line">                        START SLAVE;&quot; || exit 1</span><br><span class="line">            # In case of container restart, attempt this at-most-once.</span><br><span class="line">            mv change_master_to.sql.in change_master_to.sql.orig</span><br><span class="line">          fi</span><br><span class="line"></span><br><span 
class="line">          # Start a server to send backups when requested by peers.</span><br><span class="line">          exec ncat --listen --keep-open --send-only --max-conns&#x3D;1 3307 -c \</span><br><span class="line">            &quot;xtrabackup --backup --slave-info --stream&#x3D;xbstream --host&#x3D;127.0.0.1 --user&#x3D;root&quot;          </span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: data</span><br><span class="line">          mountPath: &#x2F;var&#x2F;lib&#x2F;mysql</span><br><span class="line">          subPath: mysql</span><br><span class="line">        - name: conf</span><br><span class="line">          mountPath: &#x2F;etc&#x2F;mysql&#x2F;conf.d</span><br><span class="line">        resources:</span><br><span class="line">          requests:</span><br><span class="line">            cpu: 100m</span><br><span class="line">            memory: 100Mi</span><br><span class="line">      volumes:</span><br><span class="line">      - name: conf</span><br><span class="line">        emptyDir: &#123;&#125;</span><br><span class="line">      - name: config-map</span><br><span class="line">        configMap:</span><br><span class="line">          name: mysql</span><br><span class="line">  volumeClaimTemplates:</span><br><span class="line">  - metadata:</span><br><span class="line">      name: data</span><br><span class="line">    spec:</span><br><span class="line">      storageClassName: mysql</span><br><span class="line">      accessModes: [&quot;ReadWriteOnce&quot;]</span><br><span class="line">      resources:</span><br><span class="line">        requests:</span><br><span class="line">          storage: 20Gi</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f mysql-statefulset.yaml</span><br></pre></td></tr></table></figure><p>你可以通过运行以下命令查看启动进度：</p><figure 
class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pods -l app&#x3D;mysql --watch</span><br></pre></td></tr></table></figure><p>一段时间后，你应该看到所有 3 个 Pod 进入 Running 状态：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">NAME      READY     STATUS    RESTARTS   AGE</span><br><span class="line">mysql-0   2&#x2F;2       Running   0          2m</span><br><span class="line">mysql-1   2&#x2F;2       Running   0          1m</span><br><span class="line">mysql-2   2&#x2F;2       Running   0          1m</span><br></pre></td></tr></table></figure><p>输入 <strong>Ctrl+C</strong> 结束 watch 操作。如果你看不到任何进度，请确保已启用动态 PersistentVolume 预配器。</p><h2 id="了解有状态的-Pod-初始化"><a href="#了解有状态的-Pod-初始化" class="headerlink" title="了解有状态的 Pod 初始化"></a>了解有状态的 Pod 初始化</h2><p>StatefulSet 控制器按序数索引顺序、每次一个地启动 Pod，并且会等到每个 Pod 报告就绪后才启动下一个 Pod。</p><p>此外，控制器为每个 Pod 分配一个唯一、稳定的名称，形如 <code>&lt;statefulset 名称&gt;-&lt;序数索引&gt;</code>，其结果是 Pod 名为 <code>mysql-0</code>、<code>mysql-1</code> 和 <code>mysql-2</code>。</p><p>上述 StatefulSet 清单中的 Pod 模板利用这些属性来执行 MySQL 副本的有序启动。</p><h3 id="生成配置"><a href="#生成配置" class="headerlink" title="生成配置"></a>生成配置</h3><p>在启动 Pod 规约中的任何容器之前，Pod 首先按顺序运行所有的 Init 容器。</p><p>第一个名为 <code>init-mysql</code> 的 Init 容器根据序号索引生成特殊的 MySQL 配置文件。</p><p>该脚本通过从 Pod 名称的末尾提取索引来确定自己的序号索引，而 Pod 名称由 <code>hostname</code> 命令返回。然后将序数（带有数字偏移量以避免保留值）保存到 MySQL conf.d 目录中的文件 server-id.cnf。这一操作将 StatefulSet 所提供的唯一、稳定的标识转换为 MySQL 服务器的 ID，<br>而这些 ID 也是需要唯一性、稳定性保证的。</p><p>通过将内容复制到 conf.d 中，<code>init-mysql</code> 容器中的脚本也可以应用 ConfigMap 中的 <code>master.cnf</code> 或 <code>slave.cnf</code>。由于示例部署结构由单个 MySQL 主节点和任意数量的副本节点组成，因此脚本仅将序数 <code>0</code> 指定为主节点，而将其他所有节点指定为副本节点。</p><p>与 StatefulSet 控制器的部署顺序保证相结合，可以确保 MySQL
主服务器在创建副本服务器之前已准备就绪，以便它们可以开始复制。</p><h3 id="克隆现有数据"><a href="#克隆现有数据" class="headerlink" title="克隆现有数据"></a>克隆现有数据</h3><p>通常，当新 Pod 作为副本节点加入集合时，必须假定 MySQL 主节点可能已经有数据，还必须假设复制日志可能不会一直追溯到时间的开始。</p><p>这些保守的假设是允许正在运行的 StatefulSet 随时间扩大和缩小、而不是固定在其初始大小的关键。</p><p>第二个名为 <code>clone-mysql</code> 的 Init 容器，第一次在带有空 PersistentVolume 的副本 Pod 上启动时，会在该 Pod 上执行克隆操作。<br>这意味着它将从另一个运行中的 Pod 复制所有现有数据，使其本地状态足够一致，从而可以开始从主服务器复制。</p><p>MySQL 本身不提供执行此操作的机制，因此本示例使用了一种流行的开源工具 Percona XtraBackup。在克隆期间，源 MySQL 服务器性能可能会受到影响。为了最大程度地减少对 MySQL 主服务器的影响，该脚本指示每个 Pod 从序号较低的 Pod 中克隆。可以这样做的原因是 StatefulSet 控制器始终确保在启动 Pod N + 1 之前 Pod N 已准备就绪。</p><h3 id="开始复制"><a href="#开始复制" class="headerlink" title="开始复制"></a>开始复制</h3><p>Init 容器成功完成后，应用容器将运行。MySQL Pod 由运行实际 <code>mysqld</code> 服务的 <code>mysql</code> 容器和充当辅助工具（sidecar）的 <code>xtrabackup</code> 容器组成。</p><p><code>xtrabackup</code> sidecar 容器查看克隆的数据文件，并确定是否有必要在副本服务器上初始化 MySQL 复制。如果是这样，它将等待 <code>mysqld</code> 准备就绪，然后使用从 XtraBackup 克隆文件中提取的复制参数执行 <code>CHANGE MASTER TO</code> 和 <code>START SLAVE</code> 命令。</p><p>副本服务器开始复制后，它会记住其 MySQL 主服务器，并且如果服务器重新启动或连接中断也会自动重新连接。另外，因为副本服务器会以其稳定的 DNS 名称查找主服务器（<code>mysql-0.mysql</code>），即使由于重新调度而获得新的 Pod IP，它们也会自动找到主服务器。</p><p>最后，开始复制后，<code>xtrabackup</code> 容器监听来自其他 Pod 的连接，处理其数据克隆请求。如果 StatefulSet 扩大规模，或者下一个 Pod 失去其 PersistentVolumeClaim 并需要重新克隆，则此服务器将无限期保持运行。</p><h2 id="发送客户端请求"><a href="#发送客户端请求" class="headerlink" title="发送客户端请求"></a>发送客户端请求</h2><p>你可以通过运行带有 <code>mysql:5.7</code> 镜像的临时容器并运行 <code>mysql</code> 客户端二进制文件，将测试查询发送到 MySQL 主服务器（主机名 <code>mysql-0.mysql</code>）。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">#进入主内部</span><br><span class="line">kubectl exec -it mysql-0  -n
&lt;namespace&gt;  -- &#x2F;bin&#x2F;sh</span><br><span class="line"></span><br><span class="line">#执行或者单独另启动一个客户端执行</span><br><span class="line">mysql -h mysql-0.mysql &lt;&lt;EOF</span><br><span class="line">CREATE DATABASE test;</span><br><span class="line">CREATE TABLE test.messages (message VARCHAR(250));</span><br><span class="line">INSERT INTO test.messages VALUES (&#39;hello&#39;);</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><p>使用主机名 <code>mysql-read</code> 将测试查询发送到任何报告为就绪的服务器：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">#进入主内部</span><br><span class="line">kubectl exec -it mysql-0  -n &lt;namespace&gt;  -- &#x2F;bin&#x2F;sh</span><br><span class="line"></span><br><span class="line">#执行或者单独另启动一个客户端执行</span><br><span class="line">mysql -h mysql-read -e &quot;SELECT * FROM test.messages&quot;</span><br></pre></td></tr></table></figure><p>你应该获得如下输出：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">+---------+</span><br><span class="line">| message |</span><br><span class="line">+---------+</span><br><span class="line">| hello   |</span><br><span class="line">+---------+</span><br></pre></td></tr></table></figure><p>为了演示 <code>mysql-read</code> 服务在服务器之间分配连接，你可以在循环中运行 <code>SELECT @@server_id</code>：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span 
class="line">#进入主内部</span><br><span class="line">kubectl exec -it mysql-0  -n &lt;namespace&gt;  -- &#x2F;bin&#x2F;sh</span><br><span class="line"></span><br><span class="line">#执行或者单独另启动一个客户端执行</span><br><span class="line">bash -ic &quot;while sleep 1; do mysql -h mysql-read -e &#39;SELECT @@server_id,NOW()&#39;; done&quot;</span><br></pre></td></tr></table></figure><p>你应该看到报告的 <code>@@server_id</code> 发生随机变化，因为每次尝试连接时都可能选择了不同的端点：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">#如果在主节点内部执行，结果只会显示 ID &#96;102&#96; 与 &#96;101&#96;；若另启动一个客户端执行，则还会显示 ID &#96;100&#96;，因为主节点默认 ID 为 &#96;100&#96;</span><br><span class="line">+-------------+---------------------+</span><br><span class="line">| @@server_id | NOW()               |</span><br><span class="line">+-------------+---------------------+</span><br><span class="line">|         102 | 2006-01-02 15:04:06 |</span><br><span class="line">+-------------+---------------------+</span><br><span class="line">+-------------+---------------------+</span><br><span class="line">| @@server_id | NOW()               |</span><br><span class="line">+-------------+---------------------+</span><br><span class="line">|         101 | 2006-01-02 15:04:07 |</span><br><span class="line">+-------------+---------------------+</span><br></pre></td></tr></table></figure><p>要停止循环，可以按 <strong>Ctrl+C</strong>，但是让它在另一个窗口中运行非常有用，这样你就可以看到以下步骤的效果。</p><h2 id="模拟-Pod-和-Node-的宕机时间"><a href="#模拟-Pod-和-Node-的宕机时间" class="headerlink" title="模拟 Pod 和 Node 的宕机时间"></a>模拟 Pod 和 Node 的宕机时间</h2><p>为了证明从副本节点池（而不是单个服务器）读取数据可以提高可用性，请在使 Pod 退出 Ready 状态时，保持上述 <code>SELECT @@server_id</code> 循环一直运行。</p><h3 id="破坏就绪态探测"><a href="#破坏就绪态探测" class="headerlink" title="破坏就绪态探测"></a>破坏就绪态探测</h3><p><code>mysql</code> 容器的就绪态探测会运行命令 <code>mysql -h 127.0.0.1 -e &#39;SELECT 1&#39;</code>，以确保服务器已启动并能够执行查询。</p><p>迫使就绪态探测失败的一种方法就是中止该命令：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl exec mysql-2 -c mysql -- mv &#x2F;usr&#x2F;bin&#x2F;mysql &#x2F;usr&#x2F;bin&#x2F;mysql.off</span><br></pre></td></tr></table></figure><p>此命令会进入 Pod <code>mysql-2</code> 的实际容器文件系统，重命名 <code>mysql</code> 命令，导致就绪态探测无法找到它。几秒钟后，Pod 会报告其中一个容器未就绪。你可以通过运行以下命令进行检查：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pod mysql-2</span><br></pre></td></tr></table></figure><p>在 <code>READY</code> 列中查找 <code>1&#x2F;2</code>：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">NAME      READY     STATUS    RESTARTS   AGE</span><br><span class="line">mysql-2   1&#x2F;2       Running   0          3m</span><br></pre></td></tr></table></figure><p>此时，你应该会看到 <code>SELECT @@server_id</code> 循环继续运行，尽管它不再报告 <code>102</code>。回想一下，<code>init-mysql</code> 脚本将 <code>server-id</code> 定义为 <code>100 + $ordinal</code>，因此服务器 ID <code>102</code> 对应于 Pod <code>mysql-2</code>。</p><p>现在修复 Pod，几秒钟后它应该重新出现在循环输出中：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl exec mysql-2 -c mysql -- mv &#x2F;usr&#x2F;bin&#x2F;mysql.off &#x2F;usr&#x2F;bin&#x2F;mysql</span><br></pre></td></tr></table></figure><h3 id="删除-Pods"><a href="#删除-Pods" class="headerlink" title="删除 Pods"></a>删除 Pods</h3><p>如果删除了 Pod，则 StatefulSet 还会重新创建 Pod，类似于 ReplicaSet 对无状态 Pod 
所做的操作。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl delete pod mysql-2</span><br></pre></td></tr></table></figure><p>StatefulSet 控制器注意到不再存在 <code>mysql-2</code> Pod，于是创建一个具有相同名称并链接到相同 PersistentVolumeClaim 的新 Pod。你应该看到服务器 ID <code>102</code> 从循环输出中消失了一段时间，然后又自行出现。</p><h3 id="腾空节点"><a href="#腾空节点" class="headerlink" title="腾空节点"></a>腾空节点</h3><p>如果将你的 Kubernetes 集群中的某个节点设置为<code>不可调度</code>，则可以通过发出以下命令来模拟节点停机（就好像节点正在被升级）。</p><p>首先确定 MySQL Pod 之一在哪个节点上：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pod mysql-2 -o wide</span><br></pre></td></tr></table></figure><p>节点名称应显示在最后一列中：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">NAME      READY     STATUS    RESTARTS   AGE       IP            NODE</span><br><span class="line">mysql-2   2&#x2F;2       Running   0          15m       10.244.5.27   kubernetes-node-9l2t</span><br></pre></td></tr></table></figure><p>然后通过运行以下命令腾空节点，该命令将其保护起来，以使新的 Pod 不能调度到该节点，然后逐出所有现有的 Pod。将 <code>&lt;节点名称&gt;</code> 替换为在上一步中找到的节点名称。</p><p>这可能会影响节点上的其他应用程序，因此最好 <code>仅在测试集群中执行此操作</code>。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl drain &lt;节点名称&gt; --force --delete-local-data --ignore-daemonsets</span><br></pre></td></tr></table></figure><p>现在，你可以看到 Pod 被重新调度到其他节点上：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pod mysql-2 -o wide --watch</span><br></pre></td></tr></table></figure><p>它看起来应该像这样：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">NAME      READY   STATUS          RESTARTS   AGE       IP            NODE</span><br><span class="line">mysql-2   2&#x2F;2     Terminating     0          15m       10.244.1.56   kubernetes-node-9l2t</span><br><span class="line">[...]</span><br><span class="line">mysql-2   0&#x2F;2     Pending         0          0s        &lt;none&gt;        kubernetes-node-fjlm</span><br><span class="line">mysql-2   0&#x2F;2     Init:0&#x2F;2        0          0s        &lt;none&gt;        kubernetes-node-fjlm</span><br><span class="line">mysql-2   0&#x2F;2     Init:1&#x2F;2        0          20s       10.244.5.32   kubernetes-node-fjlm</span><br><span class="line">mysql-2   0&#x2F;2     PodInitializing 0          21s       10.244.5.32   kubernetes-node-fjlm</span><br><span class="line">mysql-2   1&#x2F;2     Running         0          22s       10.244.5.32   kubernetes-node-fjlm</span><br><span class="line">mysql-2   2&#x2F;2     Running         0          30s       10.244.5.32   kubernetes-node-fjlm</span><br></pre></td></tr></table></figure><p>你应该再次看到服务器 ID <code>102</code> 从 <code>SELECT @@server_id</code> 循环输出中消失一段时间，然后自行出现。</p><p>现在去掉节点保护（Uncordon），使其恢复为正常模式：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl uncordon &lt;节点名称&gt;</span><br></pre></td></tr></table></figure><h2 id="扩展副本节点数量"><a href="#扩展副本节点数量" class="headerlink" title="扩展副本节点数量"></a>扩展副本节点数量</h2><p>使用 MySQL 复制，你可以通过添加副本节点来扩展读取查询的能力。使用 StatefulSet，你可以使用单个命令执行此操作：</p><blockquote><p>注意：要有满足伸缩需求的 PersistentVolume 配置</p></blockquote><figure class="highlight 
plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl scale statefulset mysql --replicas&#x3D;5</span><br></pre></td></tr></table></figure><p>查看新的 Pod 的运行情况：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pods -l app&#x3D;mysql --watch</span><br></pre></td></tr></table></figure><p>一旦 Pod 启动，你应该看到服务器 ID <code>103</code> 和 <code>104</code> 开始出现在 <code>SELECT @@server_id</code> 循环输出中。</p><p>你还可以验证这些新服务器具有在它们存在之前添加的数据：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">#进入主内部</span><br><span class="line">kubectl exec -it mysql-0  -n &lt;namespace&gt;  -- &#x2F;bin&#x2F;sh</span><br><span class="line"></span><br><span class="line">#执行或者单独另启动一个客户端执行</span><br><span class="line">mysql -h mysql-3.mysql -e &quot;SELECT * FROM test.messages&quot;</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">+---------+</span><br><span class="line">| message |</span><br><span class="line">+---------+</span><br><span class="line">| hello   |</span><br><span class="line">+---------+</span><br></pre></td></tr></table></figure><p>向下缩容操作也是很平滑的：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl scale statefulset mysql --replicas&#x3D;3</span><br></pre></td></tr></table></figure><p>但是请注意，按比例扩大会自动创建新的 
PersistentVolumeClaims，而按比例缩小不会自动删除这些 PVC。这使你可以选择保留那些初始化的 PVC，以更快地进行缩放，或者在删除它们之前提取数据。</p><p>你可以通过运行以下命令查看此信息：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pvc -l app&#x3D;mysql</span><br></pre></td></tr></table></figure><p>这表明，尽管将 StatefulSet 缩小为3，所有5个 PVC 仍然存在：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">NAME           STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE</span><br><span class="line">data-mysql-0   Bound     pvc-8acbf5dc-b103-11e6-93fa-42010a800002   10Gi       RWO           20m</span><br><span class="line">data-mysql-1   Bound     pvc-8ad39820-b103-11e6-93fa-42010a800002   10Gi       RWO           20m</span><br><span class="line">data-mysql-2   Bound     pvc-8ad69a6d-b103-11e6-93fa-42010a800002   10Gi       RWO           20m</span><br><span class="line">data-mysql-3   Bound     pvc-50043c45-b1c5-11e6-93fa-42010a800002   10Gi       RWO           2m</span><br><span class="line">data-mysql-4   Bound     pvc-500a9957-b1c5-11e6-93fa-42010a800002   10Gi       RWO           2m</span><br></pre></td></tr></table></figure><p>如果你不打算重复使用多余的 PVC，则可以删除它们：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl delete pvc data-mysql-3</span><br><span class="line">kubectl delete pvc data-mysql-4</span><br></pre></td></tr></table></figure><ol><li><p>通过在终端上按 <strong>Ctrl+C</strong> 取消 <code>SELECT @@server_id</code> 循环，或从另一个终端运行以下命令：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl delete pod mysql-client-loop --now</span><br></pre></td></tr></table></figure></li><li><p>删除 StatefulSet。这也会开始终止 Pod。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl delete statefulset mysql</span><br></pre></td></tr></table></figure></li><li><p>验证 Pod 消失。他们可能需要一些时间才能完成终止。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pods -l app&#x3D;mysql</span><br></pre></td></tr></table></figure><p>当上述命令返回如下内容时，你就知道 Pod 已终止：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">No resources found.</span><br></pre></td></tr></table></figure></li><li><p>删除 ConfigMap、Services 和 PersistentVolumeClaims。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl delete configmap,service,pvc -l app&#x3D;mysql</span><br></pre></td></tr></table></figure></li><li><p>如果你手动供应 PersistentVolume，则还需要手动删除它们，并释放下层资源。如果你使用了动态预配器，当得知你删除 PersistentVolumeClaims 时，它将自动删除 PersistentVolumes。一些动态预配器（例如用于 EBS 和 PD 的预配器）也会在删除 PersistentVolumes 时释放下层资源。</p></li></ol><blockquote><p>详细参考：<a href="https://kubernetes.io/zh/docs/tasks/run-application/run-replicated-stateful-application/">https://kubernetes.io/zh/docs/tasks/run-application/run-replicated-stateful-application/</a><br><a href="https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/">https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/</a></p></blockquote>]]></content>
    
    
      
      
    <summary type="html">&lt;h2 id=&quot;Statefulset-MySQL&quot;&gt;&lt;a href=&quot;#Statefulset-MySQL&quot; class=&quot;headerlink&quot; title=&quot;Statefulset MySQL&quot;&gt;&lt;/a&gt;Statefulset MySQL&lt;/h2&gt;&lt;p&gt;此例是多副本的 My</summary>
      
    
    
    
    <category term="Kubernetes" scheme="https://imszz.com/categories/Kubernetes/"/>
    
    
    <category term="Kubernetes" scheme="https://imszz.com/tags/Kubernetes/"/>
    
    <category term="mysql" scheme="https://imszz.com/tags/mysql/"/>
    
  </entry>
  
  <entry>
    <title>K8S基于ingress-nginx实现灰度发布</title>
    <link href="https://imszz.com/p/a6d2532/"/>
    <id>https://imszz.com/p/a6d2532/</id>
    <published>2021-03-30T06:00:25.000Z</published>
    <updated>2021-03-30T07:41:25.000Z</updated>
    
    <content type="html"><![CDATA[<h2 id="注解说明"><a href="#注解说明" class="headerlink" title="注解说明"></a>注解说明</h2><p>通过给 Ingress 资源指定 Nginx Ingress 所支持的 annotation 可实现金丝雀发布。需给服务创建2个 Ingress，其中1个常规 Ingress，另1个为<code>nginx.ingress.kubernetes.io/canary: &quot;true&quot;</code>· 固定的 annotation 的 Ingress，称为 Canary Ingress。Canary Ingress 一般代表新版本的服务，结合另外针对流量切分策略的 annotation 一起配置即可实现多种场景的金丝雀发布。以下为相关 annotation 的详细介绍：</p><ul><li><code>nginx.ingress.kubernetes.io/canary-by-header</code><br>表示如果请求头中包含指定的 header 名称，并且值为 always，就将该请求转发给该 Ingress 定义的对应后端服务。如果值为 never 则不转发，可以用于回滚到旧版。如果为其他值则忽略该 annotation。</li><li><code>nginx.ingress.kubernetes.io/canary-by-header-value</code><br>该 annotation 可以作为 canary-by-header 的补充，可指定请求头为自定义值，包含但不限于 always 或 never。当请求头的值命中指定的自定义值时，请求将会转发给该 Ingress 定义的对应后端服务，如果是其它值则忽略该 annotation。</li><li><code>nginx.ingress.kubernetes.io/canary-by-header-pattern</code><br>与 canary-by-header-value 类似，区别为该 annotation 用正则表达式匹配请求头的值，而不是只固定某一个值。如果该 annotation 与 canary-by-header-value 同时存在，该 annotation 将被忽略。</li><li><code>nginx.ingress.kubernetes.io/canary-by-cookie</code><br>与 canary-by-header 类似，该 annotation 用于 cookie，仅支持 always 和 never。</li><li><code>nginx.ingress.kubernetes.io/canary-weight</code><br>表示 Canary Ingress 所分配流量的比例的百分比，取值范围 [0-100]。例如，设置为10，则表示分配10%的流量给 Canary Ingress 对应的后端服务。</li></ul><blockquote><p>说明：<br>以上规则会按优先顺序进行评估，优先顺序为： <code>canary-by-header -&gt; canary-by-cookie -&gt; canary-weight</code>。<br>当 Ingress 被标记为 Canary Ingress 时，除了 <code>nginx.ingress.kubernetes.io/load-balance</code> 和 <code>nginx.ingress.kubernetes.io/upstream-hash-by</code> 外，所有其他非 Canary 注释都将被忽略。</p></blockquote><p>可以把以上的四个 <code>annotation</code> 分为三类：</p><ol><li>基于Request Header的流量切分，适用于灰度发布以及AB测试场景</li><li>基于Cookie的流量切分，适用于灰度发布以及AB测试场景</li><li>基于服务权重的流量切分，适用于蓝绿发布场景</li></ol><p>总体划分为以下两大类：</p><ol><li><p>基于权重的 Canary 规则<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/Canary-weight.jpg" alt="github--lena"></p></li><li><p>基于用户请求的 
Canary 规则<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/Canary-cookie.jpg" alt="github--lena"></p></li></ol><blockquote><p>注意：Canary 功能是在 Ingress-Nginx 0.21.0 版本中引入的，因此要确保 Ingress 版本满足要求</p></blockquote><h2 id="部署正式版本服务"><a href="#部署正式版本服务" class="headerlink" title="部署正式版本服务"></a>部署正式版本服务</h2><p>首先创建一个 deployment 代表正式版本的服务，编写 yaml 内容如下：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br></pre></td><td class="code"><pre><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Namespace</span><br><span class="line">metadata:</span><br><span class="line">  name: ns-myapp</span><br><span class="line">  labels:</span><br><span class="line">    name: ns-myapp</span><br><span class="line"></span><br><span class="line">---</span><br><span class="line">apiVersion: apps&#x2F;v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  name: production</span><br><span class="line">  namespace: ns-myapp</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: production</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: production</span><br><span class="line">    spec:</span><br><span class="line">      containers:</span><br><span class="line">      - name: production</span><br><span class="line">        image: mirrorgooglecontainers&#x2F;echoserver:1.10</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 8080</span><br><span class="line">        env:</span><br><span class="line">          - name: NODE_NAME</span><br><span class="line">            valueFrom:</span><br><span class="line"> 
             fieldRef:</span><br><span class="line">                fieldPath: spec.nodeName</span><br><span class="line">          - name: POD_NAME</span><br><span class="line">            valueFrom:</span><br><span class="line">              fieldRef:</span><br><span class="line">                fieldPath: metadata.name</span><br><span class="line">          - name: POD_NAMESPACE</span><br><span class="line">            valueFrom:</span><br><span class="line">              fieldRef:</span><br><span class="line">                fieldPath: metadata.namespace</span><br><span class="line">          - name: POD_IP</span><br><span class="line">            valueFrom:</span><br><span class="line">              fieldRef:</span><br><span class="line">                fieldPath: status.podIP</span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: production</span><br><span class="line">  namespace: ns-myapp</span><br><span class="line">  labels:</span><br><span class="line">    app: production</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - port: 80</span><br><span class="line">    targetPort: 8080</span><br><span class="line">    protocol: TCP</span><br><span class="line">    name: http</span><br><span class="line">  selector:</span><br><span class="line">    app: production</span><br><span class="line">    </span><br></pre></td></tr></table></figure><p>为这个服务创建 Ingress 路由规则，yaml 文件内容如下：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span 
class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: extensions&#x2F;v1beta1</span><br><span class="line">kind: Ingress</span><br><span class="line">metadata:</span><br><span class="line">  name: production</span><br><span class="line">  namespace: ns-myapp</span><br><span class="line">  annotations:</span><br><span class="line">    kubernetes.io&#x2F;ingress.class: nginx</span><br><span class="line">spec:</span><br><span class="line">  rules:</span><br><span class="line">  - host: ingress.test.com</span><br><span class="line">    http:</span><br><span class="line">      paths:</span><br><span class="line">      - backend:</span><br><span class="line">          serviceName: production</span><br><span class="line">          servicePort: 80</span><br><span class="line">          </span><br></pre></td></tr></table></figure><p>应用以上 yaml 文件，创建完成后在 k8s 中查看到如下信息：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[k8s-master ~]# kubectl get ingress -n ns-myapp</span><br><span class="line">NAME         CLASS    HOSTS              ADDRESS        PORTS   AGE</span><br><span class="line">production   &lt;none&gt;   ingress.test.com   10.16.13.201   80      4m25s</span><br><span class="line"></span><br><span class="line">[k8s-master ~]# kubectl get pod -n ns-myapp</span><br><span class="line">NAME                          READY   STATUS    RESTARTS   AGE</span><br><span class="line">production-5698c4565c-jmjn5   1&#x2F;1     Running   0          
7m11s</span><br></pre></td></tr></table></figure><p>此时在命令行中访问 <code>ingress.test.com</code> 可以看到如下内容：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br></pre></td><td class="code"><pre><span class="line"># curl ingress.test.com</span><br><span class="line"></span><br><span class="line">Hostname: production-5698c4565c-jmjn5</span><br><span class="line"></span><br><span class="line">Pod Information:</span><br><span class="line">    node name:  dumlog013201</span><br><span class="line">    pod name:   production-5698c4565c-jmjn5</span><br><span class="line">    pod namespace:  ns-myapp</span><br><span class="line">    pod IP: 10.42.0.74</span><br><span class="line"></span><br><span class="line">Server values:</span><br><span class="line">    server_version&#x3D;nginx: 1.13.3 - lua: 
10008</span><br><span class="line"></span><br><span class="line">Request Information:</span><br><span class="line">    client_address&#x3D;10.16.13.201</span><br><span class="line">    method&#x3D;GET</span><br><span class="line">    real path&#x3D;&#x2F;</span><br><span class="line">    query&#x3D;</span><br><span class="line">    request_version&#x3D;1.1</span><br><span class="line">    request_scheme&#x3D;http</span><br><span class="line">    request_uri&#x3D;http:&#x2F;&#x2F;ingress.test.com:8080&#x2F;</span><br><span class="line"></span><br><span class="line">Request Headers:</span><br><span class="line">    accept&#x3D;*&#x2F;*</span><br><span class="line">    host&#x3D;ingress.test.com</span><br><span class="line">    user-agent&#x3D;curl&#x2F;7.64.1</span><br><span class="line">    x-forwarded-for&#x3D;10.2.130.18</span><br><span class="line">    x-forwarded-host&#x3D;ingress.test.com</span><br><span class="line">    x-forwarded-port&#x3D;80</span><br><span class="line">    x-forwarded-proto&#x3D;http</span><br><span class="line">    x-real-ip&#x3D;10.2.130.18</span><br><span class="line">    x-request-id&#x3D;3019362be59228ee2284f5737fa39eb1</span><br><span class="line">    x-scheme&#x3D;http</span><br><span class="line"></span><br><span class="line">Request Body:</span><br><span class="line">    -no body in request-</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="部署-Canary-版本服务"><a href="#部署-Canary-版本服务" class="headerlink" title="部署 Canary 版本服务"></a>部署 Canary 版本服务</h2><p>接下来创建一个 Canary 版本的服务，用于作为灰度测试。</p><p>参考将上述 Production 版本的 <code>production.yaml</code> 文件，再创建一个 Canary 版本的应用，包括一个 Canary 版本的 <code>deployment</code> 和 <code>service</code> (为方便快速演示，仅需将 production.yaml 的 <code>deployment</code>和 <code>service</code> 中的关键字 <code>production</code> 直接替换为 <code>canary</code>，实际场景中可能涉及业务代码变更)。</p><h3 id="基于权重的-Canary-规则测试"><a href="#基于权重的-Canary-规则测试" class="headerlink" title="基于权重的 Canary 规则测试"></a>基于权重的 Canary 
规则测试</h3><p>基于权重的流量切分的典型应用场景就是<code>蓝绿部署</code>，可通过将权重设置为 0 或 100 来实现。例如，可将 Green 版本设置为主要部分，并将 Blue 版本的入口配置为 Canary。最初，将权重设置为 0，因此不会将流量代理到 Blue 版本。一旦新版本测试和验证都成功后，即可将 Blue 版本的权重设置为 100，即所有流量从 Green 版本转向 Blue。</p><p>使用以下 <code>canary.ingress</code> 的 yaml 文件再创建一个基于权重的 Canary 版本的应用路由 (Ingress)。</p><blockquote><p>注意：要开启灰度发布机制，首先需设置 <code>nginx.ingress.kubernetes.io/canary: &quot;true&quot;</code> 启用 Canary，以下 Ingress 示例的 Canary 版本使用了基于权重进行流量切分的 annotation 规则，将分配 30% 的流量请求发送至 Canary 版本。</p></blockquote><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: extensions&#x2F;v1beta1</span><br><span class="line">kind: Ingress</span><br><span class="line">metadata:</span><br><span class="line">  name: canary</span><br><span class="line">  namespace: ns-myapp</span><br><span class="line">  annotations:</span><br><span class="line">    kubernetes.io&#x2F;ingress.class: nginx</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary: &quot;true&quot;</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary-weight: &quot;30&quot;</span><br><span class="line">spec:</span><br><span class="line">  rules:</span><br><span class="line">  - host: ingress.test.com</span><br><span class="line">    http:</span><br><span class="line">      paths:</span><br><span class="line">      - backend:</span><br><span class="line">          serviceName: 
canary</span><br><span class="line">          servicePort: 80</span><br></pre></td></tr></table></figure><p>接下来在命令行中使用如下命令访问域名 ingress.test.com 100 次，计算每个版本分配流量的占比：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">c&#x3D;0;p&#x3D;0;for i in $(seq 100); do result&#x3D;$(curl -s ingress.test.com | grep  Hostname | awk -F: &#39;&#123;print $2&#125;&#39;); [[ $&#123;result&#125; &#x3D;~ ^[[:space:]]canary ]] &amp;&amp; let c++ || let p++; done;echo &quot;production:$&#123;p&#125;; canary:$&#123;c&#125;;&quot;</span><br></pre></td></tr></table></figure><p>可以得到如下结果：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">production:73; canary:27;</span><br></pre></td></tr></table></figure><p>注意这里权重不是一个精确的百分比，使用过程当中，只会看到一个近似分布。</p><h3 id="基于用户请求的-Canary-规则测试"><a href="#基于用户请求的-Canary-规则测试" class="headerlink" title="基于用户请求的 Canary 规则测试"></a>基于用户请求的 Canary 规则测试</h3><h4 id="基于-Request-Header"><a href="#基于-Request-Header" class="headerlink" title="基于 Request Header"></a>基于 Request Header</h4><p>基于 Request Header 进行流量切分的典型应用场景即<code>灰度发布或 A/B 测试场景</code>。</p><p>给 Canary 版本的 Ingress 新增一条 annotation：<code>nginx.ingress.kubernetes.io&#x2F;canary-by-header: canary</code>（这里的 annotation 的 value 可以是任意值），使当前的 Ingress 实现基于 Request Header 进行流量切分。</p><p>将 Canary 版本 Ingress 的 yaml 文件修改为如下内容：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: extensions&#x2F;v1beta1</span><br><span class="line">kind: Ingress</span><br><span class="line">metadata:</span><br><span class="line">  name: canary</span><br><span class="line">  namespace: ns-myapp</span><br><span class="line">  annotations:</span><br><span class="line">    kubernetes.io&#x2F;ingress.class: nginx</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary: &quot;true&quot;</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary-weight: &quot;30&quot;</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary-by-header: &quot;canary&quot;</span><br><span class="line">spec:</span><br><span class="line">  rules:</span><br><span class="line">  - host: ingress.test.com</span><br><span class="line">    http:</span><br><span class="line">      paths:</span><br><span class="line">      - backend:</span><br><span class="line">          serviceName: canary</span><br><span class="line">          servicePort: 80</span><br><span class="line"></span><br></pre></td></tr></table></figure><blockquote><p>说明：金丝雀规则按优先顺序 canary-by-header -&gt; canary-by-cookie -&gt; canary-weight 进行评估，因此上面的 ingress 将忽略原有 canary-weight 的规则。</p></blockquote><p>由于上面的 ingress 规则中没有对 canary-by-header: <code>canary</code> 提供具体的值，也就是 <code>nginx.ingress.kubernetes.io&#x2F;canary-by-header-value</code> 规则，所以在访问的时候，只可以为 <code>canary</code> 赋值 <code>never</code> 或 <code>always</code>，当 header 信息为 <code>canary:never</code> 时，请求将不会发送到 canary 版本；当 header 信息为 <code>canary:always</code> 时，请求将会一直发送到 canary 版本。示例如下：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[k8s-master ~ 
]# curl -s -H &quot;canary:never&quot; ingress.test.com | grep Hostname</span><br><span class="line">Hostname: production-5698c4565c-jmjn5</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[k8s-master ~ ]# curl -s -H &quot;canary:always&quot; ingress.test.com | grep Hostname</span><br><span class="line">Hostname: canary-79c899d85-992nw</span><br></pre></td></tr></table></figure><p>也可以在上一个 annotation （即 canary-by-header）的基础上添加一条 <code>nginx.ingress.kubernetes.io/canary-by-header-value: user-value</code> 。用于通知 Ingress 将匹配到的请求路由到 Canary Ingress 中指定的服务。</p><p>将 Canary 版本 Ingress 的 yaml 文件修改为如下内容：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: extensions&#x2F;v1beta1</span><br><span class="line">kind: Ingress</span><br><span class="line">metadata:</span><br><span class="line">  name: canary</span><br><span class="line">  namespace: ns-myapp</span><br><span class="line">  annotations:</span><br><span class="line">    kubernetes.io&#x2F;ingress.class: nginx</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary: &quot;true&quot;</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary-weight: 
&quot;30&quot;</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary-by-header: &quot;canary&quot;</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary-by-header-value: &quot;true&quot;</span><br><span class="line">spec:</span><br><span class="line">  rules:</span><br><span class="line">  - host: ingress.test.com</span><br><span class="line">    http:</span><br><span class="line">      paths:</span><br><span class="line">      - backend:</span><br><span class="line">          serviceName: canary</span><br><span class="line">          servicePort: 80</span><br></pre></td></tr></table></figure><p>上面的 ingress 规则设置了 header 信息为 <code>canary:true</code>，也就是只有满足这个 header 值时才会路由到 canary 版本。示例如下：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[k8s-master ~ ]# curl -s ingress.test.com | grep Hostname</span><br><span class="line">Hostname: production-5698c4565c-jmjn5</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">[k8s-master ~ ]# curl -s -H &quot;canary:test&quot; ingress.test.com | grep Hostname</span><br><span class="line">Hostname: production-5698c4565c-jmjn5</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[k8s-master ~ ]# curl -s -H &quot;canary:true&quot; ingress.test.com | grep Hostname</span><br><span class="line">Hostname: canary-79c899d85-992nw</span><br></pre></td></tr></table></figure><h4 id="基于-Cookie-的-Canary-规则测试"><a href="#基于-Cookie-的-Canary-规则测试" class="headerlink" title="基于 Cookie 的 Canary 规则测试"></a>基于 Cookie 的 Canary 规则测试</h4><p>与基于 Request Header 的 
annotation 用法规则类似。例如在 <code>A/B 测试场景</code> 下，需要让地域为北京的用户访问 Canary 版本。那么可以将 cookie 的 annotation 设置为 <code>nginx.ingress.kubernetes.io&#x2F;canary-by-cookie: &quot;user_from_beijing&quot;</code>，此时后台可对登录用户的请求进行检查，如果该用户的访问来源是北京，则设置 cookie <code>user_from_beijing</code> 的值为 <code>always</code>，这样就可以确保北京的用户仅访问 Canary 版本。</p><p>将 Canary 版本 Ingress 的 yaml 文件修改为如下内容：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: extensions&#x2F;v1beta1</span><br><span class="line">kind: Ingress</span><br><span class="line">metadata:</span><br><span class="line">  name: canary</span><br><span class="line">  namespace: ns-myapp</span><br><span class="line">  annotations:</span><br><span class="line">    kubernetes.io&#x2F;ingress.class: nginx</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary: &quot;true&quot;</span><br><span class="line">    nginx.ingress.kubernetes.io&#x2F;canary-by-cookie: &quot;user_from_beijing&quot;</span><br><span class="line">spec:</span><br><span class="line">  rules:</span><br><span class="line">  - host: ingress.test.com</span><br><span class="line">    http:</span><br><span class="line">      paths:</span><br><span class="line">      - backend:</span><br><span class="line">          serviceName: canary</span><br><span class="line">          servicePort: 80</span><br></pre></td></tr></table></figure><p>访问示例如下：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[k8s-master ~ ]# curl -s -b &quot;user_from_beijing&#x3D;always&quot; ingress.test.com | grep Hostname</span><br><span class="line">Hostname: canary-79c899d85-992nw</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[k8s-master ~ ]# curl -s -b &quot;user_from_beijing&#x3D;no&quot; ingress.test.com | grep Hostname</span><br><span class="line">Hostname: production-5698c4565c-jmjn5</span><br></pre></td></tr></table></figure><blockquote><p>多实例 Ingress Controller 的部署可参考：<br><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary</a><br><a href="https://cloud.tencent.com/document/product/457/48907">https://cloud.tencent.com/document/product/457/48907</a></p></blockquote>]]></content>
    
    
      
      
    <summary type="html">&lt;h2 id=&quot;注解说明&quot;&gt;&lt;a href=&quot;#注解说明&quot; class=&quot;headerlink&quot; title=&quot;注解说明&quot;&gt;&lt;/a&gt;注解说明&lt;/h2&gt;&lt;p&gt;通过给 Ingress 资源指定 Nginx Ingress 所支持的 annotation 可实现金丝雀发布。需给服务创建</summary>
      
    
    
    
    <category term="Kubernetes" scheme="https://imszz.com/categories/Kubernetes/"/>
    
    
    <category term="Kubernetes" scheme="https://imszz.com/tags/Kubernetes/"/>
    
    <category term="ingress-nginx" scheme="https://imszz.com/tags/ingress-nginx/"/>
    
  </entry>
  
  <entry>
    <title>K8S 部署 Statefulset zookeeper</title>
    <link href="https://imszz.com/p/b8e5c788/"/>
    <id>https://imszz.com/p/b8e5c788/</id>
    <published>2021-03-30T06:00:25.000Z</published>
    <updated>2021-04-01T06:01:25.000Z</updated>
    
    <content type="html"><![CDATA[<h2 id="创建存储卷"><a href="#创建存储卷" class="headerlink" title="创建存储卷"></a>创建存储卷</h2><p>Zookeeper集群需要用到存储，这里需要准备持久卷（PersistentVolume，简称PV），我这里以yaml文件创建3个PV，供待会儿3个Zookeeper节点创建出来的持久卷声明</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br></pre></td><td class="code"><pre><span class="line">kind: PersistentVolume</span><br><span class="line">apiVersion: 
v1</span><br><span class="line">metadata:</span><br><span class="line">  name: k8s-pv-zk1</span><br><span class="line">  annotations:</span><br><span class="line">    volume.beta.kubernetes.io&#x2F;storage-class: &quot;anything&quot;</span><br><span class="line">  labels:</span><br><span class="line">    type: zookeeper</span><br><span class="line">spec:</span><br><span class="line">  capacity:</span><br><span class="line">    storage: 3Gi</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  hostPath:</span><br><span class="line">    path: &quot;&#x2F;var&#x2F;lib&#x2F;zookeeper&quot;</span><br><span class="line">  persistentVolumeReclaimPolicy: Retain</span><br><span class="line">---</span><br><span class="line">kind: PersistentVolume</span><br><span class="line">apiVersion: v1</span><br><span class="line">metadata:</span><br><span class="line">  name: k8s-pv-zk2</span><br><span class="line">  annotations:</span><br><span class="line">    volume.beta.kubernetes.io&#x2F;storage-class: &quot;anything&quot;</span><br><span class="line">  labels:</span><br><span class="line">    type: zookeeper</span><br><span class="line">spec:</span><br><span class="line">  capacity:</span><br><span class="line">    storage: 3Gi</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  hostPath:</span><br><span class="line">    path: &quot;&#x2F;var&#x2F;lib&#x2F;zookeeper&quot;</span><br><span class="line">  persistentVolumeReclaimPolicy: Retain</span><br><span class="line">---</span><br><span class="line">kind: PersistentVolume</span><br><span class="line">apiVersion: v1</span><br><span class="line">metadata:</span><br><span class="line">  name: k8s-pv-zk3</span><br><span class="line">  annotations:</span><br><span class="line">    volume.beta.kubernetes.io&#x2F;storage-class: &quot;anything&quot;</span><br><span class="line">  
labels:</span><br><span class="line">    type: zookeeper</span><br><span class="line">spec:</span><br><span class="line">  capacity:</span><br><span class="line">    storage: 3Gi</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  hostPath:</span><br><span class="line">    path: &quot;&#x2F;var&#x2F;lib&#x2F;zookeeper&quot;</span><br><span class="line">  persistentVolumeReclaimPolicy: Retain</span><br></pre></td></tr></table></figure><h3 id="部署及存储卷状态查询"><a href="#部署及存储卷状态查询" class="headerlink" title="部署及存储卷状态查询"></a>部署及存储卷状态查询</h3><p>部署后可以看到 PV 还没有被 PVC 绑定，状态是 Available</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f persistent-volume.yaml </span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get pv </span><br></pre></td></tr></table></figure><h2 id="新版本创建卷及使用"><a href="#新版本创建卷及使用" class="headerlink" title="新版本创建卷及使用"></a>新版本创建卷及使用</h2><p>建议使用新版方式创建</p><blockquote><p>早期版本的 Kubernetes 使用注解 <code>volume.beta.kubernetes.io&#x2F;storage-class</code> 而不是 <code>storageClassName</code> 属性。这一注解目前仍然起作用，不过在将来的 Kubernetes 发布版本中该注解会被彻底废弃。</p></blockquote><h3 id="创建卷"><a href="#创建卷" class="headerlink" title="创建卷"></a>创建卷</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span 
class="line">15</span><br></pre></td><td class="code"><pre><span class="line">kind: PersistentVolume</span><br><span class="line">apiVersion: v1</span><br><span class="line">metadata:</span><br><span class="line">  name: k8s-pv-zk1</span><br><span class="line">  labels:</span><br><span class="line">    type: zookeeper</span><br><span class="line">spec:</span><br><span class="line">  storageClassName: disk</span><br><span class="line">  capacity:</span><br><span class="line">    storage: 3Gi</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  hostPath:</span><br><span class="line">    path: &quot;&#x2F;var&#x2F;lib&#x2F;zookeeper&quot;</span><br><span class="line">  persistentVolumeReclaimPolicy: Retain</span><br></pre></td></tr></table></figure><h4 id="存储声明"><a href="#存储声明" class="headerlink" title="存储声明"></a>存储声明</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: v1</span><br><span class="line">kind: PersistentVolumeClaim</span><br><span class="line">metadata:</span><br><span class="line">  name: datadir</span><br><span class="line">spec:</span><br><span class="line">  storageClassName: disk</span><br><span class="line">  accessModes:</span><br><span class="line">    - ReadWriteOnce</span><br><span class="line">  resources:</span><br><span class="line">    requests:</span><br><span class="line">      storage: 3Gi</span><br></pre></td></tr></table></figure><h4 id="pod引用"><a href="#pod引用" class="headerlink" title="pod引用"></a>pod引用</h4><figure class="highlight 
plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">          ····</span><br><span class="line">    volumeMounts:</span><br><span class="line">      - name: datadir</span><br><span class="line">        mountPath: &#x2F;var&#x2F;lib&#x2F;zookeeper</span><br><span class="line">volumeClaimTemplates:</span><br><span class="line">- metadata:</span><br><span class="line">    name: datadir</span><br><span class="line">  spec:</span><br><span class="line">    storageClassName: disk</span><br><span class="line">    accessModes: [ &quot;ReadWriteOnce&quot; ]</span><br><span class="line">    resources:</span><br><span class="line">      requests:</span><br><span class="line">        storage: 3Gi</span><br></pre></td></tr></table></figure><blockquote><p>注意：如果使用阿里云等云服务商，购买的云盘所在可用区要与 Node 节点的可用区一致</p></blockquote><h2 id="创建一个-ZooKeeper-Ensemble"><a href="#创建一个-ZooKeeper-Ensemble" class="headerlink" title="创建一个 ZooKeeper Ensemble"></a>创建一个 ZooKeeper Ensemble</h2><p>下面的清单包含一个无头服务（Headless Service）、一个 Service、一个 PodDisruptionBudget 和一个 StatefulSet。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span 
class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span 
class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: 
v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: zk-hs</span><br><span class="line">  labels:</span><br><span class="line">    app: zk</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - port: 2888</span><br><span class="line">    name: server</span><br><span class="line">  - port: 3888</span><br><span class="line">    name: leader-election</span><br><span class="line">  clusterIP: None</span><br><span class="line">  selector:</span><br><span class="line">    app: zk</span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: zk-cs</span><br><span class="line">  labels:</span><br><span class="line">    app: zk</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">  - port: 2181</span><br><span class="line">    name: client</span><br><span class="line">  selector:</span><br><span class="line">    app: zk</span><br><span class="line">---</span><br><span class="line">apiVersion: policy&#x2F;v1beta1</span><br><span class="line">kind: PodDisruptionBudget</span><br><span class="line">metadata:</span><br><span class="line">  name: zk-pdb</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: zk</span><br><span class="line">  maxUnavailable: 1</span><br><span class="line">---</span><br><span class="line">apiVersion: apps&#x2F;v1</span><br><span class="line">kind: StatefulSet</span><br><span class="line">metadata:</span><br><span class="line">  name: zk</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: zk</span><br><span class="line">  serviceName: 
zk-hs</span><br><span class="line">  replicas: 3</span><br><span class="line">  updateStrategy:</span><br><span class="line">    type: RollingUpdate</span><br><span class="line">  podManagementPolicy: Parallel</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: zk</span><br><span class="line">    spec:</span><br><span class="line">      affinity:</span><br><span class="line">        podAntiAffinity:</span><br><span class="line">          requiredDuringSchedulingIgnoredDuringExecution:</span><br><span class="line">            - labelSelector:</span><br><span class="line">                matchExpressions:</span><br><span class="line">                  - key: &quot;app&quot;</span><br><span class="line">                    operator: In</span><br><span class="line">                    values:</span><br><span class="line">                    - zk</span><br><span class="line">              topologyKey: &quot;kubernetes.io&#x2F;hostname&quot;</span><br><span class="line">      containers:</span><br><span class="line">      - name: kubernetes-zookeeper</span><br><span class="line">        imagePullPolicy: Always</span><br><span class="line">        image: &quot;guglecontainers&#x2F;kubernetes-zookeeper:1.0-3.4.10&quot;</span><br><span class="line">        resources:</span><br><span class="line">          requests:</span><br><span class="line">            memory: &quot;1Gi&quot;</span><br><span class="line">            cpu: &quot;0.5&quot;</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 2181</span><br><span class="line">          name: client</span><br><span class="line">        - containerPort: 2888</span><br><span class="line">          name: server</span><br><span class="line">        - containerPort: 3888</span><br><span class="line">          name: leader-election</span><br><span class="line">        
command:</span><br><span class="line">        - sh</span><br><span class="line">        - -c</span><br><span class="line">        - &quot;start-zookeeper \</span><br><span class="line">          --servers&#x3D;3 \</span><br><span class="line">          --data_dir&#x3D;&#x2F;var&#x2F;lib&#x2F;zookeeper&#x2F;data \</span><br><span class="line">          --data_log_dir&#x3D;&#x2F;var&#x2F;lib&#x2F;zookeeper&#x2F;data&#x2F;log \</span><br><span class="line">          --conf_dir&#x3D;&#x2F;opt&#x2F;zookeeper&#x2F;conf \</span><br><span class="line">          --client_port&#x3D;2181 \</span><br><span class="line">          --election_port&#x3D;3888 \</span><br><span class="line">          --server_port&#x3D;2888 \</span><br><span class="line">          --tick_time&#x3D;2000 \</span><br><span class="line">          --init_limit&#x3D;10 \</span><br><span class="line">          --sync_limit&#x3D;5 \</span><br><span class="line">          --heap&#x3D;512M \</span><br><span class="line">          --max_client_cnxns&#x3D;60 \</span><br><span class="line">          --snap_retain_count&#x3D;3 \</span><br><span class="line">          --purge_interval&#x3D;12 \</span><br><span class="line">          --max_session_timeout&#x3D;40000 \</span><br><span class="line">          --min_session_timeout&#x3D;4000 \</span><br><span class="line">          --log_level&#x3D;INFO&quot;</span><br><span class="line">        readinessProbe:</span><br><span class="line">          exec:</span><br><span class="line">            command:</span><br><span class="line">            - sh</span><br><span class="line">            - -c</span><br><span class="line">            - &quot;zookeeper-ready 2181&quot;</span><br><span class="line">          initialDelaySeconds: 10</span><br><span class="line">          timeoutSeconds: 5</span><br><span class="line">        livenessProbe:</span><br><span class="line">          exec:</span><br><span class="line">            command:</span><br><span class="line">          
  - sh</span><br><span class="line">            - -c</span><br><span class="line">            - &quot;zookeeper-ready 2181&quot;</span><br><span class="line">          initialDelaySeconds: 10</span><br><span class="line">          timeoutSeconds: 5</span><br><span class="line">        volumeMounts:</span><br><span class="line">        - name: datadir</span><br><span class="line">          mountPath: &#x2F;var&#x2F;lib&#x2F;zookeeper</span><br><span class="line">      securityContext:</span><br><span class="line">        runAsUser: 1000</span><br><span class="line">        fsGroup: 1000</span><br><span class="line">  volumeClaimTemplates:</span><br><span class="line">  - metadata:</span><br><span class="line">      name: datadir</span><br><span class="line">      annotations:</span><br><span class="line">        volume.beta.kubernetes.io&#x2F;storage-class: &quot;anything&quot;</span><br><span class="line">    spec:</span><br><span class="line">      accessModes: [ &quot;ReadWriteOnce&quot; ]</span><br><span class="line">      resources:</span><br><span class="line">        requests:</span><br><span class="line">          storage: 3Gi</span><br></pre></td></tr></table></figure><h2 id="开始创建"><a href="#开始创建" class="headerlink" title="开始创建"></a>开始创建</h2><p>创建了 zk-hs 无头服务、zk-cs 服务、zk-pdb PodDisruptionBudget 和 zk StatefulSet。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f zookeeper.yml --namespace&#x3D;zookeeper</span><br><span class="line"></span><br><span class="line">service&#x2F;zk-hs created</span><br><span class="line">service&#x2F;zk-cs created</span><br><span class="line">poddisruptionbudget.policy&#x2F;zk-pdb created</span><br><span class="line">statefulset.apps&#x2F;zk 
created</span><br></pre></td></tr></table></figure><h3 id="状态查询"><a href="#状态查询" class="headerlink" title="状态查询"></a>状态查询</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">kubectl get poddisruptionbudgets -n zookeeper</span><br><span class="line">kubectl get pods -n zookeeper</span><br><span class="line">kubectl get pods -n zookeeper -w -l app&#x3D;zk</span><br></pre></td></tr></table></figure><h3 id="如果发现没有启动pod"><a href="#如果发现没有启动pod" class="headerlink" title="如果发现没有启动pod"></a>如果发现没有启动pod</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">kubectl logs zk-0 -n zookeeper</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/zookeeper5.jpg" alt="github--lena"></p><p>容器内没有 zookeeper 用户，因此没有权限创建数据目录<br>在宿主机上创建该用户并授权即可</p><h3 id="创建用户以及授权"><a href="#创建用户以及授权" class="headerlink" title="创建用户以及授权"></a>创建用户以及授权</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">useradd -s &#x2F;sbin&#x2F;nologin zookeeper</span><br><span class="line"></span><br><span class="line">chown zookeeper.zookeeper &#x2F;var&#x2F;lib&#x2F;zookeeper&#x2F;</span><br></pre></td></tr></table></figure><blockquote><p>【注意】每台安装 zk 的机器都要执行创建用户以及授权操作</p></blockquote><p>如果你的 K8s 集群只有三个节点，请注意：</p><p>出于安全考虑，Pod 默认不会被调度到 Master Node 上，也就是说 Master Node 不参与工作负载。</p><p>如果希望 Master 也参与调度，<br>可使用污点（taints）与容忍（tolerations）进行调整</p><h2 id="促成-Leader-选举"><a href="#促成-Leader-选举" class="headerlink" title="促成 Leader 
选举"></a>促成 Leader 选举</h2><p>获取 zk StatefulSet 中 Pods 的主机名。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">for i in 0 1 2; do kubectl exec --namespace zookeeper zk-$i -- hostname; done</span><br></pre></td></tr></table></figure><p>看一下效果是不是集群模式</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">for i in 0 1 2; do kubectl exec --namespace zookeeper  zk-$i zkServer.sh status; done</span><br></pre></td></tr></table></figure><p>检查每个服务器的 myid 文件的内容</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">for i in 0 1 2; do echo &quot;myid zk-$i&quot;;kubectl exec --namespace zookeeper  zk-$i -- cat &#x2F;var&#x2F;lib&#x2F;zookeeper&#x2F;data&#x2F;myid; done</span><br></pre></td></tr></table></figure><p>获取 zk StatefulSet 中每个 Pod 的全限定域名</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">for i in 0 1 2; do kubectl exec --namespace zookeeper zk-$i -- hostname -f; done</span><br></pre></td></tr></table></figure><p>Pod 中查看 zoo.cfg 文件的内容。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl exec --namespace zookeeper zk-0 -- cat &#x2F;opt&#x2F;zookeeper&#x2F;conf&#x2F;zoo.cfg</span><br></pre></td></tr></table></figure><h2 id="Ensemble-健康检查"><a href="#Ensemble-健康检查" class="headerlink" title="Ensemble 健康检查"></a>Ensemble 健康检查</h2><p>最基本的健康检查是向一个 ZooKeeper 服务器写入一些数据，然后从 另一个服务器读取这些数据</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span 
class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">kubectl exec --namespace zookeeper zk-0 zkCli.sh create &#x2F;hello world</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">WATCHER::</span><br><span class="line"></span><br><span class="line">WatchedEvent state:SyncConnected type:None path:null</span><br><span class="line">Created &#x2F;hello</span><br></pre></td></tr></table></figure><p>从 zk-1 Pod 获取数据。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">kubectl exec  --namespace zookeeper  zk-1 zkCli.sh get &#x2F;hello</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">WATCHER::</span><br><span class="line"></span><br><span class="line">WatchedEvent state:SyncConnected type:None path:null</span><br><span class="line">world</span><br><span class="line">cZxid &#x3D; 0x100000014</span><br><span class="line">ctime &#x3D; Thu Mar 18 03:21:38 UTC 2021</span><br><span class="line">mZxid &#x3D; 0x100000014</span><br><span class="line">mtime &#x3D; Thu Mar 18 03:21:38 UTC 2021</span><br><span class="line">pZxid &#x3D; 0x100000014</span><br><span class="line">cversion &#x3D; 0</span><br><span class="line">dataVersion &#x3D; 0</span><br><span class="line">aclVersion &#x3D; 
0</span><br><span class="line">ephemeralOwner &#x3D; 0x0</span><br><span class="line">dataLength &#x3D; 5</span><br><span class="line">numChildren &#x3D; 0</span><br></pre></td></tr></table></figure><p>如果出现<code>myid</code>重复可以进入node内<code>/var/lib/zookeeper/data/</code> 下 修改<code>id</code>参数,然后重新部署</p><blockquote><p>参考：<a href="https://kubernetes.io/zh/docs/tutorials/stateful-application/zookeeper/">https://kubernetes.io/zh/docs/tutorials/stateful-application/zookeeper/</a></p></blockquote>]]></content>
    
    
      
      
    <summary type="html">&lt;h2 id=&quot;创建存储卷&quot;&gt;&lt;a href=&quot;#创建存储卷&quot; class=&quot;headerlink&quot; title=&quot;创建存储卷&quot;&gt;&lt;/a&gt;创建存储卷&lt;/h2&gt;&lt;p&gt;Zookeeper集群需要用到存储，这里需要准备持久卷（PersistentVolume，简称PV），我这里以yam</summary>
      
    
    
    
    <category term="Kubernetes" scheme="https://imszz.com/categories/Kubernetes/"/>
    
    
    <category term="Kubernetes" scheme="https://imszz.com/tags/Kubernetes/"/>
    
    <category term="zookeeper" scheme="https://imszz.com/tags/zookeeper/"/>
    
  </entry>
  
  <entry>
    <title>k8s 跨 namespace 访问服务</title>
    <link href="https://imszz.com/p/9225747c/"/>
    <id>https://imszz.com/p/9225747c/</id>
    <published>2021-03-30T06:00:25.000Z</published>
    <updated>2021-03-30T06:01:25.000Z</updated>
    
    <content type="html"><![CDATA[<p>在K8S中，同一个命名空间<code>（namespace）</code>下的服务之间调用，直接通过服务名<code>（service name）</code>调用即可。不过在更多时候，我们可能会将一些服务单独隔离在一个命名空间中（比如我们将中间件服务统一放在 middleware 命名空间中，将业务服务放在 business 命名空间中）。遇到这种情况，我们就需要跨命名空间访问。K8S 为 service 提供了四种不同的类型，针对这个问题我们选用 <code>ExternalName</code> 类型的 service 即可。</p><p>k8s service 分为四种类型<br>分别为：</p><ul><li>ClusterIP（默认类型，为每个 service 分配一个集群内部的 IP，集群内部可以互相访问，外部无法访问集群内部）</li><li>NodePort（基于 ClusterIP，另外在每个 Node 上开放一个端口，可以从集群外的任意位置访问这个地址）</li><li>LoadBalancer（基于 NodePort，由云服务商在外部创建一个负载均衡层，将流量导入到对应 Port。通常收费，一般由云服务商提供，比如阿里云、AWS 等均提供这种服务）</li><li>ExternalName（将外部地址经过集群内部的再一次封装，实际上就是集群 DNS 服务器将 CNAME 解析到了外部地址上，实现集群内部访问）</li></ul><p>本文使用 <code>ExternalName</code> 实现我们的需求：</p><p>通过 <code>&#123;SERVICE_NAME&#125;.&#123;NAMESPACE_NAME&#125;.svc.cluster.local</code> 这样的格式，访问目标 <code>namespace</code> 下的服务。</p>]]></content>
    
    
      
      
    <summary type="html">&lt;p&gt;在K8S中，同一个命名空间&lt;code&gt;（namespace）&lt;/code&gt;下的服务之间调用，之间通过服务名&lt;code&gt;（service name）&lt;/code&gt;调用即可。不过在更多时候，我们可能会将一些服务单独隔离在一个命名空间中（比如我们将中间件服务统一放在 middle</summary>
      
    
    
    
    <category term="Kubernetes" scheme="https://imszz.com/categories/Kubernetes/"/>
    
    
    <category term="Kubernetes" scheme="https://imszz.com/tags/Kubernetes/"/>
    
    <category term="namespace" scheme="https://imszz.com/tags/namespace/"/>
    
  </entry>
  
  <entry>
    <title>污点（taints）与容忍（tolerations）</title>
    <link href="https://imszz.com/p/5cb0c128/"/>
    <id>https://imszz.com/p/5cb0c128/</id>
    <published>2021-03-30T06:00:25.000Z</published>
    <updated>2021-03-30T06:01:25.000Z</updated>
    
    <content type="html"><![CDATA[<p>对于<code>nodeAffinity</code>无论是硬策略还是软策略方式，都是调度 pod 到预期节点上，而<code>Taints</code>恰好与之相反，如果一个节点标记为 Taints ，除非 pod 也被标识为可以容忍污点节点，否则该 Taints 节点不会被调度 pod。</p><p>比如用户希望把 Master 节点保留给 Kubernetes 系统组件使用，或者把一组具有特殊资源预留给某些 pod，则污点就很有用了，pod 不会再被调度到 taint 标记过的节点。我们搭建的集群默认就给 master 节点添加了一个污点标记，所以我们看到我们平时的 pod 都没有被调度到 master 上去：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">$ kubectl describe node master</span><br><span class="line">Name:               master</span><br><span class="line">Roles:              master</span><br><span class="line">Labels:             beta.kubernetes.io&#x2F;arch&#x3D;amd64</span><br><span class="line">                    beta.kubernetes.io&#x2F;os&#x3D;linux</span><br><span class="line">                    kubernetes.io&#x2F;hostname&#x3D;master</span><br><span class="line">                    node-role.kubernetes.io&#x2F;master&#x3D;</span><br><span class="line">......</span><br><span class="line">Taints:             node-role.kubernetes.io&#x2F;master:NoSchedule</span><br><span class="line">Unschedulable:      false</span><br><span class="line">......</span><br></pre></td></tr></table></figure><p>我们可以使用上面的命令查看 master 节点的信息，其中有一条关于 Taints 的信息：<code>node-role.kubernetes.io/master:NoSchedule</code>，就表示给 master 节点打了一个污点的标记，其中影响的参数是<code>NoSchedule</code>，表示 pod 不会被调度到标记为 taints 的节点，除了 NoSchedule 外，还有另外两个选项：</p><ul><li>PreferNoSchedule：NoSchedule 的软策略版本，表示尽量不调度到污点节点上去</li><li>NoExecute：该选项意味着一旦 Taint 生效，如该节点内正在运行的 pod 没有对应 Tolerate 设置，会直接被逐出</li></ul><p>污点 taint 标记节点的命令如下：</p><figure class="highlight 
plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">$ kubectl taint nodes node02 test&#x3D;node02:NoSchedule</span><br><span class="line">node &quot;node02&quot; tainted</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>上面的命名将 node02 节点标记为了污点，影响策略是 NoSchedule，只会影响新的 pod 调度，如果仍然希望某个 pod 调度到 taint 节点上，则必须在 Spec 中做出<code>Toleration</code>定义，才能调度到该节点，<br>比如现在我们想要将一个 pod 调度到 master 节点：(taint-demo.yaml)</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: apps&#x2F;v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  name: taint</span><br><span class="line">  labels:</span><br><span class="line">    app: taint</span><br><span class="line">spec:</span><br><span class="line">  replicas: 3</span><br><span class="line">  revisionHistoryLimit: 10</span><br><span class="line">  selector:</span><br><span class="line">    
matchLabels:</span><br><span class="line">      app: taint</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: taint</span><br><span class="line">    spec:</span><br><span class="line">      containers:</span><br><span class="line">      - name: nginx</span><br><span class="line">        image: nginx:1.7.9</span><br><span class="line">        ports:</span><br><span class="line">        - name: http</span><br><span class="line">          containerPort: 80</span><br><span class="line">      tolerations:</span><br><span class="line">      - key: &quot;node-role.kubernetes.io&#x2F;master&quot;</span><br><span class="line">        operator: &quot;Exists&quot;</span><br><span class="line">        effect: &quot;NoSchedule&quot;</span><br></pre></td></tr></table></figure><p>由于 master 节点被标记为了污点节点，所以我们这里要想 pod 能够调度到 master 节点去，就需要增加容忍的声明：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">tolerations:</span><br><span class="line">- key: &quot;node-role.kubernetes.io&#x2F;master&quot;</span><br><span class="line">  operator: &quot;Exists&quot;</span><br><span class="line">  effect: &quot;NoSchedule&quot;</span><br></pre></td></tr></table></figure><p>然后创建上面的资源，查看结果：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">$ kubectl create -f taint-demo.yaml</span><br><span class="line">deployment.apps &quot;taint&quot; 
created</span><br><span class="line">$ kubectl get pods -o wide</span><br><span class="line">NAME                                      READY     STATUS             RESTARTS   AGE       IP             NODE</span><br><span class="line">......</span><br><span class="line">taint-845d8bb4fb-57mhm                    1&#x2F;1       Running            0          1m        10.244.4.247   node02</span><br><span class="line">taint-845d8bb4fb-bbvmp                    1&#x2F;1       Running            0          1m        10.244.0.33    master</span><br><span class="line">taint-845d8bb4fb-zb78x                    1&#x2F;1       Running            0          1m        10.244.4.246   node02</span><br><span class="line">......</span><br></pre></td></tr></table></figure><p>我们可以看到有一个 pod 副本被调度到了 master 节点，这就是容忍的使用方法。</p><p>对于 tolerations 属性的写法，其中的 key、value、effect 与 Node 的 Taint 设置需保持一致，还有以下几点说明：</p><ul><li>如果 operator 的值是 Exists，则 value 属性可省略</li><li>如果 operator 的值是 Equal，则表示其 key 与 value 之间的关系是 equal（等于）</li><li>如果不指定 operator 属性，则默认值为 Equal</li></ul><p>另外，还有两个特殊值：</p><ul><li>空的 key 如果再配合 Exists 就能匹配所有的 key 与 value，也就是说能容忍所有 node 的所有 Taints</li><li>空的 effect 匹配所有的 effect</li></ul><p>最后，如果我们要取消节点的污点标记，可以使用下面的命令：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">$ kubectl taint nodes node02 test-</span><br><span class="line">node &quot;node02&quot; untainted</span><br></pre></td></tr></table></figure><p>这就是污点和容忍的使用方法。</p>]]></content>
    
    
      
      
    <summary type="html">&lt;p&gt;对于&lt;code&gt;nodeAffinity&lt;/code&gt;无论是硬策略还是软策略方式，都是调度 pod 到预期节点上，而&lt;code&gt;Taints&lt;/code&gt;恰好与之相反，如果一个节点标记为 Taints ，除非 pod 也被标识为可以容忍污点节点，否则该 Taints 节点不</summary>
      
    
    
    
    <category term="Kubernetes" scheme="https://imszz.com/categories/Kubernetes/"/>
    
    
    <category term="Kubernetes" scheme="https://imszz.com/tags/Kubernetes/"/>
    
    <category term="taints" scheme="https://imszz.com/tags/taints/"/>
    
    <category term="tolerations" scheme="https://imszz.com/tags/tolerations/"/>
    
  </entry>
  
  <entry>
    <title>删除mac启动台launchpad中的无效图标</title>
    <link href="https://imszz.com/p/c1a54034/"/>
    <id>https://imszz.com/p/c1a54034/</id>
    <published>2021-03-28T16:25:23.000Z</published>
    <updated>2021-03-28T16:26:23.000Z</updated>
    
    <content type="html"><![CDATA[<h3 id="第一种情况"><a href="#第一种情况" class="headerlink" title="第一种情况"></a>第一种情况</h3><p>在Mac上安装Photoshop CS6之后， 启动台(LaunchPad)莫名其妙地多出了几个”Adobe xxxx…”的图标， 而且无法删除，在访达里面应用程序内也找不到， 非常讨厌。</p><p>在网上搜索并试过终端删除、app删除、找到程序文件夹删除等各种方法，但都失败了。。。</p><p>最后重点来了，我找到了一个终极解决办法：</p><p>重建 启动台(LaunchPad) 内的图标来解决.</p><h4 id="方法如下"><a href="#方法如下" class="headerlink" title="方法如下:"></a>方法如下:</h4><p>打开应用程序 - 实用工具 - 终端，依次输入如下命令：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">defaults write com.apple.dock ResetLaunchPad -bool true</span><br><span class="line"></span><br><span class="line">killall Dock</span><br></pre></td></tr></table></figure><p>再次打开 LaunchPad 的时候, 所有图标会被重建。</p><blockquote><p>如果发现启动台(LaunchPad)里面出现了一个新的相关文件夹，并且是原来Adobe之类的程序， 那么需要再次打开<code>访达-&gt;应用程序-&gt;实用工具</code> 内找到对应相关文件程序删掉即可。</p></blockquote><p>最后，你会发现重置之后之前的所有设置都会丢失，没有特殊情况不要使用，以免丢失之前的排列方式与文件夹。</p><h3 id="第二种情况"><a href="#第二种情况" class="headerlink" title="第二种情况"></a>第二种情况</h3><p>有些应用程序（比如说虚拟机），安装之后会在启动台生成文件夹或其它图标，但是卸载了应用之后，这个文件夹依然会保留下来，简直逼死强迫症。</p><h4 id="方法如下-1"><a href="#方法如下-1" class="headerlink" title="方法如下:"></a>方法如下:</h4><p>卸载应用程序之后，一般其在启动台生成的文件夹是不会被删除的，不过这个文件夹里面是空的。如果执意要删除的话，可以从<code>Finder</code>（访达）里面入手。具体操作为，打开访达，按下快捷键<code>「command」+「shift」+「H」</code>，之后页面会自动跳转到用户的主页。打开<code>「应用程序文件夹」</code>，里面的都是launchpad的内容，找到你要删除的目标将其删除即可。</p>]]></content>
    
    
      
      
    <summary type="html">&lt;h3 id=&quot;第一种情况&quot;&gt;&lt;a href=&quot;#第一种情况&quot; class=&quot;headerlink&quot; title=&quot;第一种情况&quot;&gt;&lt;/a&gt;第一种情况&lt;/h3&gt;&lt;p&gt;在Mac上安装Photoshop CS6的后， 启动台(LaunchPad)莫名其妙的多出了几个”Adobe xxx</summary>
      
    
    
    
    <category term="奇怪的知识点" scheme="https://imszz.com/categories/%E5%A5%87%E6%80%AA%E7%9A%84%E7%9F%A5%E8%AF%86%E7%82%B9/"/>
    
    
    <category term="Mac" scheme="https://imszz.com/tags/Mac/"/>
    
  </entry>
  
  <entry>
    <title>如何注册PropellerAds账号</title>
    <link href="https://imszz.com/p/399b4ecc/"/>
    <id>https://imszz.com/p/399b4ecc/</id>
    <published>2021-02-08T16:46:25.000Z</published>
    <updated>2021-02-08T16:48:26.000Z</updated>
    
    <content type="html"><![CDATA[<h2 id="PropellerAds"><a href="#PropellerAds" class="headerlink" title="PropellerAds"></a><a href="https://propellerads.com/publishers/?ref_id=mTFx">PropellerAds</a></h2><blockquote><p><a href="https://propellerads.com/publishers/?ref_id=mTFx">PropellerAds</a>是2018-2019年度最好的cpm广告网络之一，也是支付率最高的cpm广告网络之一。如果您正在寻找移动广告，弹出窗口，对话框和插页式广告，那么PorpellerAds是您最适合的CPM网络。出版商将获得10美元的有效每千次展示费用，这个每千次展示费率取决于访问国家，如果您的网站拥有高流量来自英国，美国，那么您可以预期这个广告网络很多钱。它提供了许多广告格式供用户赚取，这些广告格式是横幅广告，原生直接广告，流行下广告，非页内广告，上推广告，对话广告。螺旋桨广告支付净30基础。最低支付限额为100美元，发布可以通过电汇和PayPal提款。</p></blockquote><h3 id="获得批准的要求："><a href="#获得批准的要求：" class="headerlink" title="获得批准的要求："></a>获得批准的要求：</h3><ul><li>没有最低流量要求</li><li>网站必须是基于内容的，而不是简单的链接或广告列表</li><li>网站不得在“正在建设中”</li><li>网站不得包含与成人相关的内容</li></ul><h3 id="最好的功能"><a href="#最好的功能" class="headerlink" title="最好的功能"></a>最好的功能</h3><ul><li><a href="https://propellerads.com/publishers/?ref_id=mTFx">PropellerAds</a>在Net 30上支付</li><li>其最低支付限额是$ 5</li><li>实时统计报告系统</li><li>付款方式是电汇和PayPal</li><li><a href="https://propellerads.com/publishers/?ref_id=mTFx">PropellerAds</a>提供多种广告格式</li></ul><p>支持国内IP,<a href="https://propellerads.com/publishers/?ref_id=mTFx">PropellerAds</a>本身有banner和弹窗广告 , 但是banner广告收入极低 , 所以不建议去做 反而弹窗收入高（垃圾站点使用高）</p><h2 id="首先我们注册PropellerAds平台"><a href="#首先我们注册PropellerAds平台" class="headerlink" title="首先我们注册PropellerAds平台"></a>首先我们注册<a href="https://propellerads.com/publishers/?ref_id=mTFx">PropellerAds</a>平台</h2><p>链接地址<a href="https://propellerads.com/publishers/?ref_id=mTFx">PropellerAds</a> </p><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/PropellerAds1.jpg" alt="github--lena"></p><p>我们选择账户类型为Publisher，注意这里我们注册为发行商，一定不要选错了<br>提供广告的请注册Advertiser，</p><p>跳转到这个页面</p><p>据实填写我们的个人信息即可，填写完成以后点击下一步 ,只填写必要信息即可<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/propellerads2.jpg" 
alt="github--lena"></p><p>点击下一页后在相关的输入框中大家可以根据我填写的内容来进行填写，这里其实只需要简单的说明一下我们目前的流量源</p><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/propellerads4.jpg" alt="github--lena"></p><p>最后点击注册就可以了，基本上注册以后我很快会收到确认邮件，当即注册马上就能进入平台了</p><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/propellerads5.jpg" alt="github--lena"></p><p>在你的邮箱中收到这份确认邮件以后点击验证账户，然后会跳转至设置初始密码的页面，设置完成以后就ok了，恭喜你，</p><h2 id="绑定网站与验证"><a href="#绑定网站与验证" class="headerlink" title="绑定网站与验证"></a>绑定网站与验证</h2><p>添加网站<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/propellerads6.jpg" alt="github--lena"></p><p>验证<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/propellerads7.jpg" alt="github--lena"></p><p>验证通过后添加广告类别</p><p><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/propellerads8.jpg" alt="github--lena"></p><p>选择自己适用的类别<br>add zone<br>点击获取代码并选择在自己的官网手动引用就可以<br><img src= "/img/loading.gif" data-lazy-src="https://cdn.jsdelivr.net/gh/weilain/cdn-photo/img/propellerads9.jpg" alt="github--lena"></p><p>请注意：MultiTag 广告格式包含（In-Page Push (Banner)与Onclick (Popunder)与Interstitial）</p><p>不太建议直接使用MultiTag与Onclick (Popunder) 这两种广告格式 因为会跳转到其他网站，可能会包含非法站点</p><h2 id="请点击PropellerAds跳转官网注册"><a href="#请点击PropellerAds跳转官网注册" class="headerlink" title="请点击PropellerAds跳转官网注册"></a>请点击<a href="https://propellerads.com/publishers/?ref_id=mTFx">PropellerAds</a>跳转官网注册</h2>]]></content>
    
    
      
      
    <summary type="html">&lt;h2 id=&quot;PropellerAds&quot;&gt;&lt;a href=&quot;#PropellerAds&quot; class=&quot;headerlink&quot; title=&quot;PropellerAds&quot;&gt;&lt;/a&gt;&lt;a href=&quot;https://propellerads.com/publishers/?ref_</summary>
      
    
    
    
    <category term="奇怪的知识点" scheme="https://imszz.com/categories/%E5%A5%87%E6%80%AA%E7%9A%84%E7%9F%A5%E8%AF%86%E7%82%B9/"/>
    
    
    <category term="PropellerAds" scheme="https://imszz.com/tags/PropellerAds/"/>
    
  </entry>
  
  <entry>
    <title>Linux设置和修改时间与时区</title>
    <link href="https://imszz.com/p/339c428/"/>
    <id>https://imszz.com/p/339c428/</id>
    <published>2021-01-26T16:00:00.000Z</published>
    <updated>2021-01-27T12:46:25.000Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>linux系统时间有两个，一个是硬件时间，即BIOS时间，就是我们进行CMOS设置时看到的时间，另一个是系统时间，是linux系统Kernel时间。当Linux启动时，系统Kernel会去读取硬件时钟的设置，然后系统时钟就会独立于硬件运作。有时我们会发现系统时钟和硬件时钟不一致，因此需要执行时间同步。</p></blockquote><h1 id="方法一"><a href="#方法一" class="headerlink" title="方法一"></a>方法一</h1><h2 id="一、date-查看-设置系统时间"><a href="#一、date-查看-设置系统时间" class="headerlink" title="一、date 查看/设置系统时间"></a>一、date 查看/设置系统时间</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">1、将日期设置为2017年11月3日</span><br><span class="line">[root@linux-node ~]# date -s 11&#x2F;03&#x2F;17</span><br><span class="line"></span><br><span class="line">2、将时间设置为14点20分50秒</span><br><span class="line">[root@linux-node ~]# date -s 14:20:50</span><br><span class="line"></span><br><span class="line">3、将时间设置为2017年11月3日14点16分30秒（MMDDhhmmYYYY.ss）</span><br><span class="line">[root@linux-node ~]# date 1103141617.30</span><br></pre></td></tr></table></figure><h2 id="二、hwclock-clock-查看-设置硬件时间"><a href="#二、hwclock-clock-查看-设置硬件时间" class="headerlink" title="二、hwclock/clock 查看/设置硬件时间"></a>二、hwclock/clock 查看/设置硬件时间</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">1、查看系统硬件时钟</span><br><span class="line">[root@linux-node ~]# hwclock  --show 或者</span><br><span class="line">[root@linux-node ~]# clock  --show</span><br><span class="line"></span><br><span class="line">2、设置硬件时间</span><br><span class="line">[root@linux-node ~]# hwclock 
--set --date&#x3D;&quot;11&#x2F;03&#x2F;17 14:55&quot; （月&#x2F;日&#x2F;年时:分:秒） 或者</span><br><span class="line">[root@linux-node ~]# clock --set --date&#x3D;&quot;11&#x2F;03&#x2F;17 14:55&quot; （月&#x2F;日&#x2F;年时:分:秒）</span><br></pre></td></tr></table></figure><h2 id="三、同步系统及硬件时钟"><a href="#三、同步系统及硬件时钟" class="headerlink" title="三、同步系统及硬件时钟"></a>三、同步系统及硬件时钟</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@linux-node ~]# hwclock --hctosys 或者</span><br><span class="line">[root@linux-node ~]# clock --hctosys  </span><br><span class="line">备注：hc代表硬件时间，sys代表系统时间，以硬件时间为基准，系统时间找硬件时间同步</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">[root@linux-node ~]# hwclock --systohc或者</span><br><span class="line">[root@linux-node ~]# clock --systohc </span><br><span class="line">备注：以系统时间为基准，硬件时间找系统时间同步</span><br></pre></td></tr></table></figure><h1 id="方法二"><a href="#方法二" class="headerlink" title="方法二"></a>方法二</h1><p>时区设置用<code>tzselect</code> 命令来实现。但是通过<code>tzselect</code>命令设置<code>TZ</code>这个环境变量来选择的时区，需要将变量添加到<code>.profile</code>文件中。</p><h2 id="一、tzselect命令执行"><a href="#一、tzselect命令执行" class="headerlink" title="一、tzselect命令执行"></a>一、tzselect命令执行</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">执行tzselect命令 --&gt; 选择Asia --&gt; 选择China --&gt; 选择east China - Beijing, Guangdong, Shanghai, etc--&gt;然后输入1。</span><br></pre></td></tr></table></figure><p>执行完<code>tzselect</code>命令选择时区后，时区并没有更改，只是在命令最后提示你可以执行 <code>TZ=’Asia/Shanghai’; export TZ </code>并将这行命令添加到<code>.profile</code>中，然后退出并重新登录。</p><h2 id="二、修改配置文件来修改时区"><a 
href="#二、修改配置文件来修改时区" class="headerlink" title="二、修改配置文件来修改时区"></a>二、修改配置文件来修改时区</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@linux-node ~]# echo &quot;ZONE&#x3D;Asia&#x2F;Shanghai&quot; &gt;&gt; &#x2F;etc&#x2F;sysconfig&#x2F;clock         </span><br><span class="line">[root@linux-node ~]# rm -f &#x2F;etc&#x2F;localtime</span><br><span class="line">#链接到上海时区文件       </span><br><span class="line">[root@linux-node ~]# ln -sf &#x2F;usr&#x2F;share&#x2F;zoneinfo&#x2F;Asia&#x2F;Shanghai &#x2F;etc&#x2F;localtime</span><br></pre></td></tr></table></figure><p>执行完上述过程后，重启机器，即可看到时区已经更改。</p><h2 id="备注："><a href="#备注：" class="headerlink" title="备注："></a>备注：</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">在centos7中设置时区的命令可以通过 timedatectl 命令来实现</span><br><span class="line">[root@linux-node ~]# timedatectl set-timezone Asia&#x2F;Shanghai</span><br></pre></td></tr></table></figure>]]></content>
    
    
      
      
    <summary type="html">&lt;blockquote&gt;
&lt;p&gt;linux系统时间有两个，一个是硬件时间，即BIOS时间，就是我们进行CMOS设置时看到的时间，另一个是系统时间，是linux系统Kernel时间。当Linux启动时，系统Kernel会去读取硬件时钟的设置，然后系统时钟就会独立于硬件运作。有时我们</summary>
      
    
    
    
    <category term="linux" scheme="https://imszz.com/categories/linux/"/>
    
    
    <category term="linux" scheme="https://imszz.com/tags/linux/"/>
    
    <category term="TZ" scheme="https://imszz.com/tags/TZ/"/>
    
  </entry>
  
  <entry>
    <title>MySQL5.7 字符集设置</title>
    <link href="https://imszz.com/p/38510659/"/>
    <id>https://imszz.com/p/38510659/</id>
    <published>2021-01-25T16:00:00.000Z</published>
    <updated>2021-01-26T12:46:25.000Z</updated>
    
    <content type="html"><![CDATA[<h1 id="MySQL5-7-字符集设置"><a href="#MySQL5-7-字符集设置" class="headerlink" title="MySQL5.7 字符集设置"></a>MySQL5.7 字符集设置</h1><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">character-set-client-handshake &#x3D; FALSE</span><br><span class="line">character-set-server &#x3D; utf8mb4</span><br><span class="line">collation-server &#x3D; utf8mb4_unicode_ci</span><br><span class="line">init_connect&#x3D;’SET NAMES utf8mb4’</span><br></pre></td></tr></table></figure><h1 id="character-set-client-handshake"><a href="#character-set-client-handshake" class="headerlink" title="character-set-client-handshake"></a>character-set-client-handshake</h1><p>用来控制客户端声明使用字符集和服务端声明使用的字符集在不一致的情况下的兼容性.</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">character-set-client-handshake &#x3D; false</span><br><span class="line"># 设置为 False, 在客户端字符集和服务端字符集不同的时候将拒绝连接到服务端执行任何操作</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># 默认为 true</span><br><span class="line">character-set-client-handshake &#x3D; true</span><br><span class="line"># 设置为 True, 即使客户端字符集和服务端字符集不同, 也允许客户端连接</span><br></pre></td></tr></table></figure><h1 id="character-set-server"><a href="#character-set-server" class="headerlink" title="character-set-server"></a>character-set-server</h1><p>声明服务端的字符编码, 推荐使用utf8mb4 , 该字符虽然占用空间会比较大, 但是可以兼容 emoji 😈 表情的存储</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td 
class="code"><pre><span class="line">character-set-server &#x3D; utf8mb4</span><br></pre></td></tr></table></figure><h1 id="collation-server"><a href="#collation-server" class="headerlink" title="collation-server"></a>collation-server</h1><p>声明服务端默认的排序规则（collation）, 排序规则与字符集相对应, 既然使用了 utf8mb4 字符集, 就要声明使用与之对应的排序规则</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">collation-server &#x3D; utf8mb4_unicode_ci</span><br></pre></td></tr></table></figure><h1 id="init-connect"><a href="#init-connect" class="headerlink" title="init_connect"></a>init_connect</h1><p><code>init_connect </code>是用户登录到数据库上之后, 在执行第一次查询之前执行里面的内容. 如果 <code>init_connect</code> 的内容有语法错误, 导致执行失败, 会导致用户无法执行查询, 从mysql 退出</p><p>使用 <code>init_connect</code> 执行 <code>SET NAMES utf8mb4</code> 意为:</p><p>声明自己(客户端)使用的是 utf8mb4 的字符编码<br>希望服务器返回给自己 utf8mb4 的查询结果</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">init_connect &#x3D; &#39;SET NAMES utf8mb4&#39;</span><br></pre></td></tr></table></figure><h1 id="完整配置"><a href="#完整配置" class="headerlink" title="完整配置"></a>完整配置</h1><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">character-set-client-handshake &#x3D; FALSE</span><br><span class="line">character-set-server &#x3D; utf8mb4</span><br><span class="line">collation-server &#x3D; utf8mb4_unicode_ci</span><br><span class="line">init_connect &#x3D; &#39;SET NAMES utf8mb4&#39;</span><br></pre></td></tr></table></figure>]]></content>
    
    
      
      
    <summary type="html">&lt;h1 id=&quot;MySQL5-7-字符集设置&quot;&gt;&lt;a href=&quot;#MySQL5-7-字符集设置&quot; class=&quot;headerlink&quot; title=&quot;MySQL5.7 字符集设置&quot;&gt;&lt;/a&gt;MySQL5.7 字符集设置&lt;/h1&gt;&lt;figure class=&quot;highlight </summary>
      
    
    
    
    <category term="Mysql" scheme="https://imszz.com/categories/Mysql/"/>
    
    
    <category term="mysql" scheme="https://imszz.com/tags/mysql/"/>
    
  </entry>
  
  <entry>
    <title>MySQL5.7 高可用高性能配置调优 性能参数参考</title>
    <link href="https://imszz.com/p/2d27b747/"/>
    <id>https://imszz.com/p/2d27b747/</id>
    <published>2021-01-25T16:00:00.000Z</published>
    <updated>2021-01-26T12:46:25.000Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>MySQL5.7 在 5.6 版本的基础之上做了大量的优化, 本篇文章开篇将重点围绕经过优化的基于 GTID 的多线程复制和半同步复制的特性介绍, 后续会持续增加 MySQL5.7 的调优参数</p></blockquote><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span 
class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span 
class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span class="line">167</span><br><span class="line">168</span><br><span class="line">169</span><br><span class="line">170</span><br><span class="line">171</span><br><span class="line">172</span><br><span class="line">173</span><br><span class="line">174</span><br><span class="line">175</span><br></pre></td><td class="code"><pre><span 
class="line">[client]</span><br><span class="line">default-character-set &#x3D; utf8mb4</span><br><span class="line"></span><br><span class="line">[mysqld]</span><br><span class="line"></span><br><span class="line">### 基本属性配置</span><br><span class="line">port &#x3D; 3306</span><br><span class="line">datadir&#x3D;&#x2F;data&#x2F;mysql</span><br><span class="line"># 禁用主机名解析</span><br><span class="line">skip-name-resolve</span><br><span class="line"># 默认的数据库引擎</span><br><span class="line">default-storage-engine &#x3D; InnoDB</span><br><span class="line"></span><br><span class="line">### 字符集配置</span><br><span class="line">character-set-client-handshake &#x3D; FALSE</span><br><span class="line">character-set-server &#x3D; utf8mb4</span><br><span class="line">collation-server &#x3D; utf8mb4_unicode_ci</span><br><span class="line">init_connect&#x3D;&#39;SET NAMES utf8mb4&#39;</span><br><span class="line"></span><br><span class="line">### GTID</span><br><span class="line">server_id &#x3D; 59</span><br><span class="line"># 为保证 GTID 复制的稳定, 行级日志</span><br><span class="line">binlog_format &#x3D; row</span><br><span class="line"># 开启 gtid 功能</span><br><span class="line">gtid_mode &#x3D; on</span><br><span class="line"># 保障 GTID 事务安全</span><br><span class="line"># 当启用enforce_gtid_consistency功能的时候,</span><br><span class="line"># MySQL只允许能够保障事务安全, 并且能够被日志记录的SQL语句被执行,</span><br><span class="line"># 像create table ... 
select 和 create temporary table 语句, </span><br><span class="line"># 以及同时更新事务表和非事务表的SQL语句或事务都不允许执行</span><br><span class="line">enforce-gtid-consistency &#x3D; true</span><br><span class="line"># 以下两条配置为主从切换, 数据库高可用的必须配置</span><br><span class="line"># 开启 binlog 日志功能</span><br><span class="line">log_bin &#x3D; on</span><br><span class="line"># 开启从库更新 binlog 日志</span><br><span class="line">log-slave-updates &#x3D; on</span><br><span class="line"></span><br><span class="line">### 慢查询日志</span><br><span class="line"># 打开慢查询日志功能</span><br><span class="line">slow_query_log &#x3D; 1</span><br><span class="line"># 超过2秒的查询记录下来</span><br><span class="line">long_query_time &#x3D; 2</span><br><span class="line"># 记录下没有使用索引的查询</span><br><span class="line">log_queries_not_using_indexes &#x3D; 1</span><br><span class="line"></span><br><span class="line">### 自动修复</span><br><span class="line"># 记录 relay.info 到数据表中</span><br><span class="line">relay_log_info_repository &#x3D; TABLE</span><br><span class="line"># 记录 master.info 到数据表中 </span><br><span class="line">master_info_repository &#x3D; TABLE</span><br><span class="line"># 启用 relaylog 的自动修复功能</span><br><span class="line">relay_log_recovery &#x3D; on</span><br><span class="line"># 在 SQL 线程执行完一个 relaylog 后自动删除</span><br><span class="line">relay_log_purge &#x3D; 1</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">### 数据安全性配置</span><br><span class="line"># 不信任 function 的创建者, 开启 binlog 时创建 function 受到限制</span><br><span class="line">log_bin_trust_function_creators &#x3D; off</span><br><span class="line"># 每执行一个事务都强制写入磁盘</span><br><span class="line">sync_binlog &#x3D; 1</span><br><span class="line"># timestamp 列如果没有显式定义为 not null, 则支持null属性</span><br><span class="line"># 设置 timestamp 的列值为 null, 不会被设置为 current timestamp</span><br><span class="line">explicit_defaults_for_timestamp&#x3D;true</span><br><span class="line"></span><br><span class="line">### 优化配置</span><br><span class="line"># 优化中文全文模糊索引</span><br><span 
class="line">ft_min_word_len &#x3D; 1</span><br><span class="line"># 默认库名表名保存为小写, 不区分大小写</span><br><span class="line">lower_case_table_names &#x3D; 1</span><br><span class="line"># 单条记录写入最大的大小限制</span><br><span class="line"># 过小可能会导致写入(导入)数据失败</span><br><span class="line">max_allowed_packet &#x3D; 256M</span><br><span class="line"># 半同步复制开启</span><br><span class="line">rpl_semi_sync_master_enabled &#x3D; 1</span><br><span class="line">rpl_semi_sync_slave_enabled &#x3D; 1</span><br><span class="line"># 半同步复制超时时间设置</span><br><span class="line">rpl_semi_sync_master_timeout &#x3D; 1000</span><br><span class="line"># 复制模式(保持系统默认)</span><br><span class="line">rpl_semi_sync_master_wait_point &#x3D; AFTER_SYNC</span><br><span class="line"># 后端只要有一台收到日志并写入 relaylog 就算成功</span><br><span class="line">rpl_semi_sync_master_wait_slave_count &#x3D; 1</span><br><span class="line"># 多线程复制</span><br><span class="line">slave_parallel_type &#x3D; logical_clock</span><br><span class="line">slave_parallel_workers &#x3D; 4</span><br><span class="line"></span><br><span class="line">### 连接数限制</span><br><span class="line">max_connections &#x3D; 1500</span><br><span class="line"># 验证密码超过20次拒绝连接</span><br><span class="line">max_connect_errors &#x3D; 20</span><br><span class="line"># back_log值指出在mysql暂时停止回答新请求之前的短时间内多少个请求可以被存在堆栈中</span><br><span class="line"># 也就是说，如果MySql的连接数达到max_connections时，新来的请求将会被存在堆栈中</span><br><span class="line"># 以等待某一连接释放资源，该堆栈的数量即back_log，如果等待连接的数量超过back_log</span><br><span class="line"># 将不被授予连接资源</span><br><span class="line">back_log &#x3D; 500</span><br><span class="line">open_files_limit &#x3D; 65535</span><br><span class="line"># 服务器关闭交互式连接前等待活动的秒数</span><br><span class="line">interactive_timeout &#x3D; 3600</span><br><span class="line"># 服务器关闭非交互连接之前等待活动的秒数</span><br><span class="line">wait_timeout &#x3D; 3600</span><br><span class="line"></span><br><span class="line">### 内存分配</span><br><span class="line"># 指定表高速缓存的大小。每当MySQL访问一个表时，如果在表缓冲区中还有空间</span><br><span 
class="line"># 该表就被打开并放入其中，这样可以更快地访问表内容</span><br><span class="line">table_open_cache &#x3D; 1024</span><br><span class="line"># 为每个session 分配的内存, 在事务过程中用来存储二进制日志的缓存</span><br><span class="line">binlog_cache_size &#x3D; 2M</span><br><span class="line"># 在内存的临时表最大大小</span><br><span class="line">tmp_table_size &#x3D; 128M</span><br><span class="line"># 创建内存表的最大大小(保持系统默认, 不允许创建过大的内存表)</span><br><span class="line"># 如果有需求当做缓存来用, 可以适当调大此值</span><br><span class="line">max_heap_table_size &#x3D; 16M</span><br><span class="line"># 顺序读, 读入缓冲区大小设置</span><br><span class="line"># 全表扫描次数多的话, 可以调大此值</span><br><span class="line">read_buffer_size &#x3D; 1M</span><br><span class="line"># 随机读, 读入缓冲区大小设置</span><br><span class="line">read_rnd_buffer_size &#x3D; 8M</span><br><span class="line"># 高并发的情况下, 需要减小此值到64K-128K</span><br><span class="line">sort_buffer_size &#x3D; 1M</span><br><span class="line"># 每个查询最大的缓存大小是1M, 最大缓存64M 数据</span><br><span class="line">query_cache_size &#x3D; 64M</span><br><span class="line">query_cache_limit &#x3D; 1M</span><br><span class="line"># 提到 join 的效率</span><br><span class="line">join_buffer_size &#x3D; 16M</span><br><span class="line"># 线程连接重复利用</span><br><span class="line">thread_cache_size &#x3D; 64</span><br><span class="line"></span><br><span class="line">### InnoDB 优化</span><br><span class="line">## 内存利用方面的设置</span><br><span class="line"># 数据缓冲区</span><br><span class="line">innodb_buffer_pool_size&#x3D;2G</span><br><span class="line">## 日志方面设置</span><br><span class="line"># 事务日志大小</span><br><span class="line">innodb_log_file_size &#x3D; 256M</span><br><span class="line"># 日志缓冲区大小</span><br><span class="line">innodb_log_buffer_size &#x3D; 4M</span><br><span class="line"># 事务在内存中的缓冲</span><br><span class="line">innodb_log_buffer_size &#x3D; 3M</span><br><span class="line"># 主库保持系统默认, 事务立即写入磁盘, 不会丢失任何一个事务</span><br><span class="line">innodb_flush_log_at_trx_commit &#x3D; 1</span><br><span class="line"># mysql 的数据文件设置, 初始100, 以10M 
自动扩展</span><br><span class="line">innodb_data_file_path &#x3D; ibdata1:100M:autoextend</span><br><span class="line"># 为提高性能, MySQL可以以循环方式将日志文件写到多个文件</span><br><span class="line">innodb_log_files_in_group &#x3D; 3</span><br><span class="line">##其他设置</span><br><span class="line"># 如果库里的表特别多的情况，请增加此值</span><br><span class="line">innodb_open_files &#x3D; 800</span><br><span class="line"># 为每个 InnoDB 表分配单独的表空间</span><br><span class="line">innodb_file_per_table &#x3D; 1</span><br><span class="line"># InnoDB 使用后台线程处理数据页上写 I&#x2F;O（输入）请求的数量</span><br><span class="line">innodb_write_io_threads &#x3D; 8</span><br><span class="line"># InnoDB 使用后台线程处理数据页上读 I&#x2F;O（输出）请求的数量</span><br><span class="line">innodb_read_io_threads &#x3D; 8</span><br><span class="line"># 启用单独的线程来回收无用的数据</span><br><span class="line">innodb_purge_threads &#x3D; 1</span><br><span class="line"># 脏数据刷入磁盘(先保持系统默认, swap 过多使用时, 调小此值, 调小后, 与磁盘交互增多, 性能降低)</span><br><span class="line"># innodb_max_dirty_pages_pct &#x3D; 90</span><br><span class="line"># 事务等待获取资源等待的最长时间</span><br><span class="line">innodb_lock_wait_timeout &#x3D; 120</span><br><span class="line"># 开启 InnoDB 严格检查模式, 不警告, 直接报错</span><br><span class="line">innodb_strict_mode&#x3D;1</span><br><span class="line"># 允许列索引最大达到3072</span><br><span class="line"> innodb_large_prefix &#x3D; on</span><br><span class="line"></span><br><span class="line">[mysqldump]</span><br><span class="line"># 开启快速导出</span><br><span class="line">quick</span><br><span class="line">default-character-set &#x3D; utf8mb4</span><br><span class="line">max_allowed_packet &#x3D; 256M</span><br><span class="line"></span><br><span class="line">[mysql]</span><br><span class="line"># 开启 tab 补全</span><br><span class="line">auto-rehash</span><br><span class="line">default-character-set &#x3D; utf8mb4</span><br><span class="line"></span><br></pre></td></tr></table></figure>]]></content>
    
    
      
      
    <summary type="html">&lt;blockquote&gt;
&lt;p&gt;MySQL5.7 在 5.6 版本的基础之上做了大量的优化, 本篇文章开篇将重点围绕经过优化的基于 GTID 的多线程复制和半同步复制的特性介绍, 后续会持续增加 MySQL5.7 的调优参数&lt;/p&gt;
&lt;/blockquote&gt;
&lt;figure c</summary>
      
    
    
    
    <category term="Mysql" scheme="https://imszz.com/categories/Mysql/"/>
    
    
    <category term="mysql" scheme="https://imszz.com/tags/mysql/"/>
    
  </entry>
  
  <entry>
    <title>mysql 安装5.7</title>
    <link href="https://imszz.com/p/d47d8d30/"/>
    <id>https://imszz.com/p/d47d8d30/</id>
    <published>2021-01-25T16:00:00.000Z</published>
    <updated>2021-01-26T12:46:25.000Z</updated>
    
    <content type="html"><![CDATA[<h1 id="MySQL编译和安装"><a href="#MySQL编译和安装" class="headerlink" title="MySQL编译和安装"></a>MySQL编译和安装</h1><p>##在<code>CentOS7</code>中编译安装<code>MySQL 5.7.21</code>. 依赖和源码包 安装相关的依赖: </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">yum install gcc gcc-c++ ncurses ncurses-devel cmake bison openssl-devel -y </span><br><span class="line">yum install make cmake gcc gcc-c++ bison bison-devel ncurses ncurses-devel autoconf automake</span><br></pre></td></tr></table></figure><p>下载<code>MySQL 5.7.32</code>源码包和依赖<code>boost</code>, <code>MySQL 5.7.32</code>依赖<code>boost 1.59.0</code>: </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">curl -o boost_1_59_0.tar.gz https:&#x2F;&#x2F;jaist.dl.sourceforge.net&#x2F;project&#x2F;boost&#x2F;boost&#x2F;1.59.0&#x2F;boost_1_59_0.tar.gz </span><br><span class="line">#curl -o mysql-5.7.32.tar.gz https:&#x2F;&#x2F;dev.mysql.com&#x2F;get&#x2F;Downloads&#x2F;MySQL-5.7&#x2F;mysql-5.7.32.tar.gz</span><br><span class="line">如果拉取不到使用下方下载地址 ：</span><br><span class="line">https:&#x2F;&#x2F;downloads.mysql.com&#x2F;archives&#x2F;community&#x2F;</span><br></pre></td></tr></table></figure><p>解压下载的包: </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"># 进入下载的路径 </span><br><span class="line"># 解压到&#x2F;usr&#x2F;local&#x2F;目录</span><br><span class="line">tar -xzvf boost_1_59_0.tar.gz -C &#x2F;usr&#x2F;local&#x2F; </span><br><span class="line"># 解压到当前目录 </span><br><span 
class="line">tar -xzvf mysql-5.7.32.tar.gz</span><br></pre></td></tr></table></figure><h1 id="创建用户和组"><a href="#创建用户和组" class="headerlink" title="创建用户和组"></a>创建用户和组</h1><p>创建<code>MySQL</code>用户和组, 并且用户不能登陆: </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">groupadd -r mysql &amp;&amp; useradd -r -g mysql -s &#x2F;sbin&#x2F;nologin -M mysql </span><br></pre></td></tr></table></figure><h1 id="创建相关的目录"><a href="#创建相关的目录" class="headerlink" title="创建相关的目录"></a>创建相关的目录</h1><h3 id="创建数据目录"><a href="#创建数据目录" class="headerlink" title="创建数据目录"></a>创建数据目录</h3><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">mkdir -p &#x2F;home&#x2F;mysql&#x2F;data</span><br><span class="line"></span><br><span class="line">mkdir -p &#x2F;home&#x2F;mysql&#x2F;logs</span><br><span class="line"></span><br><span class="line">mkdir -p &#x2F;usr&#x2F;local&#x2F;mysql </span><br><span class="line"></span><br><span class="line">mkdir -p &#x2F;home&#x2F;mysql&#x2F;temp</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">chown -Rf mysql:mysql &#x2F;usr&#x2F;local&#x2F;mysql</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">chown -Rf mysql:mysql &#x2F;home&#x2F;mysql</span><br></pre></td></tr></table></figure><h1 id="预编译"><a href="#预编译" class="headerlink" title="预编译"></a>预编译</h1><p>使用各种参数, 预编译源代码. 
进入解压的<code>MySQL</code>源码目录, 执行以下命令: </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">cmake -DCMAKE_INSTALL_PREFIX&#x3D;&#x2F;usr&#x2F;local&#x2F;mysql -DMYSQL_DATADIR&#x3D;&#x2F;home&#x2F;mysql&#x2F;data -DSYSCONFDIR&#x3D;&#x2F;etc -DMYSQL_UNIX_ADDR&#x3D;&#x2F;usr&#x2F;local&#x2F;mysql&#x2F;mysqld.sock -DEXTRA_CHARSETS&#x3D;all -DDEFAULT_CHARSET&#x3D;utf8mb4 -DDEFAULT_COLLATION&#x3D;utf8mb4_unicode_ci -DWITH_MYISAM_STORAGE_ENGINE&#x3D;1 -DWITH_INNOBASE_STORAGE_ENGINE&#x3D;1 -DWITH_PARTITION_STORAGE_ENGINE&#x3D;1 -DWITH_ARCHIVE_STORAGE_ENGINE&#x3D;1 -DWITH_BLACKHOLE_STORAGE_ENGINE&#x3D;1 -DENABLED_LOCAL_INFILE&#x3D;1 -DENABLED_PROFILING&#x3D;1 -DMYSQL_TCP_PORT&#x3D;3306 -DWITH_DEBUG&#x3D;0 -DDOWNLOAD_BOOST&#x3D;1 -DWITH_BOOST&#x3D;&#x2F;usr&#x2F;local&#x2F;boost_1_59_0</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">DCMAKE_INSTALL_PREFIX&#x3D;&#x2F;usr&#x2F;local&#x2F;mysql :安装路径</span><br><span class="line">DMYSQL_DATADIR&#x3D;&#x2F;home&#x2F;mysql&#x2F;data :数据文件存放位置</span><br><span class="line">DSYSCONFDIR&#x3D;&#x2F;etc :my.cnf路径</span><br><span class="line">DMYSQL_UNIX_ADDR&#x3D;&#x2F;usr&#x2F;local&#x2F;mysql&#x2F;mysqld.sock :连接数据库socket路径 </span><br><span class="line">DEXTRA_CHARSETS&#x3D;all :安装所有的字符集</span><br><span 
class="line">DDEFAULT_CHARSET&#x3D;utf8mb4 :默认字符</span><br><span class="line">DDEFAULT_COLLATION&#x3D;utf8mb4_unicode_ci :排序集</span><br><span class="line">DWITH_MYISAM_STORAGE_ENGINE&#x3D;1 :支持MyIASM引擎</span><br><span class="line">DWITH_INNOBASE_STORAGE_ENGINE&#x3D;1 :支持InnoDB引擎</span><br><span class="line">DWITH_PARTITION_STORAGE_ENGINE&#x3D;1 :安装支持数据库分区</span><br><span class="line">DENABLED_LOCAL_INFILE&#x3D;1 :允许从本地导入数据</span><br><span class="line">DENABLED_PROFILING&#x3D;1 :</span><br><span class="line">DMYSQL_TCP_PORT&#x3D;3306 :端口</span><br><span class="line">DWITH_DEBUG&#x3D;0 :</span><br><span class="line">DDOWNLOAD_BOOST&#x3D;1 :允许下载</span><br><span class="line">DWITH_BOOST&#x3D;&#x2F;usr&#x2F;local&#x2F;boost_1_59_0 :本地boost路径 </span><br></pre></td></tr></table></figure><h1 id="编译安装"><a href="#编译安装" class="headerlink" title="编译安装"></a>编译安装</h1><p>预编译完成后, 执行下面的命令编译, 安装: </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># 指定CPU数量编译 </span><br><span class="line">make -j &#96;grep processor &#x2F;proc&#x2F;cpuinfo | wc -l&#96; &amp;&amp; make install</span><br></pre></td></tr></table></figure><h1 id="添加开机自启"><a href="#添加开机自启" class="headerlink" title="添加开机自启"></a>添加开机自启</h1><p>对目录修改权限, 添加<code>service/systemd</code>服务: </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">chown -R mysql:mysql &#x2F;usr&#x2F;local&#x2F;mysql </span><br><span class="line">cp &#x2F;usr&#x2F;local&#x2F;mysql&#x2F;support-files&#x2F;mysql.server &#x2F;etc&#x2F;init.d&#x2F;mysql</span><br><span class="line">chmod +x &#x2F;etc&#x2F;init.d&#x2F;mysql</span><br><span class="line"># 开机自启 
</span><br><span class="line">chkconfig --add mysql</span><br><span class="line">chkconfig mysql on </span><br></pre></td></tr></table></figure><h1 id="环境变量"><a href="#环境变量" class="headerlink" title="环境变量"></a>环境变量</h1><p>将<code>/usr/local/mysql/bin</code>添加进入<code>环境变量</code>, 或者直接使用<code>软链接</code>的方式链到<code>/usr/local/bin</code>下: </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"># 添加到环境变量 </span><br><span class="line">echo &quot;&quot; &gt;&gt; &#x2F;etc&#x2F;bashrc </span><br><span class="line">echo &quot;export PATH&#x3D;&#x2F;usr&#x2F;local&#x2F;mysql&#x2F;bin:$PATH&quot; &gt;&gt; &#x2F;etc&#x2F;bashrc </span><br><span class="line">echo &quot;&quot; &gt;&gt; &#x2F;etc&#x2F;bashrc </span><br><span class="line">source ~&#x2F;.bashrc </span><br><span class="line"></span><br><span class="line"># 使用软链接 </span><br><span class="line">ln -s &#x2F;usr&#x2F;local&#x2F;mysql&#x2F;bin&#x2F;* &#x2F;usr&#x2F;local&#x2F;bin&#x2F;</span><br></pre></td></tr></table></figure><h1 id="初始化数据库"><a href="#初始化数据库" class="headerlink" title="初始化数据库"></a>初始化数据库</h1><p>以上都完成后, 还不能启动MySQL, 如果非要启动, 会报错. 
需要初始化数据库:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">&#x2F;usr&#x2F;local&#x2F;mysql&#x2F;bin&#x2F;mysqld --initialize --user&#x3D;mysql --basedir&#x3D;&#x2F;usr&#x2F;local&#x2F;mysql --datadir&#x3D;&#x2F;home&#x2F;mysql&#x2F;data</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">--user :指定用户 </span><br><span class="line">--basedir :mysql所在目录 </span><br><span class="line">--datadir :mysql数据库和表所在的目录,以及PID文件 </span><br></pre></td></tr></table></figure><p>初始化后, 会有一行提示, 冒号后面的是初始密码<code>root@localhost: password</code>:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">A temporary password is generated for root@localhost: xKefZvib13)5 </span><br></pre></td></tr></table></figure><h1 id="启动服务"><a href="#启动服务" class="headerlink" title="启动服务"></a>启动服务</h1><p>以上都配置完成, 就可以启动服务了: </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"># 使用service </span><br><span class="line">service mysql start </span><br><span class="line"></span><br><span class="line"># 使用systemd </span><br><span class="line">systemctl daemon-reload </span><br><span class="line">systemctl start mysql</span><br></pre></td></tr></table></figure><h1 id="修改密码"><a href="#修改密码" class="headerlink" title="修改密码"></a>修改密码</h1><p>将初始密码修改成自己的密码, 直接在<code>shell</code>中输入命令: <code>mysqladmin -uroot -p&#39;old_pass&#39; password &#39;new_pass&#39;</code> </p><h1 
id="配置文件"><a href="#配置文件" class="headerlink" title="配置文件"></a>配置文件</h1><p>默认<code>MySQL不</code>需要配置文件, 编译时已经配置好了, 但是也可以使用配置文件, 指定<code>log</code>的位置, 编辑<code>vim /etc/my.cnf</code>, 将以下内容添加到文件中:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span 
class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span 
class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br></pre></td><td class="code"><pre><span class="line">[client]</span><br><span class="line"></span><br><span class="line">port &#x3D; 3306</span><br><span class="line"></span><br><span class="line">socket &#x3D; &#x2F;usr&#x2F;local&#x2F;mysql&#x2F;mysql.sock</span><br><span class="line"></span><br><span class="line">default-character-set&#x3D;utf8mb4</span><br><span class="line">[mysql]</span><br><span class="line">default-character-set&#x3D;utf8mb4</span><br><span class="line">[mysqld]</span><br><span class="line"></span><br><span class="line">character-set-client-handshake&#x3D;FALSE</span><br><span class="line"></span><br><span class="line">character-set-server&#x3D;utf8mb4</span><br><span class="line"></span><br><span class="line">collation-server&#x3D;utf8mb4_unicode_ci</span><br><span class="line"></span><br><span class="line">init_connect&#x3D;&#39;SET NAMES utf8mb4&#39;</span><br><span class="line"></span><br><span class="line">#character-set-server &#x3D; utf8</span><br><span class="line"></span><br><span class="line">#collation-server &#x3D; utf8_general_ci</span><br><span class="line"></span><br><span class="line">skip-external-locking</span><br><span class="line"></span><br><span class="line">skip-name-resolve</span><br><span class="line"></span><br><span class="line">user &#x3D; mysql</span><br><span class="line"></span><br><span class="line">port &#x3D; 3306</span><br><span class="line"></span><br><span class="line">basedir &#x3D; &#x2F;usr&#x2F;local&#x2F;mysql</span><br><span class="line"></span><br><span class="line">datadir &#x3D; &#x2F;home&#x2F;mysql&#x2F;data</span><br><span class="line"></span><br><span class="line">tmpdir &#x3D; 
&#x2F;home&#x2F;mysql&#x2F;temp</span><br><span class="line"></span><br><span class="line"># server_id &#x3D; .....</span><br><span class="line"></span><br><span class="line">socket &#x3D; &#x2F;usr&#x2F;local&#x2F;mysql&#x2F;mysql.sock</span><br><span class="line"></span><br><span class="line">log-error &#x3D; &#x2F;home&#x2F;mysql&#x2F;logs&#x2F;mysql_error.log</span><br><span class="line"></span><br><span class="line">pid-file &#x3D; &#x2F;home&#x2F;mysql&#x2F;mysql.pid</span><br><span class="line"></span><br><span class="line">open_files_limit &#x3D; 10240</span><br><span class="line"></span><br><span class="line">back_log &#x3D; 600</span><br><span class="line"></span><br><span class="line">max_connections&#x3D;500</span><br><span class="line"></span><br><span class="line">max_connect_errors &#x3D; 6000</span><br><span class="line"></span><br><span class="line">wait_timeout&#x3D;605800</span><br><span class="line"></span><br><span class="line">#open_tables &#x3D; 600</span><br><span class="line"></span><br><span class="line">#table_cache &#x3D; 650</span><br><span class="line"></span><br><span class="line">#opened_tables &#x3D; 630</span><br><span class="line"></span><br><span class="line">max_allowed_packet &#x3D; 32M</span><br><span class="line"></span><br><span class="line">sort_buffer_size &#x3D; 4M</span><br><span class="line"></span><br><span class="line">join_buffer_size &#x3D; 4M</span><br><span class="line"></span><br><span class="line">thread_cache_size &#x3D; 300</span><br><span class="line"></span><br><span class="line">query_cache_type &#x3D; 1</span><br><span class="line"></span><br><span class="line">query_cache_size &#x3D; 256M</span><br><span class="line"></span><br><span class="line">query_cache_limit &#x3D; 2M</span><br><span class="line"></span><br><span class="line">query_cache_min_res_unit &#x3D; 16k</span><br><span class="line"></span><br><span class="line">tmp_table_size &#x3D; 256M</span><br><span class="line"></span><br><span 
class="line">max_heap_table_size &#x3D; 256M</span><br><span class="line"></span><br><span class="line">key_buffer_size &#x3D; 256M</span><br><span class="line"></span><br><span class="line">read_buffer_size &#x3D; 1M</span><br><span class="line"></span><br><span class="line">read_rnd_buffer_size &#x3D; 16M</span><br><span class="line"></span><br><span class="line">bulk_insert_buffer_size &#x3D; 64M</span><br><span class="line"></span><br><span class="line">lower_case_table_names&#x3D;1</span><br><span class="line"></span><br><span class="line">default-storage-engine &#x3D; INNODB</span><br><span class="line"></span><br><span class="line">innodb_buffer_pool_size &#x3D;2G</span><br><span class="line"></span><br><span class="line">innodb_log_buffer_size &#x3D; 32M</span><br><span class="line"></span><br><span class="line">innodb_log_file_size &#x3D; 128M</span><br><span class="line"></span><br><span class="line">innodb_flush_method &#x3D;O_DIRECT</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line">#####################</span><br><span class="line"></span><br><span class="line">#thread_concurrency &#x3D; 32  # not supported in MySQL 5.7</span><br><span class="line"></span><br><span class="line">long_query_time&#x3D; 2</span><br><span class="line"></span><br><span class="line">slow-query-log&#x3D;on</span><br><span class="line"></span><br><span class="line">slow-query-log-file &#x3D;&#x2F;home&#x2F;mysql&#x2F;logs&#x2F;mysql-slow.log</span><br><span class="line"></span><br><span class="line">[mysqldump]</span><br><span class="line"></span><br><span class="line">quick</span><br><span class="line"></span><br><span class="line">max_allowed_packet &#x3D; 32M</span><br><span class="line"></span><br><span class="line">[mysqld_safe]</span><br><span class="line"></span><br><span class="line">log-error&#x3D;&#x2F;var&#x2F;log&#x2F;mysqld.log</span><br><span class="line"></span><br><span 
class="line">pid-file&#x3D;&#x2F;var&#x2F;run&#x2F;mysqld&#x2F;mysqld.pid</span><br></pre></td></tr></table></figure>]]></content>
    
    
      
      
    <summary type="html">&lt;h1 id=&quot;MySQL编译和安装&quot;&gt;&lt;a href=&quot;#MySQL编译和安装&quot; class=&quot;headerlink&quot; title=&quot;MySQL编译和安装&quot;&gt;&lt;/a&gt;MySQL编译和安装&lt;/h1&gt;&lt;p&gt;##在&lt;code&gt;CentOS7&lt;/code&gt;中编译安装&lt;code&gt;MySQ</summary>
      
    
    
    
    <category term="Mysql" scheme="https://imszz.com/categories/Mysql/"/>
    
    
    <category term="mysql" scheme="https://imszz.com/tags/mysql/"/>
    
  </entry>
  
  <entry>
    <title>Changing the MySQL data storage path</title>
    <link href="https://imszz.com/p/57aae221/"/>
    <id>https://imszz.com/p/57aae221/</id>
    <published>2021-01-25T16:00:00.000Z</published>
    <updated>2021-01-26T12:46:25.000Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>When MySQL was first installed, the database directory was placed on the system disk (the first disk). After a while the database grew and nearly filled the 20 GB of available space, so the data had to be moved elsewhere. The simple steps are below.</p></blockquote><h1 id="检查mysql数据库存放目录"><a href="#检查mysql数据库存放目录" class="headerlink" title="检查mysql数据库存放目录"></a>Check the MySQL data directory</h1><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">mysql -u root -prootadmin</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"># inside the database shell</span><br><span class="line"></span><br><span class="line">show variables like &#39;%dir%&#39;;</span><br><span class="line"></span><br><span class="line"># check the data storage path</span><br><span class="line"></span><br><span class="line">(look at the path shown in the datadir row)</span><br><span class="line"></span><br><span class="line">quit;</span><br></pre></td></tr></table></figure><h1 id="停止mysql服务"><a href="#停止mysql服务" class="headerlink" title="停止mysql服务"></a>Stop the MySQL service</h1><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">service mysql stop</span><br></pre></td></tr></table></figure><h1 id="创建新的数据库存放目录"><a href="#创建新的数据库存放目录" class="headerlink" title="创建新的数据库存放目录"></a>Create a new data directory</h1><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">mkdir &#x2F;data&#x2F;mysql</span><br></pre></td></tr></table></figure><h1 id="移动-复制之前存放数据库目录文件，到新的数据库存放目录位置"><a href="#移动-复制之前存放数据库目录文件，到新的数据库存放目录位置" class="headerlink" 
title="移动/复制之前存放数据库目录文件，到新的数据库存放目录位置"></a>Move/copy the old database files to the new data directory</h1><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">cp -R &#x2F;usr&#x2F;local&#x2F;mysql&#x2F;data&#x2F;* &#x2F;data&#x2F;mysql&#x2F; # or: mv &#x2F;usr&#x2F;local&#x2F;mysql&#x2F;data&#x2F;* &#x2F;data&#x2F;mysql</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="修改mysql数据库目录权限以及配置文件"><a href="#修改mysql数据库目录权限以及配置文件" class="headerlink" title="修改mysql数据库目录权限以及配置文件"></a>Update the data directory ownership and configuration files</h1><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">chown mysql:mysql -R &#x2F;data&#x2F;mysql&#x2F;</span><br><span class="line"></span><br><span class="line">vim &#x2F;etc&#x2F;my.cnf</span><br><span class="line"></span><br><span class="line">datadir&#x3D;&#x2F;data&#x2F;mysql (set to the new data directory)</span><br><span class="line"></span><br><span class="line">vim &#x2F;etc&#x2F;init.d&#x2F;mysql</span><br><span class="line"></span><br><span class="line">datadir&#x3D;&#x2F;data&#x2F;mysql</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="启动数据库服务"><a href="#启动数据库服务" class="headerlink" title="启动数据库服务"></a>Start the database service</h1><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">service mysql start</span><br></pre></td></tr></table></figure><p>Note: with the six simple steps above, the database directory path has been changed successfully.</p><blockquote><p>Environment: CentOS Linux release 7.8.2003 (Core), mysql-5.7.32 
compiled from source.</p></blockquote>]]></content>
    
    
      
      
    <summary type="html">&lt;blockquote&gt;
&lt;p&gt;When MySQL was first installed, the database directory was placed on the system disk (the first disk). After a while the database grew and nearly filled the 20 GB of available space, so the data had to be moved elsewhere. The simple steps are below.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1 id=&quot;检查my</summary>
      
    
    
    
    <category term="Mysql" scheme="https://imszz.com/categories/Mysql/"/>
    
    
    <category term="mysql" scheme="https://imszz.com/tags/mysql/"/>
    
  </entry>
  
</feed>
