Continuing from the previous post, let's begin the drill for losing two replicas.
From the table's region distribution map, we can see that taking down the two hosts tikv2 (135) and tikv5 (138) would not affect the cluster at all, because only one region replica sits on those two machines. But that only holds while the data volume is small; as the data grows, PD scheduling will redistribute the regions. We will not simulate losing a single replica here. Instead, we take down TiKV1 (134) and TiKV3 (136) at the same time, which causes two replicas of a region to be lost:
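As an aside, one way to reproduce such a distribution map from the command line is to list the table's regions through the TiDB status port and then inspect a region's peers in pd-ctl. This is only a sketch: it assumes a TiDB server on 172.16.134.133 with the default status port 10080, and the region ID in the second command is illustrative.
[root@tidb1 ~]# curl http://172.16.134.133:10080/tables/sbtest2/t_user/regions
[root@tidb1 ~]# /root/tidb-v4.0.0-linux-amd64/bin/pd-ctl -u http://172.16.134.133:2379 region 3080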
First, check the state of the test table before the crash:
MySQL [sbtest2]> select count(*) from t_user;
+----------+
| count(*) |
+----------+
| 3000000 |
+----------+
1 row in set (6.98 sec)
The test table after taking down TiKV1 (134) and TiKV3 (136) at the same time:
MySQL [sbtest2]> select count(*) from t_user;
ERROR 9005 (HY000): Region is unavailable
An ordinary SQL statement now fails with a "Region is unavailable" error.
Check the store_id of the two downed machines:
[root@tidb1 bin]# /root/tidb-v4.0.0-linux-amd64/bin/pd-ctl -i -u http://172.16.134.133:2379
? store
…
{
"store": {
"id": 5,
"address": "172.16.134.136:20160",
"labels": [
{
"key": "host",
"value": "tikv3"
}
],
"version": "4.0.0-rc",
"status_address": "172.16.134.136:20180",
"git_hash": "f45d0c963df3ee4b1011caf5eb146cacd1fbbad8",
"start_timestamp": 1594632461,
"binary_path":"/data1/tidb-deploy/tikv-20160/bin/tikv-server",
"last_heartbeat": 1594700897622993541,
"state_name": "Disconnected"
},…
"{
"store": {
"id": 4,
"address": "172.16.134.134:20160",
"labels": [
{
"key": "host",
"value": "tikv1"
}
],
"version": "4.0.0-rc",
"status_address": "172.16.134.134:20180",
"git_hash": "f45d0c963df3ee4b1011caf5eb146cacd1fbbad8",
"start_timestamp": 1594632462,
"binary_path":"/data1/tidb-deploy/tikv-20160/bin/tikv-server",
"last_heartbeat": 1594700897744383603,
"state_name": "Disconnected"
},
Store IDs 4 and 5 are in the "Disconnected" state; once they have been unreachable for longer than max-store-down-time (30 minutes by default, as shown in the config below), the state changes to "Down".
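To keep an eye on this state transition without staying in the interactive shell, the store output can be filtered. A sketch, assuming jq is installed on the control machine and using pd-ctl's single-command mode:
[root@tidb1 bin]# ./pd-ctl -u http://172.16.134.133:2379 store | jq '.stores[].store | {id, address, state_name}'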
Use pd-ctl config show to read region-schedule-limit, replica-schedule-limit, leader-schedule-limit, and merge-schedule-limit:
[root@tidb1 bin]# ./pd-ctl -i -u http://172.16.134.133:2379
? config show
{
"replication": {
"enable-placement-rules": "false",
"location-labels": "host",
"max-replicas": 3,
"strictly-match-label": "false"
},
"schedule": {
"enable-cross-table-merge": "false",
"enable-debug-metrics": "false",
"enable-location-replacement": "true",
"enable-make-up-replica": "true",
"enable-one-way-merge": "false",
"enable-remove-down-replica": "true",
"enable-remove-extra-replica": "true",
"enable-replace-offline-replica": "true",
"high-space-ratio": 0.7,
"hot-region-cache-hits-threshold": 3,
"hot-region-schedule-limit": 4,
"leader-schedule-limit": 4,
"leader-schedule-policy": "count",
"low-space-ratio": 0.8,
"max-merge-region-keys": 200000,
"max-merge-region-size": 20,
"max-pending-peer-count": 16,
"max-snapshot-count": 3,
"max-store-down-time": "30m0s",
"merge-schedule-limit": 8,
"patrol-region-interval": "100ms",
"region-schedule-limit": 2048,
"replica-schedule-limit": 64,
"scheduler-max-waiting-operator": 5,
"split-merge-interval": "1h0m0s",
"store-balance-rate": 15,
"store-limit-mode": "manual",
"tolerant-size-ratio": 0
}
}
Use pd-ctl config set to set these four parameters to 0:
? config set region-schedule-limit 0
Success!
? config set replica-schedule-limit 0
Success!
? config set leader-schedule-limit 0
Success!
? config set merge-schedule-limit 0
Success!
Disabling scheduling keeps possible anomalies during recovery to a minimum; the relevant schedulers must stay disabled for the whole fault-handling window.
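Before zeroing the limits, it is worth recording the current values so they can be restored verbatim once recovery is done. A minimal sketch using pd-ctl's single-command mode and a plain grep:
[root@tidb1 bin]# ./pd-ctl -u http://172.16.134.133:2379 config show | grep schedule-limit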
Use pd-ctl to find the Regions that have at least half of their replicas on the failed nodes, and record their IDs (the failed nodes are store IDs 4 and 5):
? region --jq=".regions[] | {id: .id, peer_stores: [.peers[].store_id] | select(length as $total | map(if .==(4,5) then . else empty end) | length>=$total-length)}"
{"id":3080,"peer_stores":[4,6,5]}
{"id":18,"peer_stores":[4,5,6]}
{"id":3084,"peer_stores":[4,6,5]}
{"id":75,"peer_stores":[4,5,6]}
{"id":34,"peer_stores":[6,4,5]}
{"id":4005,"peer_stores":[4,6,5]}
{"id":4009,"peer_stores":[5,6,4]}
{"id":83,"peer_stores":[4,5,6]}
{"id":3076,"peer_stores":[4,5,6]}
{"id":4013,"peer_stores":[5,4,6]}
{"id":10,"peer_stores":[4,6,5]}
{"id":26,"peer_stores":[4,6,5]}
{"id":59,"peer_stores":[4,5,6]}
{"id":3093,"peer_stores":[4,5,6]}
Both of the test table's region IDs appear in the list (the filter keeps a region when at least half of its peers sit on stores 4 and 5); the other two regions lost only one replica each, so they do not appear.
Stop TiKV on the remaining healthy KV nodes:
[root@tidb1 bin]# tiup cluster stop tidb-test -R=tikv
Starting component `cluster`: /root/.tiup/components/cluster/v0.6.1/cluster stop tidb-test -R=tikv
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.133
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.133
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.134
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.135
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.136
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.137
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.138
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.133
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.133
+ [Parallel] - UserSSH: user=tidb, host=172.16.134.133
+ [ Serial ] - ClusterOperate: operation=StopOperation, options={Roles:[tikv] Nodes:[] Force:false SSHTimeout:5 OptTimeout:60 APITimeout:300}
Stopping component tikv
Stopping instance 172.16.134.138
Stopping instance 172.16.134.134
Stopping instance 172.16.134.135
Stopping instance 172.16.134.136
Stopping instance 172.16.134.137
Stop tikv 172.16.134.135:20160 success
Stop tikv 172.16.134.138:20160 success
Stop tikv 172.16.134.137:20160 success
Run the following on every healthy node (make sure TiKV has already been stopped on the healthy nodes before doing this):
[root@tidb3 bin]# ./tikv-ctl --db /data1/tidb-data/tikv-20160/db unsafe-recover remove-fail-stores -s 4,5 --all-regions
removing stores [4, 5] from configurations...
success
[root@tidb5 bin]# ./tikv-ctl --db /data1/tidb-data/tikv-20160/db unsafe-recover remove-fail-stores -s 4,5 --all-regions
removing stores [4, 5] from configurations...
success
[root@tidb6 bin]# ./tikv-ctl --db /data1/tidb-data/tikv-20160/db unsafe-recover remove-fail-stores -s 4,5 --all-regions
removing stores [4, 5] from configurations...
success
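On a larger cluster, running the same command host by host gets tedious. A loop like the following sketch can help; it assumes password-less SSH as the tidb user and that tikv-ctl has been copied into the deploy bin directory on every node, which tiup does not do by default:
for h in 172.16.134.135 172.16.134.137 172.16.134.138; do
  ssh tidb@$h '/data1/tidb-deploy/tikv-20160/bin/tikv-ctl --db /data1/tidb-data/tikv-20160/db unsafe-recover remove-fail-stores -s 4,5 --all-regions'
done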
If only a few Regions are affected, you can instead remove just those Regions' peers that sit on the failed nodes. On the machines that did not lose power and that hold the remaining replicas of these Regions, run:
tikv-ctl --db /path/to/tikv-data/db unsafe-recover remove-fail-stores -s <s1,s2> -r <r1,r2>
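For example, to repair only the test table's two affected regions, the call would look like this; the region IDs are illustrative, taken from the listing above:
[root@tidb3 bin]# ./tikv-ctl --db /data1/tidb-data/tikv-20160/db unsafe-recover remove-fail-stores -s 4,5 -r 3080,3084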
Stop the PD nodes:
[root@tidb1 ~]# tiup cluster stop tidb-test -R=pd
Starting component `cluster`: /root/.tiup/component
Restart the PD and TiKV nodes:
[root@tidb1 ~]# tiup cluster start tidb-test -R=pd,tikv
PD has to be running before you can connect to the database.
Check for Regions that have no leader (there should be none):
[root@tidb1 ~]# pd-ctl -i -u http://172.16.134.133:2379
? region --jq '.regions[] | select(has("leader") | not) | {id: .id, peer_stores: [.peers[].store_id]}'
?
No leaderless Regions were found.
Restore the scheduling parameters:
[root@tidb1 ~]# pd-ctl -i -u http://172.16.134.133:2379
? config set region-schedule-limit 2048
Success!
? config set replica-schedule-limit 64
Success!
? config set leader-schedule-limit 4
Success!
? config set merge-schedule-limit 8
Success!
Check that queries return data normally again:
MySQL [sbtest2]> select count(*) from t_user;
+----------+
| count(*) |
+----------+
| 3000000 |
+----------+
1 row in set (9.95 sec)
This completes the recovery.
我們?cè)倏纯磖egion的分布:
The Region replicas have been re-replicated and redistributed.
That wraps up the two-replica-loss drill. What happens when all three replicas are lost, and how do you recover? See you next time.
Reference: https://book.tidb.io/session3/chapter5/recover-quorum.html