Handling an Oracle Cluster Eviction Fault Caused by a Network Problem

A customer ran a RAC database with no workload partitioning across instances. Excessive private-network (interconnect) traffic brought down one of the RAC nodes, and the clusterware on the failed node then refused to start.

After taking over the case, analysis showed that the cluster would not start because HAIP could not be started. traceroute across the interconnect showed about 50% packet loss while ping was normal, which tentatively ruled out a hardware fault. After researching the issue and tuning the relevant network parameters, the cluster was brought up successfully.

Environment:

  • OS: Red Hat 7
  • DB: Oracle 12.2 RAC, non-CDB; OSWatcher not deployed
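
Throughout the analysis below, the interconnect layout and the HAIP resource state were the first things to confirm. A minimal sketch using the standard Grid Infrastructure tools (the GRID_HOME path is a placeholder for this environment):

# Run as the grid owner; the install path below is an assumption
export GRID_HOME=/u01/app/12.2.0/grid

# Interfaces registered as the cluster interconnect (HAIP binds its 169.254.x.x addresses here)
$GRID_HOME/bin/oifcfg getif

# State of the lower-stack init resources, including ora.cluster_interconnect.haip
$GRID_HOME/bin/crsctl stat res -t -init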
1. Node xxxxx1 went down because of abnormal interconnect communication
2021-12-13T16:12:32.211473+08:00
LMON (ospid: 170442) drops the IMR request from LMSK (ospid: 170520) because IMR is in progress and inst 2 is marked bad.
2021-12-13T16:12:32.211526+08:00
Please check USER trace file for more detail.
2021-12-13T16:12:32.211809+08:00
LMON (ospid: 170442) drops the IMR request from LMS6 (ospid: 170465) because IMR is in progress and inst 2 is marked bad.
2021-12-13T16:12:32.212013+08:00
USER (ospid: 170500) issues an IMR to resolve the situation
Please check USER trace file for more detail.
2021-12-13T16:12:32.212419+08:00
LMON (ospid: 170442) drops the IMR request from LMSF (ospid: 170500) because IMR is in progress and inst 2 is marked bad.
2021-12-13T16:12:32.214587+08:00
USER (ospid: 170539) issues an IMR to resolve the situation
Please check USER trace file for more detail.
2021-12-13T16:12:32.214929+08:00
LMON (ospid: 170442) drops the IMR request from LMSP (ospid: 170539) because IMR is in progress and inst 2 is marked bad.
2021-12-13T16:12:32.215318+08:00
USER (ospid: 170456) issues an IMR to resolve the situation
Please check USER trace file for more detail.
2021-12-13T16:12:32.215603+08:00
LMON (ospid: 170442) drops the IMR request from LMS4 (ospid: 170456) because IMR is in progress and inst 2 is marked bad.
Detected an inconsistent instance membership by instance 2
Errors in file /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/trace/xxxxx1_lmon_170442.trc (incident=819377):
ORA-29740: evicted by instance number 2, group incarnation 6
Incident details in: /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/incident/incdir_819377/xxxxx1_lmon_170442_i819377.trc
2021-12-13T16:12:33.213098+08:00
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2021-12-13T16:12:33.213205+08:00
Errors in file /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/trace/xxxxx1_lmon_170442.trc:
ORA-29740: evicted by instance number 2, group incarnation 6
Errors in file /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/trace/xxxxx1_lmon_170442.trc (incident=819378):
ORA-29740 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/incident/incdir_819378/xxxxx1_lmon_170442_i819378.trc
2021-12-13T16:12:33.423825+08:00
USER (ospid: 330352): terminating the instance due to error 481
2021-12-13T16:12:44.602060+08:00
Instance terminated by USER, pid = 330352
2021-12-14T00:02:47.101462+08:00
Starting ORACLE instance (normal) (OS id: 417848)
2021-12-14T00:02:47.109132+08:00
CLI notifier numLatches:131 maxDescs:21296
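
The alert log points at LMON incident 819377 for the ORA-29740 eviction. A hedged sketch for pulling the incident details with ADRCI, using the diag home shown in the log above (standard adrci syntax; adjust the home to your ADR layout):

# Inspect the ORA-29740 incident recorded by LMON (incident id taken from the alert log)
adrci exec="set home diag/rdbms/xxxxx/xxxxx1; show incident -mode detail -p \"incident_id=819377\""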
2. The cluster state then also became abnormal
2021-12-13 16:12:33.945 [ORAAGENT(170290)]CRS-5011: Check of resource "xxxxx" failed: details at "(:CLSN00007:)" in "/u01/app/grid/diag/crs/xxxxx01/crs/trace/crsd_oraagent_oracle.trc"

2021-12-13 16:16:43.717 [ORAROOTAGENT(5870)]CRS-5818: Aborted command check for resource ora.crsd. Details at (:CRSAGF00113:) {0:5:3} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd_orarootagent_root.trc.
3. The cluster was restarted, but startup failed because HAIP could not start
alert.log:
2021-12-13 20:18:59.139 [OHASD(188988)]CRS-8500: Oracle Clusterware OHASD process is starting with operating system process ID 188988
2021-12-13 20:18:59.141 [OHASD(188988)]CRS-0714: Oracle Clusterware Release 12.2.0.1.0.
2021-12-13 20:18:59.154 [OHASD(188988)]CRS-2112: The OLR service started on node xxxxx01.
2021-12-13 20:18:59.162 [OHASD(188988)]CRS-8017: location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
2021-12-13 20:18:59.162 [OHASD(188988)]CRS-1301: Oracle High Availability Service started on node xxxxx01.
2021-12-13 20:18:59.288 [ORAAGENT(189092)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 189092
2021-12-13 20:18:59.310 [CSSDAGENT(189114)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 189114
2021-12-13 20:18:59.317 [CSSDMONITOR(189121)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 189121
2021-12-13 20:18:59.322 [ORAROOTAGENT(189103)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 189103
2021-12-13 20:18:59.556 [ORAAGENT(189163)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 189163
2021-12-13 20:18:59.602 [MDNSD(189183)]CRS-8500: Oracle Clusterware MDNSD process is starting with operating system process ID 189183
2021-12-13 20:18:59.605 [EVMD(189184)]CRS-8500: Oracle Clusterware EVMD process is starting with operating system process ID 189184
2021-12-13 20:19:00.641 [GPNPD(189222)]CRS-8500: Oracle Clusterware GPNPD process is starting with operating system process ID 189222
2021-12-13 20:19:01.638 [GPNPD(189222)]CRS-2328: GPNPD started on node xxxxx01.
2021-12-13 20:19:01.654 [GIPCD(189284)]CRS-8500: Oracle Clusterware GIPCD process is starting with operating system process ID 189284
2021-12-13 20:19:15.462 [CSSDMONITOR(189500)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 189500
2021-12-13 20:19:15.633 [CSSDAGENT(189591)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 189591
2021-12-13 20:19:16.805 [OCSSD(189606)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 189606
2021-12-13 20:19:17.834 [OCSSD(189606)]CRS-1713: CSSD daemon is started in hub mode
2021-12-13 20:19:18.936 [OCSSD(189606)]CRS-1707: Lease acquisition for node xxxxx01 number 1 completed
2021-12-13 20:19:20.025 [OCSSD(189606)]CRS-1605: CSSD voting file is online: /dev/emcpowerp; details in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc.
2021-12-13 20:19:20.029 [OCSSD(189606)]CRS-1605: CSSD voting file is online: /dev/emcpowerq; details in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc.
2021-12-13 20:19:20.033 [OCSSD(189606)]CRS-1605: CSSD voting file is online: /dev/emcpowerr; details in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc.
2021-12-13 20:23:59.366 [ORAROOTAGENT(189103)]CRS-5818: Aborted command check for resource ora.storage. Details at (:CRSAGF00113:) {0:0:2} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd_orarootagent_root.trc.
2021-12-13 20:25:12.427 [ORAROOTAGENT(195387)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 195387
2021-12-13 20:29:12.450 [ORAROOTAGENT(195387)]CRS-5818: Aborted command check for resource ora.storage. Details at (:CRSAGF00113:) {0:8:2} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd_orarootagent_root.trc.
2021-12-13 20:29:15.772 [CSSDAGENT(189591)]CRS-5818: Aborted command start for resource ora.cssd. Details at (:CRSAGF00113:) {0:5:3} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd_cssdagent_root.trc.
2021-12-13 20:29:16.065 [OHASD(188988)]CRS-2757: Command Start timed out waiting for response from the resource ora.cssd. Details at (:CRSPE00221:) {0:5:3} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd.trc.
2021-12-13 20:29:16.772 [OCSSD(189606)]CRS-1656: The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc
2021-12-13 20:29:16.773 [OCSSD(189606)]CRS-1603: CSSD on node xxxxx01 has been shut down.
2021-12-13 20:29:21.773 [OCSSD(189606)]CRS-8503: Oracle Clusterware process OCSSD with operating system process ID 189606 experienced fatal signal or exception code 6.
2021-12-13T20:29:21.777920+08:00
Errors in file /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc (incident=1):
CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/grid/diag/crs/xxxxx01/crs/incident/incdir_1/ocssd_i1.trc
###################################################
ocssd.log:
2021-12-13 20:19:51.063 : CSSD:1538770688: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536816, LATS 3884953830, lastSeqNo 128536813, uniqueness 1565321051, timestamp 1607861990/3882768200
2021-12-13 20:19:51.063 : CSSD:1530885888: clssscSelect: gipcwait returned with status gipcretPosted (17)
2021-12-13 20:19:51.064 :GIPCHDEM:3374835456: gipchaDaemonProcessClientReq: processing req 0x7f4c28038cf0 type gipchaClientReqTypePublish (1)
2021-12-13 20:19:51.064 : CSSD:3396663040: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2021-12-13 20:19:51.064 :GIPCGMOD:3376412416: gipcmodGipcCallbackEndpClosed: [gipc] Endpoint close for endp 0x7f4c280337d0 [00000000000004b8] { gipcEndpoint : localAddr (dying), remoteAddr (dying), numPend 0, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x2, pidPeer 0, readyRef 0x1cdefd0, ready 1, wobj 0x7f4c28035d60, sendp (nil) status 13flags 0x2e0b860a, flags-2 0x0, usrFlags 0x0 }
2021-12-13 20:19:51.064 :GIPCHDEM:3374835456: gipchaDaemonProcessClientReq: processing req 0x7f4c70097550 type gipchaClientReqTypeDeleteName (12)
2021-12-13 20:19:51.064 : CSSD:1530885888: clssscConnect: endp 0x83e - cookie 0x1d013e0 - addr gipcha://xxxxx02:nm2_xxxxx-cluster
2021-12-13 20:19:51.064 : CSSD:1530885888: clssnmRetryConnections: Probing node xxxxx02 (2), probendp(0x83e)
2021-12-13 20:19:51.064 :GIPCHTHR:3376412416: gipchaWorkerProcessClientConnect: starting resolve from connect for host:xxxxx02, port:nm2_xxxxx-cluster, cookie:0x7f4c28038ed0
2021-12-13 20:19:51.064 :GIPCHDEM:3374835456: gipchaDaemonProcessClientReq: processing req 0x7f4c7009a2e0 type gipchaClientReqTypeResolve (4)
2021-12-13 20:19:51.064 : CSSD:3359094528: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536817, LATS 3884953830, lastSeqNo 128536814, uniqueness 1565321051, timestamp 1607861990/3882768350
2021-12-13 20:19:51.899 : CSSD:3410851584: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2021-12-13 20:19:52.064 : CSSD:3396663040: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2021-12-13 20:19:52.064 : CSSD:1538770688: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536819, LATS 3884954830, lastSeqNo 128536816, uniqueness 1565321051, timestamp 1607861991/3882769200
2021-12-13 20:19:52.065 : CSSD:3359094528: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536820, LATS 3884954830, lastSeqNo 128536817, uniqueness 1565321051, timestamp 1607861991/3882769360
2021-12-13 20:19:52.900 : CSSD:3410851584: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2021-12-13 20:19:53.064 : CSSD:3396663040: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2021-12-13 20:19:53.066 : CSSD:1538770688: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536822, LATS 3884955830, lastSeqNo 128536819, uniqueness 1565321051, timestamp 1607861992/3882770200
2021-12-13 20:19:53.068 : CSSD:3359094528: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536823, LATS 3884955830, lastSeqNo 128536820, uniqueness 1565321051, timestamp 1607861992/3882770360
2021-12-13 20:19:53.902 : CSSD:3410851584: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2021-12-13 20:19:54.064 : CSSD:3396663040: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2021-12-13 20:19:54.067 : CSSD:1538770688: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536825, LATS 3884956830, lastSeqNo 128536822, uniqueness 1565321051, timestamp 1607861993/3882771200
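
The ocssd.trc pattern above — node 2 has a disk heartbeat but no network heartbeat — is the classic signature of a broken interconnect with healthy shared storage. Node connectivity can be cross-checked with cluvfy; a sketch, with node names as placeholders:

# Verify public and private network connectivity between the cluster nodes
$GRID_HOME/bin/cluvfy comp nodecon -n xxxxx01,xxxxx02 -verbose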
4. traceroute over the interconnect showed packet loss, while ping stayed normal
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 xxxxx02-priv (xxx.xx.11.37) 0.112 ms  0.212 ms  0.206 ms
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 xxxxx02-priv (xxx.xx.11.37) 0.113 ms  0.216 ms *
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 xxxxx02-priv (xxx.xx.11.37) 0.121 ms  0.087 ms  0.197 ms
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 * xxxxx02-priv (xxx.xx.11.37) 0.058 ms *
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 xxxxx02-priv (xxx.xx.11.37) 0.217 ms  0.188 ms  0.187 ms
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 * * *
2 xxxxx02-priv (xxx.xx.11.37) 0.068 ms * *
[root@xxxxx01 ~]#
The traceroute failure rate was around 50%, so the initial suspicion was a problem in the private network, and that the database crash had likewise been caused by abnormal interconnect communication.
A long-running ping of node 2's private IP, however, showed no loss at all; the network engineers could only provoke loss by pinging with very large (50M) packets. A hardware fault was therefore tentatively ruled out.
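
Loss that only appears with large packets points at IP fragmentation and reassembly rather than the physical link: any payload above the interface MTU is split into fragments and must be rebuilt on the receiver. A hedged way to reproduce the contrast (counts and payload sizes are illustrative):

# Small packets: fit in a single frame, no reassembly involved
ping -c 100 -s 56 xxx.xx.11.37

# Payload far above a 1500-byte MTU: forces fragmentation on send and
# reassembly on the receiver; loss here, alongside clean small-packet pings,
# implicates the reassembly path rather than the link itself
ping -c 100 -s 50000 xxx.xx.11.37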
5. We checked the network-related parameters; all were at the system-recommended values. Searching MOS, one note provided the key insight: IPC Send timeout/node eviction etc with high packet reassembles failure (Doc ID 2008933.1). The suspicion became that the packet-reassembly failure rate on our hosts was too high.
We therefore began tuning the host network parameters.
The network parameters in /etc/sysctl.conf were adjusted as follows:
net.ipv4.ipfrag_high_thresh = 16194304
net.ipv4.ipfrag_low_thresh = 15145728
net.core.rmem_max = 16777216
net.core.rmem_default = 4777216
net.core.wmem_max = 16777216
net.core.wmem_default = 4777216
Parameter notes:
  • net.ipv4.ipfrag_low_thresh / net.ipv4.ipfrag_high_thresh

    When IP packets arrive fragmented, the kernel holds the fragments in memory until they can be reassembled, keeping valid fragments and discarding invalid ones. These two parameters set the upper and lower memory bounds for that reassembly queue: once usage reaches ipfrag_high_thresh, incoming fragments are dropped until it falls back below ipfrag_low_thresh.

  • net.core.rmem_* / net.core.wmem_*
    net.core.rmem_default: default socket receive buffer size.
    net.core.rmem_max: maximum socket receive buffer size.
    net.core.wmem_default: default socket send buffer size.
    net.core.wmem_max: maximum socket send buffer size.
With the two ipfrag thresholds raised, the four buffer parameters above were raised to match.
After applying the changes with sysctl -p, the cluster was restarted and came up successfully.
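
MOS note 2008933.1 ties this class of eviction to reassembly failures, and the kernel keeps counters showing whether a host is actually failing reassembly. A quick check (counter names are standard Linux; what counts as "too high" is a judgment call):

# Fragment reassembly statistics; a steadily climbing
# "packet reassemblies failed" is the red flag
netstat -s | grep -iE 'reasm|reassembl|fragment'

# The raw counters (ReasmReqds, ReasmOKs, ReasmFails) straight from the kernel
grep '^Ip:' /proc/net/snmp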
6. A follow-up question raised by this fault
Later, running traceroute over the private network on other Linux machines with RAC installed, I saw loss rates of roughly 50% everywhere, while AIX showed no loss at all. Researching this, I found many people with the same question; the most convincing answer I found is that Linux rate-limits ICMP responses by default, and removing that limit makes the problem disappear.
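
On Linux that limit lives in net.ipv4.icmp_ratelimit (milliseconds between rate-limited ICMP replies; default 1000, 0 disables it), and traceroute depends on exactly the reply types it throttles, so probes show up as '*' even on a healthy link. A hedged sketch for testing this explanation, run on the traceroute target and reverted afterwards:

# Current limit (default 1000)
sysctl net.ipv4.icmp_ratelimit

# Temporarily disable ICMP rate limiting on the *target* node
sysctl -w net.ipv4.icmp_ratelimit=0

# Re-run from the other node; if the '*' entries vanish, the "loss" was
# only rate limiting, not real drops
traceroute -r xxx.xx.11.37

# Restore the default
sysctl -w net.ipv4.icmp_ratelimit=1000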
Many databases running with the default network parameters have never hit this problem, perhaps because the MTU was already raised severalfold during network integration (as on many of the systems I maintain), or perhaps because their packet-reassembly failure rate is simply low.
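
Whether a given interconnect fragments at all can be checked directly; a short sketch (the interface name is a placeholder):

# MTU of the private interface
ip link show eth1 | grep -o 'mtu [0-9]*'

# With a 9000-byte MTU, an 8972-byte payload (9000 minus 20 IP + 8 ICMP
# header bytes) must pass with fragmentation prohibited (-M do)
# if jumbo frames really work end to end
ping -c 5 -M do -s 8972 xxx.xx.11.37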

Author: 湯杰 (上海新炬 王翦 team)

Source: the "IT那活兒" WeChat official account

