In the previous post we left one piece unexplained: poolUpdater. This post picks it up, starting as usual from its initialization method:
public void init() {
    if (inited) {
        return;
    }
    synchronized (this) {
        if (inited) {
            return;
        }
        if (intervalSeconds < 10) {
            LOG.warn("CAUTION: Purge interval has been set to " + intervalSeconds
                    + ". This value should NOT be too small.");
        }
        if (intervalSeconds <= 0) {
            intervalSeconds = DEFAULT_INTERVAL;
        }
        executor = Executors.newScheduledThreadPool(1);
        executor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                LOG.debug("Purging the DataSource Pool every " + intervalSeconds + "s.");
                try {
                    removeDataSources();
                } catch (Exception e) {
                    LOG.error("Exception occurred while removing DataSources.", e);
                }
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }
}
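The guard-then-schedule pattern above can be sketched in isolation. Note one subtlety the try/catch inside the Runnable addresses: if a task submitted via scheduleAtFixedRate lets an exception escape, the executor silently cancels all future runs. The PoolPurger class, its DEFAULT_INTERVAL value, and the purge() body below are illustrative assumptions, not Druid's actual code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch (NOT Druid's code): double-checked init plus a fixed-rate purge task.
public class PoolPurger {
    static final int DEFAULT_INTERVAL = 60; // assumed default, in seconds

    private volatile boolean inited = false;
    private ScheduledExecutorService executor;
    int intervalSeconds;

    public PoolPurger(int intervalSeconds) {
        this.intervalSeconds = intervalSeconds;
    }

    public void init() {
        if (inited) {
            return;                       // fast path, no lock
        }
        synchronized (this) {
            if (inited) {
                return;                   // second check under the lock
            }
            if (intervalSeconds <= 0) {
                intervalSeconds = DEFAULT_INTERVAL;
            }
            executor = Executors.newScheduledThreadPool(1);
            executor.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    try {
                        purge();          // never let an exception escape,
                    } catch (Exception e) {
                        // or the executor cancels all future runs
                    }
                }
            }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
            inited = true;
        }
    }

    void purge() {
        // placeholder for the real cleanup work
    }

    public void shutdown() {
        if (executor != null) {
            executor.shutdownNow();
        }
    }
}
```

Calling init() twice is a cheap no-op thanks to the volatile fast path, which is exactly why Druid checks inited both outside and inside the synchronized block.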
The logic of init is straightforward: it first validates the intervalSeconds parameter, falling back to the default value when it is out of range, and then starts a scheduled task that calls removeDataSources every intervalSeconds to check whether the nodes scheduled for deletion can be closed. Next, let's look at the removeDataSources method:
public void removeDataSources() {
    if (nodesToDel == null || nodesToDel.isEmpty()) {
        return;
    }
    try {
        lock.lock();
        Map<String, DataSource> map = highAvailableDataSource.getDataSourceMap();
        Set<String> copySet = new HashSet<String>(nodesToDel);
        for (String nodeName : copySet) {
            LOG.info("Start removing Node " + nodeName + ".");
            if (!map.containsKey(nodeName)) {
                LOG.info("Node " + nodeName + " is NOT existed in the map.");
                cancelBlacklistNode(nodeName);
                continue;
            }
            DataSource ds = map.get(nodeName);
            if (ds instanceof DruidDataSource) {
                DruidDataSource dds = (DruidDataSource) ds;
                int activeCount = dds.getActiveCount(); // CAUTION, activeCount MAYBE changed!
                if (activeCount > 0) {
                    LOG.warn("Node " + nodeName + " is still running [activeCount=" + activeCount
                            + "], try next time.");
                    continue;
                } else {
                    LOG.info("Close Node " + nodeName + " and remove it.");
                    try {
                        dds.close();
                    } catch (Exception e) {
                        LOG.error("Exception occurred while closing Node " + nodeName
                                + ", just remove it.", e);
                    }
                }
            }
            map.remove(nodeName); // Remove the node directly if it is NOT a DruidDataSource.
            cancelBlacklistNode(nodeName);
        }
    } catch (Exception e) {
        LOG.error("Exception occurred while removing DataSources.", e);
    } finally {
        lock.unlock();
    }
}
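The purge pass boils down to: iterate over a snapshot of the deletion set, skip nodes that still have active connections, and close and remove the idle ones. A condensed sketch with hypothetical names (PurgePass, FakePool stand in for Druid's classes):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch (hypothetical names, not Druid's classes) of the purge pass.
public class PurgePass {

    // Stand-in for DruidDataSource: just an active-connection counter.
    public static class FakePool {
        final int activeCount;
        boolean closed;
        public FakePool(int activeCount) { this.activeCount = activeCount; }
        void close() { closed = true; }
    }

    public static void purge(Map<String, FakePool> map, Set<String> nodesToDel) {
        // Iterate a copy so removals do not disturb the set we are walking.
        for (String nodeName : new HashSet<String>(nodesToDel)) {
            FakePool pool = map.get(nodeName);
            if (pool == null) {
                nodesToDel.remove(nodeName);   // already gone, just forget it
                continue;
            }
            if (pool.activeCount > 0) {
                continue;                      // still in use, retry next round
            }
            pool.close();                      // idle: close and remove
            map.remove(nodeName);
            nodesToDel.remove(nodeName);
        }
    }
}
```

Like Druid's version, a busy node is simply skipped and retried on the next scheduled pass, so a long-running connection delays removal but is never force-closed.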
As you can see, the main logic of removeDataSources is to walk through the nodesToDel set, call DruidDataSource's getActiveCount method to get the number of active connections, and invoke close only when that count is 0. Besides poolUpdater, there is one more method we skipped earlier, createNodeMap. Let's look at it next:
private void createNodeMap() {
    if (nodeListener == null) {
        // Compatible with the old version.
        // Create a FileNodeListener to watch the dataSourceFile.
        FileNodeListener listener = new FileNodeListener();
        listener.setFile(dataSourceFile);
        listener.setPrefix(propertyPrefix);
        nodeListener = listener;
    }
    nodeListener.setObserver(poolUpdater);
    nodeListener.init();
    nodeListener.update(); // Do update in the current Thread at the startup
}
This method mainly wires up the nodeListener of HighAvailableDataSource. The default listener is FileNodeListener; creating it is just a matter of setting its dataSourceFile. Then comes setObserver, which reveals an observer pattern: nodeListener is the observable and poolUpdater is the observer. Let's look at nodeListener's update method:
public void update(List<NodeEvent> events) {
    if (events != null && !events.isEmpty()) {
        this.lastUpdateTime = new Date();
        NodeEvent[] arr = new NodeEvent[events.size()];
        for (int i = 0; i < events.size(); i++) {
            arr[i] = events.get(i);
        }
        this.setChanged();
        this.notifyObservers(arr);
    }
}
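NodeListener builds on java.util.Observable, so the setChanged()/notifyObservers() pair follows the JDK's contract: notifyObservers only delivers its argument if setChanged() was called first. A minimal standalone demo (Ticker and Recorder are hypothetical names for this demo only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Observable;
import java.util.Observer;

// Standalone demo of the JDK Observable contract used by NodeListener.
public class ObservableDemo {

    public static class Ticker extends Observable {
        public void fire(String event) {
            setChanged();            // without this, notifyObservers is a no-op
            notifyObservers(event);
        }
        public void fireWithoutChange(String event) {
            notifyObservers(event);  // ignored: the changed flag was never set
        }
    }

    public static class Recorder implements Observer {
        public final List<Object> seen = new ArrayList<Object>();
        @Override
        public void update(Observable o, Object arg) {
            seen.add(arg);           // receives whatever the observable passed along
        }
    }
}
```

Worth noting: java.util.Observable has been deprecated since Java 9, but Druid's listener hierarchy still builds on it.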
The key part of this update method is the final two calls: setChanged() followed by notifyObservers(arr) notifies every registered observer of the change, passing the events along as the argument. Now let's look at the observer's update method:
/**
 * Process the given NodeEvent[]. Maybe add / delete some nodes.
 */
@Override
public void update(Observable o, Object arg) {
    if (!(o instanceof NodeListener)) {
        return;
    }
    if (arg == null || !(arg instanceof NodeEvent[])) {
        return;
    }
    NodeEvent[] events = (NodeEvent[]) arg;
    if (events.length <= 0) {
        return;
    }
    try {
        LOG.info("Waiting for Lock to start processing NodeEvents.");
        lock.lock();
        LOG.info("Start processing the NodeEvent[" + events.length + "].");
        for (NodeEvent e : events) {
            if (e.getType() == NodeEventTypeEnum.ADD) {
                addNode(e);
            } else if (e.getType() == NodeEventTypeEnum.DELETE) {
                deleteNode(e);
            }
        }
    } catch (Exception e) {
        LOG.error("Exception occurred while updating Pool.", e);
    } finally {
        lock.unlock();
    }
}
Here the observer receives the events passed in a moment ago, parses each Event by its type, and updates the DataSourceMap of HighAvailableDataSource accordingly. Next let's look at the default nodeListener implementation, FileNodeListener, starting with its init method:
@Override
public void init() {
    super.init();
    if (intervalSeconds <= 0) {
        intervalSeconds = 60;
    }
    executor = Executors.newScheduledThreadPool(1);
    executor.scheduleAtFixedRate(new Runnable() {
        @Override
        public void run() {
            LOG.debug("Checking file " + file + " every " + intervalSeconds + "s.");
            if (!lock.tryLock()) {
                LOG.info("Can not acquire the lock, skip this time.");
                return;
            }
            try {
                update();
            } catch (Exception e) {
                LOG.error("Can NOT update the node list.", e);
            } finally {
                lock.unlock();
            }
        }
    }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
}
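One detail worth isolating: FileNodeListener uses tryLock() rather than lock(), so when a refresh round is still running, the next round is skipped instead of queueing up behind it. A sketch of that "skip if busy" pattern (SkipIfBusy and runRound are hypothetical names):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the "skip if busy" pattern: tryLock() drops a round
// when the previous one is still in progress, instead of blocking.
public class SkipIfBusy {
    private final ReentrantLock lock = new ReentrantLock();
    public final AtomicInteger ran = new AtomicInteger();
    public final AtomicInteger skipped = new AtomicInteger();

    public void runRound(Runnable work) {
        if (!lock.tryLock()) {
            skipped.incrementAndGet();  // a round is in progress: drop this one
            return;
        }
        try {
            work.run();
            ran.incrementAndGet();
        } finally {
            lock.unlock();
        }
    }
}
```

For a periodic file check this is the right trade-off: a skipped round loses nothing, because the next round re-reads the whole file anyway.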
FileNodeListener's init is quite similar to poolUpdater's: it starts a scheduled task that repeatedly calls update. The update method first calls refresh to generate the events, and then invokes the update(List<NodeEvent>) method we just saw to notify the observers. Let's look at the refresh logic first:
@Override
public List<NodeEvent> refresh() {
    Properties originalProperties = PropertiesUtils.loadProperties(file);
    List<String> nameList = PropertiesUtils.loadNameList(originalProperties, getPrefix());
    Properties properties = new Properties();
    for (String n : nameList) {
        String url = originalProperties.getProperty(n + ".url");
        String username = originalProperties.getProperty(n + ".username");
        String password = originalProperties.getProperty(n + ".password");
        if (url == null || url.isEmpty()) {
            LOG.warn(n + ".url is EMPTY! IGNORE!");
            continue;
        } else {
            properties.setProperty(n + ".url", url);
        }
        if (username == null || username.isEmpty()) {
            LOG.debug(n + ".username is EMPTY. Maybe you should check the config.");
        } else {
            properties.setProperty(n + ".username", username);
        }
        if (password == null || password.isEmpty()) {
            LOG.debug(n + ".password is EMPTY. Maybe you should check the config.");
        } else {
            properties.setProperty(n + ".password", password);
        }
    }
    List<NodeEvent> events = NodeEvent.getEventsByDiffProperties(getProperties(), properties);
    if (events != null && !events.isEmpty()) {
        LOG.info(events.size() + " different(s) detected.");
        setProperties(properties);
    }
    return events;
}
The method first loads the full nameList. The extraction logic scans every key in the loaded Properties that contains .url and takes the part before it as the node name; with the configuration below, the resulting nameList would be aaa and bbb.
aaa.url=***
bbb.url=***
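The two steps around this configuration (deriving the name list from .url keys, then diffing an old and a new snapshot into add/delete events) can be sketched as follows. NodeDiff, loadNameList, and the string-based events are hypothetical simplifications of Druid's PropertiesUtils and NodeEvent helpers:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

// Sketch (hypothetical helper names) of name extraction and snapshot diffing.
public class NodeDiff {

    // Step 1: every key ending in ".url" yields a node name.
    public static List<String> loadNameList(Properties props) {
        List<String> names = new ArrayList<String>();
        for (String key : props.stringPropertyNames()) {
            if (key.endsWith(".url")) {
                names.add(key.substring(0, key.length() - ".url".length()));
            }
        }
        return names;
    }

    // Step 2: names only in the new snapshot are additions,
    // names only in the old snapshot are deletions.
    public static List<String> diff(Properties oldProps, Properties newProps) {
        List<String> events = new ArrayList<String>();
        List<String> oldNames = loadNameList(oldProps);
        List<String> newNames = loadNameList(newProps);
        for (String n : newNames) {
            if (!oldNames.contains(n)) {
                events.add("ADD " + n);
            }
        }
        for (String n : oldNames) {
            if (!newNames.contains(n)) {
                events.add("DELETE " + n);
            }
        }
        return events;
    }
}
```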
refresh then iterates over the nameList, extracts the configuration for each name into a fresh Properties object, and finally calls NodeEvent.getEventsByDiffProperties to diff the old and new Properties and generate the Events. Along the same lines, let's look at ZookeeperNodeListener, starting with its init method:
@Override
public void init() {
    checkParameters();
    super.init();
    if (client == null) {
        client = CuratorFrameworkFactory.builder()
                .canBeReadOnly(true)
                .connectionTimeoutMs(5000)
                .connectString(zkConnectString)
                .retryPolicy(new RetryForever(10000))
                .sessionTimeoutMs(30000)
                .build();
        client.start();
        privateZkClient = true;
    }
    cache = new PathChildrenCache(client, path, true);
    cache.getListenable().addListener(new PathChildrenCacheListener() {
        @Override
        public void childEvent(CuratorFramework client, PathChildrenCacheEvent event) throws Exception {
            try {
                LOG.info("Receive an event: " + event.getType());
                lock.lock();
                PathChildrenCacheEvent.Type eventType = event.getType();
                switch (eventType) {
                    case CHILD_REMOVED:
                        updateSingleNode(event, NodeEventTypeEnum.DELETE);
                        break;
                    case CHILD_ADDED:
                        updateSingleNode(event, NodeEventTypeEnum.ADD);
                        break;
                    case CONNECTION_RECONNECTED:
                        refreshAllNodes();
                        break;
                    default:
                        // CHILD_UPDATED
                        // INITIALIZED
                        // CONNECTION_LOST
                        // CONNECTION_SUSPENDED
                        LOG.info("Received a PathChildrenCacheEvent, IGNORE it: " + event);
                }
            } finally {
                lock.unlock();
                LOG.info("Finish the processing of event: " + event.getType());
            }
        }
    });
    try {
        // Use BUILD_INITIAL_CACHE to force build cache in the current Thread.
        // We don't use POST_INITIALIZED_EVENT, so there's no INITIALIZED event.
        cache.start(PathChildrenCache.StartMode.BUILD_INITIAL_CACHE);
    } catch (Exception e) {
        LOG.error("Can't start PathChildrenCache", e);
    }
}
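The dispatch in the listener above boils down to a mapping from Curator's event type to an action. A standalone sketch of that mapping, with stand-in enums (CacheEventType mirrors Curator's PathChildrenCacheEvent.Type, Action mirrors what the real code does) so it runs without a ZooKeeper server:

```java
// Standalone sketch of the event dispatch; the enums are stand-ins,
// not Curator's or Druid's real types.
public class ZkDispatch {

    public enum CacheEventType {
        CHILD_ADDED, CHILD_REMOVED, CHILD_UPDATED,
        CONNECTION_RECONNECTED, CONNECTION_LOST, CONNECTION_SUSPENDED, INITIALIZED
    }

    public enum Action { ADD_NODE, DELETE_NODE, REFRESH_ALL, IGNORE }

    public static Action dispatch(CacheEventType type) {
        switch (type) {
            case CHILD_ADDED:
                return Action.ADD_NODE;       // a new node znode appeared
            case CHILD_REMOVED:
                return Action.DELETE_NODE;    // a node znode went away
            case CONNECTION_RECONNECTED:
                return Action.REFRESH_ALL;    // events may have been missed, resync everything
            default:
                return Action.IGNORE;         // updates and connection blips are ignored
        }
    }
}
```

The full resync on CONNECTION_RECONNECTED is the interesting choice: while the connection was down, child events may have been lost, so diffing against the live tree is the only safe recovery.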
Druid operates on the ZooKeeper nodes through the Curator library. It first registers a listener for child-node events and handles three event types: CHILD_REMOVED, CHILD_ADDED, and CONNECTION_RECONNECTED. When a child node is added or removed, the following method is called:
private void updateSingleNode(PathChildrenCacheEvent event, NodeEventTypeEnum type) {
    ChildData data = event.getData();
    String nodeName = getNodeName(data);
    List<String> names = new ArrayList<String>();
    names.add(getPrefix() + "." + nodeName);
    Properties properties = getPropertiesFromChildData(data);
    List<NodeEvent> events = NodeEvent.generateEvents(properties, names, type);
    if (events.isEmpty()) {
        return;
    }
    if (type == NodeEventTypeEnum.ADD) {
        getProperties().putAll(properties);
    } else {
        for (String n : properties.stringPropertyNames()) {
            getProperties().remove(n);
        }
    }
    super.update(events);
}
This method receives the PathChildrenCacheEvent together with a type. From those two it generates Druid's own events, updates the listener's cached Properties (putAll for an addition, per-key remove for a deletion), and finally notifies all the observers.
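The Properties bookkeeping step can be sketched on its own; NodeBookkeeping and applySingleNode are hypothetical names mirroring the ADD/DELETE branches of updateSingleNode:

```java
import java.util.Properties;

// Sketch of the ADD/DELETE bookkeeping in updateSingleNode
// (applySingleNode is a hypothetical name for this illustration).
public class NodeBookkeeping {

    public static void applySingleNode(Properties master, Properties nodeProps, boolean add) {
        if (add) {
            master.putAll(nodeProps);             // merge the new node's keys in
        } else {
            for (String key : nodeProps.stringPropertyNames()) {
                master.remove(key);               // drop every key of the removed node
            }
        }
    }
}
```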