Spring Cloud Ribbon is a client-side load balancer. The biggest difference between server-side and client-side load balancing is where the list of service instances is stored: in client-side load balancing, every client node maintains its own list of the server instances it wants to call, and that list comes from the service registry. As with server-side load balancing, client-side load balancing also relies on heartbeats to keep the service list healthy.
Commonly used methods
RestTemplate covers the common HTTP verbs GET, POST, DELETE, PUT, HEAD and OPTIONS. The commonly used request signatures are as follows:
xxxForEntity(String url, Class<T> responseType, Object... uriVariables)
xxxForEntity(String url, Class<T> responseType, Map<String, ?> uriVariables)
xxxForEntity(URI url, Class<T> responseType)
As you can see, there are basically three forms of invocation:
- In the first form, url is the address of the request, responseType is the type into which the response body is wrapped, and uriVariables are the parameters bound into the url; the second form binds the variables from a Map, and the third takes a fully built URI. A usage sketch of the first form is shown below.
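A minimal usage sketch of the first form, assuming a @LoadBalanced RestTemplate bean declared elsewhere; the USER-SERVICE name, the /user/{id} path and the User class are invented for illustration:
@Service
public class UserClient {

    @Autowired
    private RestTemplate restTemplate; // assumed to be a @LoadBalanced bean

    public User getUser(Long id) {
        // getForEntity(String url, Class<T> responseType, Object... uriVariables):
        // {id} is bound from the varargs, and the service name is used as the host
        ResponseEntity<User> response =
                restTemplate.getForEntity("http://USER-SERVICE/user/{id}", User.class, id);
        return response.getBody();
    }
}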
Source code analysis
Let's look at the source of @LoadBalanced:
/**
* Annotation to mark a RestTemplate bean to be configured to use a LoadBalancerClient
* @author Spencer Gibb
*/
@Target({ ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
@Qualifier
public @interface LoadBalanced {
}
From this we can see that the annotation is simply used to mark a RestTemplate bean so that it will be configured to use the load balancer LoadBalancerClient.
Now look at the source of LoadBalancerClient:
/**
* Represents a client side load balancer
* @author Spencer Gibb
*/
public interface LoadBalancerClient extends ServiceInstanceChooser {
/**
*
* Executes the request using a service instance picked from the load balancer.
* @param serviceId the service id to look up the LoadBalancer
* @param request allows implementations to execute pre and post actions such as
* incrementing metrics
* @return the result of the LoadBalancerRequest callback on the selected
* ServiceInstance
*/
<T> T execute(String serviceId, LoadBalancerRequest<T> request) throws IOException;
<T> T execute(String serviceId, ServiceInstance serviceInstance, LoadBalancerRequest<T> request) throws IOException;
URI reconstructURI(ServiceInstance instance, URI original);
}
From this interface we can clearly see that it:
- executes the request content through the client-side load balancer;
- reconstructs a proper URI in host:port form for the system.
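To make these two responsibilities concrete, here is a hypothetical usage sketch; the hello-service name and the /hello path are assumptions for illustration:
@Autowired
private LoadBalancerClient loadBalancerClient;

public URI resolveHelloUri() {
    // choose() (inherited from ServiceInstanceChooser) picks one instance for the given service id
    ServiceInstance instance = loadBalancerClient.choose("hello-service");
    // reconstructURI() replaces the logical service name with the instance's real host:port
    return loadBalancerClient.reconstructURI(instance, URI.create("http://hello-service/hello"));
}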
org.springframework.cloud.client.loadbalancer.LoadBalancerAutoConfiguration creates an org.springframework.cloud.client.loadbalancer.LoadBalancerInterceptor through its static inner class LoadBalancerInterceptorConfig, and the interceptor has the LoadBalancerClient interface injected into it.
Let's look at the source of the LoadBalancerAutoConfiguration class:
/**
* Auto configuration for Ribbon (client side load balancing).
*
* @author Spencer Gibb
* @author Dave Syer
* @author Will Tran
* @author Gang Li
*/
@Configuration
@ConditionalOnClass(RestTemplate.class)
@ConditionalOnBean(LoadBalancerClient.class)
@EnableConfigurationProperties(LoadBalancerRetryProperties.class)
public class LoadBalancerAutoConfiguration {
// ... omitted ...
}
From the class-level Javadoc we can see that this class provides the auto-configuration for the client-side load balancer Ribbon. The automatic configuration takes effect only when the following two conditions are met:
- @ConditionalOnClass(RestTemplate.class): the RestTemplate class must be present on the classpath.
- @ConditionalOnBean(LoadBalancerClient.class): a bean implementing LoadBalancerClient must exist in the Spring bean factory.
The class mainly does three things:
- It creates a LoadBalancerInterceptor bean that intercepts requests sent by the client so that client-side load balancing can be applied.
- It creates a RestTemplateCustomizer bean that adds the LoadBalancerInterceptor to a RestTemplate.
- It maintains a list of RestTemplate objects annotated with @LoadBalanced and initialises them here, calling the RestTemplateCustomizer instance to add the LoadBalancerInterceptor to every RestTemplate that needs client-side load balancing (a simplified sketch of this wiring follows the list).
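A simplified sketch of that wiring (modelled on, but not identical to, the Spring Cloud source):
@Configuration
public class LoadBalancerAutoConfigurationSketch {

    // collect every RestTemplate bean that carries the @LoadBalanced qualifier
    @LoadBalanced
    @Autowired(required = false)
    private List<RestTemplate> restTemplates = Collections.emptyList();

    // a customizer that appends the load-balancing interceptor to a RestTemplate
    @Bean
    public RestTemplateCustomizer restTemplateCustomizer(final LoadBalancerInterceptor loadBalancerInterceptor) {
        return restTemplate -> {
            List<ClientHttpRequestInterceptor> interceptors = new ArrayList<>(restTemplate.getInterceptors());
            interceptors.add(loadBalancerInterceptor);
            restTemplate.setInterceptors(interceptors);
        };
    }

    // after all singletons are created, apply every customizer to every @LoadBalanced RestTemplate
    @Bean
    public SmartInitializingSingleton loadBalancedRestTemplateInitializer(final List<RestTemplateCustomizer> customizers) {
        return () -> restTemplates.forEach(
                restTemplate -> customizers.forEach(customizer -> customizer.customize(restTemplate)));
    }
}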
Next, let's see how the LoadBalancerInterceptor turns an ordinary RestTemplate into a client-side load-balanced one:
public class LoadBalancerInterceptor implements ClientHttpRequestInterceptor {
private LoadBalancerClient loadBalancer;
private LoadBalancerRequestFactory requestFactory;
public LoadBalancerInterceptor(LoadBalancerClient loadBalancer, LoadBalancerRequestFactory requestFactory) {
this.loadBalancer = loadBalancer;
this.requestFactory = requestFactory;
}
public LoadBalancerInterceptor(LoadBalancerClient loadBalancer) {
// for backwards compatibility
this(loadBalancer, new LoadBalancerRequestFactory(loadBalancer));
}
@Override
public ClientHttpResponse intercept(final HttpRequest request, final byte[] body,
final ClientHttpRequestExecution execution) throws IOException {
final URI originalUri = request.getURI();
String serviceName = originalUri.getHost();
Assert.state(serviceName != null, "Request URI does not contain a valid hostname: " + originalUri);
return this.loadBalancer.execute(serviceName, requestFactory.createRequest(request, body, execution));
}
}
From this source we can see that when a RestTemplate annotated with @LoadBalanced sends an HTTP request, the call is intercepted by the intercept() method of LoadBalancerInterceptor. Because we use the service name as the host when calling the RestTemplate, the service name can be read directly from the URI of the HttpRequest via getHost(); execute() is then called to pick an instance for that service name and issue the actual request.
Up to this point LoadBalancerClient is still only an abstract load-balancer interface, so let's look at a concrete implementation to analyse the load-balancing strategy further. Searching the source we find org.springframework.cloud.netflix.ribbon.RibbonLoadBalancerClient:
public class RibbonLoadBalancerClient implements LoadBalancerClient {
// ... omitted ...
@Override
public ServiceInstance choose(String serviceId) {
Server server = getServer(serviceId);
if (server == null) {
return null;
}
return new RibbonServer(serviceId, server, isSecure(server, serviceId),
serverIntrospector(serviceId).getMetadata(server));
}
@Override
public <T> T execute(String serviceId, LoadBalancerRequest<T> request) throws IOException {
ILoadBalancer loadBalancer = getLoadBalancer(serviceId);
Server server = getServer(loadBalancer);
if (server == null) {
throw new IllegalStateException("No instances available for " + serviceId);
}
RibbonServer ribbonServer = new RibbonServer(serviceId, server, isSecure(server,
serviceId), serverIntrospector(serviceId).getMetadata(server));
return execute(serviceId, ribbonServer, request);
}
@Override
public <T> T execute(String serviceId, ServiceInstance serviceInstance, LoadBalancerRequest<T> request) throws IOException {
Server server = null;
if(serviceInstance instanceof RibbonServer) {
server = ((RibbonServer)serviceInstance).getServer();
}
if (server == null) {
throw new IllegalStateException("No instances available for " + serviceId);
}
RibbonLoadBalancerContext context = this.clientFactory
.getLoadBalancerContext(serviceId);
RibbonStatsRecorder statsRecorder = new RibbonStatsRecorder(context, server);
try {
T returnVal = request.apply(serviceInstance);
statsRecorder.recordStats(returnVal);
return returnVal;
}
// catch IOException and rethrow so RestTemplate behaves correctly
catch (IOException ex) {
statsRecorder.recordStats(ex);
throw ex;
}
catch (Exception ex) {
statsRecorder.recordStats(ex);
ReflectionUtils.rethrowRuntimeException(ex);
}
return null;
}
private ServerIntrospector serverIntrospector(String serviceId) {
ServerIntrospector serverIntrospector = this.clientFactory.getInstance(serviceId,
ServerIntrospector.class);
if (serverIntrospector == null) {
serverIntrospector = new DefaultServerIntrospector();
}
return serverIntrospector;
}
private boolean isSecure(Server server, String serviceId) {
IClientConfig config = this.clientFactory.getClientConfig(serviceId);
ServerIntrospector serverIntrospector = serverIntrospector(serviceId);
return RibbonUtils.isSecure(config, serverIntrospector, server);
}
protected Server getServer(String serviceId) {
return getServer(getLoadBalancer(serviceId));
}
protected Server getServer(ILoadBalancer loadBalancer) {
if (loadBalancer == null) {
return null;
}
return loadBalancer.chooseServer("default"); // TODO: better handling of key
}
protected ILoadBalancer getLoadBalancer(String serviceId) {
return this.clientFactory.getLoadBalancer(serviceId);
}
public static class RibbonServer implements ServiceInstance {
private final String serviceId;
private final Server server;
private final boolean secure;
private Map<String, String> metadata;
}
// ... omitted ...
}
Notice the getServer(loadBalancer) call in the execute() method: it asks the load balancer for a server and returns a Server object:
protected Server getServer(ILoadBalancer loadBalancer) {
if (loadBalancer == null) {
return null;
}
return loadBalancer.chooseServer("default"); // TODO: better handling of key
}
Notice that getServer() takes the ILoadBalancer interface directly instead of using the choose() method of the LoadBalancerClient interface. Looking at the implementations around ILoadBalancer: BaseLoadBalancer provides the basic load-balancing implementation, and from its source we can see that if we do not configure our own strategy it defaults to RoundRobinRule, i.e. round-robin load balancing; DynamicServerListLoadBalancer and ZoneAwareLoadBalancer extend this behaviour further.
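For reference, the core methods declared by the com.netflix.loadbalancer.ILoadBalancer interface are roughly the following (abridged):
public interface ILoadBalancer {
    // add new servers to the balancer's server list
    void addServers(List<Server> newServers);
    // pick a server for the given key (RibbonLoadBalancerClient passes "default")
    Server chooseServer(Object key);
    // mark a server as down so it is no longer selected
    void markServerDown(Server server);
    // servers currently believed to be up and reachable
    List<Server> getReachableServers();
    // all known servers, reachable or not
    List<Server> getAllServers();
}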
The Server object returned by getServer() models a traditional server-side node; the class stores the node's metadata, including host, port, and so on:
public class Server {
/**
* Additional meta information of a server, which contains
* information of the targeting application, as well as server identification
* specific for a deployment environment, for example, AWS.
*/
public static interface MetaInfo {
/**
* @return the name of application that runs on this server, null if not available
*/
public String getAppName();
/**
* @return the group of the server, for example, auto scaling group ID in AWS.
* Null if not available
*/
public String getServerGroup();
/**
* @return A virtual address used by the server to register with discovery service.
* Null if not available
*/
public String getServiceIdForDiscovery();
/**
* @return ID of the server
*/
public String getInstanceId();
}
public static final String UNKNOWN_ZONE = "UNKNOWN";
private String host;
private int port = 80;
private String scheme;
private volatile String id;
private volatile boolean isAliveFlag;
private String zone = UNKNOWN_ZONE;
private volatile boolean readyToServe = true;
private MetaInfo simpleMetaInfo = new MetaInfo() {
        // ... getter implementations omitted ...
    };
    // ... remainder of the Server class omitted ...
}
So which concrete implementation does Spring Cloud pick by default when integrating Ribbon? From the RibbonClientConfiguration configuration class we can see that ZoneAwareLoadBalancer is used as the default load balancer:
@Bean
@ConditionalOnMissingBean
public ILoadBalancer ribbonLoadBalancer(IClientConfig config,
ServerList<Server> serverList, ServerListFilter<Server> serverListFilter,
IRule rule, IPing ping, ServerListUpdater serverListUpdater) {
if (this.propertiesFactory.isSet(ILoadBalancer.class, name)) {
return this.propertiesFactory.get(ILoadBalancer.class, config, name);
}
return new ZoneAwareLoadBalancer<>(config, rule, ping, serverList,
serverListFilter, serverListUpdater);
}
Notice the rule parameter passed into the returned object; if we keep tracing it we eventually arrive at the following code:
/* Ignore null rules */
public void setRule(IRule rule) {
if (rule != null) {
this.rule = rule;
} else {
/* default rule */
this.rule = new RoundRobinRule();
}
if (this.rule.getLoadBalancer() != this) {
this.rule.setLoadBalancer(this);
}
}
This shows that the load balancer uses RoundRobinRule by default, i.e. the round-robin rule (its source is analysed later in this article and is quite simple).
Let's return to the execute() method of RibbonLoadBalancerClient. After getServer(loadBalancer), i.e. ZoneAwareLoadBalancer's chooseServer() method, returns the Server instance selected by the load-balancing rule, it is wrapped into a RibbonServer object (which, besides the service instance itself, also carries the service id, whether the instance is secure, and the instance metadata). This object is then passed into the apply(final ServiceInstance instance) callback of the LoadBalancerRequest created in the LoadBalancerInterceptor, which sends the request to the concrete service instance, thereby converting the original URI that used the service name as its host into an actual host:port address.
Load balancers
Spring Cloud uses LoadBalancerClient as the general load-balancer interface and provides RibbonLoadBalancerClient as its Ribbon implementation, but the concrete client-side load balancing is carried out through Ribbon's ILoadBalancer interface. Below we go through the classes related to ILoadBalancer one by one to see how client-side load balancing is realised.
com.netflix.loadbalancer.AbstractLoadBalancer
com.netflix.loadbalancer.BaseLoadBalancer
BaseLoadBalancer is the base implementation class of Ribbon's load balancer and defines much of the machinery a load balancer needs.
- It defines and maintains two lists of Server objects: one stores all known service instances, the other stores the instances that are currently up.
@Monitor(name = PREFIX + "AllServerList", type = DataSourceType.INFORMATIONAL)
protected volatile List<Server> allServerList = Collections
.synchronizedList(new ArrayList<Server>());
@Monitor(name = PREFIX + "UpServerList", type = DataSourceType.INFORMATIONAL)
protected volatile List<Server> upServerList = Collections
.synchronizedList(new ArrayList<Server>());
- It defines the LoadBalancerStats object mentioned earlier, which stores the attributes and statistics of each service instance managed by the load balancer.
- It defines the IPing object used to check whether a service instance is healthy; in BaseLoadBalancer it defaults to null, so a concrete implementation must be injected at construction time.
- It defines the IPingStrategy interface describing how the ping check is executed; BaseLoadBalancer defaults to its static inner class SerialPingStrategy. From the source below we can see that this strategy pings the instances one by one in sequence, which may hurt performance when the IPing implementation is slow or the server list is large; in that case you can implement IPingStrategy and override pingServers(IPing ping, Server[] servers) to change how the pings are executed (a hypothetical parallel variant is sketched after the code below).
/**
* Default implementation for <c>IPingStrategy</c>, performs ping
* serially, which may not be desirable, if your <c>IPing</c>
* implementation is slow, or you have large number of servers.
*/
private static class SerialPingStrategy implements IPingStrategy {
@Override
public boolean[] pingServers(IPing ping, Server[] servers) {
int numCandidates = servers.length;
boolean[] results = new boolean[numCandidates];
logger.debug("LoadBalancer: PingTask executing [{}] servers configured", numCandidates);
for (int i = 0; i < numCandidates; i++) {
results[i] = false; /* Default answer is DEAD. */
try {
// NOTE: IFF we were doing a real ping
// assuming we had a large set of servers (say 15)
// the logic below will run them serially
// hence taking 15 times the amount of time it takes
// to ping each server
// A better method would be to put this in an executor
// pool
// But, at the time of this writing, we dont REALLY
// use a Real Ping (its mostly in memory eureka call)
// hence we can afford to simplify this design and run
// this
// serially
if (ping != null) {
results[i] = ping.isAlive(servers[i]);
}
} catch (Exception e) {
logger.error("Exception while pinging Server: '{}'", servers[i], e);
}
}
return results;
}
}
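As an illustration of replacing the ping strategy, here is a hypothetical parallel variant; the class name, thread-pool size and timeout are assumptions, and it presumes your IPing implementation is safe to call concurrently:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import com.netflix.loadbalancer.IPing;
import com.netflix.loadbalancer.IPingStrategy;
import com.netflix.loadbalancer.Server;

public class ParallelPingStrategy implements IPingStrategy {

    @Override
    public boolean[] pingServers(IPing ping, Server[] servers) {
        boolean[] results = new boolean[servers.length]; // default answer is DEAD, as in SerialPingStrategy
        if (ping == null || servers.length == 0) {
            return results;
        }
        ExecutorService pool = Executors.newFixedThreadPool(Math.min(servers.length, 8));
        try {
            List<Future<Boolean>> futures = new ArrayList<>();
            for (Server server : servers) {
                Callable<Boolean> task = () -> ping.isAlive(server);
                futures.add(pool.submit(task));
            }
            for (int i = 0; i < futures.size(); i++) {
                try {
                    results[i] = futures.get(i).get(5, TimeUnit.SECONDS);
                } catch (Exception e) {
                    results[i] = false; // treat timeouts and errors as DEAD
                }
            }
        } finally {
            pool.shutdown();
        }
        return results;
    }
}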
- It defines the IRule object that holds the load-balancing rule. From the implementation of chooseServer(Object key) in BaseLoadBalancer we can see that the load balancer actually delegates instance selection to the choose() function of the IRule instance, and RoundRobinRule, which implements the most basic linear round-robin rule, is initialised as the default IRule implementation here.
public Server choose(ILoadBalancer lb, Object key) {
if (lb == null) {
log.warn("no load balancer");
return null;
}
Server server = null;
int count = 0;
while (server == null && count++ < 10) {
List<Server> reachableServers = lb.getReachableServers();
List<Server> allServers = lb.getAllServers();
int upCount = reachableServers.size();
int serverCount = allServers.size();
if ((upCount == 0) || (serverCount == 0)) {
log.warn("No up servers available from load balancer: " + lb);
return null;
}
int nextServerIndex = incrementAndGetModulo(serverCount);
server = allServers.get(nextServerIndex);
if (server == null) {
/* Transient. */
Thread.yield();
continue;
}
if (server.isAlive() && (server.isReadyToServe())) {
return (server);
}
// Next.
server = null;
}
if (count >= 10) {
log.warn("No available alive servers after 10 tries from load balancer: "
+ lb);
}
return server;
}
- Starting the ping task: the default constructor of BaseLoadBalancer directly starts a task that periodically checks whether each Server is healthy; the interval defaults to 10 seconds.
void setupPingTask() {
if (canSkipPing()) {
return;
}
if (lbTimer != null) {
lbTimer.cancel();
}
lbTimer = new ShutdownEnabledTimer("NFLoadBalancer-PingTimer-" + name,
true);
lbTimer.schedule(new PingTask(), 0, pingIntervalSeconds * 1000);
forceQuickPing();
}
Tracing the source a little further, we can see the scheduled task itself:
/**
* TimerTask that keeps runs every X seconds to check the status of each
* server/node in the Server List
*
* @author stonse
*
*/
class PingTask extends TimerTask {
public void run() {
try {
new Pinger(pingStrategy).runPinger();
} catch (Exception e) {
logger.error("LoadBalancer [{}]: Error pinging", name, e);
}
}
}
Load-balancing strategies
Let's now walk through the IRule interface and its implementations.
com.netflix.loadbalancer.AbstractLoadBalancerRule
This class holds the abstract load-balancer interface ILoadBalancer; when a concrete rule selects a service instance it can read the information maintained by the load balancer as the basis for the decision, and design algorithms on top of that information to implement strategies that are efficient for specific scenarios.
/**
* Class that provides a default implementation for setting and getting load balancer
* @author stonse
*
*/
public abstract class AbstractLoadBalancerRule implements IRule, IClientConfigAware {
private ILoadBalancer lb;
@Override
public void setLoadBalancer(ILoadBalancer lb){
this.lb = lb;
}
@Override
public ILoadBalancer getLoadBalancer(){
return lb;
}
}
If the built-in implementations do not meet our needs, we can also define our own ILoadBalancer. It is as simple as the following; the principle behind custom implementations is discussed later.
/**
 * Define your own LoadBalancer here.
 * To use a custom LoadBalancer, annotate this class with @Configuration
 * and have the @Bean method return an implementation of the ILoadBalancer interface.
 */
@Configuration
public class LoadBalancerConfiguration {
@Bean
public ILoadBalancer myLoadBalancer(){
return new TestLoadBalancer();
}
class TestLoadBalancer implements ILoadBalancer {
// ... omitted ...
}
}
com.netflix.loadbalancer.RandomRule
/**
* A loadbalacing strategy that randomly distributes traffic amongst existing
* servers.
*
* @author stonse
*
*/
public class RandomRule extends AbstractLoadBalancerRule
From the class name and its Javadoc we can tell that this is a load-balancing strategy that randomly selects one service instance from the instance list. The class implements the abstract choose(Object key) method by delegating to its choose(ILoadBalancer lb, Object key) method, which takes an additional ILoadBalancer parameter. The load balancer supplies the list of reachable instances via lb.getReachableServers() and the list of all instances via lb.getAllServers(); a random index is obtained with int index = rand.nextInt(serverCount), and the instance is then fetched with Server server = upList.get(index).
/**
* Randomly choose from all living servers
*/
@edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "RCN_REDUNDANT_NULLCHECK_OF_NULL_VALUE")
public Server choose(ILoadBalancer lb, Object key) {
if (lb == null) {
return null;
}
Server server = null;
while (server == null) {
if (Thread.interrupted()) {
return null;
}
List<Server> upList = lb.getReachableServers();
List<Server> allList = lb.getAllServers();
int serverCount = allList.size();
if (serverCount == 0) {
/*
* No servers. End regardless of pass, because subsequent passes
* only get more restrictive.
*/
return null;
}
int index = rand.nextInt(serverCount);
server = upList.get(index);
if (server == null) {
/*
* The only time this should happen is if the server list were
* somehow trimmed. This is a transient condition. Retry after
* yielding.
*/
Thread.yield();
continue;
}
if (server.isAlive()) {
return (server);
}
// Shouldn't actually happen.. but must be transient or a bug.
server = null;
Thread.yield();
}
return server;
}
com.netflix.loadbalancer.RoundRobinRule
The most widely known and most basic load-balancing strategy: the round-robin rule. It selects each service instance in turn, in linear order.
/**
* The most well known and basic load balancing strategy, i.e. Round Robin Rule.
*
* @author stonse
* @author Nikos Michalakis <nikos@netflix.com>
*
*/
public class RoundRobinRule extends AbstractLoadBalancerRule {
/**
* Inspired by the implementation of {@link AtomicInteger#incrementAndGet()}.
*
* @param modulo The modulo to bound the value of the counter.
* @return The next value.
*/
private int incrementAndGetModulo(int modulo) {
for (;;) {
int current = nextServerCyclicCounter.get();
int next = (current + 1) % modulo;
if (nextServerCyclicCounter.compareAndSet(current, next))
return next;
}
}
public Server choose(ILoadBalancer lb, Object key) {
if (lb == null) {
log.warn("no load balancer");
return null;
}
Server server = null;
int count = 0;
while (server == null && count++ < 10) {
List<Server> reachableServers = lb.getReachableServers();
List<Server> allServers = lb.getAllServers();
int upCount = reachableServers.size();
int serverCount = allServers.size();
if ((upCount == 0) || (serverCount == 0)) {
log.warn("No up servers available from load balancer: " + lb);
return null;
}
int nextServerIndex = incrementAndGetModulo(serverCount);
server = allServers.get(nextServerIndex);
if (server == null) {
/* Transient. */
Thread.yield();
continue;
}
if (server.isAlive() && (server.isReadyToServe())) {
return (server);
}
// Next.
server = null;
}
if (count >= 10) {
log.warn("No available alive servers after 10 tries from load balancer: "
+ lb);
}
return server;
}
}
As you can see, the implementation is very similar to RandomRule; the difference lies in how the server is selected. Here the server is chosen in round-robin fashion: incrementAndGetModulo() atomically advances a counter modulo the number of servers, and the server at the resulting index is returned.
com.netflix.loadbalancer.RetryRule
A load-balancing strategy that adds a retry mechanism to instance selection. The key code is shown below; a configuration sketch using this rule follows it.
/*
* Loop if necessary. Note that the time CAN be exceeded depending on the
* subRule, because we're not spawning additional threads and returning
* early.
*/
public Server choose(ILoadBalancer lb, Object key) {
long requestTime = System.currentTimeMillis();
long deadline = requestTime + maxRetryMillis;
Server answer = null;
answer = subRule.choose(key);
if (((answer == null) || (!answer.isAlive()))
&& (System.currentTimeMillis() < deadline)) {
InterruptTask task = new InterruptTask(deadline
- System.currentTimeMillis());
while (!Thread.interrupted()) {
answer = subRule.choose(key);
if (((answer == null) || (!answer.isAlive()))
&& (System.currentTimeMillis() < deadline)) {
/* pause and retry hoping it's transient */
Thread.yield();
} else {
break;
}
}
task.cancel();
}
if ((answer == null) || (!answer.isAlive())) {
return null;
} else {
return answer;
}
}
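A minimal configuration sketch that switches a Ribbon client to this rule, assuming RetryRule's (subRule, maxRetryMillis) constructor; the wrapped RoundRobinRule and the 500 ms budget are illustrative choices, not recommendations:
@Configuration
public class RetryRuleConfiguration {

    @Bean
    public IRule ribbonRule() {
        // keep retrying the round-robin selection for at most 500 ms
        // when the chosen server is null or not alive
        return new RetryRule(new RoundRobinRule(), 500);
    }
}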
com.netflix.loadbalancer.WeightedResponseTimeRule
This strategy extends RoundRobinRule and additionally computes a weight for each instance based on its runtime statistics, then selects instances according to those weights in order to achieve a better distribution. Interested readers can study the source themselves; we do not analyse it further here.
com.netflix.loadbalancer.ClientConfigEnabledRoundRobinRule
/**
* This class essentially contains the RoundRobinRule class defined in the
* loadbalancer package
*
* @author stonse
*
*/
public class ClientConfigEnabledRoundRobinRule extends AbstractLoadBalancerRule {
RoundRobinRule roundRobinRule = new RoundRobinRule();
@Override
public void initWithNiwsConfig(IClientConfig clientConfig) {
roundRobinRule = new RoundRobinRule();
}
@Override
public void setLoadBalancer(ILoadBalancer lb) {
super.setLoadBalancer(lb);
roundRobinRule.setLoadBalancer(lb);
}
@Override
public Server choose(Object key) {
if (roundRobinRule != null) {
return roundRobinRule.choose(key);
} else {
throw new IllegalArgumentException(
"This class has not been initialized with the RoundRobinRule class");
}
}
}
This class defines an internal RoundRobinRule, and its choose() function simply delegates to RoundRobinRule's choose() to perform linear round-robin, so in effect it behaves exactly like RoundRobinRule. In practice we usually extend this class: when the more advanced strategy implemented in the subclass cannot be applied for some reason, the subclass can fall back to the parent implementation, i.e. to this class's choose() function.
com.netflix.loadbalancer.BestAvailableRule
This strategy extends ClientConfigEnabledRoundRobinRule and injects the load balancer's statistics object LoadBalancerStats. In its choose() function we find if (loadBalancerStats == null) { return super.choose(key); }, which is exactly where extending ClientConfigEnabledRoundRobinRule pays off. The strategy takes all service instances known to the load balancer, filters out those whose circuit breaker has tripped, and then picks the instance with the fewest concurrent requests; in other words, it chooses the most idle instance.
/**
* A rule that skips servers with "tripped" circuit breaker and picks the
* server with lowest concurrent requests.
* <p>
* This rule should typically work with {@link ServerListSubsetFilter} which puts a limit on the
* servers that is visible to the rule. This ensure that it only needs to find the minimal
* concurrent requests among a small number of servers. Also, each client will get a random list of
* servers which avoids the problem that one server with the lowest concurrent requests is
* chosen by a large number of clients and immediately gets overwhelmed.
*
* @author awang
*
*/
public class BestAvailableRule extends ClientConfigEnabledRoundRobinRule {
private LoadBalancerStats loadBalancerStats;
@Override
public Server choose(Object key) {
if (loadBalancerStats == null) {
return super.choose(key);
}
List<Server> serverList = getLoadBalancer().getAllServers();
int minimalConcurrentConnections = Integer.MAX_VALUE;
long currentTime = System.currentTimeMillis();
Server chosen = null;
for (Server server: serverList) {
ServerStats serverStats = loadBalancerStats.getSingleServerStat(server);
if (!serverStats.isCircuitBreakerTripped(currentTime)) {
int concurrentConnections = serverStats.getActiveRequestsCount(currentTime);
if (concurrentConnections < minimalConcurrentConnections) {
minimalConcurrentConnections = concurrentConnections;
chosen = server;
}
}
}
if (chosen == null) {
return super.choose(key);
} else {
return chosen;
}
}
@Override
public void setLoadBalancer(ILoadBalancer lb) {
super.setLoadBalancer(lb);
if (lb instanceof AbstractLoadBalancer) {
loadBalancerStats = ((AbstractLoadBalancer) lb).getLoadBalancerStats();
}
}
}
com.netflix.loadbalancer.PredicateBasedRule
As we can see, this is an abstract class containing the abstract method public abstract AbstractServerPredicate getPredicate(). Subclasses implement this method to provide the filtering logic, and the rule then selects an instance from the filtered list in a linear round-robin fashion, so how the filtering is done is entirely up to the subclass (a hypothetical subclass is sketched below).
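As a hypothetical illustration (the class name and the filtering criterion are invented), a subclass only needs to supply the predicate; the inherited choose() then round-robins over the filtered list:
import com.netflix.loadbalancer.AbstractServerPredicate;
import com.netflix.loadbalancer.PredicateBasedRule;
import com.netflix.loadbalancer.PredicateKey;
import com.netflix.loadbalancer.Server;

public class ReadyServerRule extends PredicateBasedRule {

    // keep only servers that report themselves alive and ready to serve
    private final AbstractServerPredicate predicate = new AbstractServerPredicate() {
        @Override
        public boolean apply(PredicateKey input) {
            Server server = input.getServer();
            return server.isAlive() && server.isReadyToServe();
        }
    };

    @Override
    public AbstractServerPredicate getPredicate() {
        return predicate;
    }
}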
com.netflix.loadbalancer.AvailabilityFilteringRule
This class extends PredicateBasedRule and implements the abstract method to complete the filtering. The filtering logic is as follows:
@Override
public boolean apply(@Nullable PredicateKey input) {
LoadBalancerStats stats = getLBStats();
if (stats == null) {
return true;
}
return !shouldSkipServer(stats.getSingleServerStat(input.getServer()));
}
private boolean shouldSkipServer(ServerStats stats) {
if ((CIRCUIT_BREAKER_FILTERING.get() && stats.isCircuitBreakerTripped())
|| stats.getActiveRequestsCount() >= activeConnectionsLimit.get()) {
return true;
}
return false;
}
In the choose() method we can see that this strategy first picks a candidate by linear round-robin and then checks whether it satisfies the predicate (the apply() method of AvailabilityPredicate), so that the chosen instance is both available and relatively idle.
From the source we know that the predicate checks whether the instance's circuit breaker is currently tripped, and whether its number of concurrent requests has reached the configured active-connections limit (which defaults to Integer.MAX_VALUE, i.e. 2^31 - 1). If either condition holds, the node may be faulty or overloaded, so apply() returns false; the loop in choose() then tries the next round-robin candidate, and after roughly ten unsuccessful attempts it falls back to super.choose(), i.e. PredicateBasedRule's filter-then-round-robin selection.
public class AvailabilityFilteringRule extends PredicateBasedRule {
private AbstractServerPredicate predicate;
public AvailabilityFilteringRule() {
super();
predicate = CompositePredicate.withPredicate(new AvailabilityPredicate(this, null))
.addFallbackPredicate(AbstractServerPredicate.alwaysTrue())
.build();
}
@Override
public void initWithNiwsConfig(IClientConfig clientConfig) {
predicate = CompositePredicate.withPredicate(new AvailabilityPredicate(this, clientConfig))
.addFallbackPredicate(AbstractServerPredicate.alwaysTrue())
.build();
}
@Monitor(name="AvailableServersCount", type = DataSourceType.GAUGE)
public int getAvailableServersCount() {
ILoadBalancer lb = getLoadBalancer();
List<Server> servers = lb.getAllServers();
if (servers == null) {
return 0;
}
return Collections2.filter(servers, predicate.getServerOnlyPredicate()).size();
}
/**
* This method is overridden to provide a more efficient implementation which does not iterate through
* all servers. This is under the assumption that in most cases, there are more available instances
* than not.
*/
@Override
public Server choose(Object key) {
int count = 0;
Server server = roundRobinRule.choose(key);
while (count++ <= 10) {
if (predicate.apply(new PredicateKey(server))) {
return server;
}
server = roundRobinRule.choose(key);
}
return super.choose(key);
}
@Override
public AbstractServerPredicate getPredicate() {
return predicate;
}
}
com.netflix.loadbalancer.ZoneAvoidanceRule
This strategy uses com.netflix.loadbalancer.CompositePredicate to filter the list of service instances. It is a composite filter: the primary filter is ZoneAvoidancePredicate and the secondary (fallback) filter is AvailabilityPredicate. Looking at CompositePredicate.withPredicates(p1, p2).addFallbackPredicate(p2).addFallbackPredicate(AbstractServerPredicate.alwaysTrue()).build(), the addFallbackPredicate(p2) call appends to a List<AbstractServerPredicate> held by the Builder, so there can be several fallback filters and they are tried in order.
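A hedged sketch of that composition, assuming p1 and p2 are already-built predicates such as a ZoneAvoidancePredicate and an AvailabilityPredicate:
import com.netflix.loadbalancer.AbstractServerPredicate;
import com.netflix.loadbalancer.CompositePredicate;

public final class CompositePredicateSketch {

    static CompositePredicate compose(AbstractServerPredicate p1, AbstractServerPredicate p2) {
        return CompositePredicate.withPredicates(p1, p2)
                // fallback 1: if too few servers survive "p1 AND p2", filter with p2 only
                .addFallbackPredicate(p2)
                // fallback 2: if that is still too restrictive, accept every server
                .addFallbackPredicate(AbstractServerPredicate.alwaysTrue())
                .build();
    }
}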
Automatic configuration
After Spring Cloud Ribbon is introduced, the following interfaces are configured automatically:
- IClientConfig: Ribbon's client configuration, defaulting to com.netflix.client.config.DefaultClientConfigImpl.
- IRule: the load-balancing rule, defaulting to ZoneAvoidanceRule, which can pick the best instance in a multi-zone environment.
- ServerList<Server>: the mechanism that maintains the list of service instances, defaulting to ConfigurationBasedServerList.
- ILoadBalancer: the load balancer, defaulting to ZoneAwareLoadBalancer, which is zone aware.
The automatic configuration above only applies when no service-governance framework such as Spring Cloud Eureka has been introduced; when Eureka and Ribbon are used together, the automatic configuration differs slightly.
Thanks to this automatic configuration we get client-side load balancing with little effort, and we can also replace the default implementations: simply create the corresponding bean in the Spring Boot application to override the default, for example:
/**
 * Define your own LoadBalancer here.
 * To use a custom LoadBalancer, annotate this class with @Configuration
 * and have the @Bean method return an implementation of the ILoadBalancer interface.
 */
@Configuration
public class LoadBalancerConfiguration {
@Bean
public ILoadBalancer myLoadBalancer(){
return new TestLoadBalancer();
}
}
We can also use the @RibbonClient annotation for finer-grained per-client configuration. For example, the code below applies the configuration in HelloServiceConfiguration to the hello-service service; a hypothetical sketch of that configuration class follows the snippet.
@Configuration
@RibbonClient(name = "hello-service",configuration = HelloServiceConfiguration.class)
public class RibbonConfiguration {
}