Preface
In the previous article we walked through the boot animation's startup flow and found that the boot animation involves no Activity at all: it renders with OpenGL ES and then, through some mechanism, hands the result over to the Android rendering system.
This article explores the preparation work that happens before OpenGL ES rendering actually starts.
If you run into problems, feel free to discuss this article at http://www.lxweimin.com/p/a2b5f82cf75f
Main Text
Let's recall the OpenGL ES usage steps from the previous boot-animation article. Roughly:
- 1. SurfaceComposerClient::getBuiltInDisplay — query SurfaceFlinger (SF) for an available physical display
- 2. SurfaceComposerClient::getDisplayInfo — fetch the display's detailed information from SF
- 3. session()->createSurface — create the drawing surface's control object through the Client
- 4. t.setLayer(control, 0x40000000) — set the layer's Z-order
- 5. control->getSurface — obtain the actual drawing surface object
- 6. eglGetDisplay — get OpenGL ES's default display and load OpenGL ES
- 7. eglInitialize — initialize the display object and the shader cache
- 8. eglChooseConfig — automatically pick the most suitable config
- 9. eglCreateWindowSurface — create an OpenGL ES surface from the Surface
- 10. eglCreateContext — create the OpenGL ES context
- 11. eglQuerySurface — query the surface's width and height
- 12. eglMakeCurrent — bind the context, the display, the rendering surface, and the current thread together
- 13. Use OpenGL ES itself to draw vertices, textures, and so on
- 14. eglSwapBuffers — swap the finished buffer
- 15. Tear down the resources
Let's follow this sequence and see what role the Android rendering system plays along the way.
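Before diving in, here is a condensed sketch of what those steps look like in code. It is modeled on BootAnimation but is only a sketch: error handling, the animation loop and teardown are omitted, session() is BootAnimation's cached SurfaceComposerClient, and the exact config attribute list is illustrative.
sp<IBinder> dtoken = SurfaceComposerClient::getBuiltInDisplay(          // step 1
        ISurfaceComposer::eDisplayIdMain);
DisplayInfo dinfo;
SurfaceComposerClient::getDisplayInfo(dtoken, &dinfo);                  // step 2
sp<SurfaceControl> control = session()->createSurface(                  // step 3
        String8("BootAnimation"), dinfo.w, dinfo.h, PIXEL_FORMAT_RGB_565);
SurfaceComposerClient::Transaction t;
t.setLayer(control, 0x40000000).apply();                                // step 4
sp<Surface> surface = control->getSurface();                            // step 5

EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);                // step 6
eglInitialize(display, nullptr, nullptr);                               // step 7
const EGLint attribs[] = { EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8,
                           EGL_BLUE_SIZE, 8, EGL_DEPTH_SIZE, 0, EGL_NONE };
EGLConfig config; EGLint numConfigs;
eglChooseConfig(display, attribs, &config, 1, &numConfigs);             // step 8
EGLSurface eglSurface = eglCreateWindowSurface(display, config,
        surface.get(), nullptr);                                        // step 9
EGLContext context = eglCreateContext(display, config,
        EGL_NO_CONTEXT, nullptr);                                       // step 10
EGLint w, h;
eglQuerySurface(display, eglSurface, EGL_WIDTH, &w);                    // step 11
eglQuerySurface(display, eglSurface, EGL_HEIGHT, &h);
eglMakeCurrent(display, eglSurface, eglSurface, context);               // step 12
// ... step 13: draw with OpenGL ES ...
eglSwapBuffers(display, eglSurface);                                    // step 14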
SurfaceComposerClient::getBuiltInDisplay
File: /frameworks/native/libs/gui/SurfaceComposerClient.cpp
sp<IBinder> SurfaceComposerClient::getBuiltInDisplay(int32_t id) {
return ComposerService::getComposerService()->getBuiltInDisplay(id);
}
ComposerService is essentially a proxy (Bp) object for ISurfaceComposer; the corresponding native-side (Bn) implementation is SurfaceFlinger itself, so the call lands in SF's getBuiltInDisplay.
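For reference, getComposerService amounts to fetching the "SurfaceFlinger" service from servicemanager and casting it to the ISurfaceComposer interface. Roughly (the real ComposerService also caches the proxy and registers a death recipient, which this sketch skips):
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("SurfaceFlinger"));
sp<ISurfaceComposer> composer = interface_cast<ISurfaceComposer>(binder);
composer->getBuiltInDisplay(id);   // the Binder call that lands in SF below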
SF getBuiltInDisplay
sp<IBinder> SurfaceFlinger::getBuiltInDisplay(int32_t id) {
if (uint32_t(id) >= DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES) {
return nullptr;
}
return mBuiltinDisplays[id];
}
Remember this data structure from the first article on SF initialization? mBuiltinDisplays holds, for each display id (which also serves as the display type), a BBinder at its core. At this point, though, that BBinder is only a communication primitive; it carries no command-handling logic of its own. Let's see what the next method does.
SurfaceComposerClient::getDisplayInfo
File: /frameworks/native/libs/gui/SurfaceComposerClient.cpp
status_t SurfaceComposerClient::getDisplayConfigs(
const sp<IBinder>& display, Vector<DisplayInfo>* configs)
{
return ComposerService::getComposerService()->getDisplayConfigs(display, configs);
}
int SurfaceComposerClient::getActiveConfig(const sp<IBinder>& display) {
return ComposerService::getComposerService()->getActiveConfig(display);
}
status_t SurfaceComposerClient::getDisplayInfo(const sp<IBinder>& display,
DisplayInfo* info) {
Vector<DisplayInfo> configs;
status_t result = getDisplayConfigs(display, &configs);
if (result != NO_ERROR) {
return result;
}
int activeId = getActiveConfig(display);
if (activeId < 0) {
ALOGE("No active configuration found");
return NAME_NOT_FOUND;
}
*info = configs[static_cast<size_t>(activeId)];
return NO_ERROR;
}
This method performs two Binder transactions to fetch the display data: first getDisplayConfigs, then, if that succeeds, getActiveConfig to pick out the active configuration.
SF getDisplayConfigs
status_t SurfaceFlinger::getDisplayConfigs(const sp<IBinder>& display,
Vector<DisplayInfo>* configs) {
...
int32_t type = NAME_NOT_FOUND;
for (int i=0 ; i<DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES ; i++) {
if (display == mBuiltinDisplays[i]) {
type = i;
break;
}
}
if (type < 0) {
return type;
}
// TODO: Not sure if display density should handled by SF any longer
class Density {
static int getDensityFromProperty(char const* propName) {
char property[PROPERTY_VALUE_MAX];
int density = 0;
if (property_get(propName, property, nullptr) > 0) {
density = atoi(property);
}
return density;
}
public:
static int getEmuDensity() {
return getDensityFromProperty("qemu.sf.lcd_density"); }
static int getBuildDensity() {
return getDensityFromProperty("ro.sf.lcd_density"); }
};
configs->clear();
ConditionalLock _l(mStateLock,
std::this_thread::get_id() != mMainThreadId);
for (const auto& hwConfig : getHwComposer().getConfigs(type)) {
DisplayInfo info = DisplayInfo();
float xdpi = hwConfig->getDpiX();
float ydpi = hwConfig->getDpiY();
// How the default (primary) display derives its DPI
if (type == DisplayDevice::DISPLAY_PRIMARY) {
// The density of the device is provided by a build property
float density = Density::getBuildDensity() / 160.0f;
if (density == 0) {
// the build doesn't provide a density -- this is wrong!
// use xdpi instead
ALOGE("ro.sf.lcd_density must be defined as a build property");
density = xdpi / 160.0f;
}
if (Density::getEmuDensity()) {
xdpi = ydpi = density = Density::getEmuDensity();
density /= 160.0f;
}
info.density = density;
// TODO: this needs to go away (currently needed only by webkit)
sp<const DisplayDevice> hw(getDefaultDisplayDeviceLocked());
info.orientation = hw ? hw->getOrientation() : 0;
} else {
...
}
info.w = hwConfig->getWidth();
info.h = hwConfig->getHeight();
info.xdpi = xdpi;
info.ydpi = ydpi;
info.fps = 1e9 / hwConfig->getVsyncPeriod();
info.appVsyncOffset = vsyncPhaseOffsetNs;
info.presentationDeadline = hwConfig->getVsyncPeriod() -
sfVsyncPhaseOffsetNs + 1000000;
info.secure = true;
if (type == DisplayDevice::DISPLAY_PRIMARY &&
mPrimaryDisplayOrientation & DisplayState::eOrientationSwapMask) {
std::swap(info.w, info.h);
}
configs->push_back(info);
}
return NO_ERROR;
}
Notice that here the BBinder is not actually used for communication; it serves as an identity token, used to work out which display type the caller is referring to.
The core is the loop over getHwComposer().getConfigs(type) above: for the given display it fetches the configs from HWComposer and fills in a DisplayInfo for each. Let's focus on how Density, i.e. the dpi, is computed.
A quick note on dpi: it is the number of pixels per inch; the diagonal dpi can be derived from the pixel resolution and the physical diagonal size with the Pythagorean theorem.
In practice the value is driven by the ro.sf.lcd_density and qemu.sf.lcd_density properties. If ro.sf.lcd_density is not set, density falls back to xdpi/160, where xdpi comes from HWC's getConfigs. Finally, if qemu.sf.lcd_density is set, it wins: xdpi and ydpi are both forced to that value and density becomes qemu.sf.lcd_density/160. In other words, the qemu.sf.lcd_density property, when present, is decisive.
If neither property is set, xdpi and ydpi stay at the values reported by HWC and density is xdpi/160.
Finally, for the primary display the code also checks the panel's orientation and swaps width and height if the orientation is rotated.
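To make the selection order concrete, here is the same decision logic pulled out into a standalone helper; the property names are the real ones used above, but the helper itself is only an illustration.
// Hypothetical helper mirroring the density selection in SurfaceFlinger::getDisplayConfigs above.
static float pickDensity(int buildDensity /* ro.sf.lcd_density */,
                         int emuDensity   /* qemu.sf.lcd_density */,
                         float xdpi       /* reported by HWC */) {
    float density = buildDensity / 160.0f;   // e.g. 320 -> 2.0 (xhdpi)
    if (density == 0) {
        density = xdpi / 160.0f;             // fall back to the panel's reported dpi
    }
    if (emuDensity) {
        density = emuDensity / 160.0f;       // the emulator property overrides everything
    }
    return density;
}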
HWComposer getConfigs
std::vector<std::shared_ptr<const HWC2::Display::Config>>
HWComposer::getConfigs(int32_t displayId) const {
RETURN_IF_INVALID_DISPLAY(displayId, {});
auto& displayData = mDisplayData[displayId];
auto configs = mDisplayData[displayId].hwcDisplay->getConfigs();
if (displayData.configMap.empty()) {
for (size_t i = 0; i < configs.size(); ++i) {
displayData.configMap[i] = configs[i];
}
}
return configs;
}
Recall from SF initialization that when a hotplug event reaches HWC, the display is first added to the HWC device and then recorded in mDisplayData; each entry is essentially an HWC2::Display object, and that object reads and caches its configs when it is set up.
File: /frameworks/native/services/surfaceflinger/DisplayHardware/HWC2.cpp
void Display::loadConfigs()
{
ALOGV("[%" PRIu64 "] loadConfigs", mId);
std::vector<Hwc2::Config> configIds;
auto intError = mComposer.getDisplayConfigs(mId, &configIds);
auto error = static_cast<Error>(intError);
if (error != Error::None) {
return;
}
for (auto configId : configIds) {
loadConfig(configId);
}
}
void Display::loadConfig(hwc2_config_t configId)
{
ALOGV("[%" PRIu64 "] loadConfig(%u)", mId, configId);
auto config = Config::Builder(*this, configId)
.setWidth(getAttribute(configId, Attribute::Width))
.setHeight(getAttribute(configId, Attribute::Height))
.setVsyncPeriod(getAttribute(configId, Attribute::VsyncPeriod))
.setDpiX(getAttribute(configId, Attribute::DpiX))
.setDpiY(getAttribute(configId, Attribute::DpiY))
.build();
mConfigs.emplace(configId, std::move(config));
}
int32_t Display::getAttribute(hwc2_config_t configId, Attribute attribute)
{
int32_t value = 0;
auto intError = mComposer.getDisplayAttribute(mId, configId,
static_cast<Hwc2::IComposerClient::Attribute>(attribute),
&value);
auto error = static_cast<Error>(intError);
if (error != Error::None) {
ALOGE("getDisplayAttribute(%" PRIu64 ", %u, %s) failed: %s (%d)", mId,
configId, to_string(attribute).c_str(),
to_string(error).c_str(), intError);
return -1;
}
return value;
}
Here we look up the configIds the hardware reported and then query each attribute through getDisplayAttribute.
That call drops into the HAL to read the panel information. From the UML diagrams in the previous two articles we know this ultimately talks to the hardware through hw_device_t, so let's keep using msm8960 as our reference implementation.
File: /hardware/qcom/display/msm8960/libhwcomposer/hwc.cpp
int hwc_getDisplayAttributes(struct hwc_composer_device_1* dev, int disp,
uint32_t config, const uint32_t* attributes, int32_t* values) {
hwc_context_t* ctx = (hwc_context_t*)(dev);
//If hotpluggable displays are inactive return error
if(disp == HWC_DISPLAY_EXTERNAL && !ctx->dpyAttr[disp].connected) {
return -1;
}
//From HWComposer
static const uint32_t DISPLAY_ATTRIBUTES[] = {
HWC_DISPLAY_VSYNC_PERIOD,
HWC_DISPLAY_WIDTH,
HWC_DISPLAY_HEIGHT,
HWC_DISPLAY_DPI_X,
HWC_DISPLAY_DPI_Y,
HWC_DISPLAY_NO_ATTRIBUTE,
};
const int NUM_DISPLAY_ATTRIBUTES = (sizeof(DISPLAY_ATTRIBUTES) /
sizeof(DISPLAY_ATTRIBUTES)[0]);
for (size_t i = 0; i < NUM_DISPLAY_ATTRIBUTES - 1; i++) {
switch (attributes[i]) {
case HWC_DISPLAY_VSYNC_PERIOD:
values[i] = ctx->dpyAttr[disp].vsync_period;
break;
case HWC_DISPLAY_WIDTH:
values[i] = ctx->dpyAttr[disp].xres;
ALOGD("%s disp = %d, width = %d",__FUNCTION__, disp,
ctx->dpyAttr[disp].xres);
break;
case HWC_DISPLAY_HEIGHT:
values[i] = ctx->dpyAttr[disp].yres;
ALOGD("%s disp = %d, height = %d",__FUNCTION__, disp,
ctx->dpyAttr[disp].yres);
break;
case HWC_DISPLAY_DPI_X:
values[i] = (int32_t) (ctx->dpyAttr[disp].xdpi*1000.0);
break;
case HWC_DISPLAY_DPI_Y:
values[i] = (int32_t) (ctx->dpyAttr[disp].ydpi*1000.0);
break;
default:
ALOGE("Unknown display attribute %d",
attributes[i]);
return -EINVAL;
}
}
return 0;
}
At this point the HAL simply reads, for the given display id, the xres, yres, xdpi, ydpi and vsync_period fields out of dpyAttr. That array should look familiar: it is filled in at hotplug time with the information reported back over the uevent thread's socket.
SF getActiveConfig
int SurfaceFlinger::getActiveConfig(const sp<IBinder>& display) {
if (display == nullptr) {
ALOGE("%s : display is nullptr", __func__);
return BAD_VALUE;
}
sp<const DisplayDevice> device(getDisplayDevice(display));
if (device != nullptr) {
return device->getActiveConfig();
}
return BAD_VALUE;
}
Here the BBinder is again used as a key to find the DisplayDevice and call its getActiveConfig. And where does that object come from? It is the one loaded by setupNewDisplayDeviceInternal during hotplug handling:
if (state.type < DisplayDevice::DISPLAY_VIRTUAL) {
hw->setActiveConfig(getHwComposer().getActiveConfigIndex(state.type));
}
That value, in turn, comes from HWC's getActiveConfigIndex, which sets the active config id from the HAL into the DisplayDevice; afterwards the active id can simply be read back.
HWC's getActiveConfigIndex ultimately calls the HAL's getActiveConfig, and that value depends on setActiveConfig, stored in HWC2On1Adapter::Display.
When is it set? Remember RootWindowContainer from the WMS series? It calls performSurfacePlacement, which drives DisplayManagerService's performTraversal and, through SF, sets the active config of the current display; it acts as the root container of all windows. Likewise, when an Activity refreshes its UI around onResume, ViewRootImpl's performTraversals calls WMS's relayout, and relaying out a window in WMS also triggers performSurfacePlacement.
That path is what ties WMS, DMS and SF together.
Quick summary
We drifted a bit off track there. What getDisplayInfo actually does is fetch the display information for the currently active display configuration.
SurfaceComposerClient createSurface
Let's recall how this method is used:
sp<SurfaceControl> control = session()->createSurface(String8("BootAnimation"),
dinfo.w, dinfo.h, PIXEL_FORMAT_RGB_565);
The boot animation creates a Surface here, giving it a name, a width and height, and the pixel format PIXEL_FORMAT_RGB_565.
Note that this is the single most important preparation step before SF can render anything.
sp<SurfaceControl> SurfaceComposerClient::createSurface(
const String8& name,
uint32_t w,
uint32_t h,
PixelFormat format,
uint32_t flags,
SurfaceControl* parent,
int32_t windowType,
int32_t ownerUid)
{
sp<SurfaceControl> s;
createSurfaceChecked(name, w, h, format, &s, flags, parent, windowType, ownerUid);
return s;
}
status_t SurfaceComposerClient::createSurfaceChecked(
const String8& name,
uint32_t w,
uint32_t h,
PixelFormat format,
sp<SurfaceControl>* outSurface,
uint32_t flags,
SurfaceControl* parent,
int32_t windowType,
int32_t ownerUid)
{
sp<SurfaceControl> sur;
status_t err = mStatus;
if (mStatus == NO_ERROR) {
sp<IBinder> handle;
sp<IBinder> parentHandle;
sp<IGraphicBufferProducer> gbp;
if (parent != nullptr) {
parentHandle = parent->getHandle();
}
err = mClient->createSurface(name, w, h, format, flags, parentHandle,
windowType, ownerUid, &handle, &gbp);
ALOGE_IF(err, "SurfaceComposerClient::createSurface error %s", strerror(-err));
if (err == NO_ERROR) {
*outSurface = new SurfaceControl(this, handle, gbp, true /* owned */);
}
}
return err;
}
This calls createSurface on SF's Client to create a SurfaceControl. Note the very important out-parameter being passed along: an IGraphicBufferProducer, the graphic buffer producer.
Client createSurface
File: /frameworks/native/services/surfaceflinger/Client.cpp
status_t Client::createSurface(
const String8& name,
uint32_t w, uint32_t h, PixelFormat format, uint32_t flags,
const sp<IBinder>& parentHandle, int32_t windowType, int32_t ownerUid,
sp<IBinder>* handle,
sp<IGraphicBufferProducer>* gbp)
{
sp<Layer> parent = nullptr;
if (parentHandle != nullptr) {
auto layerHandle = reinterpret_cast<Layer::Handle*>(parentHandle.get());
parent = layerHandle->owner.promote();
if (parent == nullptr) {
return NAME_NOT_FOUND;
}
}
if (parent == nullptr) {
bool parentDied;
parent = getParentLayer(&parentDied);
// If we had a parent, but it died, we've lost all
// our capabilities.
if (parentDied) {
return NAME_NOT_FOUND;
}
}
/*
* createSurface must be called from the GL thread so that it can
* have access to the GL context.
*/
class MessageCreateLayer : public MessageBase {
SurfaceFlinger* flinger;
Client* client;
sp<IBinder>* handle;
sp<IGraphicBufferProducer>* gbp;
status_t result;
const String8& name;
uint32_t w, h;
PixelFormat format;
uint32_t flags;
sp<Layer>* parent;
int32_t windowType;
int32_t ownerUid;
public:
MessageCreateLayer(SurfaceFlinger* flinger,
const String8& name, Client* client,
uint32_t w, uint32_t h, PixelFormat format, uint32_t flags,
sp<IBinder>* handle, int32_t windowType, int32_t ownerUid,
sp<IGraphicBufferProducer>* gbp,
sp<Layer>* parent)
: flinger(flinger), client(client),
handle(handle), gbp(gbp), result(NO_ERROR),
name(name), w(w), h(h), format(format), flags(flags),
parent(parent), windowType(windowType), ownerUid(ownerUid) {
}
status_t getResult() const { return result; }
virtual bool handler() {
result = flinger->createLayer(name, client, w, h, format, flags,
windowType, ownerUid, handle, gbp, parent);
return true;
}
};
sp<MessageBase> msg = new MessageCreateLayer(mFlinger.get(),
name, this, w, h, format, flags, handle,
windowType, ownerUid, gbp, &parent);
mFlinger->postMessageSync(msg);
return static_cast<MessageCreateLayer*>( msg.get() )->getResult();
}
This method does the following:
- 1. First check whether the layer being created has a parent; if so, resolve the parent Layer.
- 2. Wrap the work in a MessageCreateLayer and post it to SurfaceFlinger's message loop with postMessageSync; when the message is handled, SF's createLayer creates the Layer. Note that the IGraphicBufferProducer out-parameter keeps being passed down.
SF createLayer
File: /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
status_t SurfaceFlinger::createLayer(
const String8& name,
const sp<Client>& client,
uint32_t w, uint32_t h, PixelFormat format, uint32_t flags,
int32_t windowType, int32_t ownerUid, sp<IBinder>* handle,
sp<IGraphicBufferProducer>* gbp, sp<Layer>* parent)
{
if (int32_t(w|h) < 0) {
ALOGE("createLayer() failed, w or h is negative (w=%d, h=%d)",
int(w), int(h));
return BAD_VALUE;
}
status_t result = NO_ERROR;
sp<Layer> layer;
String8 uniqueName = getUniqueLayerName(name);
switch (flags & ISurfaceComposerClient::eFXSurfaceMask) {
case ISurfaceComposerClient::eFXSurfaceNormal:
result = createBufferLayer(client,
uniqueName, w, h, flags, format,
handle, gbp, &layer);
break;
case ISurfaceComposerClient::eFXSurfaceColor:
result = createColorLayer(client,
uniqueName, w, h, flags,
handle, &layer);
break;
default:
result = BAD_VALUE;
break;
}
if (result != NO_ERROR) {
return result;
}
// window type is WINDOW_TYPE_DONT_SCREENSHOT from SurfaceControl.java
// TODO b/64227542
if (windowType == 441731) {
windowType = 2024; // TYPE_NAVIGATION_BAR_PANEL
layer->setPrimaryDisplayOnly();
}
layer->setInfo(windowType, ownerUid);
result = addClientLayer(client, *handle, *gbp, layer, *parent);
if (result != NO_ERROR) {
return result;
}
...
return result;
}
The core logic has two steps:
- 1. createBufferLayer — create the layer
- 2. addClientLayer — register the layer with the Client
You can see that SF creates a different kind of Layer depending on the flags passed in:
- 1. ISurfaceComposerClient::eFXSurfaceNormal maps to BufferLayer
- 2. ISurfaceComposerClient::eFXSurfaceColor maps to ColorLayer
eFXSurfaceNormal = 0x00000000,
eFXSurfaceColor = 0x00020000,
eFXSurfaceMask = 0x000F0000,
These are the two flag values involved. With the default flags of 0, a BufferLayer is created. So what is the difference between the two layer types? ColorLayer is rarely used: BufferLayer carries a built-in producer/consumer pipeline for graphic buffers and can be updated continuously, whereas ColorLayer has none of that machinery; think of it as a small, static layer. In today's complex UI interactions it does not see much use.
We will come back to ColorLayer when we meet it; for now let's concentrate on BufferLayer.
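As a quick sanity check of which branch the boot animation takes, assuming the default flags value of 0 from createSurface:
uint32_t flags = 0;  // BootAnimation passes no flags
switch (flags & ISurfaceComposerClient::eFXSurfaceMask) {   // 0 & 0x000F0000 == 0
case ISurfaceComposerClient::eFXSurfaceNormal:              // 0x00000000 -> this branch is taken
    // createBufferLayer(...)
    break;
case ISurfaceComposerClient::eFXSurfaceColor:               // 0x00020000
    // createColorLayer(...)
    break;
}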
createBufferLayer: creating the layer
status_t SurfaceFlinger::createBufferLayer(const sp<Client>& client,
const String8& name, uint32_t w, uint32_t h, uint32_t flags, PixelFormat& format,
sp<IBinder>* handle, sp<IGraphicBufferProducer>* gbp, sp<Layer>* outLayer)
{
// initialize the surfaces
switch (format) {
case PIXEL_FORMAT_TRANSPARENT:
case PIXEL_FORMAT_TRANSLUCENT:
format = PIXEL_FORMAT_RGBA_8888;
break;
case PIXEL_FORMAT_OPAQUE:
format = PIXEL_FORMAT_RGBX_8888;
break;
}
sp<BufferLayer> layer = new BufferLayer(this, client, name, w, h, flags);
status_t err = layer->setBuffers(w, h, format, flags);
if (err == NO_ERROR) {
*handle = layer->getHandle();
*gbp = layer->getProducer();
*outLayer = layer;
}
return err;
}
Here the incoming format is adjusted: transparent or translucent formats are forced to RGBA_8888, and opaque formats to RGBX_8888. A BufferLayer is then constructed, and its handle and its graphic buffer producer are returned to the client (here, the SurfaceControl inside SurfaceComposerClient).
Layer initialization
File: /frameworks/native/services/surfaceflinger/Layer.cpp
Layer::Layer(SurfaceFlinger* flinger, const sp<Client>& client, const String8& name, uint32_t w,
uint32_t h, uint32_t flags)
: contentDirty(false),
sequence(uint32_t(android_atomic_inc(&sSequence))),
mFlinger(flinger),
mPremultipliedAlpha(true),
mName(name),
mTransactionFlags(0),
mPendingStateMutex(),
mPendingStates(),
mQueuedFrames(0),
mSidebandStreamChanged(false),
mActiveBufferSlot(BufferQueue::INVALID_BUFFER_SLOT),
mCurrentTransform(0),
mOverrideScalingMode(-1),
mCurrentOpacity(true),
mCurrentFrameNumber(0),
mFrameLatencyNeeded(false),
mFiltering(false),
mNeedsFiltering(false),
mProtectedByApp(false),
mClientRef(client),
mPotentialCursor(false),
mQueueItemLock(),
mQueueItemCondition(),
mQueueItems(),
mLastFrameNumberReceived(0),
mAutoRefresh(false),
mFreezeGeometryUpdates(false),
mCurrentChildren(LayerVector::StateSet::Current),
mDrawingChildren(LayerVector::StateSet::Drawing) {
mCurrentCrop.makeInvalid();
uint32_t layerFlags = 0;
if (flags & ISurfaceComposerClient::eHidden) layerFlags |= layer_state_t::eLayerHidden;
if (flags & ISurfaceComposerClient::eOpaque) layerFlags |= layer_state_t::eLayerOpaque;
if (flags & ISurfaceComposerClient::eSecure) layerFlags |= layer_state_t::eLayerSecure;
mName = name;
mTransactionName = String8("TX - ") + mName;
mCurrentState.active.w = w;
mCurrentState.active.h = h;
mCurrentState.flags = layerFlags;
mCurrentState.active.transform.set(0, 0);
mCurrentState.crop.makeInvalid();
mCurrentState.finalCrop.makeInvalid();
mCurrentState.requestedFinalCrop = mCurrentState.finalCrop;
mCurrentState.requestedCrop = mCurrentState.crop;
mCurrentState.z = 0;
mCurrentState.color.a = 1.0f;
mCurrentState.layerStack = 0;
mCurrentState.sequence = 0;
mCurrentState.requested = mCurrentState.active;
mCurrentState.appId = 0;
mCurrentState.type = 0;
// drawing state & current state are identical
mDrawingState = mCurrentState;
const auto& hwc = flinger->getHwComposer();
const auto& activeConfig = hwc.getActiveConfig(HWC_DISPLAY_PRIMARY);
nsecs_t displayPeriod = activeConfig->getVsyncPeriod();
mFrameTracker.setDisplayRefreshPeriod(displayPeriod);
CompositorTiming compositorTiming;
flinger->getCompositorTiming(&compositorTiming);
mFrameEventHistory.initializeCompositorTiming(compositorTiming);
}
For now it is enough to know that the Layer holds references to HWC, the flinger, and related objects.
BufferLayer initialization
File: /frameworks/native/services/surfaceflinger/BufferLayer.cpp
BufferLayer::BufferLayer(SurfaceFlinger* flinger, const sp<Client>& client, const String8& name,
uint32_t w, uint32_t h, uint32_t flags)
: Layer(flinger, client, name, w, h, flags),
mConsumer(nullptr),
mTextureName(UINT32_MAX),
mFormat(PIXEL_FORMAT_NONE),
mCurrentScalingMode(NATIVE_WINDOW_SCALING_MODE_FREEZE),
mBufferLatched(false),
mPreviousFrameNumber(0),
mUpdateTexImageFailed(false),
mRefreshPending(false) {
ALOGV("Creating Layer %s", name.string());
mFlinger->getRenderEngine().genTextures(1, &mTextureName);
mTexture.init(Texture::TEXTURE_EXTERNAL, mTextureName);
if (flags & ISurfaceComposerClient::eNonPremultiplied) mPremultipliedAlpha = false;
mCurrentState.requested = mCurrentState.active;
// drawing state & current state are identical
mDrawingState = mCurrentState;
}
Two important things happen here:
- 1. genTextures asks the RenderEngine to generate a texture object whose name is stored in mTextureName (see the sketch after this list).
- 2. A Texture object is initialized and bound to mTextureName; Texture is just a small helper around a texture and its transform matrix.
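For intuition, RenderEngine::genTextures boils down to a plain glGenTextures call, and Texture::TEXTURE_EXTERNAL corresponds to the GL_TEXTURE_EXTERNAL_OES target used for sampling buffers that arrive from a BufferQueue. A rough GLES equivalent (a sketch, not the actual RenderEngine code):
GLuint texName = 0;
glGenTextures(1, &texName);                       // what genTextures ends up doing
glBindTexture(GL_TEXTURE_EXTERNAL_OES, texName);  // external target: the image comes from a GraphicBuffer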
BufferLayer onFirstRef: setting up the buffer queue
Having a BufferLayer alone is not enough; building the producer/consumer pipeline takes more. The real core work happens in onFirstRef, after the object is instantiated.
void BufferLayer::onFirstRef() {
// Creates a custom BufferQueue for SurfaceFlingerConsumer to use
sp<IGraphicBufferProducer> producer;
sp<IGraphicBufferConsumer> consumer;
BufferQueue::createBufferQueue(&producer, &consumer, true);
mProducer = new MonitoredProducer(producer, mFlinger, this);
mConsumer = new BufferLayerConsumer(consumer,
mFlinger->getRenderEngine(), mTextureName, this);
mConsumer->setConsumerUsageBits(getEffectiveUsage(0));
mConsumer->setContentsChangedListener(this);
mConsumer->setName(mName);
if (mFlinger->isLayerTripleBufferingDisabled()) {
mProducer->setMaxDequeuedBufferCount(2);
}
const sp<const DisplayDevice> hw(mFlinger->getDefaultDisplayDevice());
updateTransformHint(hw);
}
Inside the Layer we can clearly see the words consumer and producer. BufferQueue::createBufferQueue creates the core producer and consumer, which are then wrapped into the objects exposed to the outside:
- 1. IGraphicBufferProducer — the buffer producer, wrapped as MonitoredProducer
- 2. IGraphicBufferConsumer — the buffer consumer, wrapped as BufferLayerConsumer
Next comes a key piece of logic: the consumer registers a ContentsChangedListener, so that when a refresh is needed this callback fires and the consumer gets to consume.
BufferQueue::createBufferQueue: creating the core producer and consumer
void BufferQueue::createBufferQueue(sp<IGraphicBufferProducer>* outProducer,
sp<IGraphicBufferConsumer>* outConsumer,
bool consumerIsSurfaceFlinger) {
sp<BufferQueueCore> core(new BufferQueueCore());
sp<IGraphicBufferProducer> producer(new BufferQueueProducer(core, consumerIsSurfaceFlinger));
sp<IGraphicBufferConsumer> consumer(new BufferQueueConsumer(core));
*outProducer = producer;
*outConsumer = consumer;
}
The core consists of three objects:
- 1. BufferQueueCore — the buffer queue itself
- 2. BufferQueueProducer — the buffer producer
- 3. BufferQueueConsumer — the buffer consumer
BufferQueueCore initialization
BufferQueueCore::BufferQueueCore() :
mMutex(),
mIsAbandoned(false),
mConsumerControlledByApp(false),
mConsumerName(getUniqueName()),
mConsumerListener(),
mConsumerUsageBits(0),
mConsumerIsProtected(false),
mConnectedApi(NO_CONNECTED_API),
mLinkedToDeath(),
mConnectedProducerListener(),
mSlots(),
mQueue(),
mFreeSlots(),
mFreeBuffers(),
mUnusedSlots(),
mActiveBuffers(),
mDequeueCondition(),
mDequeueBufferCannotBlock(false),
mDefaultBufferFormat(PIXEL_FORMAT_RGBA_8888),
mDefaultWidth(1),
mDefaultHeight(1),
mDefaultBufferDataSpace(HAL_DATASPACE_UNKNOWN),
mMaxBufferCount(BufferQueueDefs::NUM_BUFFER_SLOTS),
mMaxAcquiredBufferCount(1),
mMaxDequeuedBufferCount(1),
mBufferHasBeenQueued(false),
mFrameCounter(0),
mTransformHint(0),
mIsAllocating(false),
mIsAllocatingCondition(),
mAllowAllocation(true),
mBufferAge(0),
mGenerationNumber(0),
mAsyncMode(false),
mSharedBufferMode(false),
mAutoRefresh(false),
mSharedBufferSlot(INVALID_BUFFER_SLOT),
mSharedBufferCache(Rect::INVALID_RECT, 0, NATIVE_WINDOW_SCALING_MODE_FREEZE,
HAL_DATASPACE_UNKNOWN),
mLastQueuedSlot(INVALID_BUFFER_SLOT),
mUniqueId(getUniqueId())
{
int numStartingBuffers = getMaxBufferCountLocked();
for (int s = 0; s < numStartingBuffers; s++) {
mFreeSlots.insert(s);
}
for (int s = numStartingBuffers; s < BufferQueueDefs::NUM_BUFFER_SLOTS;
s++) {
mUnusedSlots.push_front(s);
}
}
int BufferQueueCore::getMaxBufferCountLocked() const {
int maxBufferCount = mMaxAcquiredBufferCount + mMaxDequeuedBufferCount +
((mAsyncMode || mDequeueBufferCannotBlock) ? 1 : 0);
// limit maxBufferCount by mMaxBufferCount always
maxBufferCount = std::min(mMaxBufferCount, maxBufferCount);
return maxBufferCount;
}
The Core initializes a very important array of slots. Android seems fond of this slot design; rosalloc uses something similar. Let's simply call them slots.
From the code above, the starting buffer count is the maximum number of buffers this Layer can have acquired plus the maximum it can have dequeued; with both async-mode flags false, maxBufferCount works out to 2, while mMaxBufferCount is the compile-time constant 64 (NUM_BUFFER_SLOTS).
So mFreeSlots starts with 2 free slots, and the remaining 62 go into mUnusedSlots as unused.
These slots are, in effect, the buffer queue itself, waiting for buffers to be slotted in. The arithmetic is spelled out below.
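Spelling out the arithmetic with the constructor defaults shown above:
// maxBufferCount = mMaxAcquiredBufferCount (1) + mMaxDequeuedBufferCount (1)
//                + (mAsyncMode || mDequeueBufferCannotBlock ? 1 : 0)   -> 1 + 1 + 0 = 2
// maxBufferCount = min(mMaxBufferCount = NUM_BUFFER_SLOTS = 64, 2) = 2
// so: mFreeSlots = {0, 1}, mUnusedSlots = {2, 3, ..., 63}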
BufferQueueProducer initialization
First, a look at its header:
File: /frameworks/native/libs/gui/include/gui/BufferQueueProducer.h
class BufferQueueProducer : public BnGraphicBufferProducer,
private IBinder::DeathRecipient
This class is the IGraphicBufferProducer mentioned above (its Bn side); one instance lives in SF's Layer, and the interface is handed across to the client.
BufferQueueProducer::BufferQueueProducer(const sp<BufferQueueCore>& core,
bool consumerIsSurfaceFlinger) :
mCore(core),
mSlots(core->mSlots),
mConsumerName(),
mStickyTransform(0),
mConsumerIsSurfaceFlinger(consumerIsSurfaceFlinger),
mLastQueueBufferFence(Fence::NO_FENCE),
mLastQueuedTransform(0),
mCallbackMutex(),
mNextCallbackTicket(0),
mCurrentCallbackTicket(0),
mCallbackCondition(),
mDequeueTimeout(-1) {}
The key point is that the core's slot array is passed in.
BufferQueueConsumer initialization
File: /frameworks/native/libs/gui/BufferQueueConsumer.cpp
BufferQueueConsumer::BufferQueueConsumer(const sp<BufferQueueCore>& core) :
mCore(core),
mSlots(core->mSlots),
mConsumerName() {}
Equally simple: it holds the slot array of the buffer queue. Next, its wrapper class.
BufferLayerConsumer initialization
File: /frameworks/native/services/surfaceflinger/BufferLayerConsumer.cpp
BufferLayerConsumer::BufferLayerConsumer(const sp<IGraphicBufferConsumer>& bq,
RE::RenderEngine& engine, uint32_t tex, Layer* layer)
: ConsumerBase(bq, false),
mCurrentCrop(Rect::EMPTY_RECT),
mCurrentTransform(0),
mCurrentScalingMode(NATIVE_WINDOW_SCALING_MODE_FREEZE),
mCurrentFence(Fence::NO_FENCE),
mCurrentTimestamp(0),
mCurrentDataSpace(ui::Dataspace::UNKNOWN),
mCurrentFrameNumber(0),
mCurrentTransformToDisplayInverse(false),
mCurrentSurfaceDamage(),
mCurrentApi(0),
mDefaultWidth(1),
mDefaultHeight(1),
mFilteringEnabled(true),
mRE(engine),
mTexName(tex),
mLayer(layer),
mCurrentTexture(BufferQueue::INVALID_BUFFER_SLOT) {
memcpy(mCurrentTransformMatrix, mtxIdentity.asArray(), sizeof(mCurrentTransformMatrix));
mConsumer->setConsumerUsageBits(DEFAULT_USAGE_FLAGS);
}
Besides holding an IGraphicBufferConsumer, it also initializes mtxIdentity, a matrix of type mat4. If you are familiar with shader languages you will recognize it: a 4x4 identity matrix.
File: /frameworks/native/services/surfaceflinger/BufferLayerConsumer.h
class BufferLayerConsumer : public ConsumerBase
It inherits from ConsumerBase, so let's see what ConsumerBase's constructor does.
ConsumerBase initialization
File: /frameworks/native/libs/gui/ConsumerBase.cpp
ConsumerBase::ConsumerBase(const sp<IGraphicBufferConsumer>& bufferQueue, bool controlledByApp) :
mAbandoned(false),
mConsumer(bufferQueue),
mPrevFinalReleaseFence(Fence::NO_FENCE) {
// Choose a name using the PID and a process-unique ID.
mName = String8::format("unnamed-%d-%d", getpid(), createProcessUniqueId());
wp<ConsumerListener> listener = static_cast<ConsumerListener*>(this);
sp<IConsumerListener> proxy = new BufferQueue::ProxyConsumerListener(listener);
status_t err = mConsumer->consumerConnect(proxy, controlledByApp);
if (err != NO_ERROR) {
...
} else {
mConsumer->setConsumerName(mName);
}
}
In ConsumerBase's constructor, the object casts itself to a ConsumerListener (an interface it inherits). mConsumer is the IGraphicBufferConsumer, i.e. the BufferQueueConsumer above. The current object is wrapped into an IConsumerListener proxy and registered via consumerConnect, wiring the callbacks to the real consumer.
BufferQueueConsumer consumerConnect
File: /frameworks/native/libs/gui/include/gui/BufferQueueConsumer.h
virtual status_t consumerConnect(const sp<IConsumerListener>& consumer,
bool controlledByApp) {
return connect(consumer, controlledByApp);
}
status_t BufferQueueConsumer::connect(
const sp<IConsumerListener>& consumerListener, bool controlledByApp) {
...
Mutex::Autolock lock(mCore->mMutex);
if (mCore->mIsAbandoned) {
...
return NO_INIT;
}
mCore->mConsumerListener = consumerListener;
mCore->mConsumerControlledByApp = controlledByApp;
return NO_ERROR;
}
At this point the consumer listener callback is registered inside BufferQueueCore.
BufferLayerConsumer setContentsChangedListener
Next, BufferLayerConsumer registers one more listener, this time for content changes, i.e. for when the surface needs to be refreshed.
File: /frameworks/native/services/surfaceflinger/BufferLayerConsumer.cpp
void BufferLayerConsumer::setContentsChangedListener(const wp<ContentsChangedListener>& listener) {
setFrameAvailableListener(listener);
Mutex::Autolock lock(mMutex);
mContentsChangedListener = listener;
}
This in turn calls ConsumerBase's setFrameAvailableListener.
ConsumerBase setFrameAvailableListener
void ConsumerBase::setFrameAvailableListener(
const wp<FrameAvailableListener>& listener) {
Mutex::Autolock lock(mFrameAvailableMutex);
mFrameAvailableListener = listener;
}
That completes the listener loop. The class nesting here is deep, so a UML diagram helps to organize it.
In one sentence: the FrameAvailableListener ends up registered inside BufferQueueCore. When the producer queues a buffer, the core looks up that listener and fires it; the call enters ConsumerBase and is forwarded on to BufferLayer, where BufferLayer and SF carry out the subsequent drawing steps.
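Traced end to end, the chain looks roughly like this (simplified; exact call sites elided):
// BufferQueueProducer::queueBuffer() (producer side, simplified):
//     sp<IConsumerListener> listener = mCore->mConsumerListener;  // the ProxyConsumerListener
//     listener->onFrameAvailable(item);
//
// ProxyConsumerListener forwards to the ConsumerBase it wraps:
//     ConsumerBase::onFrameAvailable(item)
//         -> mFrameAvailableListener->onFrameAvailable(item);     // the ContentsChangedListener
//
// BufferLayer implements ContentsChangedListener, so the callback lands in
// BufferLayer::onFrameAvailable(), which asks SurfaceFlinger to schedule
// another composition pass.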
addClientLayer
With the Layer constructed, it needs to be stored.
File: /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
status_t SurfaceFlinger::addClientLayer(const sp<Client>& client,
const sp<IBinder>& handle,
const sp<IGraphicBufferProducer>& gbc,
const sp<Layer>& lbc,
const sp<Layer>& parent)
{
{
Mutex::Autolock _l(mStateLock);
...
if (parent == nullptr) {
mCurrentState.layersSortedByZ.add(lbc);
} else {
if (parent->isPendingRemoval()) {
ALOGE("addClientLayer called with a removed parent");
return NAME_NOT_FOUND;
}
parent->addChild(lbc);
}
if (gbc != nullptr) {
mGraphicBufferProducerList.insert(IInterface::asBinder(gbc).get());
...
}
mLayersAdded = true;
mNumLayers++;
}
client->attachLayer(handle, lbc);
return NO_ERROR;
}
If the new Layer has no parent, it is added to mCurrentState's layersSortedByZ, i.e. appended at the end of the Z order, on top of the layers currently being rendered; otherwise it is attached to its parent Layer.
The graphic buffer producer is also inserted into the global mGraphicBufferProducerList.
Finally, the client's attachLayer binds the layer's handle Binder to the Layer on the Client side.
Client attachLayer
void Client::attachLayer(const sp<IBinder>& handle, const sp<Layer>& layer)
{
Mutex::Autolock _l(mLock);
mLayers.add(handle, layer);
}
So the mapping is also stored on the Client.
SurfaceControl initialization
With the flow above, the whole buffer-queue construction is complete. Now let's return to SurfaceComposerClient and continue with the SurfaceControl's initialization.
File: /frameworks/native/libs/gui/SurfaceControl.cpp
SurfaceControl::SurfaceControl(
const sp<SurfaceComposerClient>& client,
const sp<IBinder>& handle,
const sp<IGraphicBufferProducer>& gbp,
bool owned)
: mClient(client), mHandle(handle), mGraphicBufferProducer(gbp), mOwned(owned)
{
}
The SurfaceControl now holds the layer's handle Binder, the graphic buffer producer, and the SurfaceComposerClient itself.
SurfaceControl: generating the Surface
Once we have the SurfaceControl, pixels are drawn onto the Surface it generates.
sp<Surface> SurfaceControl::generateSurfaceLocked() const
{
// This surface is always consumed by SurfaceFlinger, so the
// producerControlledByApp value doesn't matter; using false.
mSurfaceData = new Surface(mGraphicBufferProducer, false);
return mSurfaceData;
}
sp<Surface> SurfaceControl::getSurface() const
{
Mutex::Autolock _l(mLock);
if (mSurfaceData == 0) {
return generateSurfaceLocked();
}
return mSurfaceData;
}
In essence, this just hands the graphic buffer producer to a new Surface.
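This is the object BootAnimation hands to EGL in steps 5 and 9 at the top of the article: since Surface is an ANativeWindow (see the struct below), it can be passed straight to eglCreateWindowSurface. A minimal usage sketch, assuming display and config were obtained as in the earlier EGL sketch:
sp<Surface> surface = control->getSurface();             // step 5
// Surface is-an ANativeWindow, so EGL can consume it directly:
EGLSurface eglSurface = eglCreateWindowSurface(display, config,
        surface.get(), nullptr);                         // step 9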
Surface initialization
Surface is the drawing layer that faces us, the client-side developers. We never drive the buffer producer directly; everything goes through the Surface, which contains the important logic for submitting buffers.
class Surface
: public ANativeObjectBase<ANativeWindow, Surface, RefBase>
It inherits from the ANativeObjectBase template class, which only handles reference counting, but its design is quite elegant and worth studying.
template <typename NATIVE_TYPE, typename TYPE, typename REF,
typename NATIVE_BASE = android_native_base_t>
class ANativeObjectBase : public NATIVE_TYPE, public REF
{
public:
// Disambiguate between the incStrong in REF and NATIVE_TYPE
void incStrong(const void* id) const {
REF::incStrong(id);
}
void decStrong(const void* id) const {
REF::decStrong(id);
}
protected:
typedef ANativeObjectBase<NATIVE_TYPE, TYPE, REF, NATIVE_BASE> BASE;
ANativeObjectBase() : NATIVE_TYPE(), REF() {
NATIVE_TYPE::common.incRef = incRef;
NATIVE_TYPE::common.decRef = decRef;
}
static inline TYPE* getSelf(NATIVE_TYPE* self) {
return static_cast<TYPE*>(self);
}
static inline TYPE const* getSelf(NATIVE_TYPE const* self) {
return static_cast<TYPE const *>(self);
}
static inline TYPE* getSelf(NATIVE_BASE* base) {
return getSelf(reinterpret_cast<NATIVE_TYPE*>(base));
}
static inline TYPE const * getSelf(NATIVE_BASE const* base) {
return getSelf(reinterpret_cast<NATIVE_TYPE const*>(base));
}
static void incRef(NATIVE_BASE* base) {
ANativeObjectBase* self = getSelf(base);
self->incStrong(self);
}
static void decRef(NATIVE_BASE* base) {
ANativeObjectBase* self = getSelf(base);
self->decStrong(self);
}
};
The template parameters determine the inheritance chain. In other words, it acts rather like a hook: it adds reference-counting support without changing the overall design.
The ANativeWindow struct
File: /frameworks/native/libs/nativewindow/include/system/window.h
struct ANativeWindow
{
#ifdef __cplusplus
ANativeWindow()
: flags(0), minSwapInterval(0), maxSwapInterval(0), xdpi(0), ydpi(0)
{
common.magic = ANDROID_NATIVE_WINDOW_MAGIC;
common.version = sizeof(ANativeWindow);
memset(common.reserved, 0, sizeof(common.reserved));
}
/* Implement the methods that sp<ANativeWindow> expects so that it
can be used to automatically refcount ANativeWindow's. */
void incStrong(const void* /*id*/) const {
common.incRef(const_cast<android_native_base_t*>(&common));
}
void decStrong(const void* /*id*/) const {
common.decRef(const_cast<android_native_base_t*>(&common));
}
#endif
struct android_native_base_t common;
/* flags describing some attributes of this surface or its updater */
const uint32_t flags;
/* min swap interval supported by this updated */
const int minSwapInterval;
/* max swap interval supported by this updated */
const int maxSwapInterval;
/* horizontal and vertical resolution in DPI */
const float xdpi;
const float ydpi;
intptr_t oem[4];
int (*setSwapInterval)(struct ANativeWindow* window,
int interval);
int (*dequeueBuffer_DEPRECATED)(struct ANativeWindow* window,
struct ANativeWindowBuffer** buffer);
int (*lockBuffer_DEPRECATED)(struct ANativeWindow* window,
struct ANativeWindowBuffer* buffer);
int (*queueBuffer_DEPRECATED)(struct ANativeWindow* window,
struct ANativeWindowBuffer* buffer);
int (*query)(const struct ANativeWindow* window,
int what, int* value);
int (*perform)(struct ANativeWindow* window,
int operation, ... );
int (*cancelBuffer_DEPRECATED)(struct ANativeWindow* window,
struct ANativeWindowBuffer* buffer);
int (*dequeueBuffer)(struct ANativeWindow* window,
struct ANativeWindowBuffer** buffer, int* fenceFd);
int (*queueBuffer)(struct ANativeWindow* window,
struct ANativeWindowBuffer* buffer, int fenceFd);
int (*cancelBuffer)(struct ANativeWindow* window,
struct ANativeWindowBuffer* buffer, int fenceFd);
};
There are plenty of clues here. Despite the name Window, ANativeWindow is not the structure that stores pixels; you can tell from the function pointers that it controls ANativeWindowBuffer objects, the actual pixel buffers. The main operations are queueBuffer (enqueue a filled buffer), dequeueBuffer (dequeue a free buffer), lockBuffer (lock a buffer) and query (query window attributes), plus setSwapInterval for controlling buffer swapping.
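A minimal sketch of the dequeue/draw/queue cycle these hooks implement. In practice EGL's Android window-surface backend drives them for us; the window pointer here is assumed to come from a Surface:
ANativeWindow* window = surface.get();      // Surface is-an ANativeWindow
ANativeWindowBuffer* buffer = nullptr;
int fenceFd = -1;

// Ask the producer side for a free buffer to draw into.
window->dequeueBuffer(window, &buffer, &fenceFd);

// ... wait on fenceFd, then write pixels into the buffer ...

// Hand the filled buffer back so the consumer (SurfaceFlinger) can latch it.
window->queueBuffer(window, buffer, /*fenceFd*/ -1);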
Now let's turn back to the Surface's own initialization.
Surface::Surface(const sp<IGraphicBufferProducer>& bufferProducer, bool controlledByApp)
: mGraphicBufferProducer(bufferProducer),
mCrop(Rect::EMPTY_RECT),
mBufferAge(0),
mGenerationNumber(0),
mSharedBufferMode(false),
mAutoRefresh(false),
mSharedBufferSlot(BufferItem::INVALID_BUFFER_SLOT),
mSharedBufferHasBeenQueued(false),
mQueriedSupportedTimestamps(false),
mFrameTimestampsSupportsPresent(false),
mEnableFrameTimestamps(false),
mFrameEventHistory(std::make_unique<ProducerFrameEventHistory>()) {
// Initialize the ANativeWindow function pointers.
ANativeWindow::setSwapInterval = hook_setSwapInterval;
ANativeWindow::dequeueBuffer = hook_dequeueBuffer;
ANativeWindow::cancelBuffer = hook_cancelBuffer;
ANativeWindow::queueBuffer = hook_queueBuffer;
ANativeWindow::query = hook_query;
ANativeWindow::perform = hook_perform;
ANativeWindow::dequeueBuffer_DEPRECATED = hook_dequeueBuffer_DEPRECATED;
ANativeWindow::cancelBuffer_DEPRECATED = hook_cancelBuffer_DEPRECATED;
ANativeWindow::lockBuffer_DEPRECATED = hook_lockBuffer_DEPRECATED;
ANativeWindow::queueBuffer_DEPRECATED = hook_queueBuffer_DEPRECATED;
const_cast<int&>(ANativeWindow::minSwapInterval) = 0;
const_cast<int&>(ANativeWindow::maxSwapInterval) = 1;
mReqWidth = 0;
mReqHeight = 0;
mReqFormat = 0;
mReqUsage = 0;
mTimestamp = NATIVE_WINDOW_TIMESTAMP_AUTO;
mDataSpace = Dataspace::UNKNOWN;
mScalingMode = NATIVE_WINDOW_SCALING_MODE_FREEZE;
mTransform = 0;
mStickyTransform = 0;
mDefaultWidth = 0;
mDefaultHeight = 0;
mUserWidth = 0;
mUserHeight = 0;
mTransformHint = 0;
mConsumerRunningBehind = false;
mConnectedToCpu = false;
mProducerControlledByApp = controlledByApp;
mSwapIntervalZero = false;
}
When the Surface is constructed, every one of those function pointers is filled in with a hook, which is what gives the Surface its ability to drive the buffer operations.
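As a rough illustration of what those assignments buy us: each hook simply recovers the Surface behind the ANativeWindow and forwards to the matching member function, which in turn talks to the remote IGraphicBufferProducer. Sketched from memory of the AOSP code, not quoted verbatim:
int Surface::hook_dequeueBuffer(ANativeWindow* window,
                                ANativeWindowBuffer** buffer, int* fenceFd) {
    Surface* c = getSelf(window);              // the ANativeObjectBase helper seen above
    return c->dequeueBuffer(buffer, fenceFd);  // ends up in mGraphicBufferProducer->dequeueBuffer(...)
}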
Conclusion
That wraps up the initialization of the BufferQueue. During this setup we have built the skeleton of the producer-consumer model; what remains is producing buffers, the producer writing them into the queue, and the consumer being notified to consume them.
We will get to those steps later. As usual, a diagram sums up the whole flow.
To recap, this article covered steps 1-5 of the boot-animation sequence:
- getBuiltInDisplay — fetch the current display from the built-in display array
- getDisplayInfo — fetch the active display's information from SF
- createSurface — through SF's Client, create a graphic buffer producer and hand it to the SurfaceControl
- setLayer — set the layer's position on the Z axis
- getSurface — generate the Surface from the SurfaceControl; the Surface is the object we actually interact with
With these foundations in place, the next article will look at how Android wraps OpenGL ES.