Android Graphics Rendering Internals, Part 1

Most Android developers have at least some idea of how Android displays an image. Beginners know that after setting up our View in an Activity's onCreate method, the UI appears once the onMeasure, onLayout, and onDraw passes complete. Developers who know Android well also know that onDraw has two modes: software rendering, which draws through Skia, and hardware rendering, which draws through OpenGL ES. Developers who know Android very well know that the rendered pixel data is ultimately handed over through GraphicBuffer-backed shared memory to SurfaceFlinger for layer composition; once composition is done, the result is sent to the framebuffer, and the image finally shows up on our screen.

But the display of an Activity, or of an app's UI, is only one part of Android graphics. WebView, Flutter, and 3D games built with Unity can also render on an Android system: how are their interfaces drawn and put on screen? What do they have in common with the familiar Activity display path, and where do they differ? Can we display the UI we want on screen without going through Activity's setContentView or the view-inflation mechanism? And how does the way Android displays a UI differ from iOS, Windows, and other systems?

Exploring these questions is far more valuable than merely knowing how an Activity's UI appears, because answering them requires a real grasp of the low-level principles behind Android graphics. Once we understand those fundamentals, we will find that WebView, Flutter, and whatever new display technologies emerge in the future are all variations on the same theme.

Over three articles, I will dig into how Android displays graphics, how OpenGL ES and Skia draw images, how to use them, and where they appear in Android: the boot animation, software and hardware rendering of Activity UIs, and Flutter's UI rendering. With that, let's begin our exploration of how Android displays images.

How a Screen Displays an Image

Before covering Android's image display, I will first explain how a screen displays an image; after all, our graphics ultimately appear on the phone's screen, and this background makes Android's display machinery much easier to understand.

The complete display process goes through the following stages:

Image data → CPU → graphics driver → graphics card (GPU) → video memory (framebuffer) → display

Let's look at each of these stages in detail:

  1. CPU → graphics driver: the CPU submits either content it has drawn itself (software rendering), or the drawing commands directly (hardware acceleration), to the graphics driver.

  2. Graphics driver → GPU: the driver, written by the hardware vendor, translates the rendering commands it receives into a language the GPU understands; it is the concrete implementation of graphics APIs such as OpenGL or DirectX. Think of it as the entry point through which other modules talk to the graphics card.

  3. GPU → framebuffer: the graphics card runs the data through vertex processing, clipping, rasterization, and the rest of the pipeline, then after layer blending submits the final result to video memory.

  4. Video memory → display: if the card uses a VGA connector, the digital signal must first be converted to an analog signal before it can be sent to the screen; with an HDMI or DVI connector, the digital signal can be sent to the display directly.

Strictly speaking, the driver, the GPU, the video memory, and the digital-to-analog converter are all parts of the graphics card; I split them out here only to walk through the steps in detail.

Once data is in video memory, how does the display turn it into an on-screen image? Take an LCD panel as an example: the graphics card synchronizes the contents of video memory to every pixel transistor on the panel, left to right and top to bottom, with one pixel transistor representing one pixel.


If the screen resolution is 1080x1920, there are 1080x1920 pixel transistors. The richer each pixel's color, the more data it takes to describe it: monochrome needs only 1 bit per pixel, 16 colors need 4 bits, and 256 colors need a full byte. So at 1080x1920 with 256 colors, the card needs at least 1080x1920 bytes, roughly 2 MB.

As noted above, pixel data is synchronized to the screen left to right, top to bottom. When that pass completes, one frame has been displayed and the next begins. Most displays refresh at 60 Hz, i.e. one frame every 16 ms, and each time a new frame starts, a vertical sync signal (VSync) is emitted. We already know that image data lives in the framebuffer. If the framebuffer had only a single buffer, new image data could not be written into it while the screen was scanning out the current frame; it would have to wait for the frame to finish, which is a serious efficiency problem. To solve it, the framebuffer uses two buffers, the double-buffering scheme. Double buffering fixes the efficiency problem but introduces a new one: if, while the screen is only halfway through scanning out a frame, the GPU submits a new frame to the framebuffer and the two buffers are swapped, the card's scan-out logic will display the bottom half of the new frame, producing screen tearing.

To avoid tearing, the two framebuffer buffers must only be swapped when a vertical sync signal arrives. One of the optimizations in Android 4.1's Project Butter is exactly this: both the CPU and the GPU start drawing, and the buffers are swapped, only upon receiving the VSync signal.

How Android Displays Images

Now that we understand how a screen displays an image, let's move on to how Android displays images.

From the previous section we know that rendering requires a GPU and a framebuffer. On Linux, user processes cannot operate on the framebuffer directly, yet displaying an image requires exactly that, so Linux provides a virtual device file mapped onto the framebuffer; by reading and writing this file through ordinary I/O, we can read and write the screen. The framebuffer device files are /dev/fb*, where * provides support for multiple display devices with device numbers from 0 to 31: /dev/fb0 is the first display, /dev/fb1 the second, and so on. Android uses /dev/fb0 as the main display by default, i.e. our phone screen. The image data shown on an Android phone's screen is stored in /dev/fb0; the screenshot feature of the old DDMS tool in Android Studio worked by reading the /dev/fb0 device file directly.

Since the graphics data shown on the phone screen all lives in the framebuffer, displaying an image on Android comes down to writing our image data into the framebuffer. So: how is the image data that ends up in the framebuffer generated, and how is it processed? How does it travel to the framebuffer, and through which steps? Answer these questions and we understand Android graphics rendering, so let's keep going with them in mind.

To know how image data is produced, we need to know who the image producers are and how each of them generates images. To know how it is consumed, we need to know who the image consumers are and how each of them consumes images. To know what happens in between, we need to know which image buffers exist, how they are created, how their storage is allocated, and how they carry data from producer to consumer. Graphics display is a classic producer-consumer model; only by working through each module and understanding how data flows between them can we find an easier path to mastering Android graphics. Let's see how Google's official architecture diagram describes the modules of this model and their relationships.

As the diagram shows, the main image producers are MediaPlayer, Camera Preview, the NDK, and OpenGL ES. MediaPlayer and Camera Preview generate image data by reading image sources directly, while the NDK (Skia) and OpenGL ES produce image data through their own drawing capabilities. The image consumers are SurfaceFlinger, OpenGL ES apps, and the Hardware Composer in the HAL. OpenGL ES can be both an image producer and an image consumer, which is why it also appears on the consumer side. The main image buffers are Surface and the framebuffer mentioned earlier.

Our exploration of Android graphics will revolve around the image producers, the image consumers, and the image buffers. In this article, we start with the image consumers in the Android system.

Image Consumers

SurfaceFlinger

SurfaceFlinger is the most important image consumer in Android. The UI images drawn by Activities are all delivered to SurfaceFlinger. Its main job is to receive image buffer data, hand it to HWComposer or OpenGL for composition, and then submit the composited result to the framebuffer.

How does SurfaceFlinger receive image buffer data? We first need the concept of a Layer. A Layer contains a Surface, and a Surface corresponds to a graphics buffer; since a UI is made up of multiple Surfaces, each of them maps one-to-one to a Layer inside SurfaceFlinger. By reading the buffer data in its Layers, SurfaceFlinger is effectively reading the image data of the Surfaces on screen. A Layer is essentially the combination of a Surface and a SurfaceControl: the Surface is the buffer through which image producers and consumers exchange data, and SurfaceControl is the Surface's control class.

As described in the screen-display section, to prevent tearing the Android system starts drawing and compositing only when the VSync vertical sync signal arrives, and SurfaceFlinger, as an image consumer, follows the same rule. Let's look at the source code to see how SurfaceFlinger consumes image data under this rule.

Receiving the VSync Signal

SurfaceFlinger creates a dedicated EventThread to receive VSync. EventThread forwards the VSync signal over a socket into the EventQueue, and the EventQueue in turn delivers the VSync into SurfaceFlinger via a callback. Let's look at the implementation.

//File --> /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::init() {
    { 
        //……
       
        sp<VSyncSource> vsyncSrc = new DispSyncSource(&mPrimaryDispSync,
                vsyncPhaseOffsetNs, true, "app");
        //Create the VSYNC-receiving thread for apps
        mEventThread = new EventThread(vsyncSrc, *this, false);
        sp<VSyncSource> sfVsyncSrc = new DispSyncSource(&mPrimaryDispSync,
                sfVsyncPhaseOffsetNs, true, "sf");
        //Create the VSYNC-receiving thread for SurfaceFlinger
        mSFEventThread = new EventThread(sfVsyncSrc, *this, true);
        mEventQueue.setEventThread(mSFEventThread);
        //……
    }
    //……
}

//File --> /frameworks/native/services/surfaceflinger/MessageQueue.cpp
void MessageQueue::setEventThread(const sp<EventThread>& eventThread)
{
    mEventThread = eventThread;
    //Create the connection
    mEvents = eventThread->createEventConnection();
    //Obtain EventThread's communication channel
    mEvents->stealReceiveChannel(&mEventTube);
    //Listen to EventThread; cb_eventReceiver is invoked when data arrives
    mLooper->addFd(mEventTube.getFd(), 0, Looper::EVENT_INPUT,
            MessageQueue::cb_eventReceiver, this);
}

//File --> /frameworks/native/services/surfaceflinger/EventThread.cpp
sp<EventThread::Connection> EventThread::createEventConnection() const {
    return new Connection(const_cast<EventThread*>(this));
}


EventThread::Connection::Connection(const sp<EventThread>& eventThread)
    : count(-1), mEventThread(eventThread), mChannel(gui::BitTube::DefaultSize) //Create the BitTube communication channel
{
}

void EventThread::Connection::onFirstRef() {
    // NOTE: mEventThread doesn't hold a strong reference on us
    mEventThread->registerDisplayEventConnection(this);
}


status_t EventThread::registerDisplayEventConnection(
        const sp<EventThread::Connection>& connection) {
    //Add the Connection to mDisplayEventConnections
    mDisplayEventConnections.add(connection);
    return NO_ERROR;
}

The code above is SurfaceFlinger's initialization for receiving the VSYNC vertical sync signal. The main steps are:

  1. In the init function, the EventThread dedicated to receiving VSYNC is created, and setEventThread is called to add it to the EventQueue.
  2. setEventThread creates a connection to the EventThread and adds the thread's communication socket to the MessageQueue's Looper via addFd, so the MessageQueue can receive data from the EventThread.
  3. EventThread's Connection constructor creates a BitTube. A BitTube is really a socket, so EventThread and MessageQueue communicate over sockets. Let's look at BitTube's implementation.
//File --> /frameworks/native/libs/gui/BitTube.cpp
BitTube::BitTube(size_t bufsize) {
    // Create the socket pair used to send events
    init(bufsize, bufsize);
}
void BitTube::init(size_t rcvbuf, size_t sndbuf) {
    int sockets[2];
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets) == 0) {
        size_t size = DEFAULT_SOCKET_BUFFER_SIZE;
        // Set the socket buffer sizes
        setsockopt(sockets[0], SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
        setsockopt(sockets[1], SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
        // since we don't use the "return channel", we keep it small...
        setsockopt(sockets[0], SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
        setsockopt(sockets[1], SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
        fcntl(sockets[0], F_SETFL, O_NONBLOCK);
        fcntl(sockets[1], F_SETFL, O_NONBLOCK);
        // socket[0] is the receiving end, eventually returned to the client app over Binder IPC
        mReceiveFd.reset(sockets[0]);
        // socket[1] is the sending end
        mSendFd.reset(sockets[1]);
    } else {
        mReceiveFd.reset();
        ALOGE("BitTube: pipe creation failed (%s)", strerror(errno));
    }
}
  4. When the Connection is first referenced through an sp<> strong pointer, onFirstRef runs and adds the current connection to mDisplayEventConnections. What is mDisplayEventConnections? It is a container holding every Connection that wants to receive VSYNC: besides SurfaceFlinger's own VSYNC-receiving EventThread, other EventThreads, such as those on the client side, also keep their connections in mDisplayEventConnections.

With the steps above, all the initialization for receiving VSync is done and the EventThread is running. Next, let's see what the thread's main loop, threadLoop, does.

//File --> /frameworks/native/services/surfaceflinger/EventThread.cpp
bool EventThread::threadLoop() {
    DisplayEventReceiver::Event event;
    Vector< sp<EventThread::Connection> > signalConnections;
    signalConnections = waitForEvent(&event);

    // Dispatch the event to every Connection that wants VSync
    const size_t count = signalConnections.size();
    for (size_t i=0 ; i<count ; i++) {
        const sp<Connection>& conn(signalConnections[i]);
        status_t err = conn->postEvent(event);
        if (err == -EAGAIN || err == -EWOULDBLOCK) {          
            ALOGW("EventThread: dropping event (%08x) for connection %p",
                    event.header.type, conn.get());
        } else if (err < 0) {
            removeDisplayEventConnection(signalConnections[i]);
        }
    }
    return true;
}

Vector< sp<EventThread::Connection> > EventThread::waitForEvent(
        DisplayEventReceiver::Event* event)
{
    Mutex::Autolock _l(mLock);
    Vector< sp<EventThread::Connection> > signalConnections;

    do {
        //……
        // Find the connections waiting for events
        size_t count = mDisplayEventConnections.size();
        for (size_t i=0 ; i<count ; i++) {
            sp<Connection> connection(mDisplayEventConnections[i].promote());
            if (connection != NULL) {
                bool added = false;
                //Only connections with count >= 0 are waiting for VSync
                if (connection->count >= 0) {                 
                    waitForVSync = true;
                    if (timestamp) {                      
                        if (connection->count == 0) {
                            connection->count = -1;
                            signalConnections.add(connection);
                            added = true;
                        } else if (connection->count == 1 ||
                                (vsyncCount % connection->count) == 0) {
                            signalConnections.add(connection);
                            added = true;
                        }
                    }
                }

                if (eventPending && !timestamp && !added) {                  
                    signalConnections.add(connection);
                }
            } else {            
                mDisplayEventConnections.removeAt(i);
                --i; --count;
            }
        }

        //……
        
        if (!timestamp && !eventPending) {
            if (waitForVSync) {
                
                bool softwareSync = mUseSoftwareVSync;
                nsecs_t timeout = softwareSync ? ms2ns(16) : ms2ns(1000);
                //Wait for the VSync signal
                if (mCondition.waitRelative(mLock, timeout) == TIMED_OUT) {
                    if (!softwareSync) {
                        ALOGW("Timed out waiting for hw vsync; faking it");
                    }
                    //Fall back to a software-generated VSync
                    mVSyncEvent[0].header.type = DisplayEventReceiver::DISPLAY_EVENT_VSYNC;
                    mVSyncEvent[0].header.id = DisplayDevice::DISPLAY_PRIMARY;
                    mVSyncEvent[0].header.timestamp = systemTime(SYSTEM_TIME_MONOTONIC);
                    mVSyncEvent[0].vsync.count++;
                }
            } else {
                //No connection is waiting for VSync, so sleep
                mCondition.wait(mLock);
            }
        }
    } while (signalConnections.isEmpty());
    return signalConnections;
}

threadLoop does two main things:

  1. Wait for VSync via mCondition.waitRelative.

  2. Dispatch the VSync to every Connection.

So how does mCondition receive the VSync in the first place? Let's take a look.

//File --> /frameworks/native/services/surfaceflinger/EventThread.cpp
void EventThread::onVSyncEvent(nsecs_t timestamp) {
    Mutex::Autolock _l(mLock);
    mVSyncEvent[0].header.type = DisplayEventReceiver::DISPLAY_EVENT_VSYNC;
    mVSyncEvent[0].header.id = 0;
    mVSyncEvent[0].header.timestamp = timestamp;
    mVSyncEvent[0].vsync.count++;
    mCondition.broadcast();
}

//File --> /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp/DispSyncSource
virtual void onDispSyncEvent(nsecs_t when) {
    sp<VSyncSource::Callback> callback;
    {
        Mutex::Autolock lock(mCallbackMutex);
        callback = mCallback;

        if (mTraceVsync) {
            mValue = (mValue + 1) % 2;
            ATRACE_INT(mVsyncEventLabel.string(), mValue);
        }
    }

    if (callback != NULL) {
        //Call back onVSyncEvent
        callback->onVSyncEvent(when);
    }
}


As we can see, mCondition's VSync signal is actually delivered by DispSyncSource through the onVSyncEvent callback. But how does DispSyncSource itself receive VSync? Going back to SurfaceFlinger's init function and the creation of the EventThreads, we find the answer: mPrimaryDispSync.

sp<VSyncSource> sfVsyncSrc = new DispSyncSource(&mPrimaryDispSync,
                sfVsyncPhaseOffsetNs, true, "sf");
//Create the VSYNC-receiving thread for SurfaceFlinger
mSFEventThread = new EventThread(sfVsyncSrc, *this, true);
mEventQueue.setEventThread(mSFEventThread);

DispSyncSource's constructor takes mPrimaryDispSync, a DispSync object that internally runs a DispSyncThread. Let's look at that thread's threadLoop method.

//File --> /frameworks/native/services/surfaceflinger/DispSync.cpp
virtual bool threadLoop() {
    status_t err;
    nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
    while (true) {
        Vector<CallbackInvocation> callbackInvocations;
        nsecs_t targetTime = 0;

        { 
            //……
            //Decide whether to block
            if (mPeriod == 0) {
                err = mCond.wait(mMutex);
                if (err != NO_ERROR) {
                    ALOGE("error waiting for new events: %s (%d)",
                          strerror(-err), err);
                    return false;
                }
                continue;
            }
            targetTime = computeNextEventTimeLocked(now);

            bool isWakeup = false;

            if (now < targetTime) {
                if (kTraceDetailedInfo) ATRACE_NAME("DispSync waiting");

                if (targetTime == INT64_MAX) {
                    ALOGV("[%s] Waiting forever", mName);
                    err = mCond.wait(mMutex);
                } else {
                    ALOGV("[%s] Waiting until %" PRId64, mName,
                          ns2us(targetTime));
                    err = mCond.waitRelative(mMutex, targetTime - now);
                }

                if (err == TIMED_OUT) {
                    isWakeup = true;
                } else if (err != NO_ERROR) {
                    ALOGE("error waiting for next event: %s (%d)",
                          strerror(-err), err);
                    return false;
                }
            }

            now = systemTime(SYSTEM_TIME_MONOTONIC);

            // Don't correct by more than 1.5 ms
            static const nsecs_t kMaxWakeupLatency = us2ns(1500);

            if (isWakeup) {
                mWakeupLatency = ((mWakeupLatency * 63) +
                                  (now - targetTime)) / 64;
                mWakeupLatency = min(mWakeupLatency, kMaxWakeupLatency);
                if (kTraceDetailedInfo) {
                    ATRACE_INT64("DispSync:WakeupLat", now - targetTime);
                    ATRACE_INT64("DispSync:AvgWakeupLat", mWakeupLatency);
                }
            }

            callbackInvocations = gatherCallbackInvocationsLocked(now);
        }

        if (callbackInvocations.size() > 0) {
            //Deliver the VSync callback to DispSyncSource
            fireCallbackInvocations(callbackInvocations);
        }
    }

    return false;
}

void fireCallbackInvocations(const Vector<CallbackInvocation>& callbacks) {
    if (kTraceDetailedInfo) ATRACE_CALL();
    for (size_t i = 0; i < callbacks.size(); i++) {
        callbacks[i].mCallback->onDispSyncEvent(callbacks[i].mEventTime);
    }
}

DispSyncThread's threadLoop uses mPeriod to decide whether to block or to fire the VSync callbacks. So where is mPeriod set? This brings us back to SurfaceFlinger: mPeriod is assigned in SurfaceFlinger's resyncToHardwareVsync function.

//File --> /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::resyncToHardwareVsync(bool makeAvailable) {
    Mutex::Autolock _l(mHWVsyncLock);

    if (makeAvailable) {
        mHWVsyncAvailable = true;
    } else if (!mHWVsyncAvailable) {
        // Hardware vsync is not currently available, so abort the resync
        // attempt for now
        return;
    }

    const auto& activeConfig = mHwc->getActiveConfig(HWC_DISPLAY_PRIMARY);
    const nsecs_t period = activeConfig->getVsyncPeriod();

    mPrimaryDispSync.reset();
    //Set the DispSyncThread's period
    mPrimaryDispSync.setPeriod(period);
    //……
}

As we can see, the period ultimately comes from HWComposer, i.e. the hardware layer. We have finally traced VSync back to its source: it originates in HWComposer, is called back into the DispSync thread, which calls back into DispSyncSource, which calls back into EventThread, and EventThread finally distributes it over a socket into the MessageQueue.

We now know that VSync comes from HWComposer, but SurfaceFlinger does not listen for it continuously: the thread that listens for VSync sleeps most of the time and only waits for the signal when there is composition work to do. This both keeps composition aligned with VSync and saves power. SurfaceFlinger provides functions to actively register for VSync:

//File --> /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::signalTransaction() {
    mEventQueue.invalidate();
}

void SurfaceFlinger::signalLayerUpdate() {
    mEventQueue.invalidate();
}

void MessageQueue::invalidate() {
    mEvents->requestNextVsync();
}

void EventThread::requestNextVsync(
        const sp<EventThread::Connection>& connection) {
    Mutex::Autolock _l(mLock);

    mFlinger.resyncWithRateLimit();

    if (connection->count < 0) {
        connection->count = 0;
        mCondition.broadcast();
    }
}

void SurfaceFlinger::resyncWithRateLimit() {
    static constexpr nsecs_t kIgnoreDelay = ms2ns(500);

    static nsecs_t sLastResyncAttempted = 0;
    const nsecs_t now = systemTime();
    if (now - sLastResyncAttempted > kIgnoreDelay) {
        //Register for VSync
        resyncToHardwareVsync(false);
    }
    sLastResyncAttempted = now;
}

As we can see, VSync is only registered for when SurfaceFlinger calls signalTransaction or signalLayerUpdate. When are those called? They may be invoked at the request of an image producer, or by SurfaceFlinger itself based on its own logic.

Now suppose the app layer has produced the image data for our UI and called signalTransaction, so SurfaceFlinger has registered for VSync and the signal arrives in the MessageQueue. Let's see how the MessageQueue handles it.

//File --> /frameworks/native/services/surfaceflinger/MessageQueue.cpp
int MessageQueue::eventReceiver(int /*fd*/, int /*events*/) {
    ssize_t n;
    DisplayEventReceiver::Event buffer[8];
    while ((n = DisplayEventReceiver::getEvents(&mEventTube, buffer, 8)) > 0) {
        for (int i=0 ; i<n ; i++) {
            if (buffer[i].header.type == DisplayEventReceiver::DISPLAY_EVENT_VSYNC) {
                mHandler->dispatchInvalidate();
                break;
            }
        }
    }
    return 1;
}

void MessageQueue::Handler::dispatchInvalidate() {
    if ((android_atomic_or(eventMaskInvalidate, &mEventMask) & eventMaskInvalidate) == 0) {
        mQueue.mLooper->sendMessage(this, Message(MessageQueue::INVALIDATE));
    }
}

void MessageQueue::Handler::handleMessage(const Message& message) {
    switch (message.what) {
        case INVALIDATE:
            android_atomic_and(~eventMaskInvalidate, &mEventMask);
            mQueue.mFlinger->onMessageReceived(message.what);
            break;
        case REFRESH:
            android_atomic_and(~eventMaskRefresh, &mEventMask);
            mQueue.mFlinger->onMessageReceived(message.what);
            break;
    }
}

After receiving the VSync signal, the MessageQueue ultimately calls back into SurfaceFlinger's onMessageReceived; from that point on, SurfaceFlinger starts processing image data in its role as an image consumer. Let's see in what way SurfaceFlinger consumes the data.

Handling the VSync Signal

The VSync signal ends up being handled by the INVALIDATE branch of SurfaceFlinger's onMessageReceived function.

//File --> /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::onMessageReceived(int32_t what) {
    ATRACE_CALL();
    switch (what) {
        case MessageQueue::INVALIDATE: {
            bool frameMissed = !mHadClientComposition &&
                    mPreviousPresentFence != Fence::NO_FENCE &&
                    (mPreviousPresentFence->getSignalTime() ==
                            Fence::SIGNAL_TIME_PENDING);
            ATRACE_INT("FrameMissed", static_cast<int>(frameMissed));
            if (mPropagateBackpressure && frameMissed) {
                ALOGD("Backpressure trigger, skipping transaction & refresh!");
                //On a missed frame, request the next VSync and skip this one
                signalLayerUpdate();
                break;
            }

            //Update the VR Flinger
            updateVrFlinger();

            bool refreshNeeded = handleMessageTransaction();
            refreshNeeded |= handleMessageInvalidate();
            refreshNeeded |= mRepaintEverything;
            if (refreshNeeded) {
                //Decide whether a refresh is needed
                signalRefresh();
            }
            break;
        }
        case MessageQueue::REFRESH: {
            handleMessageRefresh();
            break;
        }
    }
}

The INVALIDATE flow is as follows:

  1. Check whether a frame was missed; if so, call signalLayerUpdate, the function we saw earlier that requests the next VSync signal, and skip this round of processing.

  2. Update the VR Flinger.

  3. Use handleMessageTransaction to decide whether this VSync needs processing. Let's see how handleMessageTransaction decides.

//File --> /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
bool SurfaceFlinger::handleMessageTransaction() {
    uint32_t transactionFlags = peekTransactionFlags();
    if (transactionFlags) {
        handleTransaction(transactionFlags);
        return true;
    }
    return false;
}

void SurfaceFlinger::handleTransaction(uint32_t transactionFlags)
{
    //……
    transactionFlags = getTransactionFlags(eTransactionMask);
    handleTransactionLocked(transactionFlags);
    //……
}

void SurfaceFlinger::handleTransactionLocked(uint32_t transactionFlags)
{
    // Notify every Layer that composition can proceed
    mCurrentState.traverseInZOrder([](Layer* layer) {
        layer->notifyAvailableFrames();
    });

    //Run each Layer's doTransaction to handle visible-region changes
    if (transactionFlags & eTraversalNeeded) {
        mCurrentState.traverseInZOrder([&](Layer* layer) {
            uint32_t trFlags = layer->getTransactionFlags(eTransactionNeeded);
            if (!trFlags) return;

            const uint32_t flags = layer->doTransaction(0);
            if (flags & Layer::eVisibleRegion)
                mVisibleRegionsDirty = true;
        });
    }

    //Handle changes to each display device (screen)
    if (transactionFlags & eDisplayTransactionNeeded) {
        const KeyedVector<  wp<IBinder>, DisplayDeviceState>& curr(mCurrentState.displays);
        const KeyedVector<  wp<IBinder>, DisplayDeviceState>& draw(mDrawingState.displays);
        if (!curr.isIdenticalTo(draw)) {
            mVisibleRegionsDirty = true;
            const size_t cc = curr.size();
                  size_t dc = draw.size();

            // Find removed displays and handle displays that have changed
            for (size_t i=0 ; i<dc ; i++) {
                const ssize_t j = curr.indexOfKey(draw.keyAt(i));
                if (j < 0) {
                    // If curr no longer contains this display, it was removed; disconnect it
                    if (!draw[i].isMainDisplay()) {
                        // Call makeCurrent() on the primary display so we can
                        // be sure that nothing associated with this display
                        // is current.
                        const sp<const DisplayDevice> defaultDisplay(getDefaultDisplayDeviceLocked());
                        defaultDisplay->makeCurrent(mEGLDisplay, mEGLContext);
                        sp<DisplayDevice> hw(getDisplayDeviceLocked(draw.keyAt(i)));
                        if (hw != NULL)
                            hw->disconnect(getHwComposer());
                        if (draw[i].type < DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES)
                            mEventThread->onHotplugReceived(draw[i].type, false);
                        mDisplays.removeItem(draw.keyAt(i));
                    } else {
                        ALOGW("trying to remove the main display");
                    }
                } else {
                    // Check whether the display has changed
                    const DisplayDeviceState& state(curr[j]);
                    const wp<IBinder>& display(curr.keyAt(j));
                    const sp<IBinder> state_binder = IInterface::asBinder(state.surface);
                    const sp<IBinder> draw_binder = IInterface::asBinder(draw[i].surface);
                    if (state_binder != draw_binder) {
                        // If state_binder and draw_binder differ, the Surface may have been destroyed; remove it here
                        sp<DisplayDevice> hw(getDisplayDeviceLocked(display));
                        if (hw != NULL)
                            hw->disconnect(getHwComposer());
                        mDisplays.removeItem(display);
                        mDrawingState.displays.removeItemsAt(i);
                        dc--; i--;
                        continue;
                    }

                    const sp<DisplayDevice> disp(getDisplayDeviceLocked(display));
                    if (disp != NULL) {
                        if (state.layerStack != draw[i].layerStack) {
                            disp->setLayerStack(state.layerStack);
                        }
                        if ((state.orientation != draw[i].orientation)
                                || (state.viewport != draw[i].viewport)
                                || (state.frame != draw[i].frame))
                        {
                            disp->setProjection(state.orientation,
                                    state.viewport, state.frame);
                        }
                        if (state.width != draw[i].width || state.height != draw[i].height) {
                            disp->setDisplaySize(state.width, state.height);
                        }
                    }
                }
            }

            // Find newly added displays
            for (size_t i=0 ; i<cc ; i++) {
                if (draw.indexOfKey(curr.keyAt(i)) < 0) {
                    const DisplayDeviceState& state(curr[i]);

                    sp<DisplaySurface> dispSurface;
                    sp<IGraphicBufferProducer> producer;
                    sp<IGraphicBufferProducer> bqProducer;
                    sp<IGraphicBufferConsumer> bqConsumer;
                    BufferQueue::createBufferQueue(&bqProducer, &bqConsumer);

                    int32_t hwcId = -1;
                    if (state.isVirtualDisplay()) {                
                        if (state.surface != NULL) {
                            // Handling for virtual displays
                            //……                        
                            sp<VirtualDisplaySurface> vds =
                                    new VirtualDisplaySurface(*mHwc,
                                            hwcId, state.surface, bqProducer,
                                            bqConsumer, state.displayName);

                            dispSurface = vds;
                            producer = vds;
                        }
                    } else {                     
                        hwcId = state.type;
                        dispSurface = new FramebufferSurface(*mHwc, hwcId, bqConsumer);
                        producer = bqProducer;
                    }

                    const wp<IBinder>& display(curr.keyAt(i));
                    if (dispSurface != NULL) {
                        sp<DisplayDevice> hw =
                                new DisplayDevice(this, state.type, hwcId, state.isSecure, display,
                                                  dispSurface, producer,
                                                  mRenderEngine->getEGLConfig(),
                                                  hasWideColorDisplay);
                        hw->setLayerStack(state.layerStack);
                        hw->setProjection(state.orientation,
                                state.viewport, state.frame);
                        hw->setDisplayName(state.displayName);
                        mDisplays.add(display, hw);
                        if (!state.isVirtualDisplay()) {
                            mEventThread->onHotplugReceived(state.type, true);
                        }
                    }
                }
            }
        }
    }

    //Set each Layer's TransformHint, mainly the device orientation
    if (transactionFlags & (eTraversalNeeded|eDisplayTransactionNeeded)) {
        sp<const DisplayDevice> disp;
        uint32_t currentlayerStack = 0;
        bool first = true;
        mCurrentState.traverseInZOrder([&](Layer* layer) {
            uint32_t layerStack = layer->getLayerStack();
            if (first || currentlayerStack != layerStack) {
                currentlayerStack = layerStack;               
                disp.clear();
                for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
                    sp<const DisplayDevice> hw(mDisplays[dpy]);
                    if (hw->getLayerStack() == currentlayerStack) {
                        if (disp == NULL) {
                            disp = hw;
                        } else {
                            disp = NULL;
                            break;
                        }
                    }
                }
            }
            if (disp == NULL) {              
                disp = getDefaultDisplayDeviceLocked();
            }
            layer->updateTransformHint(disp);

            first = false;
        });
    }

    //New Layers were added; mark the visible regions dirty
    if (mLayersAdded) {
        mLayersAdded = false;
        // Layers have been added.
        mVisibleRegionsDirty = true;
    }

    //Layers were removed; re-check the visible regions and update the LayerStack
    if (mLayersRemoved) {
        mLayersRemoved = false;
        mVisibleRegionsDirty = true;
        mDrawingState.traverseInZOrder([&](Layer* layer) {
            if (mLayersPendingRemoval.indexOf(layer) >= 0) {
                // this layer is not visible anymore
                // TODO: we could traverse the tree from front to back and
                //       compute the actual visible region
                // TODO: we could cache the transformed region
                Region visibleReg;
                visibleReg.set(layer->computeScreenBounds());
                invalidateLayerStack(layer->getLayerStack(), visibleReg);
            }
        });
    }

    commitTransaction();

    updateCursorAsync();
}

handleMessageTransaction is fairly long and does quite a lot of work. Its main tasks are:

  • Traverse the Layers and run doTransaction on each, handling visible-region changes. Its implementation is as follows:
// File: /frameworks/native/services/surfaceflinger/Layer.cpp
uint32_t Layer::doTransaction(uint32_t flags) {
    pushPendingState();
    Layer::State c = getCurrentState();
    if (!applyPendingStates(&c)) {
        return 0;
    }

    const Layer::State& s(getDrawingState());

    const bool sizeChanged = (c.requested.w != s.requested.w) ||
                             (c.requested.h != s.requested.h);

    if (sizeChanged) {
        // the size changed; resize the image buffers accordingly
        mSurfaceFlingerConsumer->setDefaultBufferSize(
                c.requested.w, c.requested.h);
    }

    //……
    if (!(flags & eDontUpdateGeometryState)) {
        Layer::State& editCurrentState(getCurrentState());
        if (mFreezeGeometryUpdates) {
            float tx = c.active.transform.tx();
            float ty = c.active.transform.ty();
            c.active = c.requested;
            c.active.transform.set(tx, ty);
            editCurrentState.active = c.active;
        } else {
            editCurrentState.active = editCurrentState.requested;
            c.active = c.requested;
        }
    }

    if (s.active != c.active) {
        // recompute the visible region
        flags |= Layer::eVisibleRegion;
    }

    if (c.sequence != s.sequence) {
        // recompute the visible region
        flags |= eVisibleRegion;
        this->contentDirty = true;

        // we may use linear filtering, if the matrix scales us
        const uint8_t type = c.active.transform.getType();
        mNeedsFiltering = (!c.active.transform.preserveRects() ||
                (type >= Transform::SCALE));
    }
    //……
    commitTransaction(c);
    return flags;
}
  • Check for display changes

  • Set each Layer's TransformHint

  • Check for Layer additions and removals

  • Commit the transaction

  1. Back in the INVALIDATE handler, the last thing it does is call signalRefresh, which eventually invokes SurfaceFlinger's handleMessageRefresh() function.
// File: /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::signalRefresh() {
    mEventQueue.refresh();
}

void SurfaceFlinger::onMessageReceived(int32_t what) {
    ATRACE_CALL();
    switch (what) {
        case MessageQueue::INVALIDATE: {
            //……
        }
        case MessageQueue::REFRESH: {
            handleMessageRefresh();
            break;
        }
    }
}

void SurfaceFlinger::handleMessageRefresh() {
    ATRACE_CALL();

    nsecs_t refreshStartTime = systemTime(SYSTEM_TIME_MONOTONIC);

    // pre-composition processing
    preComposition(refreshStartTime);
    // rebuild the Layer stacks
    rebuildLayerStacks();
    // build the HWC hardware-composition work list
    setUpHWComposer();
    // debug-mode flashing of updated regions
    doDebugFlashRegions();
    // layer composition
    doComposition();
    // post-composition cleanup
    postComposition(refreshStartTime);

    mPreviousPresentFence = mHwc->getPresentFence(HWC_DISPLAY_PRIMARY);

    mHadClientComposition = false;
    for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
        const sp<DisplayDevice>& displayDevice = mDisplays[displayId];
        mHadClientComposition = mHadClientComposition ||
                mHwc->hasClientComposition(displayDevice->getHwcDisplayId());
    }

    mLayersWithQueuedFrames.clear();
}

handleMessageRefresh is where SurfaceFlinger actually performs layer composition. It consists of the following five steps:

  1. preComposition: pre-composition processing

  2. rebuildLayerStacks: rebuild the Layer stacks

  3. setUpHWComposer: build the hardware-composition work list

  4. doComposition: layer composition

  5. postComposition: post-composition cleanup

I will walk through each step in detail.

Pre-composition processing

Pre-composition processing checks whether any Layer has changed. A Layer counts as changed when it has pending buffers queued (mQueuedFrames > 0) or when mSidebandStreamChanged is set. If any Layer changed, signalLayerUpdate is called to request the next VSync signal; otherwise only the current composition pass runs and no further VSync is scheduled.

// File: /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::preComposition(nsecs_t refreshStartTime)
{
    bool needExtraInvalidate = false;
    mDrawingState.traverseInZOrder([&](Layer* layer) {
        if (layer->onPreComposition(refreshStartTime)) {
            needExtraInvalidate = true;
        }
    });

    if (needExtraInvalidate) {
        signalLayerUpdate();
    }
}

bool Layer::onPreComposition(nsecs_t refreshStartTime) {
    mRefreshPending = false;
    return mQueuedFrames > 0 || mSidebandStreamChanged;
}

Rebuilding the Layer stacks

Rebuilding the Layer stacks traverses the Layers, computing and storing each Layer's dirty region, then compares it against each display device. If a Layer's dirty region intersects the display's visible area, the Layer needs to be drawn, so it is added to the display device's VisibleLayersSortedByZ list to await composition.

// File: /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::rebuildLayerStacks() {
    // mVisibleRegionsDirty indicates whether the visible regions have changed;
    // if they have, the LayerStacks must be rebuilt
    if (CC_UNLIKELY(mVisibleRegionsDirty)) {
        ATRACE_CALL();
        mVisibleRegionsDirty = false;
        invalidateHwcGeometry();
        // for every display device, compute its dirtyRegion (changed area)
        // and opaqueRegion (fully opaque area)
        for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
            Region opaqueRegion;
            Region dirtyRegion;
            Vector<sp<Layer>> layersSortedByZ;
            const sp<DisplayDevice>& displayDevice(mDisplays[dpy]);
            const Transform& tr(displayDevice->getTransform());
            const Rect bounds(displayDevice->getBounds());
            if (displayDevice->isDisplayOn()) {
                //compute the dirty and opaque regions for this layer stack
                computeVisibleRegions(
                        displayDevice->getLayerStack(), dirtyRegion,
                        opaqueRegion);
                //traverse Layers in Z order and decide whether each needs composition
                mDrawingState.traverseInZOrder([&](Layer* layer) {
                    if (layer->getLayerStack() == displayDevice->getLayerStack()) {
                        Region drawRegion(tr.transform(
                                layer->visibleNonTransparentRegion));
                        drawRegion.andSelf(bounds);
                        if (!drawRegion.isEmpty()) {
                            layersSortedByZ.add(layer);
                        } else {
                            // Clear out the HWC layer if this layer was
                            // previously visible, but no longer is
                            layer->setHwcLayer(displayDevice->getHwcDisplayId(),
                                    nullptr);
                        }
                    } else {
                        // WM changes displayDevice->layerStack upon sleep/awake.
                        // Here we make sure we delete the HWC layers even if
                        // WM changed their layer stack.
                        layer->setHwcLayer(displayDevice->getHwcDisplayId(),
                                nullptr);
                    }
                });
            }
            displayDevice->setVisibleLayersSortedByZ(layersSortedByZ);
            displayDevice->undefinedRegion.set(bounds);
            displayDevice->undefinedRegion.subtractSelf(
                    tr.transform(opaqueRegion));
            displayDevice->dirtyRegion.orSelf(dirtyRegion);
        }
    }
}

The most important step in rebuildLayerStacks is computeVisibleRegions, which computes each Layer's dirty and opaque regions. Why compute the changed regions at all? Let's first look at how SurfaceFlinger classifies the regions of the display:

  • opaqueRegion: the opaque region, i.e. the part of the layer that is fully opaque
  • visibleRegion: the visible region; areas covered only by translucent layers above still count as visible
  • coveredRegion: the covered region, i.e. the part of the layer hidden by the visible layers above it
  • transparentRegion: the fully transparent region, generally removed from the composition list, since a fully transparent area needs no compositing work
  • aboveOpaqueLayers: the accumulated opaque regions of all Layers above the current one
  • aboveCoveredLayers: the accumulated visible regions of all Layers above the current one

Take this screenshot as an example: the status bar is translucent, so it contributes to the visibleRegion but not to the opaqueRegion; the WeChat window and the navigation bar are fully opaque, so they form opaqueRegions; and besides these three Layers there is one we cannot see, the wallpaper, which is hidden by the opaque Layers above it and is therefore a coveredRegion.


With these region concepts clear, we can look at what computeVisibleRegions does. Its main steps are:

  1. Traverse all Layers of the display's layer stack from the top of the Z order down
  2. Compute the Layer's covered region: the intersection of aboveCoveredLayers with the Layer's visible region
  3. Add the Layer's visible region to aboveCoveredLayers, to feed the computation for the next Layer down
  4. Compute the Layer's actual visible region by subtracting aboveOpaqueLayers from it
  5. Compute the dirty region, subtracting the parts hidden behind opaque layers, and accumulate it into the display's dirty region outDirtyRegion
  6. Store the computed regions back into the Layer

Let's look at the concrete implementation of this function.

// File: /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::computeVisibleRegions(uint32_t layerStack,
        Region& outDirtyRegion, Region& outOpaqueRegion)
{
    ATRACE_CALL();
    ALOGV("computeVisibleRegions");

    Region aboveOpaqueLayers;
    Region aboveCoveredLayers;
    Region dirty;

    outDirtyRegion.clear();

    //traverse Layers from the top of the Z order down
    mDrawingState.traverseInReverseZOrder([&](Layer* layer) {

        //opaque region
        Region opaqueRegion;
        //visible region
        Region visibleRegion;
        //covered region
        Region coveredRegion;
        //fully transparent region
        Region transparentRegion;

        if (CC_LIKELY(layer->isVisible())) {
            const Layer::State& s(layer->getDrawingState());
            const bool translucent = !layer->isOpaque(s);
            Rect bounds(layer->computeScreenBounds());
            visibleRegion.set(bounds);
            Transform tr = layer->getTransform();
            if (!visibleRegion.isEmpty()) {
                // remove the fully transparent areas from the visible region
                if (translucent) {
                    if (tr.preserveRects()) {                     
                        transparentRegion = tr.transform(s.activeTransparentRegion);
                    } else {                       
                        transparentRegion.clear();
                    }
                }

                // compute the opaque region
                const int32_t layerOrientation = tr.getOrientation();
                if (s.alpha == 1.0f && !translucent &&
                        ((layerOrientation & Transform::ROT_INVALID) == false)) {
                    // the opaque region is the layer's footprint
                    opaqueRegion = visibleRegion;
                }
            }
        }

        // intersect with the visible regions of the layers above to get the covered region
        coveredRegion = aboveCoveredLayers.intersect(visibleRegion);

        // accumulate this Layer's visible region for the layers below
        aboveCoveredLayers.orSelf(visibleRegion);

        // subtract the opaque regions of the layers above from the visible region
        visibleRegion.subtractSelf(aboveOpaqueLayers);

        // compute the dirty region
        if (layer->contentDirty) {          
            dirty = visibleRegion;
            dirty.orSelf(layer->visibleRegion);
            layer->contentDirty = false;
        } else {
            // compute how the exposed area changed against the previous frame
            const Region newExposed = visibleRegion - coveredRegion;
            const Region oldVisibleRegion = layer->visibleRegion;
            const Region oldCoveredRegion = layer->coveredRegion;
            const Region oldExposed = oldVisibleRegion - oldCoveredRegion;
            dirty = (visibleRegion&oldCoveredRegion) | (newExposed-oldExposed);
        }
        // subtract the areas hidden behind opaque layers from the dirty region
        dirty.subtractSelf(aboveOpaqueLayers);

        // accumulate the dirty region into the display's dirty region
        outDirtyRegion.orSelf(dirty);

        // accumulate this Layer's opaque region
        aboveOpaqueLayers.orSelf(opaqueRegion);

        // store the computed regions back into the Layer
        layer->setVisibleRegion(visibleRegion);
        layer->setCoveredRegion(coveredRegion);
        layer->setVisibleNonTransparentRegion(
                visibleRegion.subtract(transparentRegion));
    });

    outOpaqueRegion = aboveOpaqueLayers;
}

Having covered pre-composition processing and rebuilding the Layer stacks, the remaining three steps are building the hardware-composition work list, compositing the layers, and the post-composition work. Since building the hardware work list and mixing the layers are handled by HWComposer, which I treat as an image consumer in its own right, I will skip those two steps here and go straight to the last one: the processing after layer composition finishes.

Processing after composition finishes

At this point our layers have been composited and the image data has been sent to the framebuffer and shown on screen, but SurfaceFlinger still has some cleanup to do, such as releasing image buffers and updating timestamps. Let's look at the source; there is no need to study it too deeply.

// File: /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::postComposition(nsecs_t refreshStartTime)
{
    ATRACE_CALL();
    ALOGV("postComposition");

    // release the pending buffers held by each Layer
    nsecs_t dequeueReadyTime = systemTime();
    for (auto& layer : mLayersWithQueuedFrames) {
        layer->releasePendingBuffer(dequeueReadyTime);
    }

    // |mStateLock| not needed as we are on the main thread
    const sp<const DisplayDevice> hw(getDefaultDisplayDeviceLocked());

    std::shared_ptr<FenceTime> glCompositionDoneFenceTime;
    if (mHwc->hasClientComposition(HWC_DISPLAY_PRIMARY)) {
        glCompositionDoneFenceTime =
                std::make_shared<FenceTime>(hw->getClientTargetAcquireFence());
        mGlCompositionDoneTimeline.push(glCompositionDoneFenceTime);
    } else {
        glCompositionDoneFenceTime = FenceTime::NO_FENCE;
    }
    mGlCompositionDoneTimeline.updateSignalTimes();

    sp<Fence> presentFence = mHwc->getPresentFence(HWC_DISPLAY_PRIMARY);
    auto presentFenceTime = std::make_shared<FenceTime>(presentFence);
    mDisplayTimeline.push(presentFenceTime);
    mDisplayTimeline.updateSignalTimes();

    nsecs_t vsyncPhase = mPrimaryDispSync.computeNextRefresh(0);
    nsecs_t vsyncInterval = mPrimaryDispSync.getPeriod();

    // We use the refreshStartTime which might be sampled a little later than
    // when we started doing work for this frame, but that should be okay
    // since updateCompositorTiming has snapping logic.
    updateCompositorTiming(
        vsyncPhase, vsyncInterval, refreshStartTime, presentFenceTime);
    CompositorTiming compositorTiming;
    {
        std::lock_guard<std::mutex> lock(mCompositorTimingLock);
        compositorTiming = mCompositorTiming;
    }

    mDrawingState.traverseInZOrder([&](Layer* layer) {
        bool frameLatched = layer->onPostComposition(glCompositionDoneFenceTime,
                presentFenceTime, compositorTiming);
        if (frameLatched) {
            recordBufferingStats(layer->getName().string(),
                    layer->getOccupancyHistory(false));
        }
    });

    if (presentFence->isValid()) {
        if (mPrimaryDispSync.addPresentFence(presentFence)) {
            enableHardwareVsync();
        } else {
            disableHardwareVsync(false);
        }
    }

    if (!hasSyncFramework) {
        if (hw->isDisplayOn()) {
            enableHardwareVsync();
        }
    }

    if (mAnimCompositionPending) {
        mAnimCompositionPending = false;

        if (presentFenceTime->isValid()) {
            mAnimFrameTracker.setActualPresentFence(
                    std::move(presentFenceTime));
        } else {
            // The HWC doesn't support present fences, so use the refresh
            // timestamp instead.
            nsecs_t presentTime =
                    mHwc->getRefreshTimestamp(HWC_DISPLAY_PRIMARY);
            mAnimFrameTracker.setActualPresentTime(presentTime);
        }
        mAnimFrameTracker.advanceFrame();
    }

    if (hw->getPowerMode() == HWC_POWER_MODE_OFF) {
        return;
    }

    nsecs_t currentTime = systemTime();
    if (mHasPoweredOff) {
        mHasPoweredOff = false;
    } else {
        nsecs_t elapsedTime = currentTime - mLastSwapTime;
        size_t numPeriods = static_cast<size_t>(elapsedTime / vsyncInterval);
        if (numPeriods < NUM_BUCKETS - 1) {
            mFrameBuckets[numPeriods] += elapsedTime;
        } else {
            mFrameBuckets[NUM_BUCKETS - 1] += elapsedTime;
        }
        mTotalTime += elapsedTime;
    }
    mLastSwapTime = currentTime;
}

HWComposer

Having finished with SurfaceFlinger as an image consumer, it is the turn of the second consumer, HWComposer. As mentioned above, after SurfaceFlinger rebuilds the Layer stacks it builds the hardware-composition work list and hands the Layers to HWComposer for mixing. Let's see how HWComposer, as an image consumer, consumes image data by compositing layers.

Building the hardware-composition work list

setUpHWComposer mainly does the following:

  1. Traverse every DisplayDevice and call its beginFrame method, preparing to draw.

  2. Traverse every DisplayDevice, check its color space, and set the color matrix. Then, for each Layer the display needs to draw, check whether an hwcLayer has been created; create one if not, and on failure call forceClientComposition to force Client mode, i.e. OpenGL ES rendering. Finally call setGeometry.

  3. Traverse every DisplayDevice and, based on the DataSpace, decide whether Client rendering must be forced, then call each layer's setPerFrameData method.

  4. Traverse every DisplayDevice and call prepareFrame to prepare the data.

Let's look at the concrete implementation.

void SurfaceFlinger::setUpHWComposer() {
    ATRACE_CALL();
    ALOGV("setUpHWComposer");

    //traverse every DisplayDevice and call beginFrame to prepare for drawing
    for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
        bool dirty = !mDisplays[dpy]->getDirtyRegion(false).isEmpty();
        bool empty = mDisplays[dpy]->getVisibleLayersSortedByZ().size() == 0;
        bool wasEmpty = !mDisplays[dpy]->lastCompositionHadVisibleLayers;

        //skip recomposition if there is no dirty region, or if there were and are no visible Layers
        bool mustRecompose = dirty && !(empty && wasEmpty);

        mDisplays[dpy]->beginFrame(mustRecompose);

        if (mustRecompose) {
            mDisplays[dpy]->lastCompositionHadVisibleLayers = !empty;
        }
    }

    // build the HWComposer hardware work list
    if (CC_UNLIKELY(mGeometryInvalid)) {
        mGeometryInvalid = false;
        for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
            sp<const DisplayDevice> displayDevice(mDisplays[dpy]);
            const auto hwcId = displayDevice->getHwcDisplayId();
            if (hwcId >= 0) {
                const Vector<sp<Layer>>& currentLayers(
                        displayDevice->getVisibleLayersSortedByZ());
                for (size_t i = 0; i < currentLayers.size(); i++) {
                    const auto& layer = currentLayers[i];
                    if (!layer->hasHwcLayer(hwcId)) {
                        //create an hwcLayer for each Layer
                        auto hwcLayer = mHwc->createLayer(hwcId);
                        if (hwcLayer) {
                            layer->setHwcLayer(hwcId, std::move(hwcLayer));
                        } else {
                            //fall back to OpenGL ES (Client) rendering if hwcLayer creation fails
                            layer->forceClientComposition(hwcId);
                            continue;
                        }
                    }
                    //set the Layer's geometry
                    layer->setGeometry(displayDevice, i);
                    if (mDebugDisableHWC || mDebugRegion) {
                        layer->forceClientComposition(hwcId);
                    }
                }
            }
        }
    }

    mat4 colorMatrix = mColorMatrix * mDaltonizer();


    for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
        auto& displayDevice = mDisplays[displayId];
        const auto hwcId = displayDevice->getHwcDisplayId();

        if (hwcId < 0) {
            continue;
        }
        // set each display's color matrix
        if (colorMatrix != mPreviousColorMatrix) {
            status_t result = mHwc->setColorTransform(hwcId, colorMatrix);           
        }
        //set per-frame data for each Layer
        for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
            layer->setPerFrameData(displayDevice);
        }

        if (hasWideColorDisplay) {
            android_color_mode newColorMode;
            android_dataspace newDataSpace = HAL_DATASPACE_V0_SRGB;

            for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
                newDataSpace = bestTargetDataSpace(layer->getDataSpace(), newDataSpace);
            }
            newColorMode = pickColorMode(newDataSpace);

            setActiveColorModeInternal(displayDevice, newColorMode);
        }
    }

    mPreviousColorMatrix = colorMatrix;

    for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
        auto& displayDevice = mDisplays[displayId];
        if (!displayDevice->isDisplayOn()) {
            continue;
        }

        status_t result = displayDevice->prepareFrame(*mHwc);
    }
}

Hardware layer composition

HWComposer has now built the composition work list and the data to composite is ready, so the actual mixing begins. doComposition calls doDisplayComposition to do the composition work; let's look at the concrete flow.

void SurfaceFlinger::doComposition() {
    ATRACE_CALL();
    const bool repaintEverything = android_atomic_and(0, &mRepaintEverything);
    for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
        const sp<DisplayDevice>& hw(mDisplays[dpy]);
        if (hw->isDisplayOn()) {
            //fetch the display's dirty region computed during rebuildLayerStacks;
            //if mRepaintEverything is set (forced repaint), the dirty region is the whole screen
            const Region dirtyRegion(hw->getDirtyRegion(repaintEverything));

            // client (software) composition path
            doDisplayComposition(hw, dirtyRegion);
            //……
          
        }
        hw->compositionComplete();
    }
    //hardware composition
    postFramebuffer();
}

doComposition performs software composition via doDisplayComposition and hardware composition via postFramebuffer. Let's first look at the logic of doDisplayComposition.

void SurfaceFlinger::doDisplayComposition(
        const sp<const DisplayDevice>& displayDevice,
        const Region& inDirtyRegion)
{
    
    //skip if this is not an HWC display and there is nothing dirty
    bool isHwcDisplay = displayDevice->getHwcDisplayId() >= 0;
    if (!isHwcDisplay && inDirtyRegion.isEmpty()) {
        ALOGV("Skipping display composition");
        return;
    }

    ALOGV("doDisplayComposition");

    Region dirtyRegion(inDirtyRegion);

    displayDevice->swapRegion.orSelf(dirtyRegion);

    uint32_t flags = displayDevice->getFlags();
    if (flags & DisplayDevice::SWAP_RECTANGLE) {        
        dirtyRegion.set(displayDevice->swapRegion.bounds());
    } else {
        if (flags & DisplayDevice::PARTIAL_UPDATES) {         
            dirtyRegion.set(displayDevice->swapRegion.bounds());
        } else {          
            dirtyRegion.set(displayDevice->bounds());
            displayDevice->swapRegion = dirtyRegion;
        }
    }

    if (!doComposeSurfaces(displayDevice, dirtyRegion)) return;

    // update the swap region and clear the dirty region
    displayDevice->swapRegion.orSelf(dirtyRegion);

    // swap the display's buffers
    displayDevice->swapBuffers(getHwComposer());
}


doDisplayComposition in turn calls doComposeSurfaces to complete the image composition. Let's dig further into how doComposeSurfaces composites.

bool SurfaceFlinger::doComposeSurfaces(
        const sp<const DisplayDevice>& displayDevice, const Region& dirty)
{
    
    const auto hwcId = displayDevice->getHwcDisplayId();
    mat4 oldColorMatrix;
    const bool applyColorMatrix = !mHwc->hasDeviceComposition(hwcId) &&
            !mHwc->hasCapability(HWC2::Capability::SkipClientColorTransform);
    if (applyColorMatrix) {
        mat4 colorMatrix = mColorMatrix * mDaltonizer();
        oldColorMatrix = getRenderEngine().setupColorTransform(colorMatrix);
    }

    
    bool hasClientComposition = mHwc->hasClientComposition(hwcId);
    //if any layer needs client composition, composite it with OpenGL ES
    if (hasClientComposition) {    
        //……
    }

    const Transform& displayTransform = displayDevice->getTransform();
    //hardware composition path
    if (hwcId >= 0) {        
        bool firstLayer = true;
        for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
            //compute the Layer's clip within the screen's dirty region
            const Region clip(dirty.intersect(
                    displayTransform.transform(layer->visibleRegion)));
            ALOGV("Layer: %s", layer->getName().string());
            ALOGV("  Composition type: %s",
                    to_string(layer->getCompositionType(hwcId)).c_str());
            if (!clip.isEmpty()) {
                switch (layer->getCompositionType(hwcId)) {
                    case HWC2::Composition::Cursor:
                    case HWC2::Composition::Device:
                    case HWC2::Composition::Sideband:
                    case HWC2::Composition::SolidColor: {
                        const Layer::State& state(layer->getDrawingState());
                        if (layer->getClearClientTarget(hwcId) && !firstLayer &&
                                layer->isOpaque(state) && (state.alpha == 1.0f)
                                && hasClientComposition) {
                            // never clear the very first layer since we're
                            // guaranteed the FB is already cleared
                            layer->clearWithOpenGL(displayDevice);
                        }
                        break;
                    }
                    case HWC2::Composition::Client: {
                        //client composition: draw the layer directly into the clip
                        layer->draw(displayDevice, clip);
                        break;
                    }
                    default:
                        break;
                }
            } else {
                ALOGV("  Skipping for empty clip");
            }
            firstLayer = false;
        }
    } else {
        // we're not using h/w composer
        for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
            const Region clip(dirty.intersect(
                    displayTransform.transform(layer->visibleRegion)));
            if (!clip.isEmpty()) {
                layer->draw(displayDevice, clip);
            }
        }
    }

    if (applyColorMatrix) {
        getRenderEngine().setupColorTransform(oldColorMatrix);
    }

    // disable scissor at the end of the frame
    mRenderEngine->disableScissor();
    return true;
}

doComposeSurfaces mainly does two things:

  1. If there is client composition, composite those layers with OpenGL ES
  2. If there is hardware composition, clear out of the GL target the areas that the hardware-composited layers will cover

I will dig into the OpenGL ES client-composition part below, when discussing OpenGL ES as an image consumer. For now, let's continue with postFramebuffer, the hardware-composition part.

void SurfaceFlinger::postFramebuffer()
{
    ATRACE_CALL();
    ALOGV("postFramebuffer");

    const nsecs_t now = systemTime();
    mDebugInSwapBuffers = now;

    for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
        auto& displayDevice = mDisplays[displayId];
        if (!displayDevice->isDisplayOn()) {
            continue;
        }
        const auto hwcId = displayDevice->getHwcDisplayId();
        if (hwcId >= 0) {
            //hardware composition
            mHwc->presentAndGetReleaseFences(hwcId);
        }
        //swap-buffer completion handling
        displayDevice->onSwapBuffersCompleted();
        //prepare for the next frame
        displayDevice->makeCurrent(mEGLDisplay, mEGLContext);
        for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
            sp<Fence> releaseFence = Fence::NO_FENCE;
            if (layer->getCompositionType(hwcId) == HWC2::Composition::Client) {
                releaseFence = displayDevice->getClientTargetAcquireFence();
            } else {
                auto hwcLayer = layer->getHwcLayer(hwcId);
                releaseFence = mHwc->getLayerReleaseFence(hwcId, hwcLayer);
            }
            layer->onLayerDisplayed(releaseFence);
        }
        if (hwcId >= 0) {
            mHwc->clearReleaseFences(hwcId);
        }
    }

    mLastSwapBufferTime = systemTime() - now;
    mDebugInSwapBuffers = 0;

    // |mStateLock| not needed as we are on the main thread
    uint32_t flipCount = getDefaultDisplayDeviceLocked()->getPageFlipCount();
    if (flipCount % LOG_FRAME_STATS_PERIOD == 0) {
        logFrameStats();
    }
}

That completes the hardware-mixing part; SurfaceFlinger then performs the post-composition cleanup.

Composition splits into a software path and a hardware path. Having covered the hardware part, let's now look at how OpenGL ES performs software composition.

OpenGL ES

OpenGL ES is both a producer and a consumer of images. As a consumer it can mix layers, so let's see how the software-composition path in doComposeSurfaces is handled.

Software layer composition

bool SurfaceFlinger::doComposeSurfaces(
        const sp<const DisplayDevice>& displayDevice, const Region& dirty)
{
    const auto hwcId = displayDevice->getHwcDisplayId();

    mat4 oldColorMatrix;
    const bool applyColorMatrix = !mHwc->hasDeviceComposition(hwcId) &&
            !mHwc->hasCapability(HWC2::Capability::SkipClientColorTransform);
    if (applyColorMatrix) {
        mat4 colorMatrix = mColorMatrix * mDaltonizer();
        oldColorMatrix = getRenderEngine().setupColorTransform(colorMatrix);
    }

    bool hasClientComposition = mHwc->hasClientComposition(hwcId);
    if (hasClientComposition) {
        ALOGV("hasClientComposition");
        #ifdef USE_HWC2
                mRenderEngine->setColorMode(displayDevice->getActiveColorMode());
                mRenderEngine->setWideColor(displayDevice->getWideColorSupport());
        #endif
        if (!displayDevice->makeCurrent(mEGLDisplay, mEGLContext)) {
            ALOGW("DisplayDevice::makeCurrent failed. Aborting surface composition for display %s",
                  displayDevice->getDisplayName().string());
            eglMakeCurrent(mEGLDisplay, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);

            // |mStateLock| not needed as we are on the main thread
            if(!getDefaultDisplayDeviceLocked()->makeCurrent(mEGLDisplay, mEGLContext)) {
              ALOGE("DisplayDevice::makeCurrent on default display failed. Aborting.");
            }
            return false;
        }

        // Never touch the framebuffer if we don't have any framebuffer layers
        const bool hasDeviceComposition = mHwc->hasDeviceComposition(hwcId);
        if (hasDeviceComposition) {
            //clear the client target; HWC will composite its layers on top
            mRenderEngine->clearWithColor(0, 0, 0, 0);
        } else {
            //……
        }

        if (displayDevice->getDisplayType() != DisplayDevice::DISPLAY_PRIMARY) {
            // just to be on the safe side, we don't set the
            // scissor on the main display. It should never be needed
            // anyways (though in theory it could since the API allows it).
            const Rect& bounds(displayDevice->getBounds());
            const Rect& scissor(displayDevice->getScissor());
            if (scissor != bounds) {
                // scissor doesn't match the screen's dimensions, so we
                // need to clear everything outside of it and enable
                // the GL scissor so we don't draw anything where we shouldn't

                // enable scissor for this frame
                const uint32_t height = displayDevice->getHeight();
                mRenderEngine->setScissor(scissor.left, height - scissor.bottom,
                        scissor.getWidth(), scissor.getHeight());
            }
        }
    }
    //……
    return true;
}

The software-mixing logic is handled mainly by mRenderEngine, which does these things:

  • set the color matrix via setupColorTransform
  • enable or disable wide color via setWideColor
  • set the color mode via setColorMode
  • set the scissor region via setScissor

RenderEngine is initialized in SurfaceFlinger's init function:

void SurfaceFlinger::init() {
    //……
    // Get a RenderEngine for the given display / config (can't fail)
    mRenderEngine = RenderEngine::create(mEGLDisplay,
            HAL_PIXEL_FORMAT_RGBA_8888);
    //……
}

RenderEngine wraps GPU rendering, encapsulating the EGLDisplay, EGLContext, EGLConfig, EGLSurface, and so on. As for what OpenGL ES really is and how it is used, for now it is enough to see it as one of the image consumers; the next article covers the image producers in detail, and since OpenGL ES is one of the key producers, we will get to know it much more deeply there.

Summary

In this article I covered the image consumers in the Android system: how SurfaceFlinger, HWComposer, and OpenGL ES consume images. We could of course bypass the image consumers and write image data straight into the framebuffer, but then there would be no interactive UI; an interactive UI is built from multiple layered surfaces, and SurfaceFlinger, HWComposer, and OpenGL ES are the three indispensable tools Android uses to mix those layers. I have left many details unexplored, because each of these three modules is enormous, but with the main flow and the key code in hand, the details can be explored at leisure when time allows.

In the next article I will cover the image producers, OpenGL ES and Skia: how they produce image data, and how they are actually used in Android, including software and hardware drawing of the UI, the boot animation, and Flutter's rendering.

? Copyright belongs to the author. For reprints or content cooperation, please contact the author.