NAME
libev - a high performance full-featured event loop written in C
SYNOPSIS
#include <ev.h>
EXAMPLE PROGRAM
// a single header file is required
#include <ev.h>
#include <stdio.h> // for puts
// every watcher type has its own typedef'd struct
// with the name ev_TYPE
ev_io stdin_watcher;
ev_timer timeout_watcher;
// all watcher callbacks have a similar signature
// this callback is called when data is readable on stdin
static void
stdin_cb (EV_P_ ev_io *w, int revents)
{
puts ("stdin ready");
// for one-shot events, one must manually stop the watcher
// with its corresponding stop function.
ev_io_stop (EV_A_ w);
// this causes all nested ev_run's to stop iterating
ev_break (EV_A_ EVBREAK_ALL);
}
// another callback, this time for a time-out
static void
timeout_cb (EV_P_ ev_timer *w, int revents)
{
puts ("timeout");
// this causes the innermost ev_run to stop iterating
ev_break (EV_A_ EVBREAK_ONE);
}
int
main (void)
{
// use the default event loop unless you have special needs
struct ev_loop *loop = EV_DEFAULT;
// initialise an io watcher, then start it
// this one will watch for stdin to become readable
ev_io_init (&stdin_watcher, stdin_cb, /*STDIN_FILENO*/ 0, EV_READ);
ev_io_start (loop, &stdin_watcher);
// initialise a timer watcher, then start it
// simple non-repeating 5.5 second timeout
ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
ev_timer_start (loop, &timeout_watcher);
// now wait for events to arrive
ev_run (loop, 0);
// break was called, so exit
return 0;
}
ABOUT LIBEV
Libev is an event loop: you register interest in certain events (such as a file descriptor being readable or a timeout occurring), and it will manage these event sources and provide your program with events.
To do this, it must take more or less complete control over your process (or thread) by executing the event loop handler, and will then communicate events via a callback mechanism.
You register interest in certain events by registering so-called event watchers, which are relatively small C structures you initialise with the details of the event, and then hand over to libev by starting the watcher.
FEATURES
Libev supports select, poll, the Linux-specific epoll, the BSD-specific kqueue and the Solaris-specific event port mechanisms for file descriptor events (ev_io), the Linux inotify interface (for ev_stat), Linux eventfd/signalfd (for faster and cleaner inter-thread wakeup (ev_async) and signal handling (ev_signal)), relative timers (ev_timer), absolute timers with customised rescheduling (ev_periodic), synchronous signals (ev_signal), process status change events (ev_child), and event watchers dealing with the event loop mechanism itself (ev_idle, ev_embed, ev_prepare and ev_check watchers) as well as file watchers (ev_stat) and even limited support for fork events (ev_fork).
It also is quite fast.
CONVENTIONS
Libev is very configurable. In this manual the default (and most common) configuration will be described, which supports multiple event loops. For more info about various configuration options please have a look at the EMBED section in this manual. If libev was configured to not support multiple event loops, then all functions taking an initial argument of name loop (which is always of type struct ev_loop *) will not have this argument.
TIME REPRESENTATION
Libev represents time as a single floating point number, representing the (fractional) number of seconds since the (POSIX) epoch (in practice somewhere near the beginning of 1970, details are complicated, don't ask). This type is called ev_tstamp, which is what you should use too. It usually aliases to the double type in C. When you need to do any calculations on it, you should treat it as some floating point value.
Unlike the name component stamp might indicate, it is also used for time differences (e.g. delays) throughout libev.
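Since an ev_tstamp is just seconds as a floating point value, ordinary arithmetic is all you need. A minimal sketch (the work function is a hypothetical placeholder):
   ev_tstamp start = ev_time ();
   do_something_slow (); /* hypothetical long-running work */
   ev_tstamp elapsed = ev_time () - start; /* difference in (fractional) seconds */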
ERROR HANDLING
Libev knows three classes of errors: operating system errors, usage errors and internal errors (bugs).
When libev catches an operating system error it cannot handle (for example a system call indicating a condition libev cannot fix), it calls the callback set via ev_set_syserr_cb, which is supposed to fix the problem or abort. The default is to print a diagnostic message and to call abort().
When libev detects a usage error such as a negative timer interval, then it will print a diagnostic message and abort (via the assert mechanism, so NDEBUG will disable this checking): these are programming errors in the libev caller and need to be fixed there.
Libev also has a few internal error-checking assertions, and also some extensive consistency checking code. These do not trigger under normal circumstances, as they indicate either a bug in libev or worse.
GLOBAL FUNCTIONS
These functions can be called anytime, even before initialising the library in any way.
ev_tstamp ev_time ()
Returns the current time as libev would use it. Please note that the ev_now function is usually faster and also often returns the timestamp you actually want to know. Also interesting is the combination of ev_now_update and ev_now.
ev_sleep (ev_tstamp interval)
Sleep for the given interval: The current thread will be blocked until either it is interrupted or the given time interval has passed (approximately - it might return a bit earlier even if not interrupted). If interval <= 0, then it returns immediately.
Basically this is a sub-second-resolution sleep ().
The range of the interval is limited - libev only guarantees to work with sleep times of up to one day (interval <= 86400).
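As a quick illustration of the semantics above (a minimal sketch, nothing assumed beyond ev_sleep itself):
   ev_sleep (0.25); /* block the current thread for roughly 250ms */
   ev_sleep (0.);   /* interval <= 0, so this returns immediately */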
int ev_version_major ()
int ev_version_minor ()
You can find out the major and minor ABI version numbers of the library you linked against by calling the functions ev_version_major and ev_version_minor. If you want, you can compare against the global symbols EV_VERSION_MAJOR and EV_VERSION_MINOR, which specify the version of the library your program was compiled against.
These version numbers refer to the ABI version of the library, not the release version.
Usually, it's a good idea to terminate if the major versions mismatch, as this indicates an incompatible change. Minor versions are usually compatible to older versions, so a larger minor version alone is usually not a problem.
Example: Make sure we haven't accidentally been linked against the wrong version (note, however, that this will not detect other ABI mismatches, such as LFS or reentrancy).
assert (("libev version mismatch",
ev_version_major () == EV_VERSION_MAJOR
&& ev_version_minor () >= EV_VERSION_MINOR));
unsigned int ev_supported_backends ()
Return the set of all backends (i.e. their corresponding EV_BACKEND_* value) compiled into this binary of libev (independent of their availability on the system you are running on). See ev_default_loop for a description of the set values.
Example: make sure we have the epoll method, because yeah this is cool and a must have and can we have a torrent of it please!!!11
assert (("sorry, no epoll, no sex",
ev_supported_backends () & EVBACKEND_EPOLL));
unsigned int ev_recommended_backends ()
Return the set of all backends compiled into this binary of libev and also recommended for this platform, meaning it will work for most file descriptor types. This set is often smaller than the one returned by ev_supported_backends, as for example kqueue is broken on most BSDs and will not be auto-detected unless you explicitly request it (assuming you know what you are doing). This is the set of backends that libev will probe for if you specify no backends explicitly.
unsigned int ev_embeddable_backends ()
Returns the set of backends that are embeddable in other event loops. This value is platform-specific, but can include backends not available on the current system. To find which embeddable backends might be supported on the current system, you would need to look at ev_embeddable_backends () & ev_supported_backends (), likewise for recommended ones.
See the description of ev_embed watchers for more info.
ev_set_allocator (void *(*cb)(void *ptr, long size) throw ())
Sets the allocation function to use (the prototype is similar - the semantics are identical to the realloc C89/SuS/POSIX function). It is used to allocate and free memory (no surprises here). If it returns zero when memory needs to be allocated (size != 0), the library might abort or take some potentially destructive action.
Since some systems (at least OpenBSD and Darwin) fail to implement correct realloc semantics, libev will use a wrapper around the system realloc and free functions by default.
You could override this function in high-availability programs to, say, free some memory if it cannot allocate memory, to use a special allocator, or even to sleep a while and retry until some memory is available.
Example: Replace the libev allocator with one that waits a bit and then retries (example requires a standards-compliant realloc).
static void *
persistent_realloc (void *ptr, size_t size)
{
for (;;)
{
void *newptr = realloc (ptr, size);
if (newptr)
return newptr;
sleep (60);
}
}
. . .
ev_set_allocator (persistent_realloc);
ev_set_syserr_cb (void (*cb)(const char *msg) throw ())
Set the callback function to call on a retryable system call error (such as failed select, poll, epoll_wait). The message is a printable string indicating the system call or subsystem causing the problem. If this callback is set, then libev will expect it to remedy the situation, no matter what, when it returns. That is, libev will generally retry the requested operation, or, if the condition doesn't go away, do bad stuff (such as abort the program).
Example: This is basically the same thing that libev does internally, too.
static void
fatal_error (const char *msg)
{
perror (msg);
abort ();
}
. . .
ev_set_syserr_cb (fatal_error);
ev_feed_signal (int signum)
This function can be used to "simulate" a signal receive. It is completely safe to call this function at any time, from any context, including signal handlers or random threads.
Its main use is to customise signal handling in your process. For example, you could block signals by default in all threads (and specify EVFLAG_NOSIGMASK when creating any loops), and in one thread, use sigwait or any other mechanism to wait for signals, then "deliver" them to libev by calling ev_feed_signal.
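A minimal sketch of that pattern, assuming the relevant signals are already blocked in all threads (e.g. with pthread_sigmask before any threads are created); the thread function name and the chosen signal set are illustrative:
   static void *
   signal_thread (void *arg)
   {
     sigset_t set;
     sigemptyset (&set);
     sigaddset (&set, SIGINT);
     sigaddset (&set, SIGTERM);
     for (;;)
       {
         int signum;
         if (sigwait (&set, &signum) == 0)
           ev_feed_signal (signum); /* safe to call from this random thread */
       }
     return 0;
   }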
FUNCTIONS CONTROLLING EVENT LOOPS
An event loop is described by a struct ev_loop * (the struct is not optional in this case unless libev 3 compatibility is disabled, as libev 3 had an ev_loop function colliding with the struct name).
The library knows two types of such loops, the default loop, which supports child process events, and dynamically created event loops which do not.
struct ev_loop *ev_default_loop (unsigned int flags)
This returns the "default" event loop object, which is what you should normally use when you just need "the event loop". Event loop objects and the flags parameter are described in more detail in the entry for ev_loop_new.
If the default loop is already initialised then this function simply returns it (and ignores the flags. If that is troubling you, check ev_backend () afterwards). Otherwise it will create it with the given flags, which should almost always be 0, unless the caller is also the one calling ev_run or otherwise qualifies as "the main program".
If you don't know what event loop to use, use the one returned from this function (or via the EV_DEFAULT macro).
Note that this function is not thread-safe, so if you want to use it from multiple threads, you have to employ some kind of mutex (note also that this case is unlikely, as loops cannot be shared easily between threads anyway).
The default loop is the only loop that can handle ev_child watchers, and to do this, it always registers a handler for SIGCHLD. If this is a problem for your application you can either create a dynamic loop with ev_loop_new which doesn't do that, or you can simply overwrite the SIGCHLD signal handler after calling ev_default_init.
Example: This is the most typical usage.
if (!ev_default_loop (0))
fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?");
Example: Restrict libev to the select and poll backends, and do not allow environment settings to be taken into account:
ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV);
struct ev_loop *ev_loop_new (unsigned int flags)
This will create and initialise a new event loop object. If the loop could not be initialised, returns false.
This function is thread-safe, and one common way to use libev with threads is indeed to create one loop per thread, and use the default loop in the "main" or "initial" thread.
The flags argument can be used to specify special behaviour or specific backends to use, and is usually specified as 0 (or EVFLAG_AUTO).
The following flags are supported:
EVFLAG_AUTO
The default flags value. Use this if you have no clue (it's the right thing, believe me).
EVFLAG_NOENV
If this flag bit is or'ed into the flag value (or the program runs setuid or setgid) then libev will not look at the environment variable LIBEV_FLAGS. Otherwise (the default), the flags found in the environment will override the flags completely. This is useful to try out specific backends to test their performance, to work around bugs, or to make libev threadsafe (accessing environment variables cannot be done in a threadsafe way, but usually it works if no other thread modifies them).
EVFLAG_FORKCHECK
Instead of calling ev_loop_fork manually after a fork, you can also make libev check for a fork in each iteration by enabling this flag.
This works by calling getpid () on every iteration of the loop, and thus this might slow down your event loop if you do a lot of loop iterations and little real work, but is usually not noticeable (on my GNU/Linux system for example, getpid is actually a simple 5-insn sequence without a system call and thus very fast, but my GNU/Linux system also has pthread_atfork which is even faster).
The big advantage of this flag is that you can forget about fork (and forget about forgetting to tell libev about forking, although you still have to ignore SIGPIPE) when you use it.
This flag setting cannot be overridden or specified in the LIBEV_FLAGS environment variable.
EVFLAG_NOINOTIFY
When this flag is specified, then libev will not attempt to use the inotify API for its ev_stat watchers. Apart from debugging and testing, this flag can be useful to conserve inotify file descriptors, as otherwise each loop using ev_stat watchers consumes one inotify handle.
EVFLAG_SIGNALFD
When this flag is specified, then libev will attempt to use the signalfd API for its ev_signal (and ev_child) watchers. This API delivers signals synchronously, which makes it both faster and might make it possible to get the queued signal data. It can also simplify signal handling with threads, as long as you properly block signals in your threads that are not interested in handling them.
Signalfd will not be used by default as this changes your signal mask, and there are a lot of shoddy libraries and programs (glib's threadpool for example) that can't properly initialise their signal masks.
EVFLAG_NOSIGMASK
When this flag is specified, then libev will avoid modifying the signal mask. Specifically, this means you have to make sure signals are unblocked when you want to receive them.
This behaviour is useful when you want to do your own signal handling, or want to handle signals only in specific threads and want to avoid libev unblocking the signals.
It's also required by POSIX in a threaded program, as libev calls sigprocmask, whose behaviour is officially unspecified.
This flag's behaviour will become the default in future versions of libev.
EVBACKEND_SELECT (value 1, portable select backend)
This is your standard select(2) backend. Not completely standard, as libev tries to roll its own fd_set with no limits on the number of fds, but if that fails, expect a fairly low limit on the number of fds when using this backend. It doesn't scale too well (O(highest_fd)), but it is usually the fastest backend for a low number of (low-numbered :) fds.
To get good performance out of this backend you need a high amount of parallelism (most of the file descriptors should be busy). If you are writing a server, you should accept () in a loop to accept as many connections as possible during one iteration. You might also want to have a look at ev_set_io_collect_interval () to increase the amount of readiness notifications you get per iteration.
This backend maps EV_READ to the readfds set and EV_WRITE to the writefds set (and to work around Microsoft Windows bugs, also onto the exceptfds set on that platform).
EVBACKEND_POLL (value 2, poll backend, available everywhere except on windows)
And this is your standard poll(2) backend. It's more complicated than select, but handles sparse fds better and has no artificial limit on the number of fds you can use (except it will slow down considerably with a lot of inactive fds). See the entry for EVBACKEND_SELECT, above, for performance tips.
This backend maps EV_READ to POLLIN | POLLERR | POLLHUP, and EV_WRITE to POLLOUT | POLLERR | POLLHUP.
EVBACKEND_EPOLL (value 4, Linux)
Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9 kernels).
For few fds, this backend can be a bit slower than poll and select, but it scales phenomenally better. While poll and select usually scale like O(total_fds), where total_fds is the total number of fds (or the highest fd), epoll scales either O(1) or O(active_fds).
The epoll mechanism deserves honorable mention as the most misdesigned of the more advanced event mechanisms: mere annoyances include silently dropping file descriptors, requiring a system call per change per file descriptor (and unnecessary guessing of parameters), problems with dup, returning before the timeout value, resulting in additional iterations (and only giving 5ms accuracy while select on the same platform gives 0.1ms accuracy), and so on. The biggest issue is fork races, however - if a program forks then both parent and child process have to recreate the epoll set, which can take considerable time (one syscall per file descriptor) and is of course hard to detect.
Epoll is also notoriously buggy - embedding epoll fds should work, but of course doesn't, and epoll just loves to report events for totally different file descriptors (even already closed ones, so one cannot even remove them from the set) than registered in the set (especially on SMP systems). Libev tries to counter these spurious notifications by employing an additional generation counter and comparing that against the events to filter out spurious ones, recreating the set when required. Epoll also erroneously rounds down timeouts, but gives you no way to know when and by how much, so sometimes you have to busy-wait because epoll returns immediately despite a nonzero timeout. And last not least, it also refuses to work with some file descriptors which work perfectly fine with select (files, many character devices...).
Epoll is truly the train wreck among event poll mechanisms, a frankenpoll, cobbled together in a hurry, no thought to design or interaction with others. Oh, the pain, will it ever stop...
While stopping, setting and starting an I/O watcher in the same iteration will result in some caching, there is still a system call per such incident (because the same file descriptor could point to a different file description now), so it is best to avoid that. Also, dup ()'ed file descriptors might not work very well if you register events for both file descriptors.
Best performance from this backend is achieved by not unregistering all watchers for a file descriptor until it has been closed, if possible, i.e. keep at least one watcher active per fd at all times. Stopping and starting a watcher (without re-setting it) also usually doesn't cause extra overhead. A fork can both result in spurious notifications as well as in libev having to destroy and recreate the epoll object, which can take considerable time and thus should be avoided.
All this means that, in practice, EVBACKEND_SELECT can be as fast or faster than epoll for maybe up to a hundred file descriptors, depending on the usage. So sad.
While nominally embeddable in other event loops, this feature is broken in all kernel versions tested so far.
This backend maps EV_READ and EV_WRITE in the same way as EVBACKEND_POLL.
EVBACKEND_KQUEUE (value 8, most BSD clones)
Kqueue deserves special mention, as at the time of this writing, it was broken on all BSDs except NetBSD (usually it doesn't work reliably with anything but sockets and pipes, except on Darwin, where of course it's completely useless). Unlike epoll, however, whose brokenness is by design, these kqueue bugs can (and eventually will) be fixed without API changes to existing programs. For this reason it's not being "auto-detected" unless you explicitly specify it in the flags (i.e. using EVBACKEND_KQUEUE) or libev was compiled on a known-to-be-good (-enough) system like NetBSD.
You still can embed kqueue into a normal poll or select backend and use it only for sockets (after having made sure that sockets work with kqueue on the target platform). See ev_embed watchers for more info.
It scales in the same way as the epoll backend, but the interface to the kernel is more efficient (which says nothing about its actual speed, of course). While stopping, setting and starting an I/O watcher does never cause an extra system call as with EVBACKEND_EPOLL, it still adds up to two event changes per incident. Support for fork () is very bad (you might have to leak fd's on fork, but it's more sane than epoll) and it drops fds silently in similarly hard-to-detect cases.
This backend usually performs well under most conditions.
While nominally embeddable in other event loops, this doesn't work everywhere, so you might need to test for this. And since it is broken almost everywhere, you should only use it when you have a lot of sockets (for which it usually works), by embedding it into another event loop (e.g. EVBACKEND_SELECT or EVBACKEND_POLL (but poll is of course also broken on OS X)) and, did I mention it, using it only for sockets.
This backend maps EV_READ into an EVFILT_READ kevent with NOTE_EOF, and EV_WRITE into an EVFILT_WRITE kevent with NOTE_EOF.
EVBACKEND_DEVPOLL (value 16, Solaris 8)
This is not implemented yet (and might never be, unless you send me an implementation). According to reports, /dev/poll only supports sockets and is not embeddable, which would limit the usefulness of this backend immensely.
EVBACKEND_PORT (value 32, Solaris 10)
This uses the Solaris 10 event port mechanism. As with everything on Solaris, it's really slow, but it still scales very well (O(active_fds)).
While this backend scales well, it requires one system call per active file descriptor per loop iteration. For small and medium numbers of file descriptors a "slow" EVBACKEND_SELECT or EVBACKEND_POLL backend might perform better.
On the positive side, this backend actually performed fully to specification in all tests and is fully embeddable, which is a rare feat among the OS-specific backends (I vastly prefer correctness over speed hacks).
On the negative side, the interface is bizarre - so bizarre that even sun itself gets it wrong in their code examples: The event polling function sometimes returns events to the caller even though an error occurred, but with no indication whether it has done so or not (yes, it's even documented that way) - deadly for edge-triggered interfaces where you absolutely have to know whether an event occurred or not because you have to re-arm the watcher.
Fortunately libev seems to be able to work around these idiocies.
This backend maps EV_READ and EV_WRITE in the same way as EVBACKEND_POLL.
EVBACKEND_ALL
Try all backends (even potentially broken ones that wouldn't be tried with EVFLAG_AUTO). Since this is a mask, you can do stuff such as EVBACKEND_ALL & ~EVBACKEND_KQUEUE.
It is definitely not recommended to use this flag, use whatever ev_recommended_backends () returns, or simply do not specify a backend at all.
EVBACKEND_MASK
Not a backend at all, but a mask to select all backend bits from a flags value, in case you want to mask out any backends from a flags value (e.g. when modifying the LIBEV_FLAGS environment variable).
If one or more of the backend flags are or'ed into the flags value, then only these backends will be tried (in the reverse order as listed here). If none are specified, all backends in ev_recommended_backends () will be tried.
Example: Try to create an event loop that uses epoll and nothing else.
struct ev_loop *epoller = ev_loop_new (EVBACKEND_EPOLL | EVFLAG_NOENV);
if (!epoller)
fatal ("no epoll found here, maybe it hides under your chair");
Example: Use whatever libev has to offer, but make sure that kqueue is used if available.
struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);
ev_loop_destroy (loop)
Destroys an event loop object (frees all memory and kernel state etc.). None of the active event watchers will be stopped in the normal sense, so e.g. ev_is_active might still return true. It is your responsibility to either stop all watchers cleanly yourself before calling this function, or cope with the fact afterwards (which is usually the easiest thing, you can just ignore the watchers and/or free () them for example).
Note that certain global state, such as signal state (and installed signal handlers), will not be freed by this function, and related watchers (such as signal and child watchers) would need to be stopped manually.
This function is normally used on loop objects allocated by ev_loop_new, but it can also be used on the default loop returned by ev_default_loop, in which case it is not thread-safe.
Note that it is not advisable to call this function on the default loop except in the rare occasion where you really need to free its resources. If you need dynamically allocated loops it is better to use ev_loop_new and ev_loop_destroy.
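A minimal sketch of the dynamic-loop lifecycle described above (watcher setup elided; how you handle an initialisation failure is up to you):
   struct ev_loop *loop = ev_loop_new (EVFLAG_AUTO);
   if (loop)
     {
       /* ... start watchers on this loop here ... */
       ev_run (loop, 0);
       ev_loop_destroy (loop); /* frees memory and kernel state */
     }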
ev_loop_fork (loop)
This function sets a flag that causes subsequent ev_run iterations to reinitialise the kernel state for backends that have one. Despite the name, you can call it anytime you are allowed to start or stop watchers (except inside an ev_prepare callback), but it makes most sense after forking, in the child process. You must call it (or use EVFLAG_FORKCHECK) in the child before resuming or calling ev_run.
In addition, if you want to reuse a loop (via this function or EVFLAG_FORKCHECK), you also have to ignore SIGPIPE.
Again, you have to call it on any loop that you want to re-use after a fork, even if you do not plan to use the loop in the parent. This is because some kernel interfaces cough kqueue cough do funny things during fork.
On the other hand, you only need to call this function in the child process if and only if you want to use the event loop in the child. If you just fork+exec or create a new loop in the child, you don't have to call it at all (in fact, epoll is so badly broken that it makes a difference, but libev will usually detect this case on its own and do a costly reset of the backend).
The function itself is quite fast and it's usually not a problem to call it just in case after a fork.
Example: Automate calling ev_loop_fork on the default loop when using pthreads.
static void
post_fork_child (void)
{
ev_loop_fork (EV_DEFAULT);
}
...
pthread_atfork (0, 0, post_fork_child);
int ev_is_default_loop (loop)
Returns true when the given loop is, in fact, the default loop, and false otherwise.
unsigned int ev_iteration (loop)
Returns the current iteration count for the event loop, which is identical to the number of times libev did poll for new events. It starts at 0 and happily wraps around with enough iterations.
This value can sometimes be useful as a generation counter of sorts (it "ticks" the number of loop iterations), as it roughly corresponds with ev_prepare and ev_check calls - and is incremented between the prepare and check phases.
unsigned int ev_depth (loop)
Returns the number of times ev_run was entered minus the number of times ev_run was exited normally, in other words, the recursion depth.
Outside ev_run, this number is zero. In a callback, this number is 1, unless ev_run was invoked recursively (or from another thread), in which case it is higher.
Leaving ev_run abnormally (setjmp/longjmp, cancelling the thread, throwing an exception etc.), doesn't count as "exit" - consider this as a hint to avoid such ungentleman-like behaviour unless it's really convenient, in which case it is fully supported.
unsigned int ev_backend (loop)
Returns one of the EVBACKEND_* flags indicating the event backend in use.
ev_tstamp ev_now (loop)
Returns the current "event loop time", which is the time the event loop received events and started processing them. This timestamp does not change as long as callbacks are being processed, and it is also the base time used for relative timers. You can treat it as the timestamp of the event occurring (or more correctly, libev finding out about it).
ev_now_update (loop)
Establishes the current time by querying the kernel, updating the time returned by ev_now () in the process. This is a costly operation and is usually done automatically within ev_run ().
This function is rarely useful, but when some event callback runs for a very long time without entering the event loop, updating libev's idea of the current time is a good idea.
See also "The special problem of time updates" in the ev_timer section.
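A sketch of the long-running-callback case described above; the timer variable, its callback and the chosen durations are illustrative assumptions:
   static ev_timer next_timer; /* hypothetical follow-up timer */
   static void
   slow_cb (EV_P_ ev_io *w, int revents)
   {
     do_lengthy_calculation (); /* hypothetical, runs for seconds */
     /* ev_now () still reports the time the loop woke up; refresh it
        so the relative timer below is not scheduled in the past */
     ev_now_update (EV_A);
     ev_timer_init (&next_timer, timer_cb, 5., 0.);
     ev_timer_start (EV_A_ &next_timer);
   }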
ev_suspend (loop)
ev_resume (loop)
These two functions suspend and resume an event loop, for use when the loop is not used for a while and timeouts should not be processed.
A typical use case would be an interactive program such as a game: When the user presses ^Z to suspend the game and resumes it an hour later, it would be best to handle timeouts as if no time had actually passed while the program was suspended. This can be achieved by calling ev_suspend in your SIGTSTP handler, sending yourself a SIGSTOP and calling ev_resume directly afterwards to resume timer processing.
Effectively, all ev_timer watchers will be delayed by the time spend between ev_suspend and ev_resume, and all ev_periodic watchers will be rescheduled (that is, they will lose any events that would have occurred while suspended). After calling ev_suspend you must not call any function on the given loop other than ev_resume, and you must not call ev_resume without a previous call to ev_suspend.
Calling ev_suspend/ev_resume has the side effect of updating the event loop time (see ev_now_update).
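A minimal sketch of the ^Z recipe from above, using an ev_signal watcher on the default loop (the watcher and callback names are illustrative):
   static void
   sigtstp_cb (EV_P_ ev_signal *w, int revents)
   {
     ev_suspend (EV_A);         /* stop the clock for timeouts */
     kill (getpid (), SIGSTOP); /* actually suspend the process */
     /* execution resumes here after SIGCONT */
     ev_resume (EV_A);          /* timers behave as if no time had passed */
   }
   ...
   ev_signal tstp;
   ev_signal_init (&tstp, sigtstp_cb, SIGTSTP);
   ev_signal_start (EV_DEFAULT_ &tstp);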
bool ev_run (loop, int flags)
Finally, this is it, the event handler. This function usually is called after you have initialised all your watchers and you want to start handling events. It will ask the operating system for any new events, call the watcher callbacks, and then repeat the whole process indefinitely: This is why event loops are called loops.
If the flags argument is specified as 0, it will keep handling events until either no event watchers are active anymore or ev_break was called.
The return value is false if there are no more active watchers (which usually means "all jobs done" or "deadlock"), and true in all other cases (which usually means "you should call ev_run again").
Please note that an explicit ev_break is usually better than relying on all watchers to be stopped when deciding when a program has finished (especially in interactive programs), but having a program that automatically loops as long as it has to and no longer by virtue of relying on its watchers stopping correctly, that is truly a thing of beauty.
This function is mostly exception-safe - you can break out of a ev_run call by calling longjmp in a callback, throwing a C++ exception and so on. This does not decrement the ev_depth value, nor will it clear any outstanding EVBREAK_ONE breaks.
A flags value of EVRUN_NOWAIT will look for new events, will handle those events and any already outstanding ones, but will not wait and block your process in case there are no events and will return after one iteration of the loop. This is sometimes useful to poll and handle new events while doing lengthy calculations, to keep the program responsive.
A flags value of EVRUN_ONCE will look for new events (waiting if necessary) and will handle those and any already outstanding ones. It will block your process until at least one new event arrives (which could be an event internal to libev itself, so there is no guarantee that a user-registered callback will be called), and will return after one iteration of the loop.
This is useful if you are waiting for some external event in conjunction with something not expressible using other libev watchers (i.e. "roll your own ev_run"). However, a pair of ev_prepare/ev_check watchers is usually a better approach for this kind of thing.
Here are the gory details of what ev_run does (this is for your understanding, not a guarantee that things will work exactly like this in future versions):
- Increment loop depth.
- Reset the ev_break status.
- Before the first iteration, call any pending watchers.
LOOP:
- If EVFLAG_FORKCHECK was used, check for a fork.
- If a fork was detected (by any means), queue and call all fork watchers.
- Queue and call all prepare watchers.
- If ev_break was called, goto FINISH.
- If we have been forked, detach and recreate the kernel state
as to not disturb the other process.
- Update the kernel state with all outstanding changes.
- Update the "event loop time" (ev_now ()).
- Calculate for how long to sleep or block, if at all
(active idle watchers, EVRUN_NOWAIT or not having
any active watchers at all will result in not sleeping).
- Sleep if the I/O and timer collect interval say so.
- Increment loop iteration counter.
- Block the process, waiting for any events.
- Queue all outstanding I/O (fd) events.
- Update the "event loop time" (ev_now ()), and do time jump adjustments.
- Queue all expired timers.
- Queue all expired periodics.
- Queue all idle watchers with priority higher than that of pending events.
- Queue all check watchers.
- Call all queued watchers in reverse order (i.e. check watchers first).
Signals and child watchers are implemented as I/O watchers, and will
be handled here by queueing them when their watcher gets executed.
- If ev_break has been called, or EVRUN_ONCE or EVRUN_NOWAIT
were used, or there are no active watchers, goto FINISH, otherwise
continue with step LOOP.
FINISH:
- Reset the ev_break status iff it was EVBREAK_ONE.
- Decrement the loop depth.
- Return.
Example: Queue some jobs and then loop until no events are outstanding anymore.
... queue jobs here, make sure they register event watchers as long
... as they still have work to do (even an idle watcher will do..)
ev_run (my_loop, 0);
... jobs done or somebody called break. yeah!
ev_break (loop, how)
Can be used to make a call to ev_run return early (but only after it has processed all outstanding events). The how argument must be either EVBREAK_ONE, which will make the innermost ev_run call return, or EVBREAK_ALL, which will make all nested ev_run calls return.
This "break state" will be cleared on the next call to ev_run.
It is safe to call ev_break from outside any ev_run calls, too, in which case it will have no effect.
ev_ref (loop)
ev_unref (loop)
Ref/unref can be used to add or remove a reference count on the event loop: Every watcher keeps one reference, and as long as the reference count is nonzero, ev_run will not return on its own.
This is useful when you have a watcher that you never intend to unregister, but that nevertheless should not keep ev_run from returning. In such a case, call ev_unref after starting it and ev_ref before stopping it.
As an example, libev itself uses this for its internal signal pipe: It is not visible to the libev user and should not keep ev_run from exiting if no event watchers registered by it are active. It is also an excellent way to do this for generic recurring timers or from within third-party libraries. Just remember to unref after start and ref before stop (but only if the watcher wasn't active before, or was active before, respectively. Note also that libev might stop watchers itself (e.g. non-repeating timers) in which case you have to ev_ref in the callback).
Example: Create a signal watcher, but keep it from keeping ev_run running when nothing else is active.
ev_signal exitsig;
ev_signal_init (&exitsig, sig_cb, SIGINT);
ev_signal_start (loop, &exitsig);
ev_unref (loop);
Example: For some weird reason, unregister the above signal handler again.
ev_ref (loop);
ev_signal_stop (loop, &exitsig);
ev_set_io_collect_interval (loop, ev_tstamp interval)
ev_set_timeout_collect_interval (loop, ev_tstamp interval)
These advanced functions influence the time that libev will spend waiting for events. Both time intervals are by default 0, meaning that libev will try to invoke timer/periodic callbacks and I/O callbacks with minimum latency.
Setting these to a higher value (the interval must be >= 0) allows libev to delay invocation of I/O and timer/periodic callbacks to increase efficiency of loop iterations (or to increase power-saving opportunities).
The idea is that sometimes your program runs just fast enough to handle one (or very few) event(s) per loop iteration. While this makes the program responsive, it also wastes a lot of CPU time to poll for new events, especially with backends like select () which have a high overhead for the actual polling but can deliver many events at once.
By setting a higher io collect interval you allow libev to spend more time collecting I/O events, so you can handle more events per iteration, at the cost of increasing latency. Timeouts (both ev_periodic and ev_timer) will not be affected. Setting this to a non-null value will introduce an additional ev_sleep () call into most loop iterations. The sleep time ensures that libev will not poll for I/O events more often then once per this interval, on average (as long as the host time resolution is good enough).
Likewise, by setting a higher timeout collect interval you allow libev to spend more time collecting timeouts, at the expense of increased latency/jitter/inexactness (the watcher callback will be called later). ev_io watchers will not be affected. Setting this to a non-null value will not introduce any overhead in libev.
Many (busy) programs can usually benefit by setting the I/O collect interval to a value near 0.1 or so, which is often enough for interactive servers (of course not for games), likewise for timeouts. It usually doesn't make much sense to set it to a lower value than 0.01, as this approaches the timing granularity of most systems. Note that if you do transactions with the outside world and you can't increase the parallelity, then this setting will limit your transaction rate (if you need to poll once per transaction and the I/O collect interval is 0.01, then you can't do more than 100 transactions per second).
Setting the timeout collect interval can improve the opportunity for saving power, as the program will "bundle" timer callback invocations that are "near" in time together, by delaying some, thus reducing the number of times the process sleeps and wakes up again. Another useful technique to reduce iterations/wake-ups is to use ev_periodic watchers and make sure they fire on, say, one-second boundaries only.
Example: we only need 0.1s timeout granularity, and we wish not to poll more often than 100 times per second:
ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);
ev_invoke_pending (loop)
This call will simply invoke all pending watchers while resetting their pending state. Normally, ev_run does this automatically when required, but when overriding the invoke callback this call comes handy. This function can be invoked from a watcher - this can be useful for example when you want to do some lengthy calculation and want to pass further event handling to another thread (you still have to make sure only one thread executes within ev_invoke_pending or ev_run of course).
int ev_pending_count (loop)
Returns the number of pending watchers - zero indicates that no watchers are pending.
ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))
This overrides the invoke pending functionality of the loop: Instead of invoking all pending watchers when there are any, ev_run will call this callback instead. This is useful, for example, when you want to invoke the actual watchers inside another context (another thread etc.).
If you want to reset the callback, use ev_invoke_pending as new callback.
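A deliberately simplified sketch of deferring invocation to a worker thread - the worker is expected to call ev_invoke_pending itself; a production version also needs the release/acquire callbacks described below so that ev_run does not race ahead, and the synchronisation variables here are illustrative:
   static pthread_mutex_t invoke_mutex = PTHREAD_MUTEX_INITIALIZER;
   static pthread_cond_t invoke_cond = PTHREAD_COND_INITIALIZER;

   static void
   defer_invoke_cb (EV_P)
   {
     /* called by ev_run instead of invoking pending watchers itself */
     pthread_mutex_lock (&invoke_mutex);
     pthread_cond_signal (&invoke_cond); /* wake the worker, which then
                                            calls ev_invoke_pending (loop) */
     pthread_mutex_unlock (&invoke_mutex);
   }
   ...
   ev_set_invoke_pending_cb (loop, defer_invoke_cb);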
ev_set_loop_release_cb (loop, void (*release)(EV_P) throw (), void (*acquire)(EV_P) throw ())
Sometimes you want to share the same loop between multiple threads. This can be done relatively simply by putting mutex_lock/unlock calls around each call to a libev function.
However, ev_run can run an indefinite time, so it is not feasible to wait for it to return. One way around this is to wake up the event loop via ev_break and ev_async_send, another way is to set these release and acquire callbacks on the loop.
When set, then release will be called just before the thread is suspended waiting for new events, and acquire is called just afterwards.
Ideally, release will just call your mutex_unlock function, and acquire will just call the mutex_lock function again.
While event loop modifications are allowed between invocations of release and acquire (that's their only purpose after all), no modifications done will affect the event loop, i.e. adding watchers will have no effect on the set of file descriptors being watched, or the time waited. Use an ev_async watcher to wake up ev_run when you want it to take note of any changes you made.
In theory, threads executing ev_run will be async-cancel safe between invocations of release and acquire.
See also the locking example in the THREADS section later in this document.
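For illustration, a minimal pair of such callbacks around a pthread mutex (the mutex name is an assumption; the complete pattern, including the ev_async wakeup, is shown in the THREADS section):
   static pthread_mutex_t loop_mutex = PTHREAD_MUTEX_INITIALIZER;

   static void
   l_release (EV_P)
   {
     /* about to block waiting for events - let other threads at the loop */
     pthread_mutex_unlock (&loop_mutex);
   }

   static void
   l_acquire (EV_P)
   {
     /* woken up - retake the lock before touching the loop again */
     pthread_mutex_lock (&loop_mutex);
   }
   ...
   ev_set_loop_release_cb (loop, l_release, l_acquire);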
ev_set_userdata (loop, void *data)
void *ev_userdata (loop)
Set and retrieve a single void * associated with a loop. When ev_set_userdata has never been called, ev_userdata returns 0.
These two functions can be used to associate arbitrary data with a loop, and are intended solely for the invoke_pending_cb, release and acquire callbacks described above, but of course can be (ab-)used for any other purpose as well.
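A quick illustration (the context structure is a hypothetical example):
   struct my_ctx { int connections; };
   static struct my_ctx ctx;
   ev_set_userdata (loop, &ctx);
   ...
   /* later, e.g. inside a callback: */
   struct my_ctx *c = (struct my_ctx *)ev_userdata (loop);
   ++c->connections;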
ev_verify (loop)
This function only does something when EV_VERIFY support has been compiled in, which is the default for non-minimal builds. It tries to go through all internal structures and checks them for validity. If anything is found to be inconsistent, it will print an error message to standard error and call abort ().
This can be used to catch bugs inside libev itself: under normal circumstances, this function will never abort as of course libev keeps its data structures consistent.
ANATOMY OF A WATCHER
In the following description, uppercase TYPE in names stands for the watcher type, e.g. ev_TYPE_start can mean ev_timer_start for timer watchers and ev_io_start for I/O watchers.
A watcher is an opaque structure that you allocate and register to record your interest in some event. To make a concrete example, imagine you want to wait for STDIN to become readable, you would create an ev_io watcher for that:
static void my_cb (struct ev_loop *loop, ev_io *w, int revents)
{
ev_io_stop (loop, w);
ev_break (loop, EVBREAK_ALL);
}
struct ev_loop *loop = ev_default_loop (0);
ev_io stdin_watcher;
ev_init (&stdin_watcher, my_cb);
ev_io_set (&stdin_watcher, STDIN_FILENO, EV_READ);
ev_io_start (loop, &stdin_watcher);
ev_run (loop, 0);
As you can see, you are responsible for allocating the memory for your watcher structures (and it is usually a bad idea to do this on the stack).
Each watcher has an associated watcher structure (called struct ev_TYPE or simply ev_TYPE, as typedefs are provided for all watcher structs).
Each watcher structure must be initialised by a call to ev_init (watcher *, callback), which expects a callback to be provided. This callback is invoked each time the event occurs (or, in the case of I/O watchers, each time the event loop detects that the file descriptor given is readable and/or writable).
Each watcher type further has its own ev_TYPE_set (watcher *, ...) macro to configure it, with arguments specific to the watcher type. There is also a macro to combine initialisation and setting in one call: ev_TYPE_init (watcher *, callback, ...).
To make the watcher actually watch out for events, you have to start it with a watcher-specific start function (ev_TYPE_start (loop, watcher *)), and you can stop watching for events at any time by calling the corresponding stop function (ev_TYPE_stop (loop, watcher *)).
As long as your watcher is active (has been started but not stopped) you must not touch the values stored in it. Most specifically you must never reinitialise it or call its ev_TYPE_set macro.
Each and every callback receives the event loop pointer as first, the registered watcher structure as second, and a bitset of received events as third argument.
The received events usually include a single bit per event type received (you can receive multiple events at the same time). The possible bit masks are:
EV_READ
EV_WRITE
The file descriptor in the ev_io watcher has become readable and/or writable.
EV_TIMER
The ev_timer watcher has timed out.
EV_PERIODIC
The ev_periodic watcher has timed out.
EV_SIGNAL
The signal specified in the ev_signal watcher has been received by a thread.
EV_CHILD
The pid specified in the ev_child watcher has received a status change.
EV_STAT
The path specified in the ev_stat watcher changed its attributes somehow.
EV_IDLE
The ev_idle watcher has determined that you have nothing better to do.
EV_PREPARE
EV_CHECK
All ev_prepare watchers are invoked just before ev_run starts to gather new events, and all ev_check watchers are queued (not invoked) just after ev_run has gathered them, but before it queues any callbacks for any received events. That means ev_prepare watchers are the last watchers invoked before the event loop sleeps or polls for new events, and ev_check watchers will be invoked before any other watchers of the same or lower priority within an event loop iteration.
Callbacks of both watcher types can start and stop as many watchers as they want, and all of them will be taken into account (for example, an ev_prepare watcher might start an idle watcher to keep ev_run from blocking).
EV_EMBED
The embedded event loop specified in the ev_embed watcher needs attention.
EV_FORK
The event loop has been resumed in the child process after fork (see ev_fork).
EV_CLEANUP
The event loop is about to be destroyed (see ev_cleanup).
EV_ASYNC
The given async watcher has been asynchronously notified (see ev_async).
EV_CUSTOM
Not ever sent (or otherwise used) by libev itself, but can be freely used by libev users to signal watchers (e.g. via ev_feed_event).
EV_ERROR
An unspecified error has occurred, the watcher has been stopped. This might happen because the watcher could not be properly started because libev ran out of memory, a file descriptor was found to be closed or any other problem. Libev considers these application bugs.
You best act on it by reporting the problem and somehow coping with the watcher being stopped. Note that well-written programs should not receive an error ever, so when your watcher receives it, this usually indicates a bug in your program.
Libev will usually signal a few "dummy" events together with an error, for example it might indicate that a fd is readable or writable, and if your callbacks is well-written it can just attempt the operation and cope with the error from read() or write(). This will not work in multi-threaded programs, though, as the fd could already be closed and reused for another thing, so beware.
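To illustrate the coping strategy just described, here is a sketch of a read callback that tolerates both EV_ERROR and a failing read () (single-threaded case only, per the caveat above):
   static void
   read_cb (EV_P_ ev_io *w, int revents)
   {
     char buf[512];
     ssize_t n;
     if (revents & EV_ERROR)
       {
         /* the watcher has already been stopped by libev - report and bail */
         fputs ("watcher error\n", stderr);
         return;
       }
     n = read (w->fd, buf, sizeof buf);
     if (n < 0 && errno != EAGAIN && errno != EINTR)
       {
         /* cope with the error from read () */
         ev_io_stop (EV_A_ w);
         close (w->fd);
       }
   }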
GENERIC WATCHER FUNCTIONS
ev_init (ev_TYPE *watcher, callback)
This macro initialises the generic portion of a watcher. The contents of the watcher object can be arbitrary (so malloc will do). Only the generic parts of the watcher are initialised, you have to call the type-specific ev_TYPE_set macro afterwards to initialise the type-specific parts. For each type there is also an ev_TYPE_init macro which rolls both calls into one.
You can reinitialise a watcher at any time as long as it has been stopped (or never started) and there are no pending events outstanding.
The callback is always of type void (*)(struct ev_loop *loop, ev_TYPE *watcher, int revents).
Example: Initialise an ev_io watcher in two steps:
ev_io w;
ev_init (&w, my_cb);
ev_io_set (&w, STDIN_FILENO, EV_READ);
ev_TYPE_set (ev_TYPE *watcher, [args])
This macro initialises the type-specific parts of a watcher. You need to call ev_init at least once before you call this macro, but you can call ev_TYPE_set any number of times. You must not, however, call this macro on a watcher that is active (it can be pending, however, which is a difference to the ev_init macro).
Although some watcher types do not have type-specific arguments (e.g. ev_prepare) you still need to call its set macro.
See ev_init, above, for an example.
ev_TYPE_init (ev_TYPE *watcher, callback, [args])
This convenience macro rolls both ev_init and ev_TYPE_set macro calls into a single call. This is the most convenient method to initialise a watcher. The same limitations apply, of course.
Example: Initialise and set an ev_io watcher in one step.
ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ);
ev_TYPE_start (loop, ev_TYPE *watcher)
Starts (activates) the given watcher. Only active watchers will receive events. If the watcher is already active nothing will happen.
Example: Start the ev_io watcher that is being abused as example in this whole section.
ev_io_start (EV_DEFAULT_UC, &w);
ev_TYPE_stop (loop, ev_TYPE *watcher)
Stops the given watcher if active, and clears the pending status (whether the watcher was active or not).
It is possible that stopped watchers are pending - for example, non-repeating timers are being stopped when they become pending - but calling ev_TYPE_stop ensures that the watcher is neither active nor pending. If you want to free or reuse the memory used by the watcher it is therefore a good idea to always call its ev_TYPE_stop function.
bool ev_is_active (ev_TYPE *watcher)
Returns a true value iff the watcher is active (i.e. it has been started and not yet been stopped). As long as a watcher is active you must not modify it.
bool ev_is_pending (ev_TYPE *watcher)
Returns a true value iff the watcher is pending, (i.e. it has outstanding events but its callback has not yet been invoked). As long as a watcher is pending (but not active) you must not call an init function on it (but ev_TYPE_set is safe), you must not change its priority, and you must make sure the watcher is available to libev (e.g. you cannot free () it).
callback ev_cb (ev_TYPE *watcher)
Returns the callback currently set on the watcher.
ev_set_cb (ev_TYPE *watcher, callback)
Change the callback. You can change the callback at virtually any time (modulo threads).
ev_set_priority (ev_TYPE *watcher, int priority)
int ev_priority (ev_TYPE *watcher)
Set and query the priority of the watcher. The priority is a small integer between EV_MAXPRI (default: 2) and EV_MINPRI (default: -2). Pending watchers with higher priority will be invoked before watchers with lower priority, but priority will not keep watchers from being executed (except for ev_idle watchers).
If you need to suppress invocation when higher priority events are pending you need to look at ev_idle watchers, which provide this functionality.
You must not change the priority of a watcher as long as it is active or pending.
Setting a priority outside the range of EV_MINPRI to EV_MAXPRI is fine, as long as you do not mind that the priority value you query might or might not have been clamped to the valid range.
The default priority used by watchers when no priority has been set is always 0, which is supposed to not be too high and not be too low :).
See "WATCHER PRIORITY MODELS", below, for a more thorough treatment of priorities.
ev_invoke (loop, ev_TYPE *watcher, int revents)
Invoke the watcher with the given loop and revents. Neither loop nor revents need to be valid as long as the watcher callback can deal with that fact, as both are simply passed through to the callback.
int ev_clear_pending (loop, ev_TYPE *watcher)
If the watcher is pending, this function clears its pending status and returns its revents bitset (as if its callback was invoked). If the watcher isn't pending it does nothing and returns 0.
Sometimes it can be useful to "poll" a watcher instead of waiting for its callback to be invoked, which can be accomplished with this function.
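A sketch of such "polling" (w is any initialised watcher, as in the earlier examples):
   int revents = ev_clear_pending (loop, &w);
   if (revents)
     {
       /* handle the events ourselves, as if the callback had been invoked */
     }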
ev_feed_event (loop, ev_TYPE *watcher, int revents)
Feeds the given event set into the event loop, as if the specified event had happened for the specified watcher (which must be a pointer to an initialised but not necessarily started event watcher). Obviously you must not free the watcher as long as it has pending events.
Stopping the watcher, letting libev invoke it, or calling ev_clear_pending will clear the pending event, even if the watcher was not started in the first place.
See also ev_feed_fd_event and ev_feed_signal_event for related functions that do not need a watcher.
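For example, combined with the EV_CUSTOM bit described earlier, this can hand-deliver an application-defined event (a minimal sketch):
   ev_feed_event (loop, &w, EV_CUSTOM); /* w's callback will see EV_CUSTOM */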
WATCHER STATES
There are various watcher states mentioned throughout this manual - active, pending and so on. In this section these states and the rules to transition between them will be described in more detail - and while these rules might look complicated, they usually do "the right thing".
initialised
Before a watcher can be registered with the event loop it has to be initialised. This can be done with a call to ev_TYPE_init, or calls to ev_init followed by the watcher-specific ev_TYPE_set function.
In this state it is simply some block of memory that is suitable for use in an event loop. It can be moved around, freed, reused etc. at will - as long as you either keep the memory contents intact, or call ev_TYPE_init again.
started/running/active
Once a watcher has been started with a call to ev_TYPE_start it becomes property of the event loop, and is actively waiting for events. While in this state it cannot be accessed (except in a few documented ways), moved, freed or anything else - the only legal thing is to keep a pointer to it, and call libev functions on it that are documented to work on active watchers.
pending
If a watcher is active and libev determines that an event it is interested in has occurred (such as a timer expiring), it will become pending. It will stay in this pending state until either it is stopped or its callback is about to be invoked, so it is not normally pending inside the watcher callback.
The watcher might or might not be active while it is pending (for example, an expired non-repeating timer can be pending but no longer active). If it is stopped, it can be freely accessed (e.g. by calling ev_TYPE_set), but it is still property of the event loop at this time, so cannot be moved, freed or reused. And if it is active the rules described in the previous item still apply.
It is also possible to feed an event on a watcher that is not active (e.g. via ev_feed_event), in which case it becomes pending without being active.
stopped
A watcher can be stopped implicitly by libev (in which case it might still be pending), or explicitly by calling its ev_TYPE_stop function. The latter will clear any pending state the watcher might be in, regardless of whether it was active or not, so stopping a watcher explicitly before freeing it is often a good idea.
While stopped (and not pending) the watcher is essentially in the initialised state, that is, it can be reused, moved and modified in any way you wish (but when you trash the memory block, you need to ev_TYPE_init it again).
WATCHER PRIORITY MODELS
Many event loops support watcher priorities, which are usually small integers that influence the ordering of event callback invocation between watchers in some way, all else being equal.
In libev, watcher priorities can be set using ev_set_priority. See its description for the more technical details such as the actual priority range.
There are two common ways how these priorities are being interpreted by event loops:
In the more common lock-out model, higher priorities "lock out" invocation of lower priority watchers, which means as long as higher priority watchers receive events, lower priority watchers are not being invoked.
The less common only-for-ordering model uses priorities solely to order callback invocation within a single event loop iteration: Higher priority watchers are invoked before lower priority ones, but they all get invoked before polling for new events.
Libev uses the second (only-for-ordering) model for all its watchers except for idle watchers (which use the lock-out model).
The rationale behind this is that implementing the lock-out model for watchers is not well supported by most kernel interfaces, and most event libraries will just poll for the same events again and again as long as their callbacks have not been executed, which is very inefficient in the common case of one high-priority watcher locking out a mass of lower priority ones.
Static (ordering) priorities are most useful when you have two or more watchers handling the same resource: a typical usage example is having an ev_io watcher to receive data, and an associated ev_timer to handle timeouts. Under load, data might be received while the program handles other jobs, but since timers normally get invoked first, the timeout handler will be executed before checking for data. In that case, giving the timer a lower priority than the I/O watcher ensures that I/O will be handled first even under adverse conditions (which is usually, but not always, what you want).
Since idle watchers use the "lock-out" model, meaning that idle watchers will only be executed when no same or higher priority watchers have received events, they can be used to implement the "lock-out" model when required.
For example, to emulate how many other event libraries handle priorities, you can associate an ev_idle watcher to each such watcher, and in the normal watcher callback, you just start the idle watcher. The real processing is done in the idle watcher callback. This causes libev to continuously poll and process kernel event data for the watcher, but when the lock-out case is known to be rare (which in turn is rare :), this is workable.
Usually, however, the lock-out model implemented that way will perform miserably under the type of load it was designed to handle. In that case, it might be preferable to stop the real watcher before starting the idle watcher, so the kernel will not have to process the event in case the actual processing will be delayed for considerable time.
Here is an example of an I/O watcher that should run at a strictly lower priority than the default, and which should only process data when no other events are pending:
ev_idle idle; // actual processing watcher
ev_io io; // actual event watcher
static void
io_cb (EV_P_ ev_io *w, int revents)
{
// stop the I/O watcher, we received the event, but
// are not yet ready to handle it.
ev_io_stop (EV_A_ w);
// start the idle watcher to handle the actual event.
// it will not be executed as long as other watchers
// with the default priority are receiving events.
ev_idle_start (EV_A_ &idle);
}
static void
idle_cb (EV_P_ ev_idle *w, int revents)
{
// actual processing
read (STDIN_FILENO, ...);
// stop the idle watcher and start the I/O watcher
// again, as we have handled the event
ev_idle_stop (EV_A_ w);
ev_io_start (EV_A_ &io);
}
// initialisation
ev_idle_init (&idle, idle_cb);
ev_io_init (&io, io_cb, STDIN_FILENO, EV_READ);
ev_io_start (EV_DEFAULT_ &io);
In the "real" world, it might also be beneficial to start a timer, so that low-priority connections can not be locked out forever under load. This enables your program to keep a lower latency for important connections during short periods of high load, while not completely locking out less important ones.
Done.