TCR analysis of 10X single-cell (10X spatial transcriptomics) data with TCRdist3 (4)

We have already covered a lot of basic concepts and analysis algorithms, but as you probably noticed, the analyses so far were restricted to TCRαβ sequences, which is somewhat limiting. Today we extend things a bit: we refine the TCR distance analysis, and also look at γδ TCRs and at-scale computation with sparse data representations and parallelized, byte-compiled code. The reference is "TCR meta-clonotypes for biomarker discovery with tcrdist3: identification of public, HLA restricted SARS-CoV-2 associated TCR features". These papers build on what came before and set up what follows; there really is a lot to this topic.

Let's look at the analysis framework.

1. Experimental antigen enrichment reveals TCRs with biochemically similar neighbors

Searching for identical TCRs within a repertoire - arising either from clonal expansion or convergent nucleotide encoding of amino acids in the CDR3 - is a common strategy for identifying functionally important receptors (it is also the only really practical strategy). However, in the absence of an experimental enrichment procedure, observing T cells with identical amino acid TCR sequences across bulk samples is rare. For example, among 10,000 β-chain TCRs from a cord blood sample, fewer than 1% of the TCR amino acid sequences were observed more than once, including possible clonal expansions (disease does drive antigen-specific TCR expansion, which is the core of these studies).
[Figure: TCR repertoire subsets obtained by single-cell sorting with peptide-MHC tetramers.]
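The rarity of repeated sequences is easy to quantify directly. Here is a minimal pandas sketch; the data and the `cdr3_b_aa` column name are invented for illustration, mirroring the convention used in the code later in this post:

```python
import pandas as pd

# Toy beta-chain repertoire; in real data this would be tens of thousands of rows
rep = pd.DataFrame({"cdr3_b_aa": [
    "CASSLAPGATNEKLFF", "CASSLAPGATNEKLFF",  # an expanded / convergent clone
    "CASSDWGSQNTLYF", "CASSPGQGYEQYF", "CASRTGESNQPQHF"]})

# Fraction of unique CDR3 amino-acid sequences observed more than once
counts = rep["cdr3_b_aa"].value_counts()
frac_repeated = (counts > 1).sum() / len(counts)
print(frac_repeated)  # 1 of 4 unique sequences is repeated -> 0.25
```

On real unenriched repertoires this fraction is typically well under 1%, which is exactly the paper's point.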

2. TCR biochemical neighborhood density is heterogeneous in antigen-enriched repertoires

We next investigated the proportion of unique TCRs with at least one biochemically similar neighbor among TCRs with the same putative antigen specificity. We and others have shown that a single peptide-MHC epitope is often recognized by many distinct TCRs with closely related amino acid sequences (the TCRs recognizing a given antigen are diverse, a many-to-one relationship, which complicates things). At that point we have to look for similarity between sequences (the TCR distances discussed earlier) to find what they share. We observed the highest density neighborhoods within repertoires that were sorted based on peptide-MHC tetramer binding (so the effect of enrichment is clear). These observations suggest that biochemical neighborhood density is highly heterogeneous among TCRs and that it may depend on mechanisms of antigen recognition as well as receptor V(D)J recombination biases (which makes this hard to study directly).
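Given any pairwise distance matrix, the fraction of TCRs with at least one biochemically similar neighbor can be computed directly. A minimal numpy sketch; the matrix values and the radius of 18 distance units are made up for illustration:

```python
import numpy as np

# Toy symmetric pairwise distance matrix for 4 TCRs (standing in for tr.pw_beta)
pw = np.array([[ 0, 12, 40, 50],
               [12,  0, 45, 55],
               [40, 45,  0, 60],
               [50, 55, 60,  0]])

radius = 18
# Off-diagonal entries within the radius mark biochemical neighbors
neighbors = (pw <= radius) & ~np.eye(pw.shape[0], dtype=bool)
frac_with_neighbor = neighbors.any(axis=1).mean()
print(frac_with_neighbor)  # only TCRs 0 and 1 are neighbors -> 0.5
```

The same one-liner works on the matrices TCRrep produces, since they are ordinary numpy arrays.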

3. Meta-clonotype radius can be tuned to balance a biomarker’s sensitivity and specificity

The utility of a TCR-based biomarker depends on the antigen specificity of the TCRs. A key constraint on distance-based clustering is the presence of similar TCR sequences that may lack the ability to recognize the target antigen (in plain terms, we need to define a similarity radius). To be useful, a meta-clonotype definition should be broad enough to capture multiple biochemically similar TCRs with shared antigen recognition, but not so broad as to include a high proportion of non-specific TCRs, which might be found in unenriched background repertoires that are largely antigen-naïve (the radius has to be just right). But the similarity density of TCR "neighbors" is heterogeneous.
An ideal radius-defined meta-clonotype would include a high density of TCRs in antigen-experienced individuals, indicative of shared antigen specificity, yet a low density of TCRs among an antigen-naïve background. The next step is to search for these antigen-associated TCR sequences. Let's look at the analysis code (TCRdist3).
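The trade-off can be sketched by scanning candidate radii and asking what fraction of antigen-enriched versus background TCRs fall within each radius of a centroid TCR; all distances below are invented for illustration:

```python
import numpy as np

# Hypothetical distances from one centroid TCR to two repertoires
d_enriched   = np.array([ 5, 10, 14, 20, 30])          # antigen-enriched
d_background = np.array([25, 40, 60, 80, 90, 95])      # unenriched background

results = {}
for radius in (12, 24, 36):
    sens = (d_enriched <= radius).mean()    # fraction of specific TCRs captured
    bg   = (d_background <= radius).mean()  # fraction of background captured
    results[radius] = (sens, bg)
    print(radius, sens, bg)
```

A small radius captures few specific TCRs; a large one starts pulling in background sequences. The intermediate radius (24 here) captures 80% of the enriched set and none of the background, which is the balance a meta-clonotype definition aims for.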

Part 1 of the code: TCRdist

First, the input data format, which is very similar to what we get out of a 10X analysis:
[Figure: example of the expected input table.]
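As a rough sketch of that shape (the column names follow the tcrdist3 paired-chain convention; the gene names and CDR3 sequences here are invented):

```python
import pandas as pd

# Minimal paired-chain input in the shape TCRrep expects (toy values)
cell_df = pd.DataFrame({
    "subject":   ["mouse_1", "mouse_1"],
    "epitope":   ["PA", "PA"],
    "v_a_gene":  ["TRAV7-3*01", "TRAV6-4*01"],
    "j_a_gene":  ["TRAJ33*01", "TRAJ34*02"],
    "cdr3_a_aa": ["CAVSLDSNYQLIW", "CALGSNTNKVVF"],
    "v_b_gene":  ["TRBV13-1*01", "TRBV29*01"],
    "j_b_gene":  ["TRBJ2-3*01", "TRBJ1-1*01"],
    "cdr3_b_aa": ["CASSDFDWGGDAETLYF", "CASSPDRGEVFF"],
    "count":     [3, 1]})
print(cell_df.columns.tolist())
```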
Now let me walk you through the code.

Default parameters
"""
If you just want 'tcrdistances' using the pre-set default settings.

    You can access the distance matrices:
        tr.pw_alpha     - alpha chain pairwise distance matrix
        tr.pw_beta      - beta chain pairwise distance matrix
        tr.pw_cdr3_a_aa - cdr3 alpha chain distance matrix
        tr.pw_cdr3_b_aa - cdr3 beta chain distance matrix
"""
import pandas as pd
from tcrdist.repertoire import TCRrep

df = pd.read_csv("dash.csv")
tr = TCRrep(cell_df = df, 
            organism = 'mouse', 
            chains = ['alpha','beta'], 
            db_file = 'alphabeta_gammadelta_db.tsv')

tr.pw_alpha
tr.pw_beta
tr.pw_cdr3_a_aa
tr.pw_cdr3_b_aa
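These matrices are plain numpy arrays, so standard downstream steps such as hierarchical clustering apply directly. A generic scipy sketch on a toy matrix, not tied to tcrdist3:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy pairwise distance matrix standing in for tr.pw_beta
pw = np.array([[ 0., 10., 80.],
               [10.,  0., 85.],
               [80., 85.,  0.]])

# linkage() expects the condensed (upper-triangle) form
Z = linkage(squareform(pw), method="average")
clusters = fcluster(Z, t=50, criterion="distance")
print(clusters)  # the two similar TCRs fall into one cluster
```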

Adjusting one default parameter
"""  
If you want 'tcrdistances' with changes to some parameters.
For instance, you want to change the gap penalty on CDR3s to 5. 
"""
import pwseqdist as pw
import pandas as pd
from tcrdist.repertoire import TCRrep

df = pd.read_csv("dash.csv")
tr = TCRrep(cell_df = df, 
            organism = 'mouse', 
            chains = ['alpha','beta'], 
            compute_distances = False,
            db_file = 'alphabeta_gammadelta_db.tsv')

tr.kargs_a['cdr3_a_aa']['gap_penalty'] = 5 
tr.kargs_b['cdr3_b_aa']['gap_penalty'] = 5 

tr.compute_distances()

tr.pw_alpha
tr.pw_beta
Full manual control over the distance computation (this demands a somewhat higher level of coding)
"""
If you want 'tcrdistances' AND you want control over EVERY parameter.
"""
import pwseqdist as pw
import pandas as pd
from tcrdist.repertoire import TCRrep

df = pd.read_csv("dash.csv")
tr = TCRrep(cell_df = df, 
            organism = 'mouse', 
            chains = ['alpha','beta'], 
            compute_distances = False,
            db_file = 'alphabeta_gammadelta_db.tsv')

metrics_a = {
    "cdr3_a_aa" : pw.metrics.nb_vector_tcrdist,
    "pmhc_a_aa" : pw.metrics.nb_vector_tcrdist,
    "cdr2_a_aa" : pw.metrics.nb_vector_tcrdist,
    "cdr1_a_aa" : pw.metrics.nb_vector_tcrdist}

metrics_b = {
    "cdr3_b_aa" : pw.metrics.nb_vector_tcrdist,
    "pmhc_b_aa" : pw.metrics.nb_vector_tcrdist,
    "cdr2_b_aa" : pw.metrics.nb_vector_tcrdist,
    "cdr1_b_aa" : pw.metrics.nb_vector_tcrdist }

weights_a= { 
    "cdr3_a_aa" : 3,
    "pmhc_a_aa" : 1,
    "cdr2_a_aa" : 1,
    "cdr1_a_aa" : 1}

weights_b = { 
    "cdr3_b_aa" : 3,
    "pmhc_b_aa" : 1,
    "cdr2_b_aa" : 1,
    "cdr1_b_aa" : 1}

kargs_a = {  
    'cdr3_a_aa' : 
        {'use_numba': True, 
        'distance_matrix': pw.matrices.tcr_nb_distance_matrix, 
        'dist_weight': 1, 
        'gap_penalty':4, 
        'ntrim':3, 
        'ctrim':2, 
        'fixed_gappos': False},
    'pmhc_a_aa' : {
        'use_numba': True,
        'distance_matrix': pw.matrices.tcr_nb_distance_matrix,
        'dist_weight':1,
        'gap_penalty':4,
        'ntrim':0,
        'ctrim':0,
        'fixed_gappos':True},
    'cdr2_a_aa' : {
        'use_numba': True,
        'distance_matrix': pw.matrices.tcr_nb_distance_matrix,
        'dist_weight': 1,
        'gap_penalty':4,
        'ntrim':0,
        'ctrim':0,
        'fixed_gappos':True},
    'cdr1_a_aa' : {
        'use_numba': True,
        'distance_matrix': pw.matrices.tcr_nb_distance_matrix,
        'dist_weight':1,
        'gap_penalty':4,
        'ntrim':0,
        'ctrim':0,
        'fixed_gappos':True}
    }
kargs_b= {  
    'cdr3_b_aa' : 
        {'use_numba': True, 
        'distance_matrix': pw.matrices.tcr_nb_distance_matrix, 
        'dist_weight': 1, 
        'gap_penalty':4, 
        'ntrim':3, 
        'ctrim':2, 
        'fixed_gappos': False},
    'pmhc_b_aa' : {
        'use_numba': True,
        'distance_matrix': pw.matrices.tcr_nb_distance_matrix,
        'dist_weight': 1,
        'gap_penalty':4,
        'ntrim':0,
        'ctrim':0,
        'fixed_gappos':True},
    'cdr2_b_aa' : {
        'use_numba': True,
        'distance_matrix': pw.matrices.tcr_nb_distance_matrix,
        'dist_weight':1,
        'gap_penalty':4,
        'ntrim':0,
        'ctrim':0,
        'fixed_gappos':True},
    'cdr1_b_aa' : {
        'use_numba': True,
        'distance_matrix': pw.matrices.tcr_nb_distance_matrix,
        'dist_weight':1,
        'gap_penalty':4,
        'ntrim':0,
        'ctrim':0,
        'fixed_gappos':True}
    }   

tr.metrics_a = metrics_a
tr.metrics_b = metrics_b

tr.weights_a = weights_a
tr.weights_b = weights_b

tr.kargs_a = kargs_a 
tr.kargs_b = kargs_b

# Since compute_distances = False was passed above, trigger the computation now
tr.compute_distances()

tr.pw_alpha
tr.pw_beta
Counting only mismatched positions
"""
If you want "tcrdistances" using a different metric.

Here we illustrate the use of a metric that uses the 
Needleman-Wunsch algorithm to align sequences and then 
calculates the number of mismatching positions (pw.metrics.nw_hamming_metric).

This method doesn't rely on Numba so it can run faster using multiple cpus.
"""
import pwseqdist as pw
import pandas as pd
from tcrdist.repertoire import TCRrep
import multiprocessing

df = pd.read_csv("dash.csv")
df = df.head(100) # for faster testing
tr = TCRrep(cell_df = df, 
            organism = 'mouse', 
            chains = ['alpha','beta'], 
            use_defaults=False,
            compute_distances = False,
            cpus = 1,
            db_file = 'alphabeta_gammadelta_db.tsv')

metrics_a = {
    "cdr3_a_aa" : pw.metrics.nw_hamming_metric ,
    "pmhc_a_aa" : pw.metrics.nw_hamming_metric ,
    "cdr2_a_aa" : pw.metrics.nw_hamming_metric ,
    "cdr1_a_aa" : pw.metrics.nw_hamming_metric }

metrics_b = {
    "cdr3_b_aa" : pw.metrics.nw_hamming_metric ,
    "pmhc_b_aa" : pw.metrics.nw_hamming_metric ,
    "cdr2_b_aa" : pw.metrics.nw_hamming_metric ,
    "cdr1_b_aa" : pw.metrics.nw_hamming_metric  }

weights_a = { 
    "cdr3_a_aa" : 1,
    "pmhc_a_aa" : 1,
    "cdr2_a_aa" : 1,
    "cdr1_a_aa" : 1}

weights_b = { 
    "cdr3_b_aa" : 1,
    "pmhc_b_aa" : 1,
    "cdr2_b_aa" : 1,
    "cdr1_b_aa" : 1}

kargs_a = {  
    'cdr3_a_aa' : 
        {'use_numba': False},
    'pmhc_a_aa' : {
        'use_numba': False},
    'cdr2_a_aa' : {
        'use_numba': False},
    'cdr1_a_aa' : {
        'use_numba': False}
    }
kargs_b = {  
    'cdr3_b_aa' : 
        {'use_numba': False},
    'pmhc_b_aa' : {
        'use_numba': False},
    'cdr2_b_aa' : {
        'use_numba': False},
    'cdr1_b_aa' : {
        'use_numba': False}
    }

tr.metrics_a = metrics_a
tr.metrics_b = metrics_b

tr.weights_a = weights_a
tr.weights_b = weights_b

tr.kargs_a = kargs_a 
tr.kargs_b = kargs_b

tr.compute_distances()

tr.pw_cdr3_b_aa
tr.pw_beta
A custom distance metric
"""
If you want a tcrdistance, but you want to use your own metric. 
(A valid metric takes two strings and returns a numerical distance).  

def my_own_metric(s1,s2):   
    return Levenshtein.distance(s1,s2)
"""
import pwseqdist as pw
import pandas as pd
from tcrdist.repertoire import TCRrep
import multiprocessing

df = pd.read_csv("dash.csv")
df = df.head(100) # for faster testing

# Define the custom metric from the docstring (requires the Levenshtein package)
import Levenshtein
def my_own_metric(s1, s2):
    return Levenshtein.distance(s1, s2)
tr = TCRrep(cell_df = df, 
            organism = 'mouse', 
            chains = ['alpha','beta'], 
            use_defaults=False,
            compute_distances = False,
            cpus = 1,
            db_file = 'alphabeta_gammadelta_db.tsv')

metrics_a = {
    "cdr3_a_aa" : my_own_metric ,
    "pmhc_a_aa" : my_own_metric ,
    "cdr2_a_aa" : my_own_metric ,
    "cdr1_a_aa" : my_own_metric }

metrics_b = {
    "cdr3_b_aa" : my_own_metric ,
    "pmhc_b_aa" : my_own_metric ,
    "cdr2_b_aa" : my_own_metric,
    "cdr1_b_aa" : my_own_metric }

weights_a = { 
    "cdr3_a_aa" : 1,
    "pmhc_a_aa" : 1,
    "cdr2_a_aa" : 1,
    "cdr1_a_aa" : 1}

weights_b = { 
    "cdr3_b_aa" : 1,
    "pmhc_b_aa" : 1,
    "cdr2_b_aa" : 1,
    "cdr1_b_aa" : 1}

kargs_a = {  
    'cdr3_a_aa' : 
        {'use_numba': False},
    'pmhc_a_aa' : {
        'use_numba': False},
    'cdr2_a_aa' : {
        'use_numba': False},
    'cdr1_a_aa' : {
        'use_numba': False}
    }
kargs_b = {  
    'cdr3_b_aa' : 
        {'use_numba': False},
    'pmhc_b_aa' : {
        'use_numba': False},
    'cdr2_b_aa' : {
        'use_numba': False},
    'cdr1_b_aa' : {
        'use_numba': False}
    }

tr.metrics_a = metrics_a
tr.metrics_b = metrics_b

tr.weights_a = weights_a
tr.weights_b = weights_b

tr.kargs_a = kargs_a 
tr.kargs_b = kargs_b

tr.compute_distances()

tr.pw_cdr3_b_aa
tr.pw_beta
I want tcrdistances, but I hate OOP
"""
If you don't want to use OOP, but you still want multi-CDR 
tcrdistances on a single chain, using your own metric 

def my_own_metric(s1,s2):   
    return Levenshtein.distance(s1,s2)    
"""
import multiprocessing
import pandas as pd
from tcrdist.rep_funcs import _pws, _pw

df = pd.read_csv("dash2.csv")

# Define the custom metric from the docstring (requires the Levenshtein package)
import Levenshtein
def my_own_metric(s1, s2):
    return Levenshtein.distance(s1, s2)

metrics_b = {
    "cdr3_b_aa" : my_own_metric ,
    "pmhc_b_aa" : my_own_metric ,
    "cdr2_b_aa" : my_own_metric ,
    "cdr1_b_aa" : my_own_metric }

weights_b = { 
    "cdr3_b_aa" : 1,
    "pmhc_b_aa" : 1,
    "cdr2_b_aa" : 1,
    "cdr1_b_aa" : 1}

kargs_b = {  
    'cdr3_b_aa' : 
        {'use_numba': False},
    'pmhc_b_aa' : {
        'use_numba': False},
    'cdr2_b_aa' : {
        'use_numba': False},
    'cdr1_b_aa' : {
        'use_numba': False}
    }

dmats =  _pws(df = df , 
            metrics = metrics_b, 
            weights = weights_b, 
            kargs   = kargs_b , 
            cpu     = 1, 
            uniquify= True, 
            store   = True)

print(dmats.keys())
Considering only the CDR3
"""
If you hate object oriented programming, just show me the functions. 
No problem. 

Maybe you only care about the CDR3 on the beta chain.

def my_own_metric(s1,s2):   
    return Levenshtein.distance(s1,s2)
"""  
import multiprocessing
import pandas as pd
from tcrdist.rep_funcs import _pws, _pw

df = pd.read_csv("dash2.csv")

# Define the custom metric from the docstring (requires the Levenshtein package)
import Levenshtein
def my_own_metric(s1, s2):
    return Levenshtein.distance(s1, s2)

dmat = _pw( metric = my_own_metric,
            seqs1 = df['cdr3_b_aa'].values,
            ncpus=2,
            uniqify=True,
            use_numba=False)
I want tcrdistances but I want to keep my variable names
"""
You want a 'tcrdistance' but you don't want to bother with the tcrdist3 framework. 

Note that the column names are completely arbitrary under this 
framework, so one can directly compute a tcrdist on an
AIRR, MIXCR, VDJTools, or otherwise formatted file without any
reformatting.
""" 
import multiprocessing
import pandas as pd
import pwseqdist as pw
from tcrdist.rep_funcs import _pws, _pw  

df_airr = pd.read_csv("dash_beta_airr.csv")

# Choose the metrics you want to apply to each CDR
metrics = { 'cdr3_aa' : pw.metrics.nb_vector_tcrdist,
            'cdr2_aa' : pw.metrics.nb_vector_tcrdist,
            'cdr1_aa' : pw.metrics.nb_vector_tcrdist}

# Choose the weights that are right for you.
weights = { 'cdr3_aa' : 3,
            'cdr2_aa' : 1,
            'cdr1_aa' : 1}

# Provide arguments for the distance metrics 
kargs = {   'cdr3_aa' : {'use_numba': True, 'distance_matrix': pw.matrices.tcr_nb_distance_matrix, 'dist_weight': 1, 'gap_penalty':4, 'ntrim':3, 'ctrim':2, 'fixed_gappos':False},
            'cdr2_aa' : {'use_numba': True, 'distance_matrix': pw.matrices.tcr_nb_distance_matrix, 'dist_weight': 1, 'gap_penalty':4, 'ntrim':0, 'ctrim':0, 'fixed_gappos':True},
            'cdr1_aa' : {'use_numba': True, 'distance_matrix': pw.matrices.tcr_nb_distance_matrix, 'dist_weight': 1, 'gap_penalty':4, 'ntrim':0, 'ctrim':0, 'fixed_gappos':True}}
            
# Here are your distance matrices

dmats = _pws(df = df_airr,
         metrics = metrics, 
         weights= weights, 
         kargs=kargs, 
         cpu = 1, 
         store = True)

dmats['tcrdist']
I want to use TCRrep but I want to keep my variable names
"""
If you already have a clones file and want 
to compute 'tcrdistances' on a DataFrame with 
custom columns names.

Set:
1. Assign TCRrep.clone_df
2. infer_cdrs = False
3. compute_distances = False
4. deduplicate = False
5. customize the keys for metrics, weights, and kargs with the lambda
    customize = lambda d : {new_cols[k]:v for k,v in d.items()} 
6. call .compute_distances()
"""
import pwseqdist as pw
import pandas as pd
from tcrdist.repertoire import TCRrep

new_cols = {'cdr3_a_aa':'c3a', 'pmhc_a_aa':'pa', 'cdr2_a_aa':'c2a','cdr1_a_aa':'c1a',
            'cdr3_b_aa':'c3b', 'pmhc_b_aa':'pb', 'cdr2_b_aa':'c2b','cdr1_b_aa':'c1b'}

df = pd.read_csv("dash2.csv").rename(columns = new_cols) 

tr = TCRrep(
        cell_df = df,
        clone_df = df,              #(1)
        organism = 'mouse', 
        chains = ['alpha','beta'],
        infer_all_genes = True, 
        infer_cdrs = False,         #(2)
        compute_distances = False,  #(3)
        deduplicate=False,          #(4)
        db_file = 'alphabeta_gammadelta_db.tsv')

customize = lambda d : {new_cols[k]:v for k,v in d.items()} #(5)
tr.metrics_a = customize(tr.metrics_a)
tr.metrics_b = customize(tr.metrics_b)
tr.weights_a = customize(tr.weights_a)
tr.weights_b = customize(tr.weights_b)
tr.kargs_a = customize(tr.kargs_a)
tr.kargs_b = customize(tr.kargs_b)

tr.compute_distances() #(6)

# Notice that pairwise results now have custom names 
tr.pw_c3b
tr.pw_c3a
tr.pw_alpha
tr.pw_beta

I want distances from one TCR to many TCRs

"""
If you just want a 'tcrdistances' of some target seqs against another set.

(1) cell_df is assigned the first 10 cells in dash.csv
(2) compute tcrdistances with default settings.
(3) compute rectangular distance between clone_df and df2.
(4) compute rectangular distance between clone_df and any 
arbitrary df3, which need not be associated with the TCRrep object.
(5) compute rectangular distance with only a subset of the TCRrep.clone_df
"""
import pandas as pd
from tcrdist.repertoire import TCRrep

df = pd.read_csv("dash.csv")
df2 = pd.read_csv("dash2.csv")
df = df.head(10)                        #(1)
tr = TCRrep(cell_df = df,               #(2)
            df2 = df2, 
            organism = 'mouse', 
            chains = ['alpha','beta'], 
            db_file = 'alphabeta_gammadelta_db.tsv')

assert tr.pw_alpha.shape == (10,10) 
assert tr.pw_beta.shape  == (10,10)

tr.compute_rect_distances()             # (3) 
assert tr.rw_alpha.shape == (10,1924) 
assert tr.rw_beta.shape  == (10,1924)

df3 = df2.head(100)

tr.compute_rect_distances(df = tr.clone_df, df2 = df3)  # (4) 
assert tr.rw_alpha.shape == (10,100) 
assert tr.rw_beta.shape  == (10,100)

tr.compute_rect_distances(  df = tr.clone_df.iloc[0:2,], # (5)
                            df2 = df3)  
assert tr.rw_alpha.shape == (2,100) 
assert tr.rw_beta.shape  == (2,100)

The degree of customization is really high, and admittedly not easy.

Life is good, and better with you. In the next post we will continue with the TCRdist3 analysis code.
