1. Problem Statement
This report analyzes the DC employee dataset. Based on an exploration of the factors that influence attrition, we build a model to predict which employees are more likely to leave.
2. Dataset Description
The DC employee dataset contains 31 variables and 1100 observations. The variables of particular interest fall into the following categories:
- Basic demographic variables: gender, age, education level, marital status, field of study;
- Company-role variables: total working years, tenure (years at the company), job role, job level, number of companies worked for, department, business travel, years with the current manager;
- Compensation and benefits variables: monthly income, job involvement, overtime, performance rating, stock option level, percent salary hike, training sessions last year, years since last promotion;
- Quality-of-life variables: environment satisfaction, job satisfaction, relationship satisfaction, work-life balance, distance from home.
3. Feature Analysis
3.1 Summary Statistics
Basic employee statistics:
- The average age is about 37, with a maximum of 60 and a minimum of 18;
- Of the 1100 employees, 178 have left, an attrition rate of 16.2%;
- Mean monthly income is 6483.6, the median is 4857.0, the minimum is 1009, and the maximum is 19999;
- There are 653 men and 447 women; men account for 61.2% of the leavers and women for 38.8%.
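The headline numbers above come from simple pandas aggregations. A minimal sketch, using a hypothetical toy frame in place of the real DC data (the column names `Age`, `Attrition`, `MonthlyIncome`, `Gender` and all values below are illustrative assumptions):

```python
import pandas as pd

# Toy stand-in for the DC dataset; the real file and these values are assumptions.
df = pd.DataFrame({
    'Age':           [18, 25, 37, 60, 41, 29],
    'Attrition':     [1, 0, 0, 0, 1, 0],
    'MonthlyIncome': [1009, 3200, 4857, 19999, 5100, 4700],
    'Gender':        ['Male', 'Female', 'Male', 'Male', 'Female', 'Male'],
})

attrition_rate = df['Attrition'].mean()  # share of employees who left
income_summary = df['MonthlyIncome'].agg(['mean', 'median', 'min', 'max'])
# Gender composition of the leavers (not per-gender attrition rates)
leaver_gender_share = df.loc[df['Attrition'] == 1, 'Gender'].value_counts(normalize=True)
```

Note that `leaver_gender_share` answers "what fraction of leavers are men", which is the statistic quoted above; per-gender attrition rates would instead be `df.groupby('Gender')['Attrition'].mean()`.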
3.2 Distribution Analysis
Examining the distributions of selected variables reveals the following:
- Employees aged 18-23 have an attrition rate above 40%, consistent with the market's view of the post-95 generation's job tenure; from age 25 onward employees become more stable, with attrition staying between 20% and 40%.
- R&D has the largest number of leavers, mainly because it is the company's largest department; despite its headcount, R&D actually has the lowest attrition rate. The highest attrition rate is in HR, which is also the smallest department (a small base inflates the rate).
- Attrition rate is negatively correlated with percent salary hike, yet at the largest hike the attrition rate exceeds 40%. This is surprising; if the analysis is correct, it suggests these employees have a low salary base, so even a large percentage raise leaves them prone to leaving.
  (Figure: PercentSalaryHike&MonthlyIncome.png)
- Employees with a job involvement level of 1 have an attrition rate of nearly 40% (38%): little investment in the job, little attachment to it.
  (Figure: JobInvolvement.png)
- The attrition rate of employees who work overtime is three times that of those who do not, which explains why flexible hours and no overtime are advertised as perks.
- The longer an employee has been with the company, the more stable they are: attrition peaks at 35.8% for tenure of 0-2 years, rises slightly at 20-25 years (possibly due to starting a business or stalled promotion), and climbs to 33% at 31-32 years, by which point, barring re-employment after retirement, the employee would normally have retired.
- Environment and job satisfaction matter: low job satisfaction and low environment satisfaction both increase attrition.
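The bucketed rates in this section all follow the same group-by pattern. A sketch on toy data (the columns mirror the dataset's `OverTime`, `Age`, and `Attrition`; the values are made up for illustration):

```python
import pandas as pd

# Toy data; values are illustrative only.
df = pd.DataFrame({
    'Age':       [19, 22, 30, 45, 52, 23],
    'OverTime':  ['Yes', 'Yes', 'No', 'No', 'No', 'Yes'],
    'Attrition': [1, 1, 0, 0, 0, 0],
})

# Attrition rate by overtime status
overtime_rate = df.groupby('OverTime')['Attrition'].mean()

# Attrition rate by age band
age_band = pd.cut(df['Age'], bins=[17, 23, 35, 60])
age_rate = df.groupby(age_band, observed=False)['Attrition'].mean()
```

Because `Attrition` is 0/1, the group mean is exactly the per-group attrition rate.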
4. Feature Selection
4.1 Data Preprocessing
- Split the features into two classes: numeric and non-numeric
```python
def obtain_x(train_df, xtype):
    """Return the columns of train_df whose dtype differs from xtype."""
    dtype_df = train_df.dtypes.reset_index()
    dtype_df.columns = ['col', 'type']
    return dtype_df[dtype_df.type != xtype].col.values

float64_col = obtain_x(train_local, 'object')  # numeric (non-object) columns
```
- Load the data and apply min-max normalization
```python
from sklearn import preprocessing

min_max_scaler = preprocessing.MinMaxScaler()
X = min_max_scaler.fit_transform(X)  # scale every column into [0, 1]
```
- Encode the categorical columns
```python
le = preprocessing.LabelEncoder()
ohe = preprocessing.OneHotEncoder()
train = pd.DataFrame()
local_all = pd.concat([train_local, test_local], axis=0)
for col in object_col:
    le.fit(local_all[col])
    local_all[col] = le.transform(local_all[col])
    # .values is needed: pandas Series no longer supports .reshape directly
    ohe.fit(local_all[col].values.reshape(-1, 1))
    ohecol = ohe.transform(local_all[col].values.reshape(-1, 1)).toarray()
    # Drop the first dummy column to avoid perfect collinearity
    ohecol = pd.DataFrame(ohecol[:, 1:], index=None)  # columns=col+le.classes_
    ohecol.columns = ohecol.columns.map(lambda x: str(x) + col)
    train = pd.concat([train, ohecol], axis=1, ignore_index=False)
```
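As a side note, the label-encode-then-one-hot loop above can usually be collapsed into a single `pandas.get_dummies` call, with `drop_first=True` playing the role of dropping the first dummy column. A sketch on a toy frame (the column name `Department` and its values are illustrative assumptions):

```python
import pandas as pd

# Toy categorical frame; the real code applies this to every column in object_col.
local_all = pd.DataFrame({'Department': ['Sales', 'R&D', 'HR', 'Sales']})

# One-hot encode; drop_first=True removes the first level ('HR' here)
encoded = pd.get_dummies(local_all, columns=['Department'], drop_first=True)
```

This also produces readable column names (`Department_R&D`, `Department_Sales`) instead of the numeric prefixes the loop generates.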
4.2 Correlation Coefficients
- Measure each variable's association with attrition
```python
import matplotlib.pyplot as plt

print(train.corr()['Attrition'])

x = train.corr()['Attrition'].index
y = train.corr()['Attrition'].values
plt.figure(figsize=(12, 5))
plt.plot(x, y, 'b.-', label="Pearson correlation")
plt.legend()
plt.ylabel("Pearson correlation")
plt.xlabel("variable")
plt.xticks(rotation=45)
plt.grid()
# Annotate each point with its coefficient
for i, j in zip(x, y):
    plt.text(i, j + 0.005, '%.2f' % j, ha='center', va='bottom')
```
4.3 Feature Selection via Summary Statistics
- Remove low-variance features and summarize each column
```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Drop zero-variance features
sel = VarianceThreshold()
train_all_sel = sel.fit_transform(train_all)
train_local4 = train_all.iloc[0:len(train_local)]
test_local4 = train_all.iloc[len(train_local):]

# Per-column summary statistics
def col_Sta(train_df):
    col_sta = pd.DataFrame(columns=['_columns', '_min', '_max', '_median',
                                    '_mean', '_ptp', '_std', '_var'])
    for col in train_df.columns:
        col_sta = col_sta.append({'_columns': col,
                                  '_min': np.min(train_df[col]),
                                  '_max': np.max(train_df[col]),
                                  '_median': np.median(train_df[col]),
                                  '_mean': np.mean(train_df[col]),
                                  '_ptp': np.ptp(train_df[col]),
                                  '_std': np.std(train_df[col]),
                                  '_var': np.var(train_df[col])},
                                 ignore_index=True)
    return col_sta

train_col_sta = col_Sta(train_local4)
test_col_sta = col_Sta(test_local4)
train_col_sta
```
- Chi-squared statistic

```python
from sklearn.feature_selection import chi2  # chi-squared test statistic

chi2_col, pval_col = chi2(train_local4[feature_end].values, train_y)
```
- F statistic

```python
from sklearn.feature_selection import f_classif  # ANOVA F-value

f_col, pval_col = f_classif(train_local4[feature_end].values, train_y)
```
- Mutual information

```python
from sklearn.feature_selection import mutual_info_classif  # mutual information

mic_col = mutual_info_classif(train_local4[feature_end].values, train_y)
```
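The three statistics can be lined up in one table to compare feature rankings. A sketch with synthetic data standing in for `train_local4[feature_end]` and `train_y` (note that `chi2` requires non-negative inputs, hence the min-max scaling, which the pipeline above also applies):

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the preprocessed feature matrix and labels
X, y = make_classification(n_samples=200, n_features=6, random_state=0)
X = MinMaxScaler().fit_transform(X)  # chi2 needs non-negative values

chi2_col, _ = chi2(X, y)
f_col, _ = f_classif(X, y)
mic_col = mutual_info_classif(X, y, random_state=0)

# One row per feature, sorted by mutual information
rank = (pd.DataFrame({'chi2': chi2_col, 'f': f_col, 'mi': mic_col})
          .sort_values('mi', ascending=False))
```

Features that score high under all three criteria are the safest candidates to keep.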
5. Modeling
5.1 Logistic Regression
```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

y_col = 'Attrition'

# Baseline: only four satisfaction/promotion features
train_x_0 = train_local[['EnvironmentSatisfaction', 'JobInvolvement',
                         'JobSatisfaction', 'YearsSinceLastPromotion']]  # RelationshipSatisfaction
train_y_0 = train_local[y_col]
clf = LogisticRegression(C=10)
clf.fit(train_x_0, train_y_0)
scores = cross_val_score(clf, train_x_0, train_y_0)
print(scores.mean())  # 0.8370006974003799

# All numeric features except the target
x_col = [x for x in float64_col if x not in ['Attrition']]
len(x_col)
train_x = train_local[x_col]
train_y = train_local[y_col]
clf = LogisticRegression(C=10)
clf.fit(train_x, train_y)
scores = cross_val_score(clf, train_x, train_y)
print(scores.mean())  # 0.8469807373205397
```
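One caveat with the setup above: the min-max scaling in 4.1 was fitted on all the data before cross-validation. Wrapping the scaler and classifier in a `Pipeline` re-fits the scaler inside each fold, so the CV score is not inflated by leakage. A sketch on synthetic data in place of `train_x`/`train_y`:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in data
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# The scaler is re-fitted on the training folds only within each CV split
pipe = make_pipeline(MinMaxScaler(), LogisticRegression(C=10, max_iter=1000))
scores = cross_val_score(pipe, X, y, cv=5)
```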
5.2 Random Forest
```python
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100, max_depth=5,
                             min_samples_split=2, random_state=0)
clf = clf.fit(train_local4[feature_end], train_y)
scores = cross_val_score(clf, train_local4[feature_end], train_y)
print(scores.mean())  # 0.8470288338984681
```
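Section 5.3 relies on a `feature_sel` table of per-feature importances that is not shown in the text; one plausible way to build it from the fitted forest is sketched below (synthetic data and placeholder feature names, since the real `feature_end` columns are not reproduced here):

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for train_local4[feature_end] / train_y
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
features = [f'f{i}' for i in range(X.shape[1])]  # placeholder names

clf = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
clf.fit(X, y)

# Impurity-based importances sum to 1 across all features
feature_sel = (pd.DataFrame({'feature': features,
                             'importance': clf.feature_importances_})
                 .sort_values('importance', ascending=False))
```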
5.3 Model on the Combined Feature Subset
```python
# Keep features whose importance is at least 0.005
feature_a = feature_sel[feature_sel['importance'] >= 0.005].feature.values
clf = LogisticRegression(C=10)
clf.fit(train_x3[feature_a], train_y)
scores = cross_val_score(clf, train_x3[feature_a], train_y)
print(scores.mean())  # 0.8729528894019191
```
5.4 Confusion-Matrix Evaluation
- TP (actual 1): 7
- FP (actual 0): 3
- TN (actual 0): 83
- FN (actual 1): 7
- Precision: 0.7
- Recall: 0.5
- Accuracy: 0.9
Accuracy is high (0.9) and precision is acceptable (0.7), but a recall of 0.5 means half of the employees who actually left were missed. Given the class imbalance (only 16.2% leavers), accuracy alone overstates model quality, and recall is where the model most needs improvement.
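The reported precision, recall, and accuracy follow directly from the four counts above:

```python
# Counts taken from the confusion matrix reported above
TP, FP, TN, FN = 7, 3, 83, 7

precision = TP / (TP + FP)                    # 7 / 10   = 0.7
recall    = TP / (TP + FN)                    # 7 / 14   = 0.5
accuracy  = (TP + TN) / (TP + FP + TN + FN)   # 90 / 100 = 0.9
```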