- Data-processing notes:
- Data types:
- patient_id (patient id) and case_no (admission-record id): these id columns may be read in as int or float, which makes merge keys fail to match; set dtype={'patient_id': str} when reading.
- age, weight, etc. are read as str and must be cast to float before computing BMI; round with value=round(value, 2) to keep 2 decimal places.
- Check whether start_datetime / end_datetime are str or datetime, because str values cannot take part in date arithmetic such as datetime.timedelta(days=7).
- Time format: normalize to 2018-01-01 18:46:23 rather than 13/09/2018 18:46:23, because sort_values() sorts date strings lexicographically and would put 12/04 before 22/02!
- Be explicit about record relationships: pick the inclusion/exclusion key (patient_id or case_no), and when merging be clear how each id maps to medication and admission records.
- One-to-one: one patient corresponds to one patient_id.
- One-to-many: one patient may have several admission records (case_no); one admission record may have several medication records.
- Grouping by discharge daily dose: a patient may be admitted and discharged several times with different doses, but to study readmission we must group by the dose at the FIRST discharge; otherwise a readmission could be assigned to a different dose group, and readmission differences between dose groups could not be measured.
- Before operating on a DataFrame:
- drop nulls
- drop duplicates
- drop outliers: text values, extreme values (absolute value more than 100x the median)
- sort
- Before saving a DataFrame:
- sort
- reset the index: df=df.reset_index(drop=True)
- print summary stats: print(df.shape); print(df['patient_id'].nunique()); print(df['case_no'].nunique())
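The id-dtype pitfall above can be reproduced in a few lines; the CSV content below is made up purely for illustration:

```python
import io

import pandas as pd

# Hypothetical two-row CSV illustrating the id-dtype pitfall described above.
csv_text = "patient_id,case_no,value\n001,10001,1.5\n002,10002,2.0\n"

# Without dtype, patient_id is parsed as int and the leading zeros are lost,
# so a later merge against a str-typed id column finds no matches.
df_raw = pd.read_csv(io.StringIO(csv_text))
# With dtype=str the ids survive intact.
df_ok = pd.read_csv(io.StringIO(csv_text), dtype={'patient_id': str, 'case_no': str})

print(df_raw['patient_id'].tolist())  # [1, 2]
print(df_ok['patient_id'].tolist())   # ['001', '002']
```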
Contents
Medical DM data-processing pipeline:
1. Preprocess the raw data (raw_data)
Import packages and helper functions
1.1 Preprocess raw medication orders (doctor_order)
1.2 Preprocess raw diagnoses (diagnostic)
1.3 Preprocess raw lab data (test_record + test_result)
2. Inclusion/exclusion, keyed on patient_id and case_no
Include: patients taking rivaroxaban (利伐沙班)
Include: patients with a discharge diagnosis of atrial fibrillation
Merge rivaroxaban medication with discharge AF diagnoses
Exclude: valve-replacement surgery
Exclude: valvular AF in the diagnosis
3. Compute the rivaroxaban daily dose
TDM test records
Merge medication and TDM tests
Tacrolimus (他克莫司) medication within 7 days before a TDM test
15-day interval rule between consecutive TDM tests of one patient
4. Merge demographic data
4.1 Merge demographic features
4.2 Fill missing gender, age, and height
4.3 Random-forest imputation
5. Add medical history (diabetes, hypertension)
6. Add co-medication
6.1 Extract co-medication
6.2 Fill co-medication times
6.3 Extract co-medication within 7 days before a TDM test
6.4 Drop columns with >50% missing values
7. Add other lab tests
7.1 Creatinine (renal function)
7.2 Liver function
7.3 Blood cell analysis
7.4 Coagulation
7.5 Fecal occult blood
7.6 Drop missing values
Admission/discharge times and admission diagnosis
High/low dose groups
High/low dose grouping
High/low dose group statistics
Medical DM data-processing pipeline:
1. Preprocess the raw data (raw_data)
Both the target drug and the co-medications are extracted from doctor_order, and both the TDM tests and the other lab tests come from df_test (df_test_record + df_test_result). Lightly preprocessing these raw tables once up front makes everything later much easier: we avoid cleaning the same table once when extracting the target drug and again when extracting co-medications.
Import packages and helper functions
# _*_ coding: utf-8 _*_
# @Time: 2021/10/27 17:51
# @Author: yuyongsheng
# @Software: PyCharm
# @Description:
# Import packages
import pandas as pd
pd.set_option('mode.chained_assignment', None)
import numpy as np
import os
project_path=os.getcwd()
# Helper functions
# Parse a date string into a datetime; returns NaN when parsing fails
import datetime
def str_to_datetime(x):
    try:
        a = datetime.datetime.strptime(x, "%d/%m/%Y %H:%M:%S")
        return a
    except (TypeError, ValueError):
        return np.NaN
# Filter outliers
def filter_exce_value(df,feature):
    # drop pure-text entries: keep only rows that contain a digit
    df=df[df[feature].astype('str').str.contains(r'\d')]
    # drop extreme values: absolute value at least 100x the absolute median
    median_value=df[feature].astype('float').median()
    df[feature]=df[feature].apply(lambda x: x if abs(float(x)) < (100 * abs(median_value)) else np.nan)
    df=df[df[feature].notnull()]
    return df
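The outlier rule described above (keep entries containing a digit, then drop values at least 100x the absolute median) can be tried on a toy column; the values below are made up:

```python
import pandas as pd

# Toy stand-in for a record_content column: numbers mixed with text entries.
df = pd.DataFrame({'record_content': ['170', '65.5', '卧床', '99999']})

# Keep rows that contain a digit (drops the pure-text entry '卧床').
df = df[df['record_content'].str.contains(r'\d')]
# Drop values whose absolute value is at least 100x the absolute median.
vals = df['record_content'].astype(float)
median_value = vals.median()
df = df[vals.abs() < 100 * abs(median_value)]
print(df['record_content'].tolist())  # ['170', '65.5']
```

Here the median of [170, 65.5, 99999] is 170, so the cutoff is 17000 and 99999 is discarded.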
1.1 Preprocess raw medication orders (doctor_order)
# Raw-data preprocessing: normalize time formats. Outlier removal should wait until a specific drug is selected,
# because dose units are not yet unified here; computing the daily dose and unifying units is also simpler per drug.
#%% md
## Preprocess raw medication orders (doctor_order)
df_doctor_order=pd.read_csv(project_path+'/data/raw_data/2-doctor_order.csv')
print(df_doctor_order.shape)
print(df_doctor_order['patient_id'].nunique())
print(df_doctor_order['case_no'].nunique())
# Keep orders whose status (statusdesc) is '停止' (stopped)
df_doctor_order=df_doctor_order[df_doctor_order['statusdesc']=='停止']
print(df_doctor_order.shape)
print(df_doctor_order['patient_id'].nunique())
print(df_doctor_order['case_no'].nunique())
# Drop orders whose medication_way is '取药用' (dispensed for take-away)
df_doctor_order=df_doctor_order[df_doctor_order['medication_way']!='取药用']
print(df_doctor_order.shape)
print(df_doctor_order['patient_id'].nunique())
print(df_doctor_order['case_no'].nunique())
# Drop rows with an empty dosage
df_doctor_order=df_doctor_order[df_doctor_order['dosage'].notnull() & (df_doctor_order['dosage'].astype('str')!='nan')]
df_doctor_order=df_doctor_order.reset_index(drop=True)
print(df_doctor_order.shape)
print(df_doctor_order['patient_id'].nunique())
print(df_doctor_order['case_no'].nunique())
# Drop duplicate rows
df_doctor_order=df_doctor_order.drop_duplicates(subset=['patient_id','case_no','drug_name','dosage','frequency','start_datetime','end_datetime'],keep='first')
df_doctor_order=df_doctor_order.reset_index(drop=True)
print(df_doctor_order.shape)
print(df_doctor_order['patient_id'].nunique())
print(df_doctor_order['case_no'].nunique())
#%%
# Keep the useful doctor_order columns
df_doctor_order=df_doctor_order[['patient_id','case_no','long_d_order','drug_name','amount','drug_spec','dosage','frequency','medication_way','start_datetime','end_datetime']]
# Normalize the medication start and end time formats
df_doctor_order['start_datetime']=df_doctor_order['start_datetime'].apply(str_to_datetime)
df_doctor_order['end_datetime']=df_doctor_order['end_datetime'].apply(str_to_datetime)
print(df_doctor_order.shape)
print(df_doctor_order['patient_id'].nunique())
print(df_doctor_order['case_no'].nunique())
#%%
# Save the preprocessed doctor_order data
writer=pd.ExcelWriter(project_path+'/data/pre_processed_raw_data/df_doctor_order.xlsx')
df_doctor_order.to_excel(writer)
writer.save()
1.2 Preprocess raw diagnoses (diagnostic)
## Preprocess raw diagnoses (diagnostic)
#%%
df_diagnostic=pd.read_csv(project_path+'/data/raw_data/3-diagnostic_record.csv',dtype={'case_no':str}) # dtype stops pandas from silently changing a column's type on read
print(df_diagnostic.shape)
print(df_diagnostic['patient_id'].nunique())
print(df_diagnostic['case_no'].nunique())
print(df_diagnostic)
#%%
# Drop rows with an empty diagnosis
df_diagnostic=df_diagnostic[(df_diagnostic['diagnostic_content'].notnull())& (df_diagnostic['diagnostic_content'].astype('str')!='nan')]
print(df_diagnostic.shape)
print(df_diagnostic['patient_id'].nunique())
print(df_diagnostic['case_no'].nunique())
# Drop rows with an empty case_no
df_diagnostic=df_diagnostic[(df_diagnostic['case_no'].notnull()) & (df_diagnostic['case_no'].astype('str')!='nan')]
df_diagnostic=df_diagnostic.reset_index(drop=True)
print(df_diagnostic.shape)
print(df_diagnostic['patient_id'].nunique())
print(df_diagnostic['case_no'].nunique())
print(df_diagnostic)
# Drop duplicate rows
df_diagnostic=df_diagnostic.drop_duplicates(subset=['patient_id','case_no','record_date','diagnostic_type','diagnostic_content'],keep='first')
df_diagnostic=df_diagnostic.reset_index(drop=True)
print(df_diagnostic.shape)
print(df_diagnostic['patient_id'].nunique())
print(df_diagnostic['case_no'].nunique())
#%%
# Normalize the diagnosis date format
df_diagnostic['record_date']=df_diagnostic['record_date'].astype('str').apply(str_to_datetime)
# Keep the useful diagnostic columns
df_diagnostic=df_diagnostic[['patient_id','case_no','record_date','diagnostic_type','diagnostic_content']]
print(df_diagnostic)
#%%
# Save the preprocessed diagnostic data
writer=pd.ExcelWriter(project_path+'/data/pre_processed_raw_data/df_diagnostic.xlsx')
df_diagnostic.to_excel(writer)
writer.save()
1.3 Preprocess raw lab data (test_record + test_result)
## Preprocess raw lab data (test_record + test_result)
#%%
# Build df_test by merging test_record and test_result. It is central to what follows: it contains both the TDM results and the safety indicators.
# Test records (test_record)
df_test_record=pd.read_csv(project_path+'/data/raw_data/4-test_record.csv',dtype={'case_no':str})
df_test_record=df_test_record[['test_record_id','patient_id','case_no','test_date','clinical_diagnosis']]
print(df_test_record.shape)
print(df_test_record['patient_id'].nunique())
print(df_test_record['case_no'].nunique())
# Drop rows with an empty test_date
df_test_record=df_test_record[df_test_record['test_date'].notnull()]
print(df_test_record.shape)
print(df_test_record['patient_id'].nunique())
print(df_test_record['case_no'].nunique())
# Drop rows with an empty case_no
df_test_record=df_test_record[df_test_record['case_no'].notnull()]
df_test_record=df_test_record.reset_index(drop=True)
print(df_test_record.shape)
print(df_test_record['patient_id'].nunique())
print(df_test_record['case_no'].nunique())
# Drop duplicate test_record rows
df_test_record=df_test_record.drop_duplicates(subset=['test_record_id','patient_id','case_no','test_date','clinical_diagnosis'],keep='first')
df_test_record=df_test_record.reset_index(drop=True)
print(df_test_record.shape)
print(df_test_record['patient_id'].nunique())
print(df_test_record['case_no'].nunique())
# Normalize the test-date format
df_test_record['test_date']=df_test_record['test_date'].astype('str').apply(str_to_datetime)
print(df_test_record)
#%%
# Save the preprocessed test_record
writer=pd.ExcelWriter(project_path+'/data/pre_processed_raw_data/df_test_record.xlsx')
df_test_record.to_excel(writer)
writer.save()
#%%
# Test results (test_result)
df_test_result=pd.read_csv(project_path+'/data/raw_data/4-test_result.csv')
df_test_result=df_test_result[['test_record_id','project_name','test_result','refer_scope','synonym']]
print(df_test_result.shape)
# Drop rows with an empty project_name
df_test_result=df_test_result[df_test_result['project_name'].notnull()]
print(df_test_result.shape)
# Drop rows with an empty test_result
df_test_result=df_test_result[df_test_result['test_result'].notnull()]
df_test_result=df_test_result.reset_index(drop=True)
print(df_test_result.shape)
# Strip the < and > signs from test_result
df_test_result['test_result']=df_test_result['test_result'].astype('str').apply(lambda x:x.replace('<',''))
df_test_result['test_result']=df_test_result['test_result'].astype('str').apply(lambda x:x.replace('>',''))
print(df_test_result)
# Drop duplicate test_result rows
df_test_result=df_test_result.drop_duplicates(subset=['test_record_id','project_name','test_result','refer_scope','synonym'],keep='first')
df_test_result=df_test_result.reset_index(drop=True)
print(df_test_result.shape)
#%%
# The preprocessed test_result is too large to save as xlsx
# writer=pd.ExcelWriter(project_path+'/data/pre_processed_raw_data/df_test_result.xlsx')
# df_test_result.to_excel(writer)
# writer.save()
#%%
# Merge test_record and test_result on the unique key test_record_id
df_test=pd.merge(df_test_record,df_test_result,on=['test_record_id'],how='inner')
print(df_test)
2. Inclusion/exclusion, keyed on patient_id and case_no
- Include condition 1
- Include condition 2
- Exclude condition 1
- Exclude condition 2
Include: patients taking rivaroxaban
# Inclusion/exclusion: non-valvular AF patients taking rivaroxaban (利伐沙班)
#%% md
## Include: patients taking rivaroxaban
#%%
# 1. Extract non-valvular AF patients taking rivaroxaban
print('-------------------------1. Extract non-valvular AF patients taking rivaroxaban------------------------------')
# 1.1 Patients taking rivaroxaban whose discharge record mentions AF
print('-------------------------Extract patients taking rivaroxaban------------------------------')
# Extract the ids of patients taking 利伐沙班 (rivaroxaban)
df_lfsb=df_doctor_order[df_doctor_order['drug_name'].str.contains('利伐沙班')]
df_lfsb=df_lfsb.reset_index(drop=True)
# Sort
df_lfsb=df_lfsb.sort_values(['patient_id','case_no','start_datetime'],ascending=[True,True,True])
df_lfsb=df_lfsb.reset_index(drop=True)
print(df_lfsb.shape)
print(df_lfsb['patient_id'].nunique())
print(df_lfsb['case_no'].nunique())
# print(df_lfsb)
#%%
# Save the rivaroxaban medication records
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_1.1_利伐沙班用药记录.xlsx')
df_lfsb.to_excel(writer)
writer.save()
#%%
df_lfsb
#%% md
Include: patients with a discharge diagnosis of atrial fibrillation
#%% md
## Include: patients with a discharge diagnosis of atrial fibrillation
#%%
# 1.2 Extract the case_no of patients with a discharge diagnosis of AF (per 郑-诊断.xlsx); merged for inclusion below
print('-------------------------Extract patients with a discharge AF diagnosis------------------------------')
df_oup_fib=df_diagnostic[(df_diagnostic['diagnostic_type']=='出院诊断') & (df_diagnostic['diagnostic_content'].str.contains(
'房颤射消融术后|心房扑动射频消融术后|心房颤动|阵发性心房颤动|持续性心房颤动|阵发性房颤|频发房性早搏|阵发性心房扑动|心房扑动|持续性房颤|房颤伴快速心室率\
|房颤射频消融术后|射频消融术后|快慢综合征|左心耳封堵术后|阵发性心房纤颤|心房颤动伴快速心室率|房颤|心房颤动射频消融术后|射频消融+左心耳封堵术后|左心耳封闭术后\
|心房颤动射频消融术后+左心耳封堵术|动态心电图异常:阵发性房颤、偶发房性早搏、偶发室性早搏、T波间歇性异常改变|左心房房颤射频消融+左心耳切除术后|永久性房颤\
|阵发性房颤射频消融术后|冷冻射频消融术后|心房颤动药物复律后'))]
df_oup_fib=df_oup_fib.sort_values(by=['patient_id','case_no','record_date'],ascending=[True,True,True])
df_oup_fib=df_oup_fib.reset_index(drop=True)
print(df_oup_fib.shape)
print(df_oup_fib['patient_id'].nunique())
print(df_oup_fib['case_no'].nunique())
print(df_oup_fib)
#%%
# Save the discharge-AF patient records
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_1.2_出院诊断房颤患者记录.xlsx')
df_oup_fib.to_excel(writer)
writer.save()
Merge rivaroxaban medication with discharge AF diagnoses
print(type(df_lfsb.loc[0,'case_no']))
print(type(df_oup_fib.loc[0,'case_no']))
#%% md
## Merge rivaroxaban medication with discharge AF diagnoses
#%%
# Cast the rivaroxaban case_no to str
df_lfsb['case_no']=df_lfsb['case_no'].astype('str')
# Drop patient_id from the discharge-diagnosis table to avoid a column clash on merge
df_oup_fib=df_oup_fib.drop(['patient_id'],axis=1)
#%%
oup_fib_list=list(df_oup_fib['case_no'])
temp_list=[]
for i in np.unique(df_lfsb['case_no']):
temp=df_lfsb[df_lfsb['case_no']==i]
temp=temp.reset_index(drop=True)
if i in oup_fib_list:
temp_list.append(temp)
df_lfsb_oup=temp_list[0]
for j in range(1,len(temp_list)):
df_lfsb_oup=pd.concat([df_lfsb_oup,temp_list[j]],axis=0)
df_lfsb_oup=df_lfsb_oup.reset_index(drop=True)
del temp_list
#%%
print(df_lfsb_oup.shape)
print(df_lfsb_oup['patient_id'].nunique())
print(df_lfsb_oup['case_no'].nunique())
#%%
print(df_lfsb_oup)
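The per-case_no loop above can be collapsed into a single isin() filter; a minimal sketch on made-up ids:

```python
import pandas as pd

# Toy stand-ins for df_lfsb and oup_fib_list (ids are illustrative).
df_lfsb = pd.DataFrame({'case_no': ['1', '2', '3'],
                        'drug_name': ['利伐沙班', '利伐沙班', '利伐沙班']})
oup_fib_list = ['1', '3']

# Keep only admissions whose case_no appears in the discharge-AF list.
df_lfsb_oup = df_lfsb[df_lfsb['case_no'].isin(oup_fib_list)].reset_index(drop=True)
print(df_lfsb_oup['case_no'].tolist())  # ['1', '3']
```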
Exclude: valve-replacement surgery
#%% md
## Exclude: valve-replacement surgery and valvular AF
#%%
# 1.3 Identify valvular-AF patients: valve replacement in the surgery records, or valvular AF in the diagnoses.
print('-------------------------Exclude AF-related surgeries-----------------------------')
# Exclude valve-replacement surgeries per 郑-手术.xlsx
df_surgical_record=pd.read_csv(project_path+'/data/raw_data/1-surgical_record.csv')
# df_surgical_valve=df_surgical_record[df_surgical_record['surgery_name'].str.contains('心脏病损腔内消融术|心脏病损腔内冷冻消融术|心电生理测定(EPS)|左心耳堵闭术|左心耳切除术|左心封堵术')]
df_surgical_valve=df_surgical_record[df_surgical_record['surgery_name'].str.contains('瓣膜置换')]
print(df_surgical_valve.shape)
print(df_surgical_valve['patient_id'].nunique())
print(df_surgical_valve['case_no'].nunique())
print(df_surgical_valve)
#%%
# Exclude the case_no of valve-replacement surgeries (note: build the list from df_surgical_valve, not the full surgery table)
surgical_valve_list=list(df_surgical_valve['case_no'])
temp_list=[]
for i in np.unique(df_lfsb_oup['case_no']):
temp=df_lfsb_oup[df_lfsb_oup['case_no']==i]
temp=temp.reset_index(drop=True)
if i in surgical_valve_list:
continue
else:
temp_list.append(temp)
df_lfsb_not_surgery=temp_list[0]
for j in range(1,len(temp_list)):
df_lfsb_not_surgery=pd.concat([df_lfsb_not_surgery,temp_list[j]],axis=0)
df_lfsb_not_surgery=df_lfsb_not_surgery.reset_index(drop=True)
del temp_list
#%%
print(df_lfsb_not_surgery.shape)
print(df_lfsb_not_surgery['patient_id'].nunique())
print(df_lfsb_not_surgery['case_no'].nunique())
Exclude: valvular AF in the diagnosis
#%% md
## Exclude: valvular AF in the clinical diagnosis
#%%
# Exclude valvular AF in the clinical diagnosis, covering 心脏瓣膜病 (heart valve disease) and 风湿性瓣膜病 (rheumatic valve disease) but not 下肢静脉瓣膜病 (lower-limb venous valve disease)
print('-------------------------Exclude valvular-AF patients-----------------------------')
# Drop rows with an empty clinical diagnosis
df_clinical_diagnosis=df_test_record[df_test_record['clinical_diagnosis'].notnull()] # non-null
df_heart_valve=df_clinical_diagnosis[df_clinical_diagnosis['clinical_diagnosis'].str.contains('瓣膜')]
df_heart_valve=df_heart_valve[df_heart_valve['clinical_diagnosis'].str.contains('心脏|风湿性')]
df_heart_valve['case_no']=df_heart_valve['case_no'].astype('str')
#%%
print(df_heart_valve.shape)
print(df_heart_valve['patient_id'].nunique())
print(df_heart_valve['case_no'].nunique())
#%%
# Exclude the case_no of valvular AF
diagnosis_valve_list=list(df_heart_valve['case_no'])
temp_list=[]
for i in np.unique(df_lfsb_not_surgery['case_no']):
temp=df_lfsb_not_surgery[df_lfsb_not_surgery['case_no']==i]
temp=temp.reset_index(drop=True)
if i in diagnosis_valve_list:
continue
else:
temp_list.append(temp)
df_lfsb_not_valve=temp_list[0]
for j in range(1,len(temp_list)):
df_lfsb_not_valve=pd.concat([df_lfsb_not_valve,temp_list[j]],axis=0)
df_lfsb_not_valve=df_lfsb_not_valve.reset_index(drop=True)
del temp_list
#%%
print(df_lfsb_not_valve.shape)
print(df_lfsb_not_valve['patient_id'].nunique())
print(df_lfsb_not_valve['case_no'].nunique())
#%%
# Save: rivaroxaban, no valve replacement, non-valvular
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_temp_利伐沙班非置换非瓣膜.xlsx')
df_lfsb_not_valve.to_excel(writer)
writer.save()
3. Compute the rivaroxaban daily dose
#%% md
## Compute the rivaroxaban daily dose
#%%
# 1.5 Compute the rivaroxaban daily dose
print('-------------------------Compute the rivaroxaban daily dose at discharge------------------------------')
print(np.unique(df_lfsb_not_valve['frequency']))
# One rivaroxaban tablet is 10 mg
df_lfsb_not_valve['dosage']=df_lfsb_not_valve['dosage'].apply(lambda x: x.replace('mg', '') if 'mg' in x else 10 if '片' in x else x)
third=['1/72小时']
half=['1/2日','1/隔日']
one=['1/午','1/单日','1/日','1/日(餐前)','1/早','1/晚','Qd','Qd(8am)']
two=['1/12小时','12/日','2/日']
three=['Tid']
df_lfsb_not_valve['frequency']=df_lfsb_not_valve['frequency'].apply(lambda x: 0.33 if x in third else
0.5 if x in half else
1 if x in one else
2 if x in two else
3 if x in three else x)
#%%
# # print(df_lfsb_not_valve.to_string())
# writer=pd.ExcelWriter(project_path+'/data/processed_data/df_temp_利伐沙班frequency处理.xlsx')
# df_lfsb_not_valve.to_excel(writer)
# writer.save()
#%%
df_lfsb_not_valve['日剂量']=df_lfsb_not_valve['dosage'].astype('float') * df_lfsb_not_valve['frequency'].astype('float')
#%%
print(df_lfsb_not_valve.shape)
print(df_lfsb_not_valve['patient_id'].nunique())
print(df_lfsb_not_valve['case_no'].nunique())
#%%
df_lfsb_not_valve
#%%
# For multiple prescriptions within one case_no, take the last daily dose as the final daily dose
temp_list=[]
for i in np.unique(df_lfsb_not_valve['case_no']):
temp=df_lfsb_not_valve[df_lfsb_not_valve['case_no']==i]
temp=temp.reset_index(drop=True)
if temp.shape[0]>1:
temp.loc[0,'日剂量']=temp.loc[(temp.shape[0]-1),'日剂量']
temp=temp.drop_duplicates(['case_no'],keep='first')
temp_list.append(temp)
df_lfsb_drug=temp_list[0]
for j in range(1,len(temp_list)):
df_lfsb_drug=pd.concat([df_lfsb_drug,temp_list[j]],axis=0)
del temp_list
df_lfsb_drug=df_lfsb_drug.reset_index(drop=True)
# Keep the useful rivaroxaban columns
df_lfsb_drug=df_lfsb_drug[['patient_id','case_no','start_datetime','end_datetime','日剂量']]
#%%
print(df_lfsb_drug.shape)
print(df_lfsb_drug['patient_id'].nunique())
print(df_lfsb_drug['case_no'].nunique())
#%%
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_1.3_计算出院时利伐沙班日剂量.xlsx')
df_lfsb_drug.to_excel(writer)
writer.save()
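A groupby equivalent of the loop above, sketched on made-up rows: per case_no it keeps the first row's start time but takes the LAST 日剂量 after sorting by start_datetime, mirroring the overwrite-then-dedup logic.

```python
import pandas as pd

# Illustrative data: case A has two prescriptions, case B has one.
df = pd.DataFrame({
    'case_no': ['A', 'A', 'B'],
    'start_datetime': pd.to_datetime(['2018-01-01', '2018-01-05', '2018-02-01']),
    '日剂量': [10.0, 20.0, 15.0],
})
# Named aggregation: first start time, last daily dose per case_no.
df_last = (df.sort_values(['case_no', 'start_datetime'])
             .groupby('case_no')
             .agg(start_datetime=('start_datetime', 'first'), 日剂量=('日剂量', 'last'))
             .reset_index())
print(df_last['日剂量'].tolist())  # [20.0, 15.0]
```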
TDM test records
# Save the test records for the drug of interest
df_test_tcms = df_test[df_test['project_name'].str.contains('他克莫司')]
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_temp_他克莫司检测结果.xlsx')
df_test_tcms.to_excel(writer)
writer.save()
Merge medication and TDM tests
# Merge tacrolimus (他克莫司) medication and TDM test data for the autoimmune-disease patients
print('----------------------Merge tacrolimus medication and TDM test data------------------------------')
drug_test_tcms = pd.merge(drug_tcms_frequency_l,test_record_result_tdm, on=['patient_id','case_no'], how='inner')
# Watch the dtypes of the time columns: some are str and some are Timestamp, and mixing them raises errors
drug_test_tcms['start_datetime'] = drug_test_tcms['start_datetime'].astype('str').apply(str_to_datetime)
drug_test_tcms['test_date'] = drug_test_tcms['test_date'].astype('str').apply(str_to_datetime)
# Rows with an empty end_datetime get start_datetime as their end_datetime
aaa = drug_test_tcms[drug_test_tcms['end_datetime'].isnull()]
bbb = drug_test_tcms[drug_test_tcms['end_datetime'].notnull()]
aaa['end_datetime'] = aaa['start_datetime']
drug_test_tcms = pd.concat([aaa, bbb], axis=0)
drug_test_tcms = drug_test_tcms.sort_values(by=['patient_id'],ascending=True)
drug_test_tcms = drug_test_tcms.reset_index(drop=True)
drug_test_tcms['end_datetime'] = drug_test_tcms['end_datetime'].astype('str').apply(str_to_datetime)
print(drug_test_tcms.shape) # (3125,15)
print(len(np.unique(drug_test_tcms['patient_id']))) # 149
writer = pd.ExcelWriter(project_path + '/result/df_6_合并自身免疫病人的他克莫司用药和tdm检测数据.xlsx')
drug_test_tcms.to_excel(writer)
writer.save()
Tacrolimus medication within 7 days before a TDM test
drug_test_tcms = drug_test_tcms.sort_values(by=['patient_id', 'case_no', 'test_date', 'start_datetime'],
                                            ascending=[True, True, True, False])
drug_test_tcms = drug_test_tcms.reset_index(drop=True)
drug_test_tcms_frequency = drug_test_tcms[(drug_test_tcms['test_date'] - datetime.timedelta(days=15) <= drug_test_tcms['end_datetime'])&
(drug_test_tcms['start_datetime'] <= drug_test_tcms['test_date'] - datetime.timedelta(days=1))]
drug_test_tcms_frequency = drug_test_tcms_frequency.reset_index()
del drug_test_tcms_frequency['index']
print(drug_test_tcms_frequency.shape) # 7天,(384,20);
print(len(np.unique(drug_test_tcms_frequency['patient_id']))) # 88
drug_test_tcms_frequency =drug_test_tcms_frequency.sort_values(by=['patient_id','start_datetime'],ascending=[True,False])
drug_test_tcms_frequency=drug_test_tcms_frequency.reset_index(drop=True)
writer = pd.ExcelWriter(project_path + '/result/df_8_tdm检测前7天的他克莫司用药数据.xlsx')
drug_test_tcms_frequency.to_excel(writer)
writer.save()
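The interval-overlap test used above can be checked in isolation. A sketch with illustrative dates and a 7-day window (the code above uses a wider days=15 lower bound): a drug order counts when its [start_datetime, end_datetime] interval overlaps the window before test_date.

```python
import pandas as pd

# Two toy rows: one order inside the 7 days before the test, one far outside.
df = pd.DataFrame({
    'test_date': pd.to_datetime(['2021-01-10', '2021-01-10']),
    'start_datetime': pd.to_datetime(['2021-01-05', '2020-12-01']),
    'end_datetime': pd.to_datetime(['2021-01-08', '2020-12-05']),
})
win = pd.Timedelta(days=7)
# The order must end after (test_date - window) and start at least 1 day before the test.
mask = ((df['test_date'] - win <= df['end_datetime']) &
        (df['start_datetime'] <= df['test_date'] - pd.Timedelta(days=1)))
print(mask.tolist())  # [True, False]
```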
15-day interval rule between consecutive TDM tests of one patient
# Sort test_date ascending and start_datetime descending to ease the later 7-15 day filtering: this picks out the first TDM test together with the most recent medication.
drug_test_tcms_frequency = drug_test_tcms_frequency.sort_values(by=['patient_id', 'test_date', 'start_datetime'],
ascending=[True, True, False])
drug_test_tcms_frequency['test_date']=drug_test_tcms_frequency['test_date'].astype('str').apply(str_to_datetime)
all_id = []
for i in np.unique(drug_test_tcms_frequency['patient_id']):
temp = drug_test_tcms_frequency[drug_test_tcms_frequency['patient_id'] == i]
temp = temp.reset_index()
del temp['index']
between_id = []
j = 0
while j < temp.shape[0]:
# Keep the first qualifying TDM test; since medication was sorted descending, it carries the most recent tacrolimus dose.
between_id.append(temp.iloc[[j]]) # .iloc[[i]] returns a DataFrame; .loc[i] returns a Series
k = j + 1
# 15-day interval check between the j-th and k-th TDM tests of the same patient
while k < temp.shape[0]:
# If two TDM tests fall within 15 days, keep only the first.
if temp.loc[j, 'test_date'] >= temp.loc[k, 'test_date'] - datetime.timedelta(days=15):
k += 1
continue
else:
break
# TDM tests at least 15 days apart are treated as independent: break, assign k to j, and the next j-iteration stores test k into between_id
j = k
temp_between = between_id[0]
for m in range(1, len(between_id)):
temp_between = pd.concat([temp_between, between_id[m]], axis=0) # concatenate the list back into a DataFrame
temp_between = temp_between.reset_index()
del temp_between['index']
all_id.append(temp_between)
drug_test_tcms_15 = all_id[0]
for n in range(1, len(all_id)):
drug_test_tcms_15 = pd.concat([drug_test_tcms_15, all_id[n]], axis=0)
drug_test_tcms_15 = drug_test_tcms_15.reset_index()
del drug_test_tcms_15['index']
print(drug_test_tcms_15.shape) # 7天,(102,20);
print(len(np.unique(drug_test_tcms_15['patient_id']))) # 88
writer = pd.ExcelWriter(project_path + '/result/df_9_两次他克莫司检测间隔15天判断.xlsx')
drug_test_tcms_15.to_excel(writer)
writer.save()
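The 15-day rule above reduces to: keep a test date only when it falls more than 15 days after the last kept one. A self-contained sketch with made-up dates:

```python
import datetime

def keep_independent(dates, gap_days=15):
    # Keep a date only if it is more than gap_days after the last kept date,
    # mirroring the j/k while-loops above.
    kept = []
    for d in sorted(dates):
        if not kept or d > kept[-1] + datetime.timedelta(days=gap_days):
            kept.append(d)
    return kept

dates = [datetime.date(2021, 1, 1), datetime.date(2021, 1, 10), datetime.date(2021, 1, 20)]
# Jan 10 is within 15 days of Jan 1 and is dropped; Jan 20 is kept.
print(keep_independent(dates))
```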
4. Merge demographic data
4.1 Merge demographic features
#%% md
## Merge demographic data
#%%
# 1.5 Merge demographic data
print('-------------------------Merge demographic data-----------------------------')
df_popu=pd.read_excel(project_path+'/data/raw_data/1.基本信息(诊断非瓣膜房颤用利伐沙班).xlsx')
if 'Unnamed: 0' in df_popu.columns:
df_popu = df_popu.drop(['Unnamed: 0'], axis=1)
df_popu=df_popu[['case_no','gender','age','height','weight','BMI']]
# Drop duplicate demographic rows, keeping the first
df_popu=df_popu.drop_duplicates(subset=['case_no'],keep='first')
#%%
print(type(df_popu.loc[0,'case_no']))
print(type(df_lfsb_drug.loc[0,'case_no']))
#%%
# Cast the df_popu case_no to str
df_popu['case_no']=df_popu['case_no'].astype('str')
df_lfsb_popu=pd.merge(df_lfsb_drug,df_popu,on=['case_no'],how='left')
#%%
print(df_lfsb_popu.shape)
print(df_lfsb_popu['patient_id'].nunique())
print(df_lfsb_popu['case_no'].nunique())
#%%
print(df_lfsb_popu)
4.2 Fill missing gender, age, and height
# Fill in the missing gender, age, and height values
# patient_info holds gender and birth year; patient_sign_record holds height and weight
df_patient_info=pd.read_csv(project_path+'/data/raw_data/1-patient_info.csv')
df_patient_info = df_patient_info.set_index('patient_id')
df_patient_sign_record=pd.read_csv(project_path+'/data/raw_data/1-patient_sign_record.csv')
df_height = df_patient_sign_record[df_patient_sign_record['sign_type'] == '身高(cm)']
# Drop nulls
df_height = df_height[df_height['record_content'].notnull()]
# Drop duplicates
df_height = df_height.drop_duplicates(subset=['patient_id','case_no','sign_type','record_content'],keep='first')
# Drop outliers
df_height = filter_exce_value(df_height,'record_content')
df_weight = df_patient_sign_record[df_patient_sign_record['sign_type'] == '体重(kg)']
# Drop nulls
df_weight = df_weight[df_weight['record_content'].notnull()]
# Drop duplicates
df_weight = df_weight.drop_duplicates(subset=['patient_id','case_no','sign_type','record_content'],keep='first')
# Drop outliers
df_weight = filter_exce_value(df_weight,'record_content')
aaa=df_lfsb_popu[df_lfsb_popu['gender'].isnull()]
bbb=df_lfsb_popu[df_lfsb_popu['gender'].notnull()]
aaa_list=[]
for i in np.unique(aaa['patient_id']):
    temp=aaa[aaa['patient_id']==i]
    temp=temp.reset_index(drop=True)
    # fill the missing gender
    gender=df_patient_info.loc[i,'gender']
    if gender=='男':
        gender_value=1
    else:
        gender_value=0
    temp['gender']=gender_value
    # fill the missing age: admission year minus birth year
    age=df_patient_info.loc[i,'birth_year']
    age_year=age.split('-')[0]
    start_datetime=temp.loc[0,'start_datetime']
    start_year=str(start_datetime).split('-')[0]
    age_value=int(start_year)-int(age_year)
    temp['age']=age_value
    # fill the height
    temp_height=df_height[df_height['patient_id']==i]
    temp_height=temp_height.reset_index(drop=True)
    height=temp_height.loc[0,'record_content']
    temp['height']=height
    # if height=='卧床' or height=='轮椅':  # bedridden / wheelchair: not a numeric height
    #     temp['height']=np.nan
    # else:
    #     temp['height']=height
    # fill the weight
    temp_weight=df_weight[df_weight['patient_id']==i]
    temp_weight=temp_weight.reset_index(drop=True)
    weight=temp_weight.loc[0,'record_content']
    temp['weight']=weight
    aaa_list.append(temp)
aaa=aaa_list[0]
for j in range(1,len(aaa_list)):
    aaa=pd.concat([aaa,aaa_list[j]],axis=0)
df_lfsb_popu=pd.concat([aaa,bbb],axis=0)
df_lfsb_popu=df_lfsb_popu.sort_values(['patient_id'])
df_lfsb_popu=df_lfsb_popu.reset_index(drop=True)
print(df_lfsb_popu.shape)
print(df_lfsb_popu['patient_id'].nunique())
print(df_lfsb_popu['case_no'].nunique())
df_lfsb_popu
4.3 Random-forest imputation
# Impute the remaining missing values with a random forest
import pandas as pd
pd.set_option('mode.chained_assignment', None)
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
def missing_value_interpolation(df, missing_list=None):
    df = df.reset_index(drop=True)
    # Collect the columns that contain missing values (unless a list was passed in)
    if not missing_list:
        missing_list = [i for i in df.columns if df[i].isnull().sum() > 0]
    missing_list_copy = missing_list.copy()
    # For each column: train a random forest on the rows where it is present, then predict the missing rows
    for i in range(len(missing_list)):
        name = missing_list[0]
        df_missing = df[missing_list_copy]
        # Represent the other columns' missing values as 0
        missing_list.remove(name)
        for j in missing_list:
            df_missing[j] = df_missing[j].astype('str').apply(lambda x: 0 if x == 'nan' else x)
        df_missing_is = df_missing[df_missing[name].isnull()]
        df_missing_not = df_missing[df_missing[name].notnull()]
        y = df_missing_not[name]
        x = df_missing_not.drop([name], axis=1)
        # Candidate parameter grid
        tree_grid_parameter = {'n_estimators': list((10, 50, 100, 150, 200))}
        # Grid-search the forest size
        grid = GridSearchCV(RandomForestRegressor(), param_grid=tree_grid_parameter, cv=3)
        # rfr=RandomForestRegressor(random_state=0,n_estimators=100,n_jobs=-1)
        # Fit on the observed data
        grid.fit(x, y)
        rfr = RandomForestRegressor(n_estimators=grid.best_params_['n_estimators'])
        rfr.fit(x, y)
        # Predict the missing values
        predict = rfr.predict(df_missing_is.drop([name], axis=1))
        # Fill them in
        df.loc[df[name].isnull(), name] = predict
    return df
# Impute the gender, age, height, weight, and BMI columns
df_lfsb_popu=missing_value_interpolation(df_lfsb_popu,['gender','age','height','weight','BMI'])
#%%
# Age distribution
df_age_stats=df_lfsb_popu.drop_duplicates(subset=['patient_id'],keep='first')
print(df_age_stats['age'].describe())
#%%
# Save the demographic features
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_1.4_合并人口信息学特征的非瓣膜房颤患者.xlsx')
df_lfsb_popu.to_excel(writer)
writer.save()
5. Add medical history (diabetes, hypertension)
## Add diabetes and hypertension history
#%%
# Diagnoses mentioning diabetes (糖尿病) or hypertension (高血压)
df_diagnostic_dm=df_diagnostic[df_diagnostic['diagnostic_content'].str.contains('糖尿病|高血压')]
# Drop duplicate case_no + diagnosis pairs
df_diagnostic_dm=df_diagnostic_dm.drop_duplicates(['case_no','diagnostic_content'],keep='first')
df_diagnostic_dm=df_diagnostic_dm.reset_index(drop=True)
#%%
print(df_diagnostic_dm.shape)
print(df_diagnostic_dm['patient_id'].nunique())
print(df_diagnostic_dm['case_no'].nunique())
#%%
df_diagnostic_dm
#%%
# case_no lists of diabetes and hypertension patients
dm_list=list(df_diagnostic_dm[df_diagnostic_dm['diagnostic_content']=='糖尿病']['case_no'])
htn_list=list(df_diagnostic_dm[df_diagnostic_dm['diagnostic_content']=='高血压']['case_no'])
print(dm_list[0])
print(type(dm_list[0]))
#%%
# Merge the flags into the inclusion/exclusion data
temp_list=[]
for i in np.unique(df_lfsb_popu['case_no']):
temp=df_lfsb_popu[df_lfsb_popu['case_no']==i]
temp=temp.reset_index(drop=True)
if i in dm_list:
temp['糖尿病']=1
else:
temp['糖尿病']=0
if i in htn_list:
temp['高血压']=1
else:
temp['高血压']=0
temp_list.append(temp)
#%%
df_lfsb_merge_dm=temp_list[0]
for j in range(1,len(temp_list)):
df_lfsb_merge_dm=pd.concat([df_lfsb_merge_dm,temp_list[j]],axis=0)
df_lfsb_merge_dm=df_lfsb_merge_dm.sort_values(by=['patient_id','case_no','start_datetime'])
df_lfsb_merge_dm=df_lfsb_merge_dm.reset_index(drop=True)
del temp_list
#%%
print(df_lfsb_merge_dm.shape)
print(df_lfsb_merge_dm['patient_id'].nunique())
print(df_lfsb_merge_dm['case_no'].nunique())
#%%
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_1.7_增加糖尿病检验信息.xlsx')
df_lfsb_merge_dm.to_excel(writer)
writer.save()
6. Add co-medication
6.1 Extract co-medication
# The co-medication scope follows the clinical question; encoding each co-medication as a 0-1 flag is enough.
# Unify glucocorticoid (糖皮质激素) names: 地塞米松, 甲泼尼龙, 泼尼松, 可的松
doctor_order['drug_name']=doctor_order['drug_name'].astype('str').apply(lambda x:'糖皮质激素' if '地塞米松' in x else
'糖皮质激素' if '甲泼尼龙' in x else
'糖皮质激素' if '泼尼松' in x else
'糖皮质激素' if '可的松' in x else x)
# Unify proton-pump-inhibitor (质子泵抑制剂) names: 奥美拉唑, 泮托拉唑, 艾普拉唑, 雷贝拉唑, 兰索拉唑, 雷尼替丁
doctor_order['drug_name']=doctor_order['drug_name'].astype('str').apply(lambda x:'质子泵抑制剂' if '奥美拉唑' in x else
'质子泵抑制剂' if '泮托拉唑' in x else
'质子泵抑制剂' if '艾普拉唑' in x else
'质子泵抑制剂' if '雷贝拉唑' in x else
'质子泵抑制剂' if '兰索拉唑' in x else
'质子泵抑制剂' if '雷尼替丁' in x else x)
# Unify calcium-channel-blocker (钙离子阻抗剂) names: 硝苯地平, 氨氯地平, 尼群地平, 非洛地平, 地尔硫卓
doctor_order['drug_name']=doctor_order['drug_name'].astype('str').apply(lambda x:'钙离子阻抗剂' if '硝苯地平' in x else
'钙离子阻抗剂' if '氨氯地平' in x else
'钙离子阻抗剂' if '尼群地平' in x else
'钙离子阻抗剂' if '非洛地平' in x else
'钙离子阻抗剂' if '地尔硫卓' in x else x)
# Unify other-immunosuppressant (其他免疫抑制剂) names: 环孢素, 吗替麦考酚酯, 环磷酰胺, 硫唑嘌呤, 甲氨蝶呤
doctor_order['drug_name']=doctor_order['drug_name'].astype('str').apply(lambda x:'其他免疫抑制剂' if '环孢素' in x else
'其他免疫抑制剂' if '吗替麦考酚酯' in x else
'其他免疫抑制剂' if '环磷酰胺' in x else
'其他免疫抑制剂' if '硫唑嘌呤' in x else
'其他免疫抑制剂' if '甲氨蝶呤' in x else x)
# 克拉霉素 (clarithromycin)
doctor_order['drug_name']=doctor_order['drug_name'].astype('str').apply(lambda x:'克拉霉素' if '克拉霉素' in x else x)
# 阿奇霉素 (azithromycin)
doctor_order['drug_name']=doctor_order['drug_name'].astype('str').apply(lambda x:'阿奇霉素' if '阿奇霉素' in x else x)
# Extract the co-medication records
drug_other=doctor_order[doctor_order['drug_name'].str.contains('糖皮质激素|质子泵抑制剂|钙离子阻抗剂|其他免疫抑制剂|克拉霉素|阿奇霉素')]
drug_other=drug_other.reset_index(drop=True)
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_提取联合用药.xlsx')
drug_other.to_excel(writer)
writer.save()
6.2 Fill co-medication times
# Replace the co-medications' missing end_datetime with start_datetime
aaa = drug_other[drug_other['end_datetime'].isnull()]
bbb = drug_other[drug_other['end_datetime'].notnull()]
aaa['end_datetime'] = aaa['start_datetime']
drug_other = pd.concat([aaa, bbb], axis=0)
drug_other = drug_other.sort_values(by=['patient_id'],ascending=True)
drug_other = drug_other.reset_index(drop=True)
drug_other['end_datetime'] = drug_other['end_datetime'].astype('str').apply(str_to_datetime)
# print(drug_other.shape) # (19728,9)
# print(drug_other['patient_id'].nunique()) # 948
writer = pd.ExcelWriter(project_path + '/processed_data/df_补充联合用药时间.xlsx')
drug_other.to_excel(writer)
writer.save()
6.3 Extract co-medication within 7 days before a TDM test
all_id = []
for i in np.unique(tdm_7_other_interpolation['patient_id']):
    # First split: by patient_id
    tdm_time = tdm_7_other_interpolation[tdm_7_other_interpolation['patient_id'] == i]  # this patient's TDM rows
    # sort by test date
    tdm_time = tdm_time.sort_values(by=['test_date'], ascending=True)
    tdm_time = tdm_time.reset_index(drop=True)
    # Select this patient's co-medications and keep the useful columns
    temp = drug_other[drug_other['patient_id'] == i]
    # Rename the co-medication columns to avoid name clashes when merging with the TDM tests
    temp_drug_other = temp.rename(columns={'drug_name': 'drug_name_other', 'start_datetime': 'start_datetime_other',
                                           'end_datetime': 'end_datetime_other'})
    # sort by start time
    temp_drug_other = temp_drug_other.sort_values(by=['start_datetime_other'], ascending=True)
    temp_drug_other = temp_drug_other.reset_index(drop=True)
    # 5.1 Second split: by TDM test time
    between_id = []
    for j in range(tdm_time.shape[0]):
        tdm_time_1 = tdm_time.iloc[[j]]
        time_1 = tdm_time.loc[j, 'test_date']
        last_id = []
        for k in range(temp_drug_other.shape[0]):
            # keep co-medication within the 7 days before this TDM test
            if (time_1 - datetime.timedelta(days=8) <= temp_drug_other.loc[k, 'end_datetime_other']) & (time_1 - datetime.timedelta(days=1) >= temp_drug_other.loc[k, 'start_datetime_other']):
                last_id.append(temp_drug_other.iloc[[k]])
        if last_id:
            temp_last = last_id[0]
            for m in range(1, len(last_id)):
                temp_last = pd.concat([temp_last, last_id[m]], axis=0)
            # 5.2 Keep the most recent record per patient_id and drug_name_other
            temp_last = temp_last.drop_duplicates(subset=['patient_id', 'drug_name_other'], keep='first')
            drug_other_list = list(temp_last['drug_name_other'])
            # 5.3 Turn the co-medications found within 7 days into 0-1 columns on the modelling row
            for drug_name_other in drug_other_list:  # do not shadow the drug_other DataFrame with the loop variable
                tdm_time_1[drug_name_other] = 1
        between_id.append(tdm_time_1)
    # Merge all qualifying rows of this patient_id
    temp_between = between_id[0]
    for m in range(1, len(between_id)):
        temp_between = pd.concat([temp_between, between_id[m]], axis=0)
    temp_between = temp_between.reset_index(drop=True)
    all_id.append(temp_between)
# Merge across all patient_id
drug_other_7_select = all_id[0]
for n in range(1, len(all_id)):
    drug_other_7_select = pd.concat([drug_other_7_select, all_id[n]], axis=0)
drug_other_7_select = drug_other_7_select.reset_index(drop=True)
print(drug_other_7_select.shape)  # (106,27)
print(len(np.unique(drug_other_7_select['patient_id'])))  # 88
6.4 删除缺失值>50%的列
# 删除缺失超过50%的其他联合用药
for i in np.unique(drug_other_7_select.columns):
other_up = drug_other_7_select[i].isnull().sum()
other_down = drug_other_7_select[i].shape[0]
if drug_other_7_select[i].isnull().sum()/drug_other_7_select[i].shape[0] >= 0.5:
del drug_other_7_select[i]
# 糖皮质激素|质子泵抑制剂|钙离子阻抗剂|其他免疫抑制剂|克拉霉素|阿奇霉素,
# 将空值替换为0
drug_other_7_select = drug_other_7_select.fillna(0)  # fillna returns a copy; the result must be reassigned
# 如果是单列
# drug_other_7_select['糖皮质激素'].fillna(0,inplace=True)
print(drug_other_7_select.shape) # (106,23)
print(len(np.unique(drug_other_7_select['patient_id']))) # 88
writer = pd.ExcelWriter(project_path + '/result/df_提取tdm检测7天内最近的其他联合用药.xlsx')
drug_other_7_select.to_excel(writer)
writer.save()
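The column-dropping loop and the fillna step above can be condensed into two vectorized lines; a sketch on toy data (column names here are made up):

```python
import pandas as pd

df = pd.DataFrame({
    'keep': [1, 2, None, 4],        # 25% missing -> kept
    'drop': [None, None, None, 1],  # 75% missing -> dropped
})
# Keep only columns where less than 50% of values are missing
df = df.loc[:, df.isnull().mean() < 0.5]
# fillna returns a copy, so the result must be assigned back
df = df.fillna(0)
```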
7. 增加其他检测
7.1 肌酐(肾功能)
## 肌酐(肾功能)
df_test_cr = df_test[df_test['test_purpose'].str.contains(r'肌酐\(Cr\)|肾功五项\(UREA,CR,UA,TCO2,CysC\)|肾功四项\(UREA,CR,UA,TCO2\)')]
df_result_cr = df_test_cr[df_test_cr['synonym']=='Cys-C']
df_result_cr=df_result_cr[['patient_id','case_no','test_result']]
df_lfsb_cr = pd.merge(df_lfsb_merge_dm, df_result_cr,
on=['patient_id','case_no'],
how='left')
df_lfsb_cr.to_excel(r'Cys-C.xlsx')
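`str.contains` treats its pattern as a regular expression by default, so literal parentheses in test names such as `肌酐(Cr)` must be escaped (and since the pattern relies on `|` alternation, `regex=False` is not an option). A sketch with made-up rows:

```python
import re

import pandas as pd

s = pd.Series(['肌酐(Cr)', '肾功四项(UREA,CR,UA,TCO2)', '肝功1'])
# Without escaping, '(Cr)' would be parsed as a capture group and the
# literal text '肌酐(Cr)' would never match
pattern = '|'.join(re.escape(p) for p in ['肌酐(Cr)', '肾功四项(UREA,CR,UA,TCO2)'])
mask = s.str.contains(pattern)
```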
7.2 肝功能
df_test_liver = df_test[ df_test['test_purpose']=='肝功1(7项;ALT,AST,TP,ALB,G,TBIL,DBIL)']
df_test_liver = df_test_liver[df_test_liver['synonym']=='DBIL']
df_lfsb_liver = pd.merge(df_lfsb_cr, df_test_liver, on=['patient_id','case_no'], how='left')
df_lfsb_liver.to_excel(r'DBIL.xlsx')
7.3 血细胞分析
# 血小板,红细胞,白细胞,血红蛋白
df_test_blood = df_test[df_test['test_purpose']=='血细胞分析(五分类)']
df_test_blood = df_test_blood[df_test_blood['project_name']=='血红蛋白测定']
df_lfsb_blood = pd.merge(df_lfsb_liver, df_test_blood, on=['patient_id','case_no'], how='left')
df_lfsb_blood.to_excel(r'血红蛋白测定.xlsx')
7.4 凝血
df_test_bc = df_test[ df_test['test_purpose'].str.contains('凝血')]
df_test_bc = df_test_bc[df_test_bc['project_name'].str.contains('凝血')]
df_lfsb_bc = pd.merge(df_lfsb_blood, df_test_bc, on=['patient_id','case_no'], how='left')
df_lfsb_bc.to_excel(r'凝血.xlsx')
7.5 大便隐血
# 尿常规,大便常规
#%%
df_test_sb = df_test[df_test['test_purpose'].str.contains('粪便')]
df_test_sb =df_test_sb[df_test_sb['project_name'].str.contains('隐血')]
df_lfsb_sb = pd.merge(df_lfsb_bc, df_test_sb, on=['patient_id','case_no'], how='left')
df_lfsb_sb.to_excel(r'粪便-隐血.xlsx')
7.6 删除缺失值
## 删除缺失过多(>50%)的列
#%%
# 删除列超过50%的其他指标
for i in df_lfsb_merge_dm.columns.tolist():
    if df_lfsb_merge_dm[i].isnull().sum() / df_lfsb_merge_dm.shape[0] >= 0.5:
        del df_lfsb_merge_dm[i]
#%%
print(df_lfsb_merge_dm.shape)
print(df_lfsb_merge_dm['patient_id'].nunique())
print(df_lfsb_merge_dm['case_no'].nunique())
#%%
# 排序
df_lfsb_merge_dm=df_lfsb_merge_dm.sort_values(['patient_id','case_no','start_datetime'])
#%%
# 保存删除缺失值过大的数据
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_1.8_删除缺失过多的列.xlsx')
df_lfsb_merge_dm.to_excel(writer)
writer.save()
出入院时间和入院诊断
#%% md
# 计算多次出入院
#%% md
## 提取入院诊断
#%%
# 入院诊断: 补充诊断、初步诊断、门诊诊断、修正诊断、最后诊断、出院诊断
df_diagnostic_inp=df_diagnostic[df_diagnostic['diagnostic_type'].str.contains('补充诊断|初步诊断|门诊诊断|修正诊断|最后诊断|出院诊断')]
# 删除空值
df_diagnostic_inp=df_diagnostic_inp[df_diagnostic_inp['case_no'].notnull()]
# 入院诊断case_no格式调整:由float转为str
df_diagnostic_inp['case_no']=df_diagnostic_inp['case_no'].astype('int').astype('str')
df_diagnostic_inp=df_diagnostic_inp[['patient_id','case_no','record_date','diagnostic_type','diagnostic_content']]
#%%
print(df_diagnostic_inp.shape)
#%%
# 合并同一case_no的入院诊断
temp_list=[]
for i in np.unique(df_diagnostic_inp['case_no']):
temp=df_diagnostic_inp[df_diagnostic_inp['case_no']==i]
temp=temp.reset_index(drop=True)
temp_diagnostic_list=list(temp['diagnostic_content'])
temp=temp.drop_duplicates(subset=['case_no'],keep='first')
temp_diagnostic_str=';'.join(temp_diagnostic_list)
temp['diagnostic_content']=temp_diagnostic_str
# print(temp)
temp_list.append(temp)
#%%
df_diagnostic_inp_merge=temp_list[0]
for j in range(1,len(temp_list)):
df_diagnostic_inp_merge=pd.concat([df_diagnostic_inp_merge,temp_list[j]],axis=0)
del temp_list
df_diagnostic_inp_merge=df_diagnostic_inp_merge.reset_index(drop=True)
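The per-`case_no` loop above, which joins all diagnoses of one admission into a single `;`-separated string, is equivalent to a groupby aggregation; a sketch with hypothetical rows (only the two relevant columns shown — the original additionally keeps the first row's other fields via `drop_duplicates`):

```python
import pandas as pd

df = pd.DataFrame({
    'case_no': ['1', '1', '2'],
    'diagnostic_content': ['房颤', '高血压', '脑梗死'],
})
# Join all diagnoses of the same admission into one string
merged = df.groupby('case_no', as_index=False)['diagnostic_content'].agg(';'.join)
```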
#%%
df_diagnostic_inp_merge
#%%
print(df_diagnostic_inp_merge.shape)
print(df_diagnostic_inp_merge['patient_id'].nunique())
print(df_diagnostic_inp_merge['case_no'].nunique())
#%%
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_2.1_提取入院诊断.xlsx')
df_diagnostic_inp_merge.to_excel(writer)
writer.save()
#%% md
## 提取出入院时间
#%%
# 2.计算多次出入院时间,case_no
df_inp_record=pd.read_csv(project_path+'/data/raw_data/1-inp_record.csv',dtype={'case_no':str})
#%%
# 删除空值数据
df_inp_record=df_inp_record[df_inp_record['adm_date'].notnull() & df_inp_record['dis_date'].notnull()]
#%%
print(df_inp_record.shape)
print(df_inp_record['patient_id'].nunique())
print(df_inp_record['case_no'].nunique())
#%%
# 调整出入院时间格式
df_inp_record['adm_date']=df_inp_record['adm_date'].astype('str').apply(str_to_datetime)
df_inp_record['dis_date']=df_inp_record['dis_date'].astype('str').apply(str_to_datetime)
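Instead of a custom `str_to_datetime` helper, `pd.to_datetime` with an explicit `format` and `errors='coerce'` parses day-first timestamps and turns unparseable values into `NaT` rather than raising; a sketch:

```python
import pandas as pd

s = pd.Series(['13/09/2018 18:46:23', 'not a date'])
# Day-first format as in the raw data; bad values become NaT
parsed = pd.to_datetime(s, format='%d/%m/%Y %H:%M:%S', errors='coerce')
```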
#%%
# 提取出入院时间有效字段
df_inp_record=df_inp_record[['patient_id','case_no','adm_date','care_area','dis_date']]
df_inp_record=df_inp_record.sort_values(by=['patient_id','case_no','adm_date'])
df_inp_record=df_inp_record.reset_index(drop=True)
#%%
print(df_inp_record.shape)
print(df_inp_record['patient_id'].nunique())
print(df_inp_record['case_no'].nunique())
#%%
# 保存多次出入院时间
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_temp_保存多次出入院时间.xlsx')
df_inp_record.to_excel(writer)
writer.save()
#%% md
## 剂量分组,统计第二次入院
高低剂量组
高低剂量组分组
#%%
# 3.按剂量10、15、20分组,需要以第一次出院日剂量为标准分组,不能直接lambda,然后计算再次入院率
# 先排序
df_lfsb_merge_dm=df_lfsb_merge_dm.sort_values(['patient_id','case_no','start_datetime'])
# 分组
temp_list=[]
for i in np.unique(df_lfsb_merge_dm['patient_id']):
temp=df_lfsb_merge_dm[df_lfsb_merge_dm['patient_id']==i]
temp=temp.reset_index(drop=True)
dosage=temp.loc[0,'日剂量']
if dosage==10:
temp['剂量分组']=0
elif dosage==15:
temp['剂量分组']=1
elif dosage==20:
temp['剂量分组']=2
else:
temp['剂量分组']=np.nan
temp_list.append(temp)
#%%
df_lfsb_group=temp_list[0]
for j in range(1,len(temp_list)):
df_lfsb_group=pd.concat([df_lfsb_group,temp_list[j]],axis=0)
df_lfsb_group=df_lfsb_group.reset_index(drop=True)
del temp_list
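The grouping loop above assigns every record of a patient the dose group of that patient's first record. With the data already sorted, the same result can be sketched with `groupby(...).transform('first')` and a mapping dict (column names `日剂量`/`剂量分组` as in the source; the sample rows are made up):

```python
import pandas as pd

df = pd.DataFrame({
    'patient_id': ['p1', 'p1', 'p2'],
    '日剂量': [10, 20, 15],  # p1's dose changed at the second admission
})
# First recorded daily dose per patient (rows assumed sorted by time)
first_dose = df.groupby('patient_id')['日剂量'].transform('first')
# 10 mg -> group 0, 15 mg -> group 1, 20 mg -> group 2; others become NaN
df['剂量分组'] = first_dose.map({10: 0, 15: 1, 20: 2})
```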
#%%
print(df_lfsb_group.shape)
print(df_lfsb_group['patient_id'].nunique())
print(df_lfsb_group['case_no'].nunique())
#%%
# 提取分组数据
df_lfsb_group=df_lfsb_group[df_lfsb_group['剂量分组'].notnull()]
#%%
print(df_lfsb_group.shape)
print(df_lfsb_group['patient_id'].nunique())
print(df_lfsb_group['case_no'].nunique())
#%%
# 保存分组数据
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_2.2_保存分组数据.xlsx')
df_lfsb_group.to_excel(writer)
writer.save()
高低剂量组统计
#%%
# 提取单个剂量分组
df_lfsb_10=df_lfsb_group[df_lfsb_group['剂量分组']==0]
df_lfsb_15=df_lfsb_group[df_lfsb_group['剂量分组']==1]
df_lfsb_20=df_lfsb_group[df_lfsb_group['剂量分组']==2]
#%%
# 统计分组数
num_10_patient=df_lfsb_10['patient_id'].nunique()
num_10_case=df_lfsb_10['case_no'].nunique()
num_15_patient=df_lfsb_15['patient_id'].nunique()
num_15_case=df_lfsb_15['case_no'].nunique()
num_20_patient=df_lfsb_20['patient_id'].nunique()
num_20_case=df_lfsb_20['case_no'].nunique()
print('分组patient人数',num_10_patient,num_15_patient,num_20_patient)
print('分组case记录',num_10_case,num_15_case,num_20_case)
#%% md
### 统计10mg组再次入院
#%%
#统计10mg组再次入院人数
count_10=0
list_10_again=[]
for i in np.unique(df_lfsb_10['patient_id']):
temp=df_lfsb_10[df_lfsb_10['patient_id']==i]
if temp.shape[0]>1:
count_10 +=1
list_10_again.append(i)
print('10mg再次入院人数',count_10,count_10/num_10_patient)
print(list_10_again)
#%% md
### 统计15mg组再次入院
#%%
# 统计15mg组再次入院人数
count_15=0
list_15_again=[]
for i in np.unique(df_lfsb_15['patient_id']):
temp=df_lfsb_15[df_lfsb_15['patient_id']==i]
temp=temp.reset_index(drop=True)
if temp.shape[0]>1:
count_15 +=1
list_15_again.append(i)
print('15mg再次入院人数',count_15,count_15/num_15_patient)
print(list_15_again)
#%% md
### 统计20mg组再次入院人数
#%%
# 统计20mg组再次入院人数
count_20=0
list_20_again=[]
for i in np.unique(df_lfsb_20['patient_id']):
temp=df_lfsb_20[df_lfsb_20['patient_id']==i]
temp=temp.reset_index(drop=True)
if temp.shape[0]>1:
count_20 +=1
list_20_again.append(i)
print('20mg再次入院人数',count_20,count_20/num_20_patient)
print(list_20_again)
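Each of the three re-admission counts above follows the same pattern: count patients with more than one record. That can be sketched once with `value_counts`:

```python
import pandas as pd

df = pd.DataFrame({'patient_id': ['a', 'a', 'b', 'c', 'c', 'c']})
counts = df['patient_id'].value_counts()
# Patients appearing more than once were re-admitted
readmitted = counts[counts > 1].index.tolist()
```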
#%% md
### 提取各组再次入院的记录
#%%
# 再次入院patient_id列表
list_again=list_10_again + list_15_again + list_20_again
print(type(list_again))
print(list_again)
df_lfsb_group_again=df_lfsb_group[df_lfsb_group['patient_id'].isin(list_again)]
df_lfsb_group_again=df_lfsb_group_again.reset_index(drop=True)
#%%
print(df_lfsb_group_again.shape)
print(df_lfsb_group_again['patient_id'].nunique())
print(df_lfsb_group_again['case_no'].nunique())
#%%
# 保存再次入院的分组数据
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_2.3_保存再次入院的分组数据.xlsx')
df_lfsb_group_again.to_excel(writer)
writer.save()
高低剂量组PSM数据
#%% md
## 提取部分基础特征,做PSM分析
#%%
# 提取部分基础特征,做PSM分析,一个患者对应一条数据
df_lfsb_group_PSM=df_lfsb_group_again[['patient_id','case_no','start_datetime','end_datetime','日剂量','gender','age','BMI','糖尿病','高血压','剂量分组']]
#%%
# 提取数据先排序和reset_index
df_lfsb_group_PSM=df_lfsb_group_PSM.sort_values(['patient_id','case_no','start_datetime'])
# 计算第二次入院记录的PSM。需要注意:一个患者第二次入院的日剂量应该是第一次出院时的日剂量,而不是第二次记录本身的出院日剂量
temp_list=[]
for i in np.unique(df_lfsb_group_PSM['patient_id']):
temp=df_lfsb_group_PSM[df_lfsb_group_PSM['patient_id']==i]
temp=temp.reset_index(drop=True)
# 再次入院的日剂量为第一次出院时的日剂量
temp.loc[1,'日剂量']=temp.loc[0,'日剂量']
temp=temp.iloc[1:2,:]
temp_list.append(temp)
#%%
df_lfsb_group_PSM = temp_list[0]
for j in range(1,len(temp_list)):
df_lfsb_group_PSM=pd.concat([df_lfsb_group_PSM,temp_list[j]],axis=0)
df_lfsb_group_PSM=df_lfsb_group_PSM.reset_index(drop=True)
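Carrying the first-discharge dose over to the second admission and keeping only that second record (the loop above) can be sketched with `transform('first')` plus `cumcount`; the rows and the `日剂量` column are illustrative:

```python
import pandas as pd

df = pd.DataFrame({
    'patient_id': ['p1', 'p1', 'p2', 'p2'],
    '日剂量': [10, 20, 15, 15],
})
# Overwrite each row's dose with the patient's first-discharge dose
df['日剂量'] = df.groupby('patient_id')['日剂量'].transform('first')
# Keep only the second record per patient (cumcount numbers rows 0, 1, ...)
df['_n'] = df.groupby('patient_id').cumcount()
second = df[df['_n'] == 1].drop(columns='_n')
```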
#%%
# 再次入院分组数据做PSM分析
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_2.4_再次入院分组数据做PSM分析.xlsx')
df_lfsb_group_PSM.to_excel(writer)
writer.save()
高低剂量组再入院统计
#%% md
## 并入出入院时间和诊断
#%%
df_inp_record
print(type(df_inp_record.loc[0,'case_no']))
#%%
print('-------------------------计算多次出入院时间-----------------------------')
temp_list=[]
for i in np.unique(df_lfsb_group_again['case_no']):
print(i)
# print(type(i))
temp=df_lfsb_group_again[df_lfsb_group_again['case_no']==i]
temp_inp_time=df_inp_record[df_inp_record['case_no']==i]
temp_inp_time=temp_inp_time.reset_index(drop=True)
# print(temp_inp_time)
# print(temp_inp_time.loc[0,'adm_date'])
temp_inp_diagnostic=df_diagnostic_inp_merge[df_diagnostic_inp_merge['case_no']==i]
temp_inp_diagnostic=temp_inp_diagnostic.reset_index(drop=True)
# print(temp_inp_diagnostic)
# 并入出入院时间
temp['adm_date']=temp_inp_time.loc[0,'adm_date']
temp['dis_date']=temp_inp_time.loc[0,'dis_date']
# print(temp)
# 并入入院诊断
temp['diagnostic_content']=temp_inp_diagnostic.loc[0,'diagnostic_content']
print(temp)
temp_list.append(temp)
#%%
df_lfsb_merge_inp_diagnostic=temp_list[0]
for j in range(1,len(temp_list)):
df_lfsb_merge_inp_diagnostic=pd.concat([df_lfsb_merge_inp_diagnostic,temp_list[j]])
df_lfsb_merge_inp_diagnostic=df_lfsb_merge_inp_diagnostic.sort_values(['patient_id','case_no','adm_date'])
df_lfsb_merge_inp_diagnostic=df_lfsb_merge_inp_diagnostic.reset_index(drop=True)
del temp_list
#%%
print(df_lfsb_merge_inp_diagnostic.shape)
print(df_lfsb_merge_inp_diagnostic['patient_id'].nunique())
print(df_lfsb_merge_inp_diagnostic['case_no'].nunique())
#%%
df_lfsb_merge_inp_diagnostic
#%%
# 保存并入出入院时间和诊断
writer=pd.ExcelWriter(project_path+'/data/processed_data/df_2.5_并入出入院时间和诊断.xlsx')
df_lfsb_merge_inp_diagnostic.to_excel(writer)
writer.save()
#%% md
## 统计再次入院的出血、卒中
#%%
# 按时间排序
df_lfsb_merge_inp_diagnostic=df_lfsb_merge_inp_diagnostic.sort_values(['patient_id','adm_date'])
#%%
# 判断两个列表中是否存在相同元素,存在返回True,否则False
def judge_list_element(list1,list2):
    judge_list=[x for x in list1 if x in list2]
if judge_list:
return True
else:
return False
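The helper above asks whether two lists share at least one element; a set intersection expresses this directly and avoids the quadratic scan:

```python
def judge_list_element(list1, list2):
    # True if the two lists have any element in common
    return bool(set(list1) & set(list2))
```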
#%%
# 根据郑-随访诊断,统计再次入院的出血、卒中
# 卒中事件
stroke_event=['脑梗死','脑梗死后遗症','腔隙性脑梗死','大脑动脉栓塞引起的脑梗死','脑梗塞','中风','脑梗死个人史','脑干梗死(康复期)','多发性脑梗死',
'左心耳封堵术后','左心耳封堵术','左心房栓子形成','多发腔隙性脑梗死','左侧基底节区陈旧性腔隙性脑梗塞','心耳血栓','小脑梗死','短暂性脑缺血',
'陈旧性脑梗死','左心耳附壁血栓','脑栓塞','基底动脉血栓形成脑梗死','左心耳血栓形成','脑梗死(基底节大动脉粥样硬化性)','多发性脑梗塞','陈旧性脑梗塞',
'脑梗死(大脑中动脉心源性)','大脑动脉狭窄脑梗死','短暂性脑缺血发作','脑梗塞后遗症','右侧小脑半球陈旧性脑梗死','脑血管取栓术后','陈旧性腔隙性脑梗死',
'大脑动脉血栓形成引起的脑梗死','肾缺血和肾梗死','左侧大脑中动脉支架取栓术后','多发腔隙性脑梗塞','胸主动脉附壁血栓','起搏器血栓形成','左心耳切除术后',
'左侧颈内动脉血管内抽吸术后','左心耳血栓']
# 出血事件
bleeding_event=['脑梗死后出血转化','消化道出血','出血性脑梗死','失血性休克','出血性内痔','脑出血后遗症','胃溃疡伴有穿孔','血尿,持续性',
'脑内出血','下消化道出血','肺泡出血可能','蛛网膜下腔出血','女性盆腔血肿','皮下出血']
#%%
# 排序
df_lfsb_merge_inp_diagnostic=df_lfsb_merge_inp_diagnostic.sort_values(['patient_id','case_no','adm_date'])
df_lfsb_merge_inp_diagnostic=df_lfsb_merge_inp_diagnostic.reset_index(drop=True)
# 再次入院新发出血、卒中统计
group_0_num=0
group_1_num=0
group_2_num=0
# 患者id
for j in np.unique(df_lfsb_merge_inp_diagnostic['patient_id']):
# print(type(j))
# 患者的住院记录case_no
temp=df_lfsb_merge_inp_diagnostic[df_lfsb_merge_inp_diagnostic['patient_id']==j]
temp=temp.reset_index(drop=True)
for k in range(temp.shape[0]):
temp_diagnostic_list=str(temp.loc[k,'diagnostic_content']).split(';')
# 如果第一次出院,存在出血卒中事件,则跳过;
if k==0:
if judge_list_element(temp_diagnostic_list,stroke_event) or judge_list_element(temp_diagnostic_list,bleeding_event):
# if j==7664380:
# print('看错了吧')
break
# 否则,统计再次入院的出血卒中事件
if judge_list_element(temp_diagnostic_list,stroke_event) or judge_list_element(temp_diagnostic_list,bleeding_event):
group_id=temp.loc[(k-1),'剂量分组']
if group_id ==0:
group_0_num +=1
elif group_id ==1:
group_1_num +=1
elif group_id ==2:
group_2_num +=1
break
#%%
print('10mg组再次入院的新出血卒中率:',group_0_num, group_0_num/count_10)
print('15mg组再次入院的新出血卒中率:',group_1_num, group_1_num/count_15)
print('20mg组再次入院的新出血卒中率:',group_2_num, group_2_num/count_20)