The Estimation and Modeling of Cause-specific Cumulative Incidence Functions Using Time-dependent Weights
The Stata Journal (2017)
Abstract
Competing risks occur in survival analysis when an individual is at risk of more than one type of event and one event's occurrence precludes another's. The cause-specific cumulative incidence function (CIF) is a measure of interest with competing-risks data. It gives the absolute (or crude) risk of having the event by time t, accounting for the fact that it is impossible to have the event if a competing event occurs first. The user-written command stcompet calculates nonparametric estimates of the cause-specific CIF, and the official Stata command stcrreg fits the Fine and Gray (1999, Journal of the American Statistical Association 94: 496–509) model for competing-risks data. Geskus (2011, Biometrics 67: 39–49) has recently shown that standard software can estimate some of the key measures in competing risks by restructuring the data and incorporating weights. This has a number of advantages because any tools developed for standard survival analysis can then be used to analyze competing-risks data. In this article, I describe the stcrprep command, which restructures the data and calculates the appropriate weights. After one uses stcrprep, a number of standard Stata survival analysis commands can be used to analyze competing risks. For example, sts graph, failure will give a plot of the cause-specific CIF, and stcox will fit the Fine and Gray (1999) proportional subhazards model. Using stcrprep together with stcox is computationally much more efficient than using stcrreg. In addition, stcrprep opens up new opportunities for competing-risks models. I illustrate this by fitting flexible parametric survival models to the expanded data to directly model the cause-specific CIF.
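To make the abstract's workflow concrete, here is a minimal sketch of the data preparation and analysis steps. The dataset, the covariates age and sex, and the event coding (status = 1 for the event of interest, 2 for the competing event) are illustrative assumptions; the stcrprep options events(), keep(), and trans() and its generated variables failcode, tstart, tstop, and weight_c follow the examples in the article as I recall them, so verify them against help stcrprep for the installed version.

. * Illustrative sketch -- data, covariates, and event coding are assumptions
. * Declare the survival data with both event types as failures
. stset time, failure(status == 1 2)
. * Expand the data and compute the time-dependent weights
. stcrprep, events(status) keep(age sex) trans(1)
. * A row counts as a failure only for the cause it is weighted for
. gen byte event = (status == failcode)
. * Re-declare the expanded data using the stcrprep weights
. stset tstop [iw = weight_c], failure(event == 1) enter(tstart)
. * Nonparametric estimate of the cause-specific CIF
. sts graph, failure
. * Fine and Gray (1999) proportional subhazards model via stcox
. stcox age sex

Because stcrprep fixes the weights and expanded risk sets in advance, stcox only has to fit a standard weighted Cox model, which helps explain the computational advantage over stcrreg noted in the abstract.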
Key words
st0471, stcrprep, survival analysis, competing risks, time-dependent effects
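The flexible parametric modeling mentioned at the end of the abstract can be sketched in the same way. A natural tool is the user-written stpm2 command, which fits Royston-Parmar models of the kind the article describes; applied to the expanded, weighted data, the model is on the cumulative subhazard scale, so the cause-specific CIF can be predicted directly. The spline degrees of freedom and the time-dependent effect below are illustrative choices, not the article's.

. * Continues from the sketch above; df() values are illustrative
. * Proportional subhazards on the log cumulative subhazard scale
. stpm2 age sex, scale(hazard) df(4)
. * Relax proportionality: time-dependent effect of sex
. stpm2 age sex, scale(hazard) df(4) tvc(sex) dftvc(2)
. * Predict the cause-specific CIF directly from the fitted model
. predict cif1, failure

The tvc() option is what connects this approach to the time-dependent effects listed in the keywords: covariate effects that are not proportional on the subhazard scale are modeled with additional spline terms.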