Safety and Early Efficacy Results of Phase 1 Study of Affinity-Tuned and Trackable AIC100 CAR T Cells in ICAM-1-Positive Relapsed and/or Refractory Advanced Poorly Differentiated and Anaplastic Thyroid Cancers
Bone Marrow Transplantation (2023), SCI Q3
The University of Texas MD Anderson Cancer Center | DecImmune Therapeutics (United States) | Cornell University | Presbyterian Hospital | University Hospitals Seidman Cancer Center
Abstract
6095

Background: ICAM-1, a cell surface glycoprotein, is overexpressed in most ATC and PDTC. AIC100 is a third-generation CAR T cell with micromolar affinity for ICAM-1, tuned lower than most CARs used to date in preclinical and clinical studies. Affinity-tuned AIC100 cells are expected to selectively bind and kill tumor cells while sparing healthy cells. AIC100 also co-expresses somatostatin receptor 2 (SSTR2), which enables in vivo monitoring of AIC100 distribution and expansion by DOTATATE PET/CT scan.

Methods: The objectives of this phase 1 dose-escalation study are to assess the safety and preliminary efficacy of AIC100 and to determine its RP2D in patients with ICAM-1+ relapsed/refractory PDTC or ATC. Three dose levels (DL) are being explored: DL1 at 1×10⁷, DL2 at 1×10⁸, and DL3 at 5×10⁸ AIC100 cells. AIC100 is administered as a single infusion two days after completion of lymphodepletion with fludarabine/cyclophosphamide. FDG and DOTATATE PET/CT scans are used to assess response and to track AIC100 in vivo, respectively. Response is assessed by RECIST 1.1, starting at end of treatment (day 42 post AIC100). AIC100 is manufactured using AffyImmune's Tune and Track CAR T cell platform.

Results: As of Feb 14, 2023, 6 patients (4 ATC; 2 PDTC) with a median age of 59.5 years (range 48–70) had been infused with AIC100, three each at DL1 and DL2. AIC100 was successfully manufactured for all patients, and all infusion products met the target transduction efficiency. No serious adverse events or DLTs were reported. Two patients had grade 1 CRS; no ICANS was reported. Of the 3 evaluable patients at DL1, one had stable disease with decreased FDG activity on PET that correlated with increased activity on the DOTATATE scan. Of the 3 patients infused at DL2, one is evaluable for efficacy at day 42: a patient with relapsed ATC who achieved a PR with a 42% reduction in the target tumor lesion and who remains in PR at 3 months. The target lesion in this patient showed increased DOTATATE avidity at day 14 post infusion, followed by decreased FDG and DOTATATE avidity at day 42, concomitant with decreased size, suggesting biological activity 42 days after CAR T infusion. Evaluation of the CAR transgene demonstrated transient peripheral blood CAR T cell expansion. The second patient could not be assessed because of early withdrawal for disease-related toxicity. The third patient at DL2 remains in the DLT period, pending efficacy evaluation.

Conclusions: AIC100 demonstrated an excellent safety profile at DL1/DL2 in patients with ATC and PDTC, with no DLTs observed. The objective and relatively durable partial response in the first evaluable patient at DL2, a patient with metastatic ATC who had failed multiple lines of therapy, is unprecedented and very encouraging. Further investigation of AIC100 is ongoing at DL2 and DL3 to determine the RP2D. Clinical trial information: NCT04420754.
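For context on the response criterion, the following is a minimal illustrative sketch, not taken from the paper, of the RECIST 1.1 target-lesion arithmetic behind the reported partial response. Only the thresholds (a decrease of at least 30% from baseline for PR; an increase of at least 20% and at least 5 mm from nadir for PD) are standard RECIST 1.1 values; the function names and the example measurements are hypothetical, since the abstract reports only the 42% reduction.

def percent_change(baseline_mm: float, current_mm: float) -> float:
    """Percent change in the sum of target-lesion diameters vs. a reference value."""
    return (current_mm - baseline_mm) / baseline_mm * 100.0

def recist_target_response(baseline_mm: float, nadir_mm: float, current_mm: float) -> str:
    """Classify target-lesion response per RECIST 1.1 thresholds."""
    if current_mm == 0:
        return "CR"  # complete disappearance of all target lesions
    # PD: >= 20% increase from nadir AND >= 5 mm absolute increase
    if percent_change(nadir_mm, current_mm) >= 20 and (current_mm - nadir_mm) >= 5:
        return "PD"
    # PR: >= 30% decrease from baseline
    if percent_change(baseline_mm, current_mm) <= -30:
        return "PR"
    return "SD"

# Hypothetical measurements: a 42% reduction from baseline, as reported
# for the DL2 responder, clears the -30% PR threshold.
baseline, nadir, current = 100.0, 58.0, 58.0
print(recist_target_response(baseline, nadir, current))  # -> "PR" (42% decrease)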
Key words
Tumor Regression, CAR T Cells