Olaparib Combined with Entinostat Exerts Differential Effects on Tumor-Associated Macrophages in Tumors Compared to Ascites in Syngeneic HR-proficient Murine Models of Ovarian Cancer
Cancer Research (2023)
Washington University in St. Louis | University of Kansas Medical Center | Vanderbilt University Medical Center | Vanderbilt University
Abstract
Poly (ADP-ribose) polymerase inhibitors (PARPi) are most effective in ovarian cancer tumors with homologous recombination (HR) deficiency. Our group has shown that histone deacetylase inhibitors (HDACi) sensitize HR-proficient ovarian cancer cells to PARPi. Our current efforts are directed towards understanding how this therapeutic regimen alters tumor-associated macrophages (TAMs) in murine models of high-grade serous ovarian cancer. To investigate the effects of olaparib (Ola) combined with entinostat (Ent), we used HR-proficient ID8 P53 wild-type and ID8 P53−/− syngeneic murine models. Mice were randomized into four groups: control, Ola, Ent, and Ola+Ent. Mice in the Ent and Ola+Ent groups were pre-treated with entinostat (15 mg/kg) via oral gavage for one week, while mice in the control and Ola groups received vehicle; all groups were then treated for two weeks with vehicle, Ent, Ola (100 mg/kg), or Ola+Ent, respectively. Mice were sacrificed 24 h after the last dose to harvest tumors and ascites. Tumors were processed for histology to assess cell proliferation (Ki67) and immune cell markers (CCL2 and M1- and M2-like macrophage markers). Ascites fluid was processed for flow cytometric analysis of immune cells. Tumors from parental ID8 P53 wild-type mice showed significantly lower expression of the cell proliferation marker Ki67 (P<0.05), higher expression of the anti-tumorigenic M1-like marker CCL2 (P<0.05), and lower expression of the pro-tumorigenic M2-like marker mannose receptor (P<0.05) in the Ent and Ola+Ent groups compared with the vehicle and Ola groups. Ascites showed no significant change in anti-tumorigenic M1-like macrophages but a significant increase in pro-tumorigenic M2-like macrophages (P<0.005). Tumors from ID8 P53−/− mice showed significantly lower Ki67 (P<0.05) in the Ola+Ent group compared with vehicle and the single-agent treatments. Ascites showed no significant change in total macrophages or pro-tumorigenic M2-like macrophages, but a significant decrease in anti-tumorigenic M1-like macrophages in the Ent and Ola+Ent groups compared with vehicle. In summary, in HR-proficient ID8 P53 wild-type and ID8 P53−/− syngeneic mouse models, Ola and Ent treatment exerted anti-tumorigenic effects in tumors but potentially pro-tumorigenic effects in ascites. In conclusion, concomitant targeting of tumor TAMs and ascites TAMs may be a therapeutic strategy worth investigating in the future.

Citation Format: Vijayalaxmi G. Gupta, Tyler Woodard, Simona Miceska, Bisiyao Fashemi, Sangappa Chadchan, Wendy Zhang, Katherine Roby, Andrew Wilson, Fiona Yull, Marta Crispens, Sumanta Naik, Asya Smirnov, Christina Stallings, Dineo Khabele. Olaparib combined with entinostat exerts differential effects on tumor-associated macrophages in tumors compared to ascites in syngeneic HR-proficient murine models of ovarian cancer [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2023; Part 1 (Regular and Invited Abstracts); 2023 Apr 14-19; Orlando, FL. Philadelphia (PA): AACR; Cancer Res 2023;83(7_Suppl):Abstract nr 3654.
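The abstract reports group-wise comparisons of marker levels (e.g., Ki67) across the four treatment arms with P-value thresholds, but it does not state which statistical tests were used. The sketch below is a minimal, hypothetical illustration in Python of how such a four-arm comparison might be run (one-way ANOVA followed by pairwise Welch's t-tests against vehicle). The group names mirror the study arms, but the numeric values are placeholders, not data from the study, and the choice of tests is an assumption.

```python
# Hypothetical sketch of the four-group comparison implied by the abstract.
# The values below are placeholders; the study's actual data and statistical
# methods are not given in the abstract.
import numpy as np
from scipy import stats

# Placeholder Ki67-positive fractions per mouse, one array per treatment arm
groups = {
    "vehicle": np.array([0.42, 0.47, 0.45, 0.50, 0.44]),
    "Ola":     np.array([0.40, 0.43, 0.46, 0.41, 0.45]),
    "Ent":     np.array([0.31, 0.29, 0.34, 0.30, 0.33]),
    "Ola+Ent": np.array([0.24, 0.27, 0.22, 0.26, 0.25]),
}

# Omnibus test across the four arms
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={f_stat:.2f}, P={p_omnibus:.4f}")

# Pairwise comparisons against vehicle (Welch's t-test; a real analysis
# would also correct for multiple comparisons, e.g. Holm or Tukey)
for arm in ("Ola", "Ent", "Ola+Ent"):
    t, p = stats.ttest_ind(groups["vehicle"], groups[arm], equal_var=False)
    print(f"vehicle vs {arm}: t={t:.2f}, P={p:.4f}")
```

Running this prints an omnibus F-statistic and per-arm P-values in the same "arm vs. vehicle, P<threshold" form used in the abstract; with real per-mouse measurements substituted in, the same structure would apply to the CCL2, mannose receptor, and flow cytometry readouts.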
Key words
Olaparib