
Suppressing Phagocyte Activation by Overexpressing the Phosphatidylserine Lipase ABHD12 Preserves Sarmopathic Nerves.

bioRxiv, the preprint server for biology (2024)

Department of Genetics | Department of Developmental Biology

Abstract
Programmed axon degeneration (AxD) is a key feature of many neurodegenerative diseases. In healthy axons, the axon survival factor NMNAT2 inhibits SARM1, the central executioner of AxD, preventing it from initiating the rapid local NAD+ depletion and metabolic catastrophe that precipitates axon destruction. Because these components of the AxD pathway act within neurons, it was also assumed that the timetable of AxD was set strictly by a cell-intrinsic mechanism independent of neuron-extrinsic processes later activated by axon fragmentation. However, using a rare human disease model of neuropathy caused by hypomorphic NMNAT2 mutations and chronic SARM1 activation (sarmopathy), we demonstrated that neuronal SARM1 can initiate macrophage-mediated axon elimination long before stressed-but-viable axons would otherwise succumb to cell-intrinsic metabolic failure. Investigating potential SARM1-dependent signals that mediate macrophage recognition and/or engulfment of stressed-but-viable axons, we found that chronic SARM1 activation triggers axonal blebbing and dysregulation of phosphatidylserine (PS), a potent phagocyte immunomodulatory molecule. Neuronal expression of the phosphatidylserine lipase ABHD12 suppresses nerve macrophage activation, preserves motor axon integrity, and rescues motor function in this chronic sarmopathy model. We conclude that PS dysregulation is an early SARM1-dependent axonal stress signal, and that blockade of phagocytic recognition and engulfment of stressed-but-viable axons could be an attractive therapeutic target for management of neurological disorders involving SARM1 activation.