Bystander Experiences of Domestic Violence and Abuse During the COVID-19 Pandemic
Journal of Gender-Based Violence (2024)
Public Health Wales | University of Exeter | Durham University | Liverpool John Moores University
Abstract
This article seeks to understand the experiences of bystanders to domestic violence and abuse (DVA) during the COVID-19 pandemic in Wales. Globally, professionals voiced concern that COVID-19 restrictions were exacerbating the conditions for DVA to occur. Yet evidence suggests the restrictions also increased opportunities for bystanders to become aware of DVA and take action against it. This mixed-methods study consists of a quantitative online survey and follow-up interviews with survey respondents. Conducted in Wales, UK, during a national lockdown in 2021, this article reports on the experiences of 186 bystanders to DVA during the pandemic. Results suggest that bystanders had increased opportunity to become aware of DVA because of the pandemic restrictions. Results support the bystander situational model, whereby respondents must become aware of the behaviour, recognise it as a problem, feel that they possess the correct skills, and have confidence in those skills before they will take action. Having received bystander training was a significant predictor of bystanders taking action against DVA; this is an important finding that should be utilised to upskill general members of the community.
Key words
domestic violence and abuse, VAWDASV, COVID-19, pandemic, bystander