Enhancing Zero-Shot Crypto Sentiment With Fine-Tuned Language Model and Prompt Engineering

Rahman S. M. Wahidur, Ishmam Tashdeed, Manjit Kaur, Heung-No Lee

IEEE Access (2024)

Abstract
Blockchain technology has revolutionized the financial landscape, with cryptocurrencies seeing widespread adoption due to their decentralized and transparent nature. As sentiments expressed on social media platforms wield substantial influence over cryptocurrency market dynamics, sentiment analysis has emerged as a crucial tool for gauging public opinion and predicting market trends. This paper explores fine-tuning techniques for large language models to enhance sentiment analysis performance. Experimental results demonstrate a significant average zero-shot performance gain of 40% on unseen tasks after fine-tuning, highlighting its potential. Additionally, the impact of instruction-based fine-tuning on models of varying scales is examined, revealing that larger models benefit from instruction tuning, achieving the highest average accuracy score of 75.16%. In contrast, smaller-scale models may generalize less well because instruction tuning consumes their full model capacity. To gain deeper insight into instruction effectiveness, the paper presents experimental investigations under different instruction tuning setups. Results show the model achieves an average accuracy score of 72.38% for short and simple instructions, outperforming long and complex instructions by over 12%. Finally, the paper explores the relationship between fine-tuning corpus size and model performance, identifying an optimal corpus size of 6,000 data points for the highest performance across different models. Microsoft's MiniLM, a distilled version of BERT, excels in efficient data use and performance optimization, while Google's FLAN-T5 demonstrates consistent and reliable performance across diverse datasets.
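To make the short-instruction, zero-shot setup described in the abstract concrete, the sketch below queries an instruction-tuned FLAN-T5 checkpoint with a brief sentiment prompt of the kind the paper reports working best. This is a minimal illustration, not the paper's exact configuration: the checkpoint name, prompt wording, and label set are assumptions.

```python
# Minimal sketch of zero-shot crypto sentiment classification with an
# instruction-tuned model. The checkpoint, prompt text, and labels are
# illustrative assumptions, not the paper's reported setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-base"  # assumed checkpoint; the paper evaluates FLAN-T5
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def classify_sentiment(tweet: str) -> str:
    # A short, simple instruction: the abstract reports such prompts
    # outperforming long, complex ones by over 12% in average accuracy.
    prompt = (
        "Classify the sentiment of this tweet as positive, negative, or neutral.\n"
        f"Tweet: {tweet}\n"
        "Sentiment:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=5)
    return tokenizer.decode(outputs[0], skip_special_tokens=True).strip().lower()

print(classify_sentiment("Bitcoin just broke its all-time high, incredible momentum!"))
```

Supervised or instruction-based fine-tuning, as studied in the paper, would further train such a model on labeled (tweet, sentiment) pairs before running this zero-shot style of inference.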
Keywords
Cryptocurrency, Social networking (online), Analytical models, Training, Context modeling, Sentiment analysis, Transformers, Zero-shot learning, Supervised learning, In-context learning, Supervised fine-tuning, Instruction tuned, Prompt engineering