1.Wang Y, Ke P, Zheng Y, et al. A large-scale chinese short-text conversation dataset[C]//CCF International Conference on Natural Language Processing and Chinese Computing. Springer, Cham, 2020: 91-103.
2.Zhou H, Ke P, Zhang Z, et al. Eva: An open-domain chinese dialogue system with large-scale generative pre-training[J]. arXiv preprint arXiv:2108.01547, 2021.
3.Gu Y, Wen J, Sun H, et al. Eva2. 0: Investigating open-domain chinese dialogue systems with large-scale pre-training[J]. arXiv preprint arXiv:2203.09313, 2022.
4.Roller S, Dinan E, Goyal N, et al. Recipes for Building an Open-Domain Chatbot[C]//Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. 2021: 300-325.
5.Shuster K, Xu J, Komeili M, et al. BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage[J]. arXiv preprint arXiv:2208.03188, 2022.
6.Bao S, He H, Wang F, et al. PLATO-2: Towards Building an Open-Domain Chatbot via Curriculum Learning[C]//Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. 2021: 2513-2525.
7.Mi F, Li Y, Zeng Y, et al. PANGUBOT: Efficient Generative Dialogue Pre-training from Pre-trained Language Model[J]. arXiv preprint arXiv:2203.17090, 2022.
8.Dong L, Yang N, Wang W, et al. Unified language model pre-training for natural language understanding and generation[J]. Advances in Neural Information Processing Systems, 2019, 32.
9.Gu Y, Han X, Liu Z, et al. PPT: Pre-trained Prompt Tuning for Few-shot Learning[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022: 8410-8423.
10.Lester B, Al-Rfou R, Constant N. The Power of Scale for Parameter-Efficient Prompt Tuning[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021: 3045-3059.
11.Ju D, Xu J, Boureau Y L, et al. Learning from data in the mixed adversarial non-adversarial case: Finding the helpers and ignoring the trolls[J]. arXiv preprint arXiv:2208.03295, 2022.
12.Bai Y, Jones A, Ndousse K, et al. Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback[J]. arXiv preprint arXiv:2204.05862, 2022.
13.Liu S, Zheng C, Demasi O, et al. Towards Emotional Support Dialog Systems[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021: 3469-3483.
14.Thoppilan R, De Freitas D, Hall J, et al. Lamda: Language models for dialog applications[J]. arXiv preprint arXiv:2201.08239, 2022.
本文总结了阿里妈妈数据技术专家陈亮在Flink Forward Asia 2024大会上的分享,围绕广告业务背景、架构设计及湖仓方案演进展开。内容涵盖广告生态运作、实时数仓挑战与优化,以及基于Paimon的湖仓方案优势。通过分层设计与技术优化,实现业务交付周期缩短30%以上,资源开销降低40%,并大幅提升系统稳定性和运营效率。文章还介绍了阿里云实时计算Flink版的免费试用活动,助力企业探索实时计算与湖仓一体化解决方案。