Ammari, T. et al. (2019) ‘Music, Search, and IoT: How People (Really) Use Voice Assistants’, ACM Transactions on Computer-Human Interaction, 26(3), pp. 1–28. Available at: https://doi.org/10.1145/3311956.
Baidoo-Anu, D. and Owusu Ansah, L. (2023) ‘Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning’, SSRN Electronic Journal [Preprint]. Available at: https://doi.org/10.2139/ssrn.4337484.
Beldad, A. et al. (2012) ‘A cue or two and I’ll trust you: Determinants of trust in government organizations in terms of their processing and usage of citizens’ personal information disclosed online’, Government Information Quarterly, 29(1), pp. 41–49. Available at: https://doi.org/10.1016/j.giq.2011.05.003.
Blankenburg, J. (2018) Things Every Alexa Skill Should Do: Pass the One-Breath Test, Alexa Blogs. Available at: https://developer.amazon.com/blogs/alexa/post/531ffdd7-acf3-43ca-9831-9c375b08afe0/things-every-alexa-skill-should-do-pass-the-one-breath-test (Accessed: 4 June 2023).
Bowman, E. (2022) ‘A new AI chatbot might do your homework for you. But it’s still not an A+ student’, NPR, 19 December. Available at: https://www.npr.org/2022/12/19/1143912956/chatgpt-ai-chatbot-homework-academia (Accessed: 9 May 2023).
Branham, S.M. and Mukkath Roy, A.R. (2019) ‘Reading Between the Guidelines: How Commercial Voice Assistant Guidelines Hinder Accessibility for Blind Users’, in Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility. New York, NY, USA: Association for Computing Machinery (ASSETS ’19), pp. 446–458. Available at: https://doi.org/10.1145/3308561.3353797.
Braun, V. and Clarke, V. (2006) ‘Using thematic analysis in psychology’, Qualitative Research in Psychology, 3(2), pp. 77–101. Available at: https://doi.org/10.1191/1478088706qp063oa.
Castillo, C., Mendoza, M. and Poblete, B. (2011) ‘Information credibility on twitter’, in Proceedings of the 20th international conference on World wide web. New York, NY, USA: Association for Computing Machinery (WWW ’11), pp. 675–684. Available at: https://doi.org/10.1145/1963405.1963500.
Clark, L. et al. (2019) ‘What Makes a Good Conversation? Challenges in Designing Truly Conversational Agents’, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery (CHI ’19), pp. 1–12. Available at: https://doi.org/10.1145/3290605.3300705.
Cooper, G. (2023) ‘Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence’, Journal of Science Education and Technology, 32(3), pp. 444–452. Available at: https://doi.org/10.1007/s10956-023-10039-y.
Cowan, B.R. et al. (2016) ‘Towards Understanding How Speech Output Affects Navigation System Credibility’, in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery (CHI EA ’16), pp. 2805–2812. Available at: https://doi.org/10.1145/2851581.2892469.
Faul, F. et al. (2007) ‘G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences’, Behavior Research Methods, 39(2), pp. 175–191. Available at: https://doi.org/10.3758/BF03193146.
Foulkes, P., Scobbie, J.M. and Watt, D. (2010) ‘Sociophonetics’, in The Handbook of Phonetic Sciences. John Wiley & Sons, Ltd, pp. 703–754. Available at: https://doi.org/10.1002/9781444317251.ch19.
Gleason, N. (2022) ChatGPT and the rise of AI writers: how should higher education respond?, Times Higher Education. Available at: https://www.timeshighereducation.com/campus/chatgpt-and-rise-ai-writers-how-should-higher-education-respond (Accessed: 3 June 2023).
Hacker, P., Engel, A. and Mauer, M. (2023) ‘Regulating ChatGPT and other Large Generative AI Models’. arXiv. Available at: http://arxiv.org/abs/2302.02337 (Accessed: 9 May 2023).
Hayes, N. (2000) Doing psychological research: gathering and analysing data. Buckingham: Open University Press. Available at: http://catdir.loc.gov/catdir/toc/mh051/00037515.html (Accessed: 4 June 2023).
Hensch, A.-C. et al. (2022) ‘To trust or not to trust – Comparing two trust in automation scales when assessing an external HMI in automated vehicles’.
Horst, M., Kuttschreuter, M. and Gutteling, J.M. (2007) ‘Perceived usefulness, personal experiences, risk perception and trust as determinants of adoption of e-government services in The Netherlands’, Computers in Human Behavior, 23(4), pp. 1838–1852. Available at: https://doi.org/10.1016/j.chb.2005.11.003.
Hu, K. (2023) ‘ChatGPT sets record for fastest-growing user base - analyst note’, Reuters, 2 February. Available at: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ (Accessed: 5 June 2023).
Introducing the new Bing (no date). Available at: https://www.bing.com/new (Accessed: 6 June 2023).
Kelton, K., Fleischmann, K.R. and Wallace, W.A. (2008) ‘Trust in digital information’, Journal of the American Society for Information Science and Technology, 59(3), pp. 363–374. Available at: https://doi.org/10.1002/asi.20722.
Körber, M. (2018) ‘Theoretical considerations and development of a questionnaire to measure trust in automation’.
Kulkarni, P. et al. (2019) ‘Conversational AI: An Overview of Methodologies, Applications & Future Scope’, in 2019 5th International Conference On Computing, Communication, Control And Automation (ICCUBEA). 2019 5th International Conference On Computing, Communication, Control And Automation (ICCUBEA), pp. 1–7. Available at: https://doi.org/10.1109/ICCUBEA47591.2019.9129347.
Li, F. (2011) ‘A Holistic Framework for Trust in Online Transactions’, International Journal of Management Reviews [Preprint]. Available at: https://doi.org/10.1111/j.1468-2370.2011.00311.x.
Luccioni, A.S. and Viviano, J.D. (2021) ‘What’s in the Box? A Preliminary Analysis of Undesirable Content in the Common Crawl Corpus’. arXiv. Available at: https://doi.org/10.48550/arXiv.2105.02732.
Madhavan, P. and Wiegmann, D.A. (2007) ‘Similarities and differences between human–human and human–automation trust: an integrative review’, Theoretical Issues in Ergonomics Science, 8(4), pp. 277–301. Available at: https://doi.org/10.1080/14639220500337708.
Mason, J. (2006) ‘Mixing methods in a qualitatively driven way’, Qualitative Research, 6(1), pp. 9–25. Available at: https://doi.org/10.1177/1468794106058866.
Murad, C. et al. (2018) ‘Design guidelines for hands-free speech interaction’, in Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. New York, NY, USA: Association for Computing Machinery (MobileHCI ’18), pp. 269–276. Available at: https://doi.org/10.1145/3236112.3236149.
Nadeem, M., Bethke, A. and Reddy, S. (2020) ‘StereoSet: Measuring stereotypical bias in pretrained language models’. arXiv. Available at: http://arxiv.org/abs/2004.09456 (Accessed: 3 June 2023).
Nass, C. and Moon, Y. (2000) ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues, 56(1), pp. 81–103. Available at: https://doi.org/10.1111/0022-4537.00153.
Niculescu, A. et al. (2008) ‘Impact of English regional accents on user acceptance of voice user interfaces’, in Proceedings of the 5th Nordic conference on Human-computer interaction: building bridges. New York, NY, USA: Association for Computing Machinery (NordiCHI ’08), pp. 523–526. Available at: https://doi.org/10.1145/1463160.1463235.
OpenAI API (2023). Available at: https://platform.openai.com (Accessed: 3 June 2023).
Reeves, B. and Nass, C. (1996) The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge: Cambridge University Press.
Rheu, M. et al. (2021) ‘Systematic Review: Trust-Building Factors and Implications for Conversational Agent Design’, International Journal of Human–Computer Interaction, 37(1), pp. 81–96. Available at: https://doi.org/10.1080/10447318.2020.1807710.
Roselli, D., Matthews, J. and Talagala, N. (2019) ‘Managing Bias in AI’, in Companion Proceedings of The 2019 World Wide Web Conference. New York, NY, USA: Association for Computing Machinery (WWW ’19), pp. 539–544. Available at: https://doi.org/10.1145/3308560.3317590.
Rowley, J. and Johnson, F. (2013) ‘Understanding trust formation in digital information sources: The case of Wikipedia’, Journal of Information Science, 39(4), pp. 494–508. Available at: https://doi.org/10.1177/0165551513477820.
Sandygulova, A. and O’Hare, G.M.P. (2015) ‘Children’s Responses to Genuine Child Synthesized Speech in Child-Robot Interaction’, in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts. New York, NY, USA: Association for Computing Machinery (HRI’15 Extended Abstracts), pp. 81–82. Available at: https://doi.org/10.1145/2701973.2702058.
Scharth, M. (2022) The ChatGPT chatbot is blowing people away with its writing skills, The University of Sydney. Available at: https://www.sydney.edu.au/news-opinion/news/2022/12/08/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skil.html (Accessed: 2 June 2023).
Seymour, W., Cote, M. and Such, J. (2022) ‘Can you meaningfully consent in eight seconds? Identifying Ethical Issues with Verbal Consent for Voice Assistants’, in Proceedings of the 4th Conference on Conversational User Interfaces. New York, NY, USA: Association for Computing Machinery (CUI ’22), pp. 1–4. Available at: https://doi.org/10.1145/3543829.3544521.
Stvilia, B., Mon, L. and Yi, Y.J. (2009) ‘A model for online consumer health information quality’, Journal of the American Society for Information Science and Technology, 60(9), pp. 1781–1791. Available at: https://doi.org/10.1002/asi.21115.
Sundar, S.S. and Nass, C. (2000) ‘Source Orientation in Human-Computer Interaction: Programmer, Networker, or Independent Social Actor?’, Communication Research, 27(6), pp. 683–703. Available at: https://doi.org/10.1177/009365000027006001.
Sutton, S.J. et al. (2019) ‘Voice as a Design Material: Sociophonetic Inspired Design Strategies in Human-Computer Interaction’, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery (CHI ’19), pp. 1–14. Available at: https://doi.org/10.1145/3290605.3300833.
Tamagawa, R. et al. (2011) ‘The Effects of Synthesized Voice Accents on User Perceptions of Robots’, International Journal of Social Robotics, 3(3), pp. 253–262. Available at: https://doi.org/10.1007/s12369-011-0100-4.
Voicebot.ai and Business Wire (2020) Number of voice assistants in use worldwide 2019-2024, Statista. Available at: https://www.statista.com/statistics/973815/worldwide-digital-voice-assistant-in-use/ (Accessed: 24 May 2023).
‘Voiceflow’ (2023). Voiceflow, Inc. Available at: https://www.voiceflow.com/ (Accessed: 30 June 2023).
What are Alexa Skills? - Amazon Customer Service (no date). Available at: https://www.amazon.com/gp/help/customer/display.html?nodeId=GG3RZLAA3RH83JAA (Accessed: 4 June 2023).